Unnamed: 0 (int64, 0–16k) | text_prompt (stringlengths 110–62.1k) | code_prompt (stringlengths 37–152k) |
---|---|---|
1,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practice with seaborn's plotting functions
When working with a set of data, the first thing to do is understand how the variables are distributed
Step1: The kernel density estimate (KDE) is computed for each rug observation; the individual KDE curves are then summed and normalized, giving the average KDE curve over all the data
Step2: You can also use distplot() to fit a parametric distribution to a dataset and visually assess how well it corresponds to the observed data
Step3: Visualizing pairwise relationships in a dataset
To plot multiple pairwise bivariate distributions in a dataset, you can use the pairplot() function.
This creates a matrix of axes and shows the relationship for each pair of columns in the DataFrame. By default, it also draws the univariate distribution of each variable on the diagonal axes
Step4: Plotting with categorical data
Step5: In a strip plot the scatter points usually overlap, which makes it hard to see the full distribution of the data. One simple solution is to adjust the positions with some random "jitter" (along the categorical axis only)
Step6: A different approach is to use the swarmplot() function, which positions each scatter point along the categorical axis with an algorithm that avoids overlapping points
Step7: Distributions of observations within categories
Step8: Violinplots
A different approach is a violinplot(), which combines a boxplot with the kernel density estimation procedure described
Step9: Statistical estimation within categories
A special case for the bar plot is when you want to show the number of observations in each category rather than computing a statistic for a second variable. This is similar to a histogram over a categorical, rather than quantitative, variable. In seaborn, it’s easy to do so with the countplot() function
Step10: Plotting "wide-form" data
Although using "long-form" or "tidy" data is preferred, these functions can also be applied to "wide-form" data in a variety of formats,
including pandas DataFrames and two-dimensional numpy arrays. These objects should be passed directly to the data parameter
Step11: Drawing multi-panel categorical plots | Python Code:
import numpy as np
import pandas as pd
from scipy import stats, integrate
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
np.random.seed(sum(map(ord, "distributions")))
x = np.random.normal(size=100) # a univariate distribution is drawn as a histogram together with a KDE (kernel density estimate)
sns.distplot(x) # plot the distribution
plt.show()
sns.distplot(x, kde=False, rug=True) # drop the density curve and draw a small vertical tick at each observation
# the rug itself can be drawn with the rugplot() function, but it is also available in distplot()
plt.show()
sns.distplot(x, bins=20, kde=False, rug=True) # split the data into 20 bins
plt.show()
Explanation: Practice with seaborn's plotting functions
When working with a set of data, the first thing to do is understand how the variables are distributed
End of explanation
sns.kdeplot(x, shade=True) # plot the KDE curve
plt.show()
Explanation: The kernel density estimate (KDE) is computed for each rug observation; the individual KDE curves are then summed and normalized, giving the average KDE curve over all the data
End of explanation
sns.set_style("whitegrid")
x = np.random.gamma(6, size=200)
sns.distplot(x, kde=False, fit=stats.gamma)
plt.show()
# The most familiar way to visualize a bivariate distribution is a scatter plot, where each observation is shown as a point at its x and y values; it is essentially the two-dimensional analogue of the rug plot.
# You can draw a scatter plot with the matplotlib plt.scatter function; it is also the default kind shown by the jointplot() function:
sns.set()
mean, cov = [0, 1], [(1, .5), (.5, 1)]
data = np.random.multivariate_normal(mean, cov, 200)
df = pd.DataFrame(data, columns=["x", "y"])
print(df)
sns.jointplot(x="x", y="y", data=df)
plt.show()
# The bivariate analogue of a histogram is known as a “hexbin” plot, because it shows the counts
# of observations that fall within hexagonal bins. This plot works best with relatively large datasets.
# It’s available through the matplotlib plt.hexbin function and as a style in jointplot(). It looks best with a white background:
# the bivariate analogue of a histogram; more effective when the dataset is large
x, y = np.random.multivariate_normal(mean, cov, 1000).T
with sns.axes_style("white"):
sns.jointplot(x=x, y=y, kind="hex", color="k")
plt.show()
sns.jointplot(x="x", y="y", data=df, kind="kde") # 高维的同样可以绘制 KDE
plt.show()
Explanation: 还可以使用distplot()拟合参数分布到数据集,并直观地评估它与观察数据的对应关系
End of explanation
iris = sns.load_dataset("iris")
sns.pairplot(iris)
plt.show()
Explanation: Visualizing pairwise relationships in a dataset
To plot multiple pairwise bivariate distributions in a dataset, you can use the pairplot() function.
This creates a matrix of axes and shows the relationship for each pair of columns in the DataFrame. By default, it also draws the univariate distribution of each variable on the diagonal axes
End of explanation
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid", color_codes=True)
np.random.seed(sum(map(ord, "categorical")))
titanic = sns.load_dataset("titanic")
tips = sns.load_dataset("tips")
iris = sns.load_dataset("iris")
sns.stripplot(x="day", y="total_bill", data=tips) # 分类散点图
plt.show()
Explanation: 用分类数据绘图
End of explanation
sns.stripplot(x="day", y="total_bill", data=tips, jitter=True)
plt.show()
Explanation: In a strip plot the scatter points usually overlap, which makes it hard to see the full distribution of the data. One simple solution is to adjust the positions with some random "jitter" (along the categorical axis only)
End of explanation
sns.swarmplot(x="day", y="total_bill", data=tips) # 尽量分散的方式绘制
plt.show()
# 添加一个新的分类变量
sns.swarmplot(x="day", y="total_bill", hue="sex", data=tips) # hue 参数新增一个分类变量
plt.show()
# In general, the seaborn categorical plotting functions try to infer the order of categories from the data.
# If your data have a pandas Categorical datatype, then the default order of the categories can be set there.
# For other datatypes, string-typed categories will be plotted in the order they appear in the DataFrame,
# but categories that look numerical will be sorted
sns.swarmplot(x="size", y="total_bill", hue="sex", data=tips)
plt.show()
# the orientation can also be changed
sns.swarmplot(y="day", x="total_bill", hue="sex", data=tips)
plt.show()
Explanation: A different approach is to use the swarmplot() function, which positions each scatter point along the categorical axis with an algorithm that avoids overlapping points
End of explanation
sns.boxplot(x="day", y="total_bill", hue="time", data=tips) # 箱线图
plt.show()
Explanation: 类别内观察变量的分布
End of explanation
sns.violinplot(x="total_bill", y="day", hue="time", data=tips)
plt.show()
sns.violinplot(y="total_bill", x="day", hue="time", data=tips,split=True)
plt.show()
sns.violinplot(x="day", y="total_bill", hue="sex", data=tips,
split=True, inner="stick", palette="Set3") # 画直方图而不是箱线图
plt.show()
# 可以相互结合
sns.violinplot(x="day", y="total_bill", data=tips, inner=None) # 默认的inner 是箱线图
sns.swarmplot(x="day", y="total_bill", data=tips, color="w", alpha=.5)
plt.show()
Explanation: Violinplots
A different approach is a violinplot(), which combines a boxplot with the kernel density estimation procedure described
End of explanation
sns.countplot(x="deck", data=titanic, palette="Greens_d")
plt.show()
sns.pointplot(x="sex", y="survived", hue="class", data=titanic) # 竖线表示 置信区间
plt.show()
sns.pointplot(x="class", y="survived", hue="sex", data=titanic,
palette={"male": "g", "female": "m"},
markers=["^", "o"], linestyles=["-", "--"])
plt.show()
Explanation: Statistical estimation within categories
A special case for the bar plot is when you want to show the number of observations in each category rather than computing a statistic for a second variable. This is similar to a histogram over a categorical, rather than quantitative, variable. In seaborn, it’s easy to do so with the countplot() function:
End of explanation
sns.boxplot(data=iris, orient="h")
plt.show()
Explanation: Plotting "wide-form" data
Although using "long-form" or "tidy" data is preferred, these functions can also be applied to "wide-form" data in a variety of formats,
including pandas DataFrames and two-dimensional numpy arrays. These objects should be passed directly to the data parameter
End of explanation
sns.factorplot(x="day", y="total_bill", hue="smoker", data=tips, kind="bar") # 带有误差线
tips = sns.load_dataset("tips")
print(tips.describe())
plt.show()
sns.factorplot(x="day", y="total_bill", hue="smoker", # 绘制多列数据
col="time", data=tips, kind="swarm")
plt.show()
Explanation: Drawing multi-panel categorical plots
End of explanation |
1,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pythonic Syntactic Sugar <a href="https
Step1: Let us begin by developing a convenient method for displaying images in our notebooks.
Step2: Multi-dimension slice indexing
If you are familiar with numpy's slice indexing, then this should be a piece of cake with the SimpleITK Image. The Python standard slice interface for a 1-D object
Step3: Cropping
Step4: Flipping
Step5: Slice Extraction
A 2D image can be extracted from a 3D one.
Step6: Subsampling
Step7: Mathematical Operators
Most Python mathematical operators are overloaded to call the SimpleITK filter that performs the same operation on a per-pixel basis. They can operate on two images or on an image and a scalar.
If two images are used, then both must have the same pixel type. The output image type is usually the same.
As these operators basically call an ITK filter, which just uses raw C++ operators, care must be taken to prevent overflow, division by zero, etc.
<table>
<tr><td>Operators</td></tr>
<tr><td>+</td></tr>
<tr><td>-</td></tr>
<tr><td>*</td></tr>
<tr><td>/</td></tr>
<tr><td>//</td></tr>
<tr><td>**</td></tr>
</table>
Step8: Division Operators
All three Python division operators are implemented __floordiv__, __truediv__, and __div__.
The true division's output is a double pixel type.
See PEP 238 to see why Python changed the division operator in Python 3.
Bitwise Logic Operators
<table>
<tr><td>Operators</td></tr>
<tr><td>&</td></tr>
<tr><td>|</td></tr>
<tr><td>^</td></tr>
<tr><td>~</td></tr>
</table>
Step9: Comparative Operators
<table>
<tr><td>Operators</td></tr>
<tr><td>></td></tr>
<tr><td>>=</td></tr>
<tr><td><</td></tr>
<tr><td><=</td></tr>
<tr><td>==</td></tr>
</table>
These comparative operators follow the same convention as the rest of SimpleITK for binary images. They have the pixel type sitkUInt8 with values of 0 and 1.
Step10: Amazingly, this makes common trivial tasks really trivial | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rc("image", aspect="equal")
import SimpleITK as sitk
# Download data to work on
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
Explanation: Pythonic Syntactic Sugar <a href="https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F02_Pythonic_Image.ipynb"><img style="float: right;" src="https://mybinder.org/badge_logo.svg"></a>
The Image Basics Notebook was straightforward and closely follows ITK's C++ interface.
Sugar is great: it gives you energy to get things done faster! SimpleITK has applied a generous amount of syntactic sugar to help get things done faster too.
End of explanation
img = sitk.GaussianSource(size=[64] * 2)
plt.imshow(sitk.GetArrayViewFromImage(img))
img = sitk.GaborSource(size=[64] * 2, frequency=0.03)
plt.imshow(sitk.GetArrayViewFromImage(img))
def myshow(img):
nda = sitk.GetArrayViewFromImage(img)
plt.imshow(nda)
myshow(img)
Explanation: Let us begin by developing a convenient method for displaying images in our notebooks.
End of explanation
img[24, 24]
Explanation: Multi-dimension slice indexing
If you are familiar with numpy's slice indexing, then this should be a piece of cake with the SimpleITK Image. The Python standard slice interface for a 1-D object:
<table>
<tr><td>Operation</td> <td>Result</td></tr>
<tr><td>d[i]</td> <td>i-th item of d, starting index 0</td></tr>
<tr><td>d[i:j]</td> <td>slice of d from i to j</td></tr>
<tr><td>d[i:j:k]</td> <td>slice of d from i to j with step k</td></tr>
</table>
With this convenient syntax many basic tasks can be easily done.
End of explanation
myshow(img[16:48, :])
myshow(img[:, 16:-16])
myshow(img[:32, :32])
Explanation: Cropping
End of explanation
img_corner = img[:32, :32]
myshow(img_corner)
myshow(img_corner[::-1, :])
myshow(
sitk.Tile(
img_corner,
img_corner[::-1, ::],
img_corner[::, ::-1],
img_corner[::-1, ::-1],
[2, 2],
)
)
Explanation: Flipping
End of explanation
img = sitk.GaborSource(size=[64] * 3, frequency=0.05)
# Why does this produce an error? (hint: the image here is 3-D)
myshow(img)
myshow(img[:, :, 32])
myshow(img[16, :, :])
Explanation: Slice Extraction
A 2D image can be extracted from a 3D one.
End of explanation
myshow(img[:, ::3, 32])
Explanation: Subsampling
End of explanation
img = sitk.ReadImage(fdata("cthead1.png"))
img = sitk.Cast(img, sitk.sitkFloat32)
myshow(img)
img[150, 150]
timg = img**2
myshow(timg)
timg[150, 150]
Explanation: Mathematical Operators
Most Python mathematical operators are overloaded to call the SimpleITK filter that performs the same operation on a per-pixel basis. They can operate on two images or on an image and a scalar.
If two images are used, then both must have the same pixel type. The output image type is usually the same.
As these operators basically call an ITK filter, which just uses raw C++ operators, care must be taken to prevent overflow, division by zero, etc.
<table>
<tr><td>Operators</td></tr>
<tr><td>+</td></tr>
<tr><td>-</td></tr>
<tr><td>*</td></tr>
<tr><td>/</td></tr>
<tr><td>//</td></tr>
<tr><td>**</td></tr>
</table>
End of explanation
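A small sketch, added here for illustration (it is not part of the original notebook), of the per-pixel arithmetic described above; it reuses the float-cast img and the myshow helper defined earlier:
half = img / 2 # image and scalar
myshow(img + half) # two images with the same pixel type
myshow(img - 100) # subtraction with a scalar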
img = sitk.ReadImage(fdata("cthead1.png"))
myshow(img)
Explanation: Division Operators
All three Python division operators are implemented __floordiv__, __truediv__, and __div__.
The true division's output is a double pixel type.
See PEP 238 to see why Python changed the division operator in Python 3.
Bitwise Logic Operators
<table>
<tr><td>Operators</td></tr>
<tr><td>&</td></tr>
<tr><td>|</td></tr>
<tr><td>^</td></tr>
<tr><td>~</td></tr>
</table>
End of explanation
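Another short sketch added for illustration (not from the original notebook), exercising the division and bitwise operators listed above on the image loaded in the previous cell:
f = sitk.Cast(img, sitk.sitkFloat32)
myshow(f / 2) # true division by a scalar gives a floating-point result
u = sitk.Cast(img, sitk.sitkUInt8)
myshow(u // 64) # floor division on an integer image
myshow(~u) # bitwise NOT on an integer image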
img = sitk.ReadImage(fdata("cthead1.png"))
myshow(img)
Explanation: Comparative Operators
<table>
<tr><td>Operators</td></tr>
<tr><td>></td></tr>
<tr><td>>=</td></tr>
<tr><td><</td></tr>
<tr><td><=</td></tr>
<tr><td>==</td></tr>
</table>
These comparative operators follow the same convention as the rest of SimpleITK for binary images. They have the pixel type sitkUInt8 with values of 0 and 1.
End of explanation
myshow(img > 90)
myshow(img > 150)
myshow((img > 90) + (img > 150))
Explanation: Amazingly, this makes common trivial tasks really trivial
End of explanation |
1,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2017 Google LLC.
Step1: # Preliminary Work | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2017 Google LLC.
End of explanation
from __future__ import print_function
import tensorflow as tf
c = tf.constant('Hello, world!')
with tf.Session() as sess:
print(sess.run(c))
Explanation: # Preliminary Work: Hello World
Learning objective: Run a TensorFlow program in the browser.
Here is a "Hello World" TensorFlow program:
End of explanation |
1,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split("\n")]
target_id_text = [[target_vocab_to_int[word] for word in sentence.split()] + [target_vocab_to_int['<EOS>']] for sentence in target_text.split("\n")]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
inputs_ = tf.placeholder(tf.int32, [None, None], name="input")
targets_ = tf.placeholder(tf.int32, [None, None], name="target")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
keep_probability = tf.placeholder(tf.float32, name="keep_prob")
target_sequence_length = tf.placeholder(tf.int32, [None], name="target_sequence_length")
max_target_length = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return inputs_, targets_, learning_rate, keep_probability, target_sequence_length, max_target_length, source_sequence_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return decoder_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
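As a concrete illustration (added here, with made-up ids) of what this preprocessing does, assuming target_vocab_to_int['<GO>'] is 1:
# target_data batch row: [12, 7, 3, <EOS id>]
# decoder input row: [1, 12, 7, 3] (last id dropped, <GO> id prepended)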
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
# enc_embeddings = tf.Variable(tf.random_uniform([source_vocab_size, encoding_embedding_size], -1, 1))
# embed = tf.nn.embedding_lookup(enc_embeddings, rnn_inputs)
def lstm_cell():
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, sequence_length=source_sequence_length,dtype=tf.float32)
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input,
sequence_length=target_sequence_length,
name = "training_helper")
basic_decode = tf.contrib.seq2seq.BasicDecoder(cell=dec_cell,
helper=training_helper,
initial_state=encoder_state,
output_layer=output_layer)
decoder_output,_ = tf.contrib.seq2seq.dynamic_decode(decoder=basic_decode,
maximum_iterations=max_summary_length)
return decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
start_tokens = tf.fill([batch_size], start_of_sequence_id)
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embedding=dec_embeddings,
start_tokens=start_tokens,
end_token=end_of_sequence_id)
inference_decoder = tf.contrib.seq2seq.BasicDecoder(cell=dec_cell, helper=inference_helper,
initial_state=encoder_state, output_layer=output_layer)
outputs, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished = True,
maximum_iterations=max_target_sequence_length)
return outputs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
# embed = tf.contrib.layers.embed_sequence(dec_input, target_vocab_size, decoding_embedding_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size], -1, 1))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
def lstm_cell():
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
dec_cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
output_layer = Dense(target_vocab_size,
kernel_initializer=tf.truncated_normal_initializer(mean=0.0,
stddev=0.1))
# outputs, final_state = tf.nn.dynamic_rnn(dec_cell, dec_embed_input, sequence_length=target_sequence_length,dtype=tf.float32)
# output_layer = tf.contrib.layers.fully_connected(outputs, target_vocab_size)
with tf.variable_scope("decode") as decoding_scope:
training_logits = decoding_layer_train(encoder_state,
dec_cell,
dec_embed_input,
target_sequence_length,
max_target_sequence_length,
output_layer,
keep_prob)
decoding_scope.reuse_variables()
inference_logits = decoding_layer_infer(encoder_state,
dec_cell,
dec_embeddings,
target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'],
max_target_sequence_length,
target_vocab_size,
output_layer,
batch_size,
keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
encode_output, encode_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
enc_embedding_size)
decoder_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_logits, inference_logits = decoding_layer(decoder_input, encode_state,
target_sequence_length,
max_target_sentence_length,
rnn_size, num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size, keep_prob,
dec_embedding_size)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.8
display_step = 100
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>'] for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
1,504 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I would like to delete selected columns in a numpy.array . This is what I do: | Problem:
import numpy as np
a = np.array([[np.nan, 2., 3., np.nan],
[1., 2., 3., 9]])
z = np.any(np.isnan(a), axis = 0)
a = a[:, ~z] |
1,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Expressions
Expressions in Python are like expressions in mathematics.
Every expression is built from values (e.g. the numbers 1, 2, 3, ...) and operators (+, -, ...).
Types
Every value is characterized by a particular type.
And a type is
Step1: Variables
A variable is a name with which a given value is associated.
Valid variable names
A variable name may contain uppercase and lowercase letters, digits, and the _ character.
A variable name must start with a letter or _.
Python keywords cannot be used as variable names.
Recommendations for naming variables
Names should be descriptive and explain what
the variable is used for. For example, a suitable name for a person's name is
person_name, while x is an unsuitable name.
Only Latin letters should be used.
In Python the convention is that variables always start with a lowercase letter and
contain only lowercase letters, with each subsequent word separated from
the previous one by the _ character.
A variable name should be neither too long nor too
short; it simply has to make clear what the variable is for in
the context where it is used.
Be careful with uppercase and lowercase letters, since Python
distinguishes between them. For example, age and Age are different variables.
Working with variables
Step2: What do we need to write to increase the value of count by 1 (assume we do not know what the value of count is)?
Step3: Errors | Python Code:
2 * 3 + 2
2 * (3 + 2)
Explanation: Expressions
Expressions in Python are like expressions in mathematics.
Every expression is built from values (e.g. the numbers 1, 2, 3, ...) and operators (+, -, ...).
Types
Every value is characterized by a particular type.
And a type is:
- a set of values
- a set of operations that can be performed on those values
Integers (type int)
values | operations
--- | ---
..., -3, -2, -1, 0, 1, 2, 3, ... | +, -, *, /, //, %, **
Real numbers (floating-point numbers, type float)
values | operations
--- | ---
-0.1, -0.11, ..., 0.0, ..., 0.1, ... | +, -, *, /, //, %, **
### Character strings (type str)
values | operations
--- | ---
"hello", "goodbye", ... | +
## Operator precedence
1. ** (exponentiation)
2. - (unary minus)
3. *, /, //, %
4. +, -
End of explanation
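A few extra expressions (added here for illustration, not part of the original notebook) that exercise the operator tables and the precedence rules above:
2 ** 3 + 1 # ** binds tighter than +, so this is 8 + 1 = 9
7 // 2 # floor division: 3
7 % 2 # remainder: 1
"hello" + ", " + "goodbye" # + concatenates strings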
c = 10 # number of coins - too short a name
number_of_coins = 10 # an overly detailed name
coinsCount = 10 # OK, but for Java
coins_count = 10 # OK
# Giving a variable a value is called `assignment`
count = 1
# When Python encounters a variable in an expression, it replaces it with its value
print(count + 1)
# Variables are called variables because their value can change
count = 2
print(count + 1)
Explanation: Variables
A variable is a name with which a given value is associated.
Valid variable names
A variable name may contain uppercase and lowercase letters, digits, and the _ character.
A variable name must start with a letter or _.
Python keywords cannot be used as variable names.
Recommendations for naming variables
Names should be descriptive and explain what
the variable is used for. For example, a suitable name for a person's name is
person_name, while x is an unsuitable name.
Only Latin letters should be used.
In Python the convention is that variables always start with a lowercase letter and
contain only lowercase letters, with each subsequent word separated from
the previous one by the _ character.
A variable name should be neither too long nor too
short; it simply has to make clear what the variable is for in
the context where it is used.
Be careful with uppercase and lowercase letters, since Python
distinguishes between them. For example, age and Age are different variables.
Working with variables
End of explanation
count = 1
count = count + 1
print(count)
Explanation: What do we need to write to increase the value of count by 1 (assume we do not know what the value of count is)?
End of explanation
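(Added note, not in the original notebook: Python also offers an augmented assignment that does the same thing.)
count += 1 # shorthand for count = count + 1
print(count)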
my var = 1 # invalid variable name (contains a space), so this raises a SyntaxError
price = 1
print(pirce) # misspelled name, so this raises a NameError
Explanation: Errors
End of explanation |
1,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https
Step1: Download and prepare the dataset
We'll use a language dataset provided by http
Step2: Limit the size of the dataset to experiment faster (optional)
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data)
Step3: Create a tf.data dataset
Step4: Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence.
<img src="https
Step5: Define the optimizer and the loss function
Step6: Checkpoints (Object-based saving)
Step7: Training
Pass the input through the encoder which return encoder output and the encoder hidden state.
The encoder output, encoder hidden state and the decoder input (which is the start token) is passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients and apply it to the optimizer and backpropagate.
Step8: Translate
The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
Note
Step9: Restore the latest checkpoint and test | Python Code:
from __future__ import absolute_import, division, print_function
# Import TensorFlow >= 1.10 and enable eager execution
import tensorflow as tf
tf.enable_eager_execution()
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import time
print(tf.__version__)
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation using tf.keras and eager execution. This is an advanced example that assumes some knowledge of sequence to sequence models.
After training the model in this notebook, you will be able to input a Spanish sentence, such as "¿todavia estan en casa?", and return the English translation: "are you still at home?"
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
End of explanation
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='https://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
# adding a start and an end token to the sentence
# so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return word_pairs
# This class creates a word -> index mapping (e.g., "dad" -> 5) and vice-versa
# (e.g., 5 -> "dad") for each language,
class LanguageIndex():
def __init__(self, lang):
self.lang = lang
self.word2idx = {}
self.idx2word = {}
self.vocab = set()
self.create_index()
def create_index(self):
for phrase in self.lang:
self.vocab.update(phrase.split(' '))
self.vocab = sorted(self.vocab)
self.word2idx['<pad>'] = 0
for index, word in enumerate(self.vocab):
self.word2idx[word] = index + 1
for word, index in self.word2idx.items():
self.idx2word[index] = word
def max_length(tensor):
return max(len(t) for t in tensor)
def load_dataset(path, num_examples):
# creating cleaned input, output pairs
pairs = create_dataset(path, num_examples)
# index language using the class defined above
inp_lang = LanguageIndex(sp for en, sp in pairs)
targ_lang = LanguageIndex(en for en, sp in pairs)
# Vectorize the input and target languages
# Spanish sentences
input_tensor = [[inp_lang.word2idx[s] for s in sp.split(' ')] for en, sp in pairs]
# English sentences
target_tensor = [[targ_lang.word2idx[s] for s in en.split(' ')] for en, sp in pairs]
# Calculate max_length of input and output tensor
# Here, we'll set those to the longest sentence in the dataset
max_length_inp, max_length_tar = max_length(input_tensor), max_length(target_tensor)
# Padding the input and output tensor to the maximum length
input_tensor = tf.keras.preprocessing.sequence.pad_sequences(input_tensor,
maxlen=max_length_inp,
padding='post')
target_tensor = tf.keras.preprocessing.sequence.pad_sequences(target_tensor,
maxlen=max_length_tar,
padding='post')
return input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_tar
Explanation: Download and prepare the dataset
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
Add a start and end token to each sentence.
Clean the sentences by removing special characters.
Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
Pad each sentence to a maximum length.
End of explanation
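As a quick sanity check of the preprocessing above (an added illustration; the exact output string is only indicative):
print(preprocess_sentence(u"May I borrow this book?"))
# should print something like: <start> may i borrow this book ? <end>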
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_targ = load_dataset(path_to_file, num_examples)
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val)
Explanation: Limit the size of the dataset to experiment faster (optional)
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):
End of explanation
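It can also help to glance at what the smaller dataset produced. The numbers below depend on the sampled 30,000 pairs, so treat this added check as illustrative only:
print(max_length_inp, max_length_targ)                   # padded sequence lengths
print(len(inp_lang.word2idx), len(targ_lang.word2idx))   # vocabulary sizes (including <pad>)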
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
N_BATCH = BUFFER_SIZE//BATCH_SIZE
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word2idx)
vocab_tar_size = len(targ_lang.word2idx)
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
Explanation: Create a tf.data dataset
End of explanation
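A minimal added sketch to confirm the batch shapes coming out of the pipeline (assumes eager execution is enabled, as in the rest of this notebook):
example_input_batch, example_target_batch = next(iter(dataset))
print(example_input_batch.shape)   # (BATCH_SIZE, max_length_inp)
print(example_target_batch.shape)  # (BATCH_SIZE, max_length_tar)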
def gru(units):
  # If you have a GPU, we recommend using CuDNNGRU (it provides roughly a 3x speedup over GRU);
  # the code below picks it automatically.
if tf.test.is_gpu_available():
return tf.keras.layers.CuDNNGRU(units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.enc_units)
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.dec_units)
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.W1 = tf.keras.layers.Dense(self.dec_units)
self.W2 = tf.keras.layers.Dense(self.dec_units)
self.V = tf.keras.layers.Dense(1)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
# hidden shape == (batch_size, hidden size)
# hidden_with_time_axis shape == (batch_size, 1, hidden size)
# we are doing this to perform addition to calculate the score
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying tanh(FC(EO) + FC(H)) to self.V
score = self.V(tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * enc_output
context_vector = tf.reduce_sum(context_vector, axis=1)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size * 1, vocab)
x = self.fc(output)
return x, state, attention_weights
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.dec_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
Explanation: Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence.
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape (batch_size, max_length, hidden_size) and the encoder hidden state of shape (batch_size, hidden_size).
Here are the equations that are implemented:
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
We're using Bahdanau attention. Let's decide on notation before writing the simplified form:
FC = Fully connected (dense) layer
EO = Encoder output
H = hidden state
X = input to the decoder
And the pseudo-code:
score = FC(tanh(FC(EO) + FC(H)))
attention weights = softmax(score, axis = 1). Softmax by default is applied on the last axis but here we want to apply it on the 1st axis, since the shape of score is (batch_size, max_length, 1). Max_length is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
context vector = sum(attention weights * EO, axis = 1). Same reason as above for choosing axis as 1.
embedding output = The input to the decoder X is passed through an embedding layer.
merged vector = concat(embedding output, context vector)
This merged vector is then given to the GRU
The shapes of all the vectors at each step have been specified in the comments in the code:
End of explanation
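Before training, a quick shape check of the attention pipeline can catch wiring mistakes early. This is an added sketch (it consumes one batch from the training dataset):
sample_inp, _ = next(iter(dataset))
sample_hidden = encoder.initialize_hidden_state()
sample_enc_out, sample_enc_hidden = encoder(sample_inp, sample_hidden)
print(sample_enc_out.shape)        # (batch_size, max_length_inp, units)
sample_dec_input = tf.expand_dims([targ_lang.word2idx['<start>']] * BATCH_SIZE, 1)
sample_logits, _, sample_attention = decoder(sample_dec_input, sample_enc_hidden, sample_enc_out)
print(sample_logits.shape)         # (batch_size, vocab_tar_size)
print(sample_attention.shape)      # (batch_size, max_length_inp, 1)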
optimizer = tf.train.AdamOptimizer()
def loss_function(real, pred):
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
return tf.reduce_mean(loss_)
Explanation: Define the optimizer and the loss function
End of explanation
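The mask in loss_function simply zeroes out the padded positions of the target so they contribute nothing to the loss; a tiny added illustration with arbitrary values:
real_example = np.array([7, 42, 3, 0, 0])  # 0 is the <pad> id
print(1 - np.equal(real_example, 0))       # -> [1 1 1 0 0]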
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
Explanation: Checkpoints (Object-based saving)
End of explanation
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
total_loss += batch_loss
variables = encoder.variables + decoder.variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / N_BATCH))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
Explanation: Training
Pass the input through the encoder, which returns the encoder output and the encoder hidden state.
The encoder output, the encoder hidden state and the decoder input (which is the start token) are passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients, apply them to the optimizer, and backpropagate.
End of explanation
def evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word2idx[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs], maxlen=max_length_inp, padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input, dec_hidden, enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.idx2word[predicted_id] + ' '
if targ_lang.idx2word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
plt.show()
def translate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
result, sentence, attention_plot = evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
Explanation: Translate
The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
Note: The encoder output is calculated only once for one input.
End of explanation
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'esta es mi vida.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate(u'todavia estan en casa?', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
# wrong translation
translate(u'trata de averiguarlo.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
Explanation: Restore the latest checkpoint and test
End of explanation |
1,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Selecting source and uid based on some criteria
Step1: Select calibrator with Flux > 0.1 Jy
Calibrator list here is downloaded (2017-06-23) from ALMA calibrator source catalogue (https
Step2: We can found 1685 objects with F>0.1Jy* from ALMA Calibrator source catalogue
*from the first Band found in the list, usually B3
Query all information about the projects that use these objects as calibrator
using astroquery
Step3: result of the query 'data' is in the form of Pandas DataFrame, but we save it in sql database also.
Note
Step4: write the report in a file | Python Code:
import sys
sys.path.append('../src/')
from ALMAQueryCal import *
q = queryCal()
Explanation: Selecting source and uid based on some criteria
End of explanation
fileCal = "alma_sourcecat_searchresults.csv"
listCal = q.readCal(fileCal, fluxrange=[0.1, 9999999999])
print "Number of selected sources: ", len(listCal)
Explanation: Select calibrator with Flux > 0.1 Jy
Calibrator list here is downloaded (2017-06-23) from ALMA calibrator source catalogue (https://almascience.eso.org/sc/)
End of explanation
data = q.queryAlma(listCal, public = True, savedb=True, dbname='calibrators_gt_0.1Jy.db')
Explanation: We find 1685 objects with F>0.1Jy* in the ALMA Calibrator source catalogue
*from the first Band found in the list, usually B3
Query all information about the projects that use these objects as calibrator
using astroquery
End of explanation
report = q.selectDeepfield_fromsql("calibrators_gt_0.1Jy.db", maxFreqRes=999999999, array='12m', \
excludeCycle0=True, selectPol=False, minTimeBand={3:60., 6:60., 7:60.}, verbose=True, silent=True)
Explanation: The result of the query, 'data', is a Pandas DataFrame, but we also save it to an SQL database.
Note: many of these sources are only listed in the calibrator catalogue and do not yet appear in any public data.
Selection criteria
*We have already selected calibrators with F>0.1 Jy and public data only
Select sources with other criteria:
ignore freq res (for imaging)
ignore polarization product
excludeCycle0 data
only accept data from 12m array (or 12m7m)
minimum integration time per band is 1h for B3, B6, and B7 (after filtering all above)
End of explanation
q.writeReport(report, "report6_nonALMACAL.txt", silent=True)
Explanation: write the report in a file
End of explanation |
1,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras 利用資料擴增法訓練貓狗分類器
先從 Kaggle 下載資料集,這裡需要註冊 Kaggle 的帳號,並且取得 API Key,記得置換為自己的 API Token 才能下載資料。
Step1: 看看資料的基本結構,貓與狗訓練資料夾各有 4000 張,測試資料夾各有 1000 張影像
Step2: 資料處理
我們上面從 Kaggle 下載的檔案全部都是圖片,由於我們想要從頭訓練一個可以分辨貓或狗的 CNN 網路,開始以前需要先將影像資料進行處理,轉換為可以送進 Keras 網路進行訓練的「張量」。並且依據我們選用的損失函數方法,處理對應標準答案的格式。
由於我們今天不會用到全部的資料,我們只會在兩種類別取用 1000 張進行訓練,500 張進行測試。接著利用「資料擴增工法」來提高正確率。
Step3: 接著建構我們需要的網路,使用四層網路組合 CNN,最後的 Full Connection Layer 使用 512 個神經元,Keras 建立 CNN 網路的方法如下:
Step4: 顯示網路的組成架構,整個網路有 3,453,121 個參數需要訓練,這樣的運算量非常需要高速的 GPU 來協助運算。
Step5: 這次要解決的問題屬於「二元分類」,算是 CNN 裡最典型的問題,因此損失函式採用「binary_crossentropy」,優化器選用「RMSprop」
Step6: 再來定義 ImageDataGenerator,ImageDataGenerator 是 Keras 用來讀取影像資料的方法,可以想像為 Dataset Provider 的概念,這樣可以不需要把所有資料都讀進記憶體,如果要訓練的資料影像很多,這會是一個解套的好方。這裡我們只是利用 ImageDataGenerator 來走訪需要訓練的資料,稍後會用來產生更多資料提高模型訓練準確度。
Step7: 剛剛的 batch_size 設定 20,表示每次從 train_generator 抓 20 筆資料送進 Network 進行訓練
我們有 2000 筆資料 (貓與狗各 1000 筆),一共要做 100 次才會做完,這樣訓練一輪就是一個 epoch,訓練如下:
Step8: 觀察圖表分析訓練情況
Step9: 從上面的圖片可以發現,在 5 Epoch 之後就開始走針了,Model 已經出現 Over Fitting 的現象。接下來透過 ImageDataGenerator 來擴增訓練樣本數量。
Keras 利用資料擴增法提高訓練準確性
由於我們總共只用了 2000 筆資料,假設樣本真的很難取得,那麼我們可以透過資料擴增法,將影像資料做一點「變化」,稍微加工一下這樣就有更多資料可以讓模型進行學習。Keras ImageDataGenerator 可以幫助我們實現影像資料擴增,如下:
Step10: 以下我們將每一張影像隨機進行變化,產生四張經過加工的圖片。這樣一來我們的資料集忽然就便多了,如下:
Step11: 重新建立模型,這裡我們有動一點手腳,就是在卷積層傳入全連接層 (Full Connection Layer) 的時候加入了 Dropout Layer,這樣會隨機丟棄 50% 的資訊,用意是不要讓網路的學習過於狹隘,不然很容易造成 Over Fitting,如下:
Step12: 進行模型訓練,這裡設定 batch_size=32, steps_per_epoch=100,相當於透過資料擴增法增加到 3,200 筆訓練資料,運算量很高,需要讓子彈飛一下,如下: | Python Code:
#!pip install kaggle
api_token = {"username":"your_username","key":"your_token"}
import json
import zipfile
import os
if not os.path.exists("/root/.kaggle"):
os.makedirs("/root/.kaggle")
with open('/root/.kaggle/kaggle.json', 'w') as file:
json.dump(api_token, file)
!chmod 600 /root/.kaggle/kaggle.json
if not os.path.exists("/kaggle"):
os.makedirs("/kaggle")
os.chdir('/kaggle')
!kaggle datasets download -d chetankv/dogs-cats-images
!unzip 'dogs-cats-images.zip' > /dev/null
Explanation: Training a cat-and-dog classifier in Keras with data augmentation
First, download the dataset from Kaggle. You need a registered Kaggle account and an API key; remember to substitute your own API token before downloading the data.
End of explanation
!echo "training_set cats: "
!echo `ls -alh '/kaggle/dataset/training_set/cats' | grep cat | wc -l`
!echo "training_set dogs: "
!echo `ls -alh '/kaggle/dataset/training_set/dogs' | grep dog | wc -l`
!echo "test_set cats: "
!echo `ls -alh '/kaggle/dataset/test_set/cats' | grep cat | wc -l`
!echo "test_set dogs: "
!echo `ls -alh '/kaggle/dataset/test_set/dogs' | grep dog | wc -l`
Explanation: Look at the basic structure of the data: the cat and dog training folders hold 4,000 images each, and the test folders hold 1,000 images each
End of explanation
import os, shutil
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '/kaggle/dataset'
# The directory where we will
# store our smaller dataset
base_dir = '/play'
if not os.path.exists(base_dir):
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
if not os.path.exists(train_dir):
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
if not os.path.exists(validation_dir):
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
if not os.path.exists(test_dir):
os.mkdir(test_dir)
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
if not os.path.exists(train_cats_dir):
os.mkdir(train_cats_dir)
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
if not os.path.exists(train_dogs_dir):
os.mkdir(train_dogs_dir)
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
if not os.path.exists(validation_cats_dir):
os.mkdir(validation_cats_dir)
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
if not os.path.exists(validation_dogs_dir):
os.mkdir(validation_dogs_dir)
# Directory with our validation cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
if not os.path.exists(test_cats_dir):
os.mkdir(test_cats_dir)
# Directory with our validation dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
if not os.path.exists(test_dogs_dir):
os.mkdir(test_dogs_dir)
# Copy first 1000 cat images to train_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1, 1001)]
for fname in fnames:
src = os.path.join(original_dataset_dir, 'training_set', 'cats', fname)
dst = os.path.join(train_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to validation_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(4001, 4501)]
for fname in fnames:
src = os.path.join(original_dataset_dir, 'test_set', 'cats', fname)
dst = os.path.join(validation_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to test_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(4501, 5001)]
for fname in fnames:
src = os.path.join(original_dataset_dir, 'test_set', 'cats', fname)
dst = os.path.join(test_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy first 1000 dog images to train_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1, 1001)]
for fname in fnames:
src = os.path.join(original_dataset_dir, 'training_set', 'dogs', fname)
dst = os.path.join(train_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to validation_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(4001, 4501)]
for fname in fnames:
src = os.path.join(original_dataset_dir, 'test_set', 'dogs', fname)
dst = os.path.join(validation_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to test_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(4501, 5001)]
for fname in fnames:
src = os.path.join(original_dataset_dir, 'test_set', 'dogs', fname)
dst = os.path.join(test_dogs_dir, fname)
shutil.copyfile(src, dst)
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))
Explanation: Data preparation
The files downloaded from Kaggle above are all images. Since we want to train a CNN from scratch that can tell cats from dogs, the images first have to be processed into tensors that can be fed into a Keras network, and the labels have to be formatted to match the loss function we choose.
Since we will not use all of the data today, we take only 1,000 images per class for training and 500 for testing, and then use data augmentation to improve accuracy.
End of explanation
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
Explanation: Next we build the network we need: a CNN with four convolution blocks and 512 neurons in the final fully connected layer. Building the CNN in Keras looks like this:
End of explanation
model.summary()
Explanation: Show the architecture of the network: the whole model has 3,453,121 trainable parameters, so a fast GPU helps a lot with this amount of computation.
End of explanation
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
Explanation: The problem we are solving is binary classification, the most typical CNN task, so the loss function is binary_crossentropy and the optimizer is RMSprop
End of explanation
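For intuition, binary cross-entropy penalizes a confident wrong answer much more than a confident right one. A small stand-alone sketch (added here, not part of the original notebook):
import numpy as np
def bce(y_true, y_pred, eps=1e-7):
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(bce(1.0, 0.9))  # small loss: confident and correct
print(bce(1.0, 0.1))  # large loss: confident and wrong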
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
Explanation: Next we define the ImageDataGenerator. ImageDataGenerator is the Keras way of reading image data (think of it as a dataset provider), so we do not need to load every image into memory, which helps a lot when there are many training images. Here we only use it to iterate over the training data; later it will also generate extra data to improve accuracy.
End of explanation
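flow_from_directory infers the class labels from the sub-folder names; a quick added check of that mapping (the exact dict depends on the folder names):
print(train_generator.class_indices)  # e.g. {'cats': 0, 'dogs': 1}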
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50)
Explanation: The batch_size above was set to 20, which means each step pulls 20 samples from train_generator into the network
With 2,000 samples (1,000 cats and 1,000 dogs) it takes 100 steps to get through them all, which makes one epoch. Training looks like this:
End of explanation
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Explanation: Inspect the training curves
End of explanation
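Since the same curves are plotted again after the augmented training run below, a small helper keeps that step shorter. This is an optional added sketch; the original notebook simply repeats the plotting code inline:
def plot_history(history):
    acc, val_acc = history.history['acc'], history.history['val_acc']
    loss, val_loss = history.history['loss'], history.history['val_loss']
    epochs = range(len(acc))
    plt.plot(epochs, acc, 'bo', label='Training acc')
    plt.plot(epochs, val_acc, 'b', label='Validation acc')
    plt.title('Training and validation accuracy')
    plt.legend()
    plt.figure()
    plt.plot(epochs, loss, 'bo', label='Training loss')
    plt.plot(epochs, val_loss, 'b', label='Validation loss')
    plt.title('Training and validation loss')
    plt.legend()
    plt.show()
# usage after any training run: plot_history(history)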
datagen = ImageDataGenerator(
rotation_range=40, # random rotation angle, in degrees
width_shift_range=0.2, # random horizontal shift, as a fraction of the width
height_shift_range=0.2, # random vertical shift, as a fraction of the height
shear_range=0.2, # random shear angle
zoom_range=0.2, # random zoom range
horizontal_flip=True, # randomly flip images left-right
fill_mode='nearest') # how to fill pixels that appear after a transform; 'nearest' copies the closest pixel
Explanation: The plots above show that training goes off the rails after about 5 epochs: the model is overfitting. Next we use ImageDataGenerator to augment the training samples.
Improving training accuracy in Keras with data augmentation
Since we only used 2,000 samples in total, suppose samples really are hard to obtain; we can then use data augmentation to apply small variations to the images, giving the model more data to learn from. The Keras ImageDataGenerator implements image augmentation, as follows:
End of explanation
# This is module with image preprocessing utilities
from keras.preprocessing import image
fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]
# We pick one image to "augment"
img_path = fnames[3]
# Read the image and resize it
img = image.load_img(img_path, target_size=(150, 150))
# Convert it to a Numpy array with shape (150, 150, 3)
x = image.img_to_array(img)
# Reshape it to (1, 150, 150, 3)
x = x.reshape((1,) + x.shape)
# The .flow() command below generates batches of randomly transformed images.
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
plt.figure(i)
imgplot = plt.imshow(image.array_to_img(batch[0]))
i += 1
if i % 4 == 0:
break
plt.show()
Explanation: Below, each image is randomly transformed to produce four processed versions, so our dataset suddenly becomes much larger, as follows:
End of explanation
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5)) # add Dropout 0.5 to randomly drop 50% of the activations
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
model.summary()
Explanation: Rebuild the model. We make one small change: a Dropout layer is added where the convolutional layers feed into the fully connected layer, randomly dropping 50% of the information so that the network does not learn too narrowly, which would easily cause overfitting, as follows:
End of explanation
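As a rough picture of what Dropout(0.5) does at training time, here is a numpy sketch added for illustration (the real layer is only active during training and rescales the surviving units):
import numpy as np
rng = np.random.default_rng(0)
x_toy = np.ones(10)
keep = rng.random(10) >= 0.5   # each unit survives with probability 0.5
print(x_toy * keep / 0.5)      # survivors are scaled by 1/keep_prob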
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50)
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Explanation: Train the model. Here batch_size=32 and steps_per_epoch=100, which through data augmentation is equivalent to roughly 3,200 training samples per epoch. This is computationally heavy, so let it run for a while, as follows:
End of explanation |
1,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The typical problem we have to solve
Step1: Create the distribution and visualize it
Step2: Fit a function to the distribution and obtain its properties
Step3: Now do the fit | Python Code:
#Necessary imports
# lib for numeric calculations
import numpy as np
# standard lib for python plotting
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# seaborn lib for more option in DS
import seaborn as sns
# so to obtain pseudo-random numbers
import random
# fits a curve to data
from scipy.optimize import curve_fit
# Lets say that a physics process gives us a gaussian distribution with a
# mean value somewhere between 3 and 5
# and a sigma with a value around 1
# We do not know exactly their precise number and this is what we want to figure out with the fit.
mu = random.randrange(3, 5, 1)
sigma = 0.1 * random.randrange(8, 12, 1)
# Create the data
data = np.random.normal(mu, sigma, size=10000)
bin_values, bin_edges = np.histogram(data, density=False, bins=100)
bin_centres = (bin_edges[:-1] + bin_edges[1:])/2
# Define model function to be used to fit to the data above:
def gauss(x, *p):
A, mu, sigma = p
return A*np.exp(-(x-mu)**2/(2.*sigma**2))
# p0 is the initial guess for the fitting coefficients (A, mu and sigma above)
p0 = [1., 0., 1.]
coeff, var_matrix = curve_fit(gauss, bin_centres, bin_values, p0=p0)
# Get the fitted curve
hist_fit = gauss(bin_centres, *coeff)
#plt.plot(bin_centres, hist, label='Test data')
plt.plot(bin_centres, hist_fit, label='Fitted data', color='red', linewidth=2)
_ = plt.hist(data, bins=100, color='blue', alpha=.3)
# Finally, lets get the fitting parameters, i.e. the mean and standard deviation:
print 'Fitted mean = ', coeff[1]
print 'Fitted standard deviation = ', coeff[2]
plt.show()
Explanation: The typical problem we have to solve:
Let's say we want to study the properties of a distribution, e.g. one produced by a physics process.
Typically we look at the histogram and want to see which class of distribution it belongs to.
We can do this by fitting the data and checking the residuals (goodness of fit).
End of explanation
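A quick goodness-of-fit check on the fit above (an added sketch assuming Poisson errors on the bin counts):
residuals = bin_values - hist_fit
errors = np.sqrt(np.clip(bin_values, 1, None))  # avoid dividing by zero in empty bins
chi2_red = np.sum((residuals / errors) ** 2) / (len(bin_values) - len(coeff))
print(chi2_red)  # values near 1 indicate a reasonable fit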
# Lets say that a physics process gives us a gaussian distribution with a
# mean value somewhere between 3 and 5
# while it has a broad sigma with a value around 1
mu = random.randrange(3, 5, 1)
sigma = 0.1 * random.randrange(8, 12, 1)
# Lets create now the distribution
data = np.random.normal(mu, sigma, 1000)
# Lets visualize it
_ = plt.hist(data, bins=100, alpha=.5)
Explanation: Create the distribution and visualize it
End of explanation
# get the binned data
# density = True => histo integral = 1, normalized
hist_data = np.histogram(data, density=True, bins=100)
# Define model function to be used to fit to the data above, we assume here that we know that is a gaussian function:
def gauss(x, *p):
A, mu, sigma = p
return A * np.exp( - (x - mu) ** 2 / (2. * sigma ** 2))
# get distribution information
bin_edges = hist_data[1]
bin_centres = (bin_edges[:-1] + bin_edges[1:]) / 2
bin_values = hist_data[0]
# p0 is the initial guess for the fitting coefficients (A, mu and sigma above)
p0 = [1., 4., 1.]
# Fit the data
coeff, var_matrix = curve_fit(gauss, bin_centres, bin_values, p0=p0)
print "The fit coeff. are:", coeff
# Obtain the final fit function
hist_fit = gauss(bin_centres, *coeff)
hist_data = plt.hist(data, bins=100, alpha = 0.5)
plt.plot(bin_centres, hist_fit, label='Fitted data')
#hist_data = plt.hist(s, bins=100)
Explanation: Fit a function to the distribution and obtain its properties
End of explanation
# Get the fitted curve
hist_fit = gauss(bin_centres, *coeff)
hist_data = plt.hist(data, bins=100, alpha = 0.5)  # use the 'data' array created above ('s' was undefined)
plt.plot(bin_centres, hist_fit, label='Fitted data')
coeff
Explanation: Now do the fit
End of explanation |
1,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Misdemeanor Amounts New York City
Data Bootcamp Final Project (Fall 2016)
by Zak Kukoff ([email protected])
About this project
There's been much discussion in New York about the city's fluctuating amount of petty crime under Mayor Michael Bloomberg. Using publicly available data from the City of New York's website, I wanted to check and see if there really was a change in petty crime over the course of his three terms as Mayor of New York City. For the purposes of this project, I chose to consider only the rate of misdemeanors in New York City as representative of petty crime.
Importing packages and data
I began by importing a variety of Python packages that would allow me to properly plot and analyze the data found on the City's website. I then imported the data found on the City of New York's data website
Step1: After importing the required packages, I then imported the data I previously downloaded. For the below code, replace the pathway to the excel spreadsheet with its location on your computer. For the sake of convinience, I created a CSV file from the data so that it would be easier to work with directly.
Step2: This gives us a sense for the data we have to work with | Python Code:
# import packages
import pandas as pd
import matplotlib.pyplot as plt
import sys
from itertools import cycle, islice
import math
import numpy as np
%matplotlib inline
Explanation: Misdemeanor Amounts New York City
Data Bootcamp Final Project (Fall 2016)
by Zak Kukoff ([email protected])
About this project
There's been much discussion in New York about the city's fluctuating amount of petty crime under Mayor Michael Bloomberg. Using publicly available data from the City of New York's website, I wanted to check and see if there really was a change in petty crime over the course of his three terms as Mayor of New York City. For the purposes of this project, I chose to consider only the rate of misdemeanors in New York City as representative of petty crime.
Importing packages and data
I began by importing a variety of Python packages that would allow me to properly plot and analyze the data found on the City's website. I then imported the data found on the City of New York's data website: https://data.cityofnewyork.us/Public-Safety/Historical-New-York-City-Crime-Data/hqhv-9zeg.
That dataset includes a variety of crime data on not only misdemeanors but also on felonies and violation offenses. For this project, I'll only be examining the file called Misdemeanor Offenses 2000-2011.xls
End of explanation
path = '/Users/zak/Dropbox/*Classes Fall 2016/Data Bootcamp/Misdemeanor Data.csv'
data = pd.read_csv(path)
data.columns
Explanation: After importing the required packages, I then imported the data I previously downloaded. For the below code, replace the pathway to the excel spreadsheet with its location on your computer. For the sake of convinience, I created a CSV file from the data so that it would be easier to work with directly.
End of explanation
data.plot(kind ='bar')
path = '/Users/zak/Dropbox/*Classes Fall 2016/Data Bootcamp/Misdemeanor Data.csv'
newdata = pd.read_csv(path, skiprows = 0-17, usecols = [3,4,5,6,7,8,9,10])
newd1 = newdata.transpose()
newd1.plot(kind='bar')
Explanation: This gives us a sense for the data we have to work with: at both the individual offense and total offenses levels, we have data on the number of offenses from 2000-2011. Because this doesn't fully cover Mayor Bloomberg's third term in office (which began in 2010 and ended in 2013), we'll only be examining his first two terms in office: 2002 through 2009.
Plotting the total number of offenses during Mayor Bloomberg's two terms
To begin with, I'll plot all of the petty crimes committed in New York over the full period of the dataset. The following labels apply to the below numbers on the graph:
0 MISDEMEANOR POSSESSION OF STOLEN PROPERTY
1 MISDEMEANOR SEX CRIMES (4)
2 MISDEMEANOR DANGEROUS DRUGS (1)
3 MISDEMEANOR DANGEROUS WEAPONS (5)
4 PETIT LARCENY
5 ASSAULT 3 & RELATED OFFENSES
6 INTOXICATED & IMPAIRED DRIVING
7 VEHICLE AND TRAFFIC LAWS
8 MISD. CRIMINAL MISCHIEF & RELATED OFFENSES
9 CRIMINAL TRESPASS
10 UNAUTHORIZED USE OF A VEHICLE
11 OFFENSES AGAINST THE PERSON (7)
12 OFFENSES AGAINST PUBLIC ADMINISTRATION (2)
13 ADMINISTRATIVE CODE (6)
14 FRAUDS (3)
15 AGGRAVATED HARASSMENT 2
16 OTHER MISDEMEANORS (8)
17 TOTAL MISDEMEANOR OFFENSES
Then, I'll plot the total number of crimes over the years 2002-2009. That will give us a baseline idea of the total amount of crime in the city of New York over those years.
End of explanation |
1,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Neural Structured Learning Authors
Step1: Graph regularization for document classification using natural graphs
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Dependencies and imports
Step3: Cora dataset
The Cora dataset is a citation graph where
nodes represent machine learning papers and edges represent citations between
pairs of papers. The task involved is document classification where the goal is
to categorize each paper into one of 7 categories. In other words, this is a
multi-class classification problem with 7 classes.
Graph
The original graph is directed. However, for the purpose of this example, we
consider the undirected version of this graph. So, if paper A cites paper B, we
also consider paper B to have cited A. Although this is not necessarily true, in
this example, we consider citations as a proxy for similarity, which is usually
a commutative property.
Features
Each paper in the input effectively contains 2 features
Step4: Convert the Cora data to the NSL format
In order to preprocess the Cora dataset and convert it to the format required by
Neural Structured Learning, we will run the 'preprocess_cora_dataset.py'
script, which is included in the NSL github repository. This script does the
following
Step5: Global variables
The file paths to the train and test data are based on the command line flag
values used to invoke the 'preprocess_cora_dataset.py' script above.
Step7: Hyperparameters
We will use an instance of HParams to include various hyperparameters and
constants used for training and evaluation. We briefly describe each of them
below
Step10: Load train and test data
As described earlier in this notebook, the input training and test data have
been created by the 'preprocess_cora_dataset.py'. We will load them into two
tf.data.Dataset objects -- one for train and one for test.
In the input layer of our model, we will extract not just the 'words' and the
'label' features from each sample, but also corresponding neighbor features
based on the hparams.num_neighbors value. Instances with fewer neighbors than
hparams.num_neighbors will be assigned dummy values for those non-existent
neighbor features.
Step11: Let's peek into the train dataset to look at its contents.
Step12: Let's peek into the test dataset to look at its contents.
Step14: Model definition
In order to demonstrate the use of graph regularization, we build a base model
for this problem first. We will use a simple feed-forward neural network with 2
hidden layers and dropout in between. We illustrate the creation of the base
model using all model types supported by the tf.Keras framework -- sequential,
functional, and subclass.
Sequential base model
Step16: Functional base model
Step19: Subclass base model
Step20: Create base model(s)
Step21: Train base MLP model
Step23: Evaluate base MLP model
Step24: Train MLP model with graph regularization
Incorporating graph regularization into the loss term of an existing
tf.Keras.Model requires just a few lines of code. The base model is wrapped to
create a new tf.Keras subclass model, whose loss includes graph
regularization.
To assess the incremental benefit of graph regularization, we will create a new
base model instance. This is because base_model has already been trained for a
few iterations, and reusing this trained model to create a graph-regularized
model will not be a fair comparison for base_model.
Step25: Evaluate MLP model with graph regularization | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Neural Structured Learning Authors
End of explanation
!pip install --quiet neural-structured-learning
Explanation: Graph regularization for document classification using natural graphs
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_mlp_cora"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_mlp_cora.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_mlp_cora.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/neural-structured-learning/g3doc/tutorials/graph_keras_mlp_cora.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
Graph regularization is a specific technique under the broader paradigm of
Neural Graph Learning
(Bui et al., 2018). The core
idea is to train neural network models with a graph-regularized objective,
harnessing both labeled and unlabeled data.
In this tutorial, we will explore the use of graph regularization to classify
documents that form a natural (organic) graph.
The general recipe for creating a graph-regularized model using the Neural
Structured Learning (NSL) framework is as follows:
Generate training data from the input graph and sample features. Nodes in
the graph correspond to samples and edges in the graph correspond to
similarity between pairs of samples. The resulting training data will
contain neighbor features in addition to the original node features.
Create a neural network as a base model using the Keras sequential,
functional, or subclass API.
Wrap the base model with the GraphRegularization wrapper class, which
is provided by the NSL framework, to create a new graph Keras model. This
new model will include a graph regularization loss as the regularization
term in its training objective.
Train and evaluate the graph Keras model.
Setup
Install the Neural Structured Learning package.
End of explanation
import neural_structured_learning as nsl
import tensorflow as tf
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print(
"GPU is",
"available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
Explanation: Dependencies and imports
End of explanation
!wget --quiet -P /tmp https://linqs-data.soe.ucsc.edu/public/lbc/cora.tgz
!tar -C /tmp -xvzf /tmp/cora.tgz
Explanation: Cora dataset
The Cora dataset is a citation graph where
nodes represent machine learning papers and edges represent citations between
pairs of papers. The task involved is document classification where the goal is
to categorize each paper into one of 7 categories. In other words, this is a
multi-class classification problem with 7 classes.
Graph
The original graph is directed. However, for the purpose of this example, we
consider the undirected version of this graph. So, if paper A cites paper B, we
also consider paper B to have cited A. Although this is not necessarily true, in
this example, we consider citations as a proxy for similarity, which is usually
a commutative property.
Features
Each paper in the input effectively contains 2 features:
Words: A dense, multi-hot bag-of-words representation of the text in the
paper. The vocabulary for the Cora dataset contains 1433 unique words. So,
the length of this feature is 1433, and the value at position 'i' is 0/1
indicating whether word 'i' in the vocabulary exists in the given paper or
not.
Label: A single integer representing the class ID (category) of the paper.
Download the Cora dataset
End of explanation
!wget https://raw.githubusercontent.com/tensorflow/neural-structured-learning/master/neural_structured_learning/examples/preprocess/cora/preprocess_cora_dataset.py
!python preprocess_cora_dataset.py \
--input_cora_content=/tmp/cora/cora.content \
--input_cora_graph=/tmp/cora/cora.cites \
--max_nbrs=5 \
--output_train_data=/tmp/cora/train_merged_examples.tfr \
--output_test_data=/tmp/cora/test_examples.tfr
Explanation: Convert the Cora data to the NSL format
In order to preprocess the Cora dataset and convert it to the format required by
Neural Structured Learning, we will run the 'preprocess_cora_dataset.py'
script, which is included in the NSL github repository. This script does the
following:
Generate neighbor features using the original node features and the graph.
Generate train and test data splits containing tf.train.Example instances.
Persist the resulting train and test data in the TFRecord format.
End of explanation
### Experiment dataset
TRAIN_DATA_PATH = '/tmp/cora/train_merged_examples.tfr'
TEST_DATA_PATH = '/tmp/cora/test_examples.tfr'
### Constants used to identify neighbor features in the input.
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
Explanation: Global variables
The file paths to the train and test data are based on the command line flag
values used to invoke the 'preprocess_cora_dataset.py' script above.
End of explanation
class HParams(object):
Hyperparameters used for training.
def __init__(self):
### dataset parameters
self.num_classes = 7
self.max_seq_length = 1433
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 1
### model architecture
self.num_fc_units = [50, 50]
### training parameters
self.train_epochs = 100
self.batch_size = 128
self.dropout_rate = 0.5
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
Explanation: Hyperparameters
We will use an instance of HParams to include various hyperparameters and
constants used for training and evaluation. We briefly describe each of them
below:
num_classes: There are a total 7 different classes
max_seq_length: This is the size of the vocabulary and all instances in
the input have a dense multi-hot, bag-of-words representation. In other
words, a value of 1 for a word indicates that the word is present in the
input and a value of 0 indicates that it is not.
distance_type: This is the distance metric used to regularize the sample
with its neighbors.
graph_regularization_multiplier: This controls the relative weight of
the graph regularization term in the overall loss function.
num_neighbors: The number of neighbors used for graph regularization.
This value has to be less than or equal to the max_nbrs command-line
argument used above when running preprocess_cora_dataset.py.
num_fc_units: The number of fully connected layers in our neural
network.
train_epochs: The number of training epochs.
batch_size: Batch size used for training and evaluation.
dropout_rate: Controls the rate of dropout following each fully
connected layer
eval_steps: The number of batches to process before deeming evaluation
is complete. If set to None, all instances in the test set are evaluated.
End of explanation
def make_dataset(file_path, training=False):
Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
def parse_example(example_proto):
Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth label.
# The 'words' feature is a multi-hot, bag-of-words representation of the
# original raw text. A default value is required for examples that don't
# have the feature.
feature_spec = {
'words':
tf.io.FixedLenFeature([HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0,
dtype=tf.int64,
shape=[HPARAMS.max_seq_length])),
'label':
tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above during training.
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i,
NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.FixedLenFeature(
[HPARAMS.max_seq_length],
tf.int64,
default_value=tf.constant(
0, dtype=tf.int64, shape=[HPARAMS.max_seq_length]))
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
label = features.pop('label')
return features, label
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset(TRAIN_DATA_PATH, training=True)
test_dataset = make_dataset(TEST_DATA_PATH)
Explanation: Load train and test data
As described earlier in this notebook, the input training and test data have
been created by the 'preprocess_cora_dataset.py'. We will load them into two
tf.data.Dataset objects -- one for train and one for test.
In the input layer of our model, we will extract not just the 'words' and the
'label' features from each sample, but also corresponding neighbor features
based on the hparams.num_neighbors value. Instances with fewer neighbors than
hparams.num_neighbors will be assigned dummy values for those non-existent
neighbor features.
End of explanation
for feature_batch, label_batch in train_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, 0, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, 0, NBR_WEIGHT_SUFFIX)
print('Batch of neighbor inputs:', feature_batch[nbr_feature_key])
print('Batch of neighbor weights:',
tf.reshape(feature_batch[nbr_weight_key], [-1]))
print('Batch of labels:', label_batch)
Explanation: Let's peek into the train dataset to look at its contents.
End of explanation
for feature_batch, label_batch in test_dataset.take(1):
print('Feature list:', list(feature_batch.keys()))
print('Batch of inputs:', feature_batch['words'])
print('Batch of labels:', label_batch)
Explanation: Let's peek into the test dataset to look at its contents.
End of explanation
def make_mlp_sequential_model(hparams):
Creates a sequential multi-layer perceptron model.
model = tf.keras.Sequential()
model.add(
tf.keras.layers.InputLayer(
input_shape=(hparams.max_seq_length,), name='words'))
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
model.add(
tf.keras.layers.Lambda(lambda x: tf.keras.backend.cast(x, tf.float32)))
for num_units in hparams.num_fc_units:
model.add(tf.keras.layers.Dense(num_units, activation='relu'))
# For sequential models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
model.add(tf.keras.layers.Dropout(hparams.dropout_rate))
model.add(tf.keras.layers.Dense(hparams.num_classes))
return model
Explanation: Model definition
In order to demonstrate the use of graph regularization, we build a base model
for this problem first. We will use a simple feed-forward neural network with 2
hidden layers and dropout in between. We illustrate the creation of the base
model using all model types supported by the tf.Keras framework -- sequential,
functional, and subclass.
Sequential base model
End of explanation
def make_mlp_functional_model(hparams):
Creates a functional API-based multi-layer perceptron model.
inputs = tf.keras.Input(
shape=(hparams.max_seq_length,), dtype='int64', name='words')
# Input is already one-hot encoded in the integer format. We cast it to
# floating point format here.
cur_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))(
inputs)
for num_units in hparams.num_fc_units:
cur_layer = tf.keras.layers.Dense(num_units, activation='relu')(cur_layer)
# For functional models, by default, Keras ensures that the 'dropout' layer
# is invoked only during training.
cur_layer = tf.keras.layers.Dropout(hparams.dropout_rate)(cur_layer)
outputs = tf.keras.layers.Dense(hparams.num_classes)(cur_layer)
model = tf.keras.Model(inputs, outputs=outputs)
return model
Explanation: Functional base model
End of explanation
def make_mlp_subclass_model(hparams):
Creates a multi-layer perceptron subclass model in Keras.
class MLP(tf.keras.Model):
Subclass model defining a multi-layer perceptron.
def __init__(self):
super(MLP, self).__init__()
# Input is already one-hot encoded in the integer format. We create a
# layer to cast it to floating point format here.
self.cast_to_float_layer = tf.keras.layers.Lambda(
lambda x: tf.keras.backend.cast(x, tf.float32))
self.dense_layers = [
tf.keras.layers.Dense(num_units, activation='relu')
for num_units in hparams.num_fc_units
]
self.dropout_layer = tf.keras.layers.Dropout(hparams.dropout_rate)
self.output_layer = tf.keras.layers.Dense(hparams.num_classes)
def call(self, inputs, training=False):
cur_layer = self.cast_to_float_layer(inputs['words'])
for dense_layer in self.dense_layers:
cur_layer = dense_layer(cur_layer)
cur_layer = self.dropout_layer(cur_layer, training=training)
outputs = self.output_layer(cur_layer)
return outputs
return MLP()
Explanation: Subclass base model
End of explanation
# Create a base MLP model using the functional API.
# Alternatively, you can also create a sequential or subclass base model using
# the make_mlp_sequential_model() or make_mlp_subclass_model() functions
# respectively, defined above. Note that if a subclass model is used, its
# summary cannot be generated until it is built.
base_model_tag, base_model = 'FUNCTIONAL', make_mlp_functional_model(HPARAMS)
base_model.summary()
Explanation: Create base model(s)
End of explanation
# Compile and train the base MLP model
base_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
base_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
Explanation: Train base MLP model
End of explanation
# Helper function to print evaluation metrics.
def print_metrics(model_desc, eval_metrics):
Prints evaluation metrics.
Args:
model_desc: A description of the model.
eval_metrics: A dictionary mapping metric names to corresponding values. It
must contain the loss and accuracy metrics.
print('\n')
print('Eval accuracy for ', model_desc, ': ', eval_metrics['accuracy'])
print('Eval loss for ', model_desc, ': ', eval_metrics['loss'])
if 'graph_loss' in eval_metrics:
print('Eval graph loss for ', model_desc, ': ', eval_metrics['graph_loss'])
eval_results = dict(
zip(base_model.metrics_names,
base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('Base MLP model', eval_results)
Explanation: Evaluate base MLP model
End of explanation
# Build a new base MLP model.
base_reg_model_tag, base_reg_model = 'FUNCTIONAL', make_mlp_functional_model(
HPARAMS)
# Wrap the base MLP model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
graph_reg_model.fit(train_dataset, epochs=HPARAMS.train_epochs, verbose=1)
Explanation: Train MLP model with graph regularization
Incorporating graph regularization into the loss term of an existing
tf.keras.Model requires just a few lines of code. The base model is wrapped to
create a new tf.keras subclass model, whose loss includes graph
regularization.
To assess the incremental benefit of graph regularization, we will create a new
base model instance. This is because base_model has already been trained for a
few iterations, and reusing this trained model to create a graph-regularized
model will not be a fair comparison for base_model.
End of explanation
eval_results = dict(
zip(graph_reg_model.metrics_names,
graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print_metrics('MLP + graph regularization', eval_results)
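# A small follow-up sketch (not part of the original tutorial code): print the two
# test accuracies next to each other to make the effect of graph regularization
# easier to compare. It only reuses objects defined above.
base_eval_results = dict(
    zip(base_model.metrics_names,
        base_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)))
print('Base MLP accuracy:              ', base_eval_results['accuracy'])
print('Graph-regularized MLP accuracy: ', eval_results['accuracy'])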
Explanation: Evaluate MLP model with graph regularization
End of explanation |
1,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content
Glossary
1. Somename
Previous
Step1: Import section specific modules | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Content
Glossary
1. Somename
Previous: 1.1 Somename 2
Next: 1. Somename: References and further reading
Import standard modules:
End of explanation
pass
Explanation: Import section specific modules:
End of explanation |
1,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visu3d - Transform (go/v3d-transform)
If you're new to v3d, please look at the intro first.
Installation
We use same installation/imports as in the intro.
Step1: Transformations
v3d makes it easy to project back and forth across coordinates frames.
3d <> 3d
v3d.Transform stores the position, rotation and scale of an object.
It is used to transform objects (e.g. from world to camera 3d coordinates).
v3d.Transform is composed of R (rotation, scale) and t (translation) component
Step2: v3d.Transform can be composed with all types of objects
Step3: Inverting a transformation is trivial
Step4: See the API for all properties (.matrix4x4, .x_dir, .y_dir, .z_dir,...).
3d <> 2d (Camera pixel projections)
Let's create a camera looking at the center.
Step5: We can project the 3d into 2d pixel coordinates using px_from_world. It supports
Step6: Which is equivalent to
Step7: v3d.Point2d can be visualized in the pixel space
Step8: v3d.Point3d -> v3d.Point2d will preserve the depth and rgb values, which allows to project back to 3d without any information loss
Step9: The transformation preserves the shape (*shape, 3) -> (*shape, 2).
Step10: When the depth is missing, z=1 in camera coordinates
Step12: Supporting the Transform protocol
To support v3d.Transform, you only need to implement the apply_transform protocol. | Python Code:
!pip install visu3d etils[ecolab] jax[cpu] tf-nightly tfds-nightly sunds
from __future__ import annotations
from etils.ecolab.lazy_imports import *
Explanation: Visu3d - Transform (go/v3d-transform)
If you're new to v3d, please look at the intro first.
Installation
We use same installation/imports as in the intro.
End of explanation
tr = v3d.Transform(
R=[ # Define a rigid rotation
[-1/3, -(1/3)**.5, (1/3)**.5],
[1/3, -(1/3)**.5, -(1/3)**.5],
[-2/3, 0, -(1/3)**.5],
],
t=[2, 2, 2],
)
# Fig display the (x, y, z) basis of the transformation
tr.fig
Explanation: Transformations
v3d makes it easy to project back and forth across coordinate frames.
3d <> 3d
v3d.Transform stores the position, rotation and scale of an object.
It is used to transform objects (e.g. from world to camera 3d coordinates).
v3d.Transform is composed of an R (rotation, scale) and a t (translation) component:
End of explanation
v3d.make_fig([
tr,
tr @ np.array([[0, 0, 0], [1, 1, 1]]),
tr @ v3d.Point3d(p=[0, 0, 2], rgb=[255, 0, 0]),
tr @ v3d.Ray(pos=[0, 0, 0], dir=[0, 1, 1]),
tr @ v3d.Transform(R=np.eye(3), t=[0, 0, 3]),
])
Explanation: v3d.Transform can be composed with all types of objects:
xnp.array
v3d.Ray
v3d.Point3d
v3d.Camera
v3d.Transform
Your custom object (see Protocol section below)
Transformation is applied through Python __matmul__ operator: tr @ <obj>
End of explanation
tr.inv
tr.inv @ tr # `tr.inv @ tr` is identity
Explanation: Inverting a transformation is trivial:
End of explanation
# Camera looking at the center
cam = v3d.Camera.from_look_at(
spec=v3d.PinholeCamera.from_focal(
resolution=(128, 170),
focal_in_px=120,
),
pos=[2, -0.5, 1.7],
target=[0, 0, 0], # < TODO(epot): Rename end -> look_at
)
# Point cloud of arbitrary `(..., 3)` shape
rng = np.random.default_rng(0)
point_cloud = v3d.Point3d(
p=(rng.random((50, 50, 3)) - 0.5) * 3,
rgb=rng.integers(255, size=(50, 50, 3)),
)
Explanation: See the API for all properties (.matrix4x4, .x_dir, .y_dir, .z_dir,...).
3d <> 2d (Camera pixel projections)
Let's create a camera looking at the center.
End of explanation
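# A quick look at some of the transform properties listed above (a sketch: the
# attribute names are taken from the text of this notebook, not re-checked
# against the v3d API).
print(tr.matrix4x4)          # 4x4 homogeneous matrix of the transform
print(tr.x_dir, tr.y_dir)    # directions of the transformed x and y axes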
# Convert (world 3d) -> (px 2d) coordinates
px_coord = cam.px_from_world @ point_cloud
Explanation: We can project the 3d into 2d pixel coordinates using px_from_world. It supports:
xnp.array: (..., 3) -> (..., 2)
v3d.Point3d -> v3d.Point2d
Your custom objects (see Protocol section below)
End of explanation
# Convert (world 3d) -> (camera 3d) -> (px 2d) coordinates
px_coord = cam.spec.px_from_cam @ cam.cam_from_world @ point_cloud
Explanation: Which is equivalent to:
End of explanation
# Truncate coordinates outside the screen
# Use `(w, h)` as pixels are in `(i, j)` coordinates
px_coord = px_coord.clip(min=0, max=cam.wh)
px_coord.fig
Explanation: v3d.Point2d can be visualized in the pixel space:
End of explanation
px_coord.flatten()[0]
Explanation: v3d.Point3d -> v3d.Point2d will preserve the depth and rgb values, which allows to project back to 3d without any information loss:
End of explanation
print(f'{point_cloud.p.shape} -> {px_coord.p.shape}')
Explanation: The transformation preserves the shape (*shape, 3) -> (*shape, 2).
End of explanation
px_coord = px_coord.replace(depth=None)
# Convert (px 2d) -> (world 3d) coordinates
projected_points = cam.world_from_px @ px_coord
v3d.make_fig([
point_cloud,
projected_points,
cam,
])
Explanation: When the depth is missing, z=1 in camera coordinates:
End of explanation
from etils.array_types import f32
@dataclasses.dataclass(frozen=True)
class MyRay(v3d.DataclassArray):
pos: f32['*shape 3']
dir: f32['*shape 3']
def apply_transform(self, tr: v3d.Transform):
"""Supports `tr @ my_ray`."""
return self.replace(
pos=tr @ self.pos,
# `tr.apply_to_dir` only applies the rotation (tr.R), but NOT the
# translation (tr.t)
dir=tr.apply_to_dir(self.dir),
)
my_ray = MyRay(pos=[0, 0, 0], dir=[0, 0, 1])
cam.world_from_cam @ my_ray
Explanation: Supporting the Transform protocol
To support v3d.Transform, you only need to implement the apply_transform protocol.
End of explanation |
1,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 6
This lesson will review the Pythagorean theorem, how to find the distance between two points, and an interesting method for finding square roots.
Pythagorean Theorem
A triangle is a polygon with three vertices, three edges, and three angles (between each pair of edges). One important property that is true for any triangle is that the three angles always sum up to $180^\circ$.
Write a method/function below called isValidTriangle that takes in three arguments, ang_1, ang_2, and ang_3 (all positive), and prints "YES" if they can be the angles of a triangle, and prints "NO" if they cannot.
For example isValidTriangle(90, 45, 45) would print "YES", isValidTriangle(10, 10, 160) would print "YES", but isValidTriangle(90, 90, 90) would print "NO".
Just a reminder, the syntax for creating a function is
python
def isValidTriangle(arg_1, arg_2, arg_3)
Step1: We can categorize triangles into three categories based on the properties of their angles.
Acute Triangle
Step2: Distance between two points
The pythagorean theorem lets us easily find the distance between any two cartesian points. This is because if we visualize two points on a cartesian grid, the distance between them is just the length of the hypotenuse created by connecting the two points! See the figure below
Step3: Finding two-dimensional TDOA
Now that we can find the distance between cartesian points, we can now find the TDOA of four sensors on a plane instead of the TDOA along just a single line with two sensors!
<img src="four_sensor_d.png" alt="Drawing" style="width
Step4: Now write a function that takes in the four parameters above, but an additional one called $v$ which is the velocity of the wave source. Have this function return an array/list of length four with the time it takes for the wave to reach of the sensors. Call it time_to_sensor.
Step5: Now we will write a method that returns the "Time distance of arrival" relative to sensor one. This means that we shift our position in time such the time it takes to reach sensor one is zero. We can do this by subtracting the time it takes to get to sensor one from every single element in the return value from the time_to_sensor method.
Wrte a method that takes in the same parameters as time_to_sensor, but returns the TDOA, where each "time" is relative to sensor one.
Call it TDOA.
Step6: Now we have successfully written a method that can return the theoretical TDOA to our sensors of a wave source! However, what we would ideally like is a way that can identify what the position $(x,y)$ is given the time distance of arrivals. The implementation for this is rather difficut, so we will not go into it. However, it is the same method that is used by seismologists to detect an earthquake, and what a GPS might use to determine your location!
Some things to think about though. If we know which sensor has the least TDOA is (remember, some of them can be negative since we are subtracting the time it takes the wave to get to sensor one. If the time difference is negative, it simply means that the wave reached that sensor before it reached sensor one), we can figure out which quadrant (or identify an area) that the wave source must be from. Discuss how we can do this.
We've written an implementation that takes in the TDOA. You can import this and use it to create a 2D touch screen!
Extension Material
In this section we will talk about a simple way of determining where the source of a disturbance is given the values of the TDOA.
For simplicity, imagine that we place the sensors on a square of sidelength $a$. One way we could hypothetically find the location is to test every single possible point within the square, and look for the point that returns the correct TDOA values. However, this is unfeasible because there are an infinite number of points within the grid, and it would be impossible to check every single point. But, instead of looking at every single point, we can look a finite number of points and look for the one whose TDOA values are the closest to that of the ones we are given.
<img src="grid.png" alt="Drawing" style="width | Python Code:
#Write your code here
#Solution
def isValidTriangle(arg_1, arg_2, arg_3):
if(arg_1 + arg_2 + arg_3 == 180):
print "YES"
else:
print "NO"
Explanation: Lesson 6
This lesson will review the Pythagorean theorem, how to find the distance between two points, and an interesting method for finding square roots.
Pythagorean Theorem
A triangle is a polygon with three vertices, three edges, and three angles (between each pair of edges). One important property that is true for any triangle is that the three angles always sum up to $180^\circ$.
Write a method/function below called isValidTriangle that takes in three arguments, ang_1, ang_2, and ang_3 (all positive), and prints "YES" if they can be the angles of a triangle, and prints "NO" if they cannot.
For example isValidTriangle(90, 45, 45) would print "YES", isValidTriangle(10, 10, 160) would print "YES", but isValidTriangle(90, 90, 90) would print "NO".
Just a reminder, the syntax for creating a function is
python
def isValidTriangle(arg_1, arg_2, arg_3):
#Implement your code here
End of explanation
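A quick check of the solution against the examples given above:
isValidTriangle(90, 45, 45)   # prints "YES"
isValidTriangle(10, 10, 160)  # prints "YES"
isValidTriangle(90, 90, 90)   # prints "NO"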
#Write your functions here
import math
#Solutions
def find_hypotenuse(a, b):
return math.sqrt(a*a+b*b)
def find_leg(a,b):
if(a > b):
return math.sqrt(a*a-b*b)
else:
return math.sqrt(b*b-a*a)
Explanation: We can categorize triangles into three categories based on the properties of their angles.
Acute Triangle: All of the triangle's angles are less than $90^\circ$
<img src="acute_Triangle.png" alt="Drawing" style="width: 150px;"/>
For example if a triangle had the angles of $15^\circ, 85^\circ, 80^\circ$, it would be acute.
Obtuse Triangle: One of the triangle's angles is larger than $90^\circ$
<img src="obtuse_Triangle.png" alt="Drawing" style="width: 150px;"/>
For example if a triangle had the angles of $15^\circ, 15^\circ, 150^\circ$, it would be obtuse.
Right Triangle: One of the triangle's angles is exactly $90^\circ$. It is called a right triangle because $90^\circ$ is called a right angle and is the angle you see on the corner of books, doors, and squares.
<img src="right_Triangle.png" alt="Drawing" style="width: 150px;"/>
The pythagorean theorem deals with (and only applies) to right triangles. If we notice from the picture above, the right angle is opposite to the longest side, we call this the hypotenuse. We call the other two shorter sides the legs. We denote the legs with $A$, and $B$, and the hypotenuse with $C$, shown in the picture below.
<img src="abc_pag.png" alt="Drawing" style="width: 150px;"/>
The pythagorean statements simply states that $A^2 + B^2 = C^2$. Although this equation may look really simple, it is quite powerful. Conversly, if a triangle has three sides $A, B, C$ such that $A^2 + B^2 = C^2$, then it must be a right triangle.
The theorem is quite useful because if we know any the length of any two sides of a right triangle, we can figure out the length of the third.
For example, if we know the lengths of both the legs of a right triangle are $3$ and $4$, then the pythagorean theorem tells us that $3^2 + 4^2 = C^2 \rightarrow C = \sqrt{3^2+4^2} = \sqrt{9+16} = \sqrt{25} = 5$.
If we know the length of the hypotenuse and the length of one leg, we can figure out the length of the remaining leg. For example, if we know that a right triangle has a hypotenuse with length 13, and a leg of length 12, then the pythagorean theorem tells us that $A^2 + 12^2 = 13^2 \rightarrow A^2 = 13^2-12^2 \rightarrow A = \sqrt{13^2-12^2} = \sqrt{169-144} = \sqrt{25} = 5$.
Now we will write some methods that deal with the pythagorean theorem.
Write a method that, when given the length of the legs of a right triangle, return the length of the hypotenuse. Call it find_hypotenuse.
Write another method that, when given the length of the length of a single leg and the hypotenuse, return the length of the other leg. Notice that the order that the order of the inputs may not be in order of (leg, hypotenuse), so you will have to figure out a way to over come this! Call it find_leg
You may find it useful to first call
python
import math
You can use its squareroot method by calling
python
math.sqrt(x)
End of explanation
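A quick check using the 3-4-5 and 5-12-13 triangles discussed above:
find_hypotenuse(3, 4)    # 5.0
find_leg(13, 12)         # 5.0
find_leg(12, 13)         # 5.0, the order of the arguments does not matter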
#Write your function here
#Solution
def distance(x1,y1,x2,y2):
return math.sqrt((x2-x1)*(x2-x1) + (y2-y1)*(y2-y1))
Explanation: Distance between two points
The pythagorean theorem lets us easily find the distance between any two cartesian points. This is because if we visualize two points on a cartesian grid, the distance between them is just the length of the hypotenuse created by connecting the two points! See the figure below:
<img src="distance.png" alt="Drawing" style="width: 500px;"/>
Since this is a right triangle, we know that $distance(P_1, P_2)^2 = |x_2-x_1|^2 + |y_2-y_1|^2 \rightarrow
distance(P_1, P_2) = \sqrt{|x_2-x_1|^2 + |y_2-y_1|^2}$
Write a function below called distance that takes in four arguments, x1, y1, x2, y2, corresponding to the points $p_1 = (x_1, y_1)$ and $p_2 = (x_2, y_2)$ and returns the distance between $p_1$ and $p_2$.
End of explanation
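A quick check: from $(1,1)$ to $(4,5)$ the legs are 3 and 4, so the distance is again 5.
distance(1, 1, 4, 5)     # 5.0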
#Write your function below
#Solution
def distance_to_sensor(a,b,x,y):
distance_one = distance(0,0,x,y)
distance_two = distance(a,0,x,y)
distance_three = distance(0,b,x,y)
distance_four = distance(a,b,x,y)
return [distance_one, distance_two, distance_three, distance_four]
Explanation: Finding two-dimensional TDOA
Now that we can find the distance between cartesian points, we can find the TDOA of four sensors on a plane instead of the TDOA along just a single line with two sensors!
<img src="four_sensor_d.png" alt="Drawing" style="width: 750px;"/>
When we set up the sensors, they will be on the four corners of a rectangle with width $a$ and height $b$. Thus the positions of the sensors are at $(0,0)$, $(a,0)$, $(0,b)$, and $(a,b)$. Whenever we tap within this rectangular region, we create a circular wave with velocity $v$ that travels to the four corners. Since we don't know exactly where we tapped, let the tap be at a variable point $(x,y)$.
Write a function that takes in four arguments, $a$ (width of the rectangle), $b$ (height of the rectangle), and $x,y$, the cartesian coordinates of the wave source, and returns an array/list of length four, where the first element is the distance from the wave source to the first sensor, the second element is the distance from the wave source to the second sensor, etc. Call it distance_to_sensor.
Hint: you may find it useful to use the distance function you defined above.
End of explanation
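A quick check with a 3-by-4 board and a tap in the corner at the first sensor:
distance_to_sensor(3, 4, 0, 0)    # [0.0, 3.0, 4.0, 5.0]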
#Write your function below
#Solution
def time_to_sensor(a,b,x,y,v):
distances = distance_to_sensor(a,b,x,y)
time_one = distances[0]/v;
time_two = distances[1]/v;
time_three = distances[2]/v;
time_four = distances[3]/v;
#return map(lambda x: x/v, distances)
return [time_one, time_two, time_three, time_four]
Explanation: Now write a function that takes in the four parameters above, but an additional one called $v$ which is the velocity of the wave source. Have this function return an array/list of length four with the time it takes for the wave to reach each of the sensors. Call it time_to_sensor.
End of explanation
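A quick check: with the same 3-by-4 board and a wave speed of 2, every distance is simply halved.
time_to_sensor(3, 4, 0, 0, 2)     # [0.0, 1.5, 2.0, 2.5]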
#Write your function below
#Solution
def TDOA(a,b,x,y,v):
times = time_to_sensor(a,b,x,y,v)
offset = times[0];
return map(lambda x: x-offset, times)
Explanation: Now we will write a method that returns the time difference of arrival (TDOA) relative to sensor one. This means that we shift our position in time such that the time it takes to reach sensor one is zero. We can do this by subtracting the time it takes to get to sensor one from every single element in the return value of the time_to_sensor method.
Write a method that takes in the same parameters as time_to_sensor, but returns the TDOA, where each "time" is relative to sensor one.
Call it TDOA.
End of explanation
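A quick check: tapping directly on sensor four at $(3,4)$ gives negative differences, because the wave reaches sensors two, three, and four before sensor one.
TDOA(3, 4, 3, 4, 1)     # [0.0, -1.0, -2.0, -5.0]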
#definition of cost function
def err(TDOA1, TDOA2):
ans = 0
for i in range(1,4):
ans += (TDOA1[i] - TDOA2[i])*(TDOA1[i] - TDOA2[i])
return ans
#Write your method below
#Solution
def find_point(a, time_differences, v):
currentX = -1
currentY = -1
min_error = 100000000
x = 0.0
while(x <= a):
y = 0.0
while(y <= a):
error = err(time_differences, TDOA(a,a,x,y,v))
if(error < min_error):
min_error = error
currentX = x
currentY = y
y = y+a/100.0
x = x+a/100.0
return (currentX,currentY)
print find_point(1, [0, 0, .618033, .618033], 1)
print TDOA(1,1, .5, 0, 1)
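# A minimal sketch of the quadrant idea discussed below (the helper name is
# illustrative, not part of the lesson): the sensor with the smallest time
# difference is the one the wave reached first, so the tap must lie in the
# quarter of the board surrounding that sensor.
def closest_sensor(time_differences):
    best = 0
    for i in range(len(time_differences)):
        if time_differences[i] < time_differences[best]:
            best = i
    return best   # 0 -> near (0,0), 1 -> near (a,0), 2 -> near (0,b), 3 -> near (a,b)
print closest_sensor([0.0, -1.0, -2.0, -5.0])   # 3, so the tap is closest to the sensor at (a,b)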
Explanation: Now we have successfully written a method that can return the theoretical TDOA of a wave source to our sensors! However, what we would ideally like is a way to identify the position $(x,y)$ given the time differences of arrival. The implementation for this is rather difficult, so we will not go into it. However, it is the same method that is used by seismologists to locate an earthquake, and what a GPS might use to determine your location!
Some things to think about though. If we know which sensor has the least TDOA (remember, some of them can be negative since we are subtracting the time it takes the wave to get to sensor one; a negative time difference simply means that the wave reached that sensor before it reached sensor one), we can figure out which quadrant (or area) the wave source must be in. Discuss how we can do this.
We've written an implementation that takes in the TDOA. You can import this and use it to create a 2D touch screen!
Extension Material
In this section we will talk about a simple way of determining where the source of a disturbance is given the values of the TDOA.
For simplicity, imagine that we place the sensors on a square of sidelength $a$. One way we could hypothetically find the location is to test every single possible point within the square, and look for the point that returns the correct TDOA values. However, this is unfeasible because there are an infinite number of points within the grid, and it would be impossible to check every single point. But, instead of looking at every single point, we can look a finite number of points and look for the one whose TDOA values are the closest to that of the ones we are given.
<img src="grid.png" alt="Drawing" style="width: 500px;"/>
In the picture above, we are dividing each side into 8 points, and checking a total of $8^2 = 64$ points for the one with the best TDOA values. However, to make it more accurate, we can easily divide each side into $100$ points, and check a total of $10000$ points, which a computer can easily handle. If we are given a two TDOA arrays, we can define an error function, which is how "different" the two arrays are. We want to find the point in the grid that returns a TDOA array that is the most similar to the actual TDOA.
Implement a function that, given the length of the board, a TDOA array and the velocity of a wave in the material, returns the coordinate of a point, whose cost is the least. Call it find_point. Use a subdivision along each side of 101 points.
The heading should look like
```python
def find_point(a, time_differences, v):
your code here
```
Hint: It may be useful to use two loops to scan through the possible points and it may be useful to use several variables to keep track of the minimum and the corresponding coordinates as we are scanning through every single possible point.
End of explanation |
1,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Knuth-Bendix Completion Algorithm
This notebook presents the Knuth-Bendix completion algorithm for transforming a set of equations into a confluent term rewriting system. This notebook is divided into eight sections.
- Parsing
- Matching
- Term Rewriting
- Unification
- Knuth-Bendix Ordering
- Critical Pairs
- The Completion Algorithm
- Examples
Parsing
To begin, we need a parser that is capable of parsing terms and equations. This parser is implemented in the notebook Parser.ipynb and can parse equations of terms that use the binary operators +, -, *, /, \, %, and ^. The precedences of these operators are as follows
Step1: Back to top
Matching
The substitution $\sigma$ maps variables to terms. It is represented as a dictionary. If $t$ is a term and $\sigma$ is the substitution
$$ \sigma = { x_1
Step2: Given a string x, the function make_var(x) creates a variable with name x.
Step3: Given a term p, a term t, and a substitution σ, the function match_pattern(p, t, σ) tries to extend the
substitution σ so that the equation
$$ p \sigma = t $$
is satisfied. If this is possible, the function returns True and updates the substitution σ so that
$p \sigma = t$ holds. Otherwise, the function returns False.
Step4: Given a term t, the function find_variables(t) computes the set of all variables occurring in t. If, instead, $t$ is a list of terms or a set of terms, then find_variables(t) computes the set of those variables that occur in any of the terms of $t$.
Step5: Given a term t and a substitution σ that is represented as a dictionary of the form
$$ \sigma = { x_1
Step6: Given a set of terms or equations Ts and a substitution σ, the function apply_set(Ts, σ) applies the substitution σ to all elements in Ts.
Step7: If $\sigma = \big{ x_1 \mapsto s_1, \cdots, x_m \mapsto s_m \big}$ and
$\tau = \big{ y_1 \mapsto t_1, \cdots, y_n \mapsto t_n \big}$
are two substitutions that are non-overlapping, i.e. such that ${x_1,\cdots, x_m} \cap {y_1,\cdots,y_n} = {}$ holds,
then we define the composition $\sigma\tau$ of $\sigma$ and $\tau$ as follows
Step8: Back to top
Term Rewriting
Step9: Given a term s and a set of variables V, the function rename_variables(s, V) renames the variables in s so that they differ from the variables in the set V. This will only work if the number of variables occurring in V times two is less than the number of letters in the latin alphabet, i.e. less than 26. Therefore, the set V must have at most 13 variables. For our examples, this is not a restriction.
Step10: The function simplify_step(t, E) takes two arguments
Step11: The function normal_form(t, E) takes a term t and a list (or set) of equations E and tries to simplify the term t as much as possible using the equations from E.
In the implementation, we have to be careful to rename the variables occurring in E so that they are different from the variables occurring in t. Furthermore, we have to take care that we don't identify different variables in E by accident. Therefore, we rename the variables in E so that they are both different from the variables in t and from the old variables occurring in E.
Step12: Back to top
Unification
In this section, we implement the unification algorithm of Martelli and Montanari.
Given a variable name x and a term t, the function occurs(x, t) checks whether x occurs in t.
Step13: The algorithm implemented below takes a pair (E, σ) as its input. Here E is a set of syntactical equations that need to be solved and σ is a substitution that is initially empty. The pair (E, σ) is then transformed using the rules of Martelli and Montanari. The transformation is successful if the pair (E, σ) can be transformed into a pair of the form ({}, μ). Then μ is the solution to the system of equations E and hence μ is a most general unifier of E.
The rules that can be used to solve a system of syntactical equations are as follows
Step14: Given a set of <em style="color
Step15: Back to top
The Knuth-Bendix Ordering
In order to turn an equation $s = t$ into a rewrite rule, we have to check whether the term $s$ is more complex than the term $t$, so that $s$ should be simplified to $t$, or whether $t$ is more complex than $s$ and we should rewrite $t$ into $s$. To this end, we implement the Knuth-Bendix ordering, which is a method to compare terms.
Given a term t and a variable name x, the function count(t, x) computes the number of times that x occurs in t.
Step16: In order to define the Knuth-Bendix ordering on terms, three prerequisites need to be satisfied
Step17: Given a term t the function weight(t) computes the weight $w(t)$, where $w(t)$ is defined by induction on $t$
Step18: Given a term s and a term t, the function is_tower(s, t) returns True iff the following is true
Step19: The Knuth-Bendix order $s \prec_{\textrm{kbo}} t$ is defined for terms $s$ and $t$. We have $s \prec_{\textrm{kbo}} t$ iff one of the following two conditions hold
Step20: Given two lists S and T of terms, the function is_simpler_list(S, T) checks whether S is lexicographically simpler than T if the elements of S and T are compared with the Knuth-Bendix ordering $\prec_{\textrm{kbo}}$. It is assumed that S and T have the same length.
Step21: We define the class OrderException to be able to deal with equations that can't be ordered into a rewrite rule.
Step22: Given an equation eq and an Ordering of the function symbols occurring eq, the function order_equation orders the equation eq with respect to the Knuth-Bendix ordering, i.e. in the ordered equation, the right hand side is simpler than the left hand side. If the left hand side and the right hand side are incomparable, the function raises an OrderException.
Step23: Back to top
Critical Pairs
The central notion of the Knuth-Bendix algorithm is the notion of a critical pair.
Given two equations lhs1 = rhs1 and lhs2 = rhs2, a pair of terms (s, t) is a critical pair of these equations if we have the following
Step24: Given a term t and a position u in t, the function subterm(t, u) extracts the subterm that is located at position u, i.e. it computes t/u. The position u is zero-based.
Step25: Given a term t, a position u in t and a term s, the function replace_at(t, u, s) replaces the subterm at position u with t. The position u uses zero-based indexing. Hence it returns the term
$$ t[u \mapsto s]. $$
Step26: Given two equations eq1 and eq2, the function critical_pairs(eq1, eq2) computes the set of all critical pairs between these equations. A pair of terms (s, t) is a critical pair of eq1 and eq2 if we have
- eq1 has the form lhs1 = rhs1,
- eq2 has the form lhs2 = rhs2,
- u is a non-trivial position in lhs1,
- $\mu = \texttt{mgu}(\texttt{lhs}_1/u, \texttt{lhs}_2) \not= \texttt{None}$,
- $s = \texttt{lhs}_1\mu[u \leftarrow \texttt{rhs}_2\mu]$ and $t = \texttt{rhs}_1\mu$.
Step27: Back to top
The Completion Algorithm
Given a set of RewriteRules and a newly derived rewrite rule, the function simplify_rules(RewriteRules, rule) adds rule to the set RewriteRules. When the function returns, every equation in the set RewriteRules is in normal form with respect to all other equations in RewriteRules.
Step28: The function print_equations prints the set of Equations one by one and numbers them.
Step29: Given an equation eq of the form eq = ('=', lhs, rhs), the function complexity(eq) computes a measure of complexity for the given equation. This measure of complexity is the length of the string that represents the equation. This measure of complexity is later used to choose between equations
Step30: Given a set of equations RewriteRules and a single rewrite rule eq, the function all_critical_pairs(RewriteRules, eq) computes the set of all critical pairs that can be build by building critical pairs with an equation from RewriteRules and the equation eq. It is assumed that eq is already an element of RewriteRules.
Step31: The module heapq provides heap-based priority queues, which are implemented as lists.
Step32: Given a file name that contains a set of equations and a dictionary encoding an ordering of the function symbols, the function knuth_bendix_algorithm implements the Knuth-Bendix algorithm
Step33: Back to top
Examples
In this section we present a number of examples where the Knuth-Bendix completion algorithm is able to produce a confluent system of equations. In detail, we discuss the following examples
Step34: It is natural to ask whether the axiom describing the left neutral element and the axiom describing the left inverse can be replaced by corresponding axioms that require $1$ to be a right neutral element and $i(x)$ to be a right inverse. The Knuth-Bendix completion algorithm shows that this is indeed the case.
Step35: LR Systems
Next, it is natural to ask what happens if we have a left neutral element and a right inverse. Algebraic Structures of this kind are called LR systems. The Knuth-Bendix completion algorithm shows that, in general, LR systems are different from groups.
Step36: RL Systems
Similarly, if we have a right neutral element and a left inverse the resulting structure need not be a group. Systems of this kind are called RL system.
Step37: Central Groupoids
A structure $\mathcal{G} = \langle G, \rangle$ is a central groupoid iff
1. $G$ is a a non-empty set.
2. $
Step38: Back to top
Quasigroups
A structure $\mathcal{G} = \langle G, , /, \backslash \rangle$ is a quasigroup iff
1. $G$ is a non-empty set.
2. $
Step39: Quasigroups with Idempotence
A quasigroup with idempotence is a quasigroup that additionally satisfies the identity $x * x = x$. Therefore, a structure $\mathcal{G} = \langle G, , /, \backslash \rangle$ is a quasigroup with idempotence iff
1. $G$ is a set.
2. $
Step40: Quasigroups with Unipotence
A quasigroup with idempotence is a quasigroup that additionally satisfies the identity $x * x = 1$
where $1$ is a constant symbol. Therefore, a structure $\mathcal{G} = \langle G, 1, , /, \backslash \rangle$ is a quasigroup with idempotence iff
1. $G$ is a set.
2. $1 \in G$.
2. $
Step41: Loops
A loop is a quasigroup that additionally has an identity element. Therefore, a structure $\mathcal{G} = \langle G, 1, , /, \backslash \rangle$ is a loop iff
1. $G$ is a set.
2. $1 \in G$.
2. $ | Python Code:
%run Parser.ipynb
!cat Examples/quasigroups.eqn || type Examples\quasigroups.eqn
def test():
t = parse_term('x * y * z')
print(t)
print(to_str(t))
eq = parse_equation('i(x) * x = 1')
print(eq)
print(to_str(parse_file('Examples/quasigroups.eqn')))
test()
Explanation: The Knuth-Bendix Completion Algorithm
This notebook presents the Knuth-Bendix completion algorithm for transforming a set of equations into a confluent term rewriting system. This notebook is divided into eight sections.
- Parsing
- Matching
- Term Rewriting
- Unification
- Knuth-Bendix Ordering
- Critical Pairs
- The Completion Algorithm
- Examples
Parsing
To begin, we need a parser that is capable of parsing terms and equations. This parser is implemented in the notebook Parser.ipynb and can parse equations of terms that use the binary operators +, -, *, /, \, %, and ^. The precedences of these operators are as follows:
1. + and - have the precedence $1$, which is the lowest precedence.
Furthermore, they are left-associative.
2. *, /, \, % have the precedence $2$ and are also left associative.
3. ^ has the precedence $3$ and is right associative.
Furthermore, function symbols and variables are supported. Every string consisting of letters, digits, and underscores that does start with a letter is considered a function symbol if it is followed by an opening parenthesis. Otherwise, it is taken to be a variable. Terms are defined inductively:
- Every variable is a term.
- If $f$ is a function symbol and $t_1$, $\cdots$, $t_n$ are terms, then $f(t_1,\cdots,t_n)$ is a term.
- If $s$ and $t$ are terms and $o$ is an operator, then $s\; o\; t$ is a term.
The notebook Parser.ipynb also provides the function to_str for turning terms or equations into strings. All together, the notebook provides the following functions:
- parse_file(file_name) parses a file containing equations between terms.
It returns a list of the equations that have been parsed.
- parse_equation(s) converts the string s into an equation.
- parse_term(s) converts the string s into a term.
- to_str(o) converts an object o into a string. The object o either is
* a term,
* an equation,
* a list of equations,
* a set of equations, or
* a dictionary representing a substitution.
Terms and equations are represented as nested tuples. These are defined recursively:
- a string is a nested tuple,
- a tuple t is a nested tuple iff t[0] is a string and for all
$i \in {1,\cdots,\texttt{len}(t)-1}$ we have that t[i] is a nested tuple.
The parser is implemented using the parser generator Ply.
End of explanation
def is_var(t):
return t[0] == '$var'
Explanation: Back to top
Matching
The substitution $\sigma$ maps variables to terms. It is represented as a dictionary. If $t$ is a term and $\sigma$ is the substitution
$$ \sigma = { x_1: s_1, \cdots, x_n:s_n }, $$
then applying the substitution $\sigma$ to the term $t$ replaces the variables $x_i$ with the terms $s_i$. The application of $\sigma$ to $t$ is written as $t\sigma$ and is defined by induction on $t$:
- $x_i\sigma := s_i$,
- $v\sigma := v$ if $v$ is a variable and $v \not\in {x_1,\cdots,x_n}$,
- $f(t_1,\cdots,t_n)\sigma := f(t_1\sigma, \cdots, t_n\sigma)$.
A term $p$ matches a term $t$ iff there exists a substitution $\sigma$ such that $p\sigma = t$.
The function is_var(t) checks whether the term t is interpreted a variable. Variables are represented as nested tuples of the form ($var, name), where name is the name of the variable.
End of explanation
def make_var(x):
return ('$var', x)
Explanation: Given a string x, the function make_var(x) creates a variable with name x.
End of explanation
def match_pattern(pattern, term, σ):
match pattern:
case '$var', var:
if var in σ:
return σ[var] == term
else:
σ[var] = term # extend σ
return True
case _:
if pattern[0] == term[0] and len(pattern) == len(term):
return all(match_pattern(pattern[i], term[i], σ) for i in range(1, len(pattern)))
else:
return False
def test():
p = parse_term('i(x) * z')
t = parse_term('i(i(y)) * i(y)')
σ = {}
match_pattern(p, t, σ)
print(to_str(σ))
test()
Explanation: Given a term p, a term t, and a substitution σ, the function match_pattern(p, t, σ) tries to extend the
substitution σ so that the equation
$$ p \sigma = t $$
is satisfied. If this is possible, the function returns True and updates the substitution σ so that
$p \sigma = t$ holds. Otherwise, the function returns False.
End of explanation
def find_variables(t):
if isinstance(t, set) or isinstance(t, list):
return { var for term in t
for var in find_variables(term)
}
if is_var(t):
_, var = t
return { var }
_, *L = t
return find_variables(L)
def test():
eq = parse_equation('(x * y) * z = x * (y * z)')
print(find_variables(eq))
test()
Explanation: Given a term t, the function find_variables(t) computes the set of all variables occurring in t. If, instead, $t$ is a list of terms or a set of terms, then find_variables(t) computes the set of those variables that occur in any of the terms of $t$.
End of explanation
def apply(t, σ):
"Apply the substitution σ to the term t."
if is_var(t):
_, var = t
if var in σ:
return σ[var]
else:
return t
else:
f, *Ts = t
return (f,) + tuple(apply(s, σ) for s in Ts)
def test():
p = parse_term('i(x) * x')
t = parse_term('i(i(y)) * i(y)')
σ = {}
match_pattern(p, t, σ)
print(f'apply({to_str(p)}, {to_str(σ)}) = {to_str(apply(p, σ))}')
test()
Explanation: Given a term t and a substitution σ that is represented as a dictionary of the form
$$ \sigma = { x_1: s_1, \cdots, x_n:s_n }, $$
the function apply(t, σ) computes the term that results from replacing the variables $x_i$ with the terms $s_i$ in t for all $i=1,\cdots,n$. This term is written as $t\sigma$ and if $\sigma = { x_1: s_1, \cdots, x_n:s_n }$, then $t\sigma$ is defined by induction on t as follows:
- $x_i\sigma := s_i$,
- $v\sigma := v$ if $v$ is a variable and $v \not\in {x_1,\cdots,x_n}$,
- $f(t_1,\cdots,t_m)\sigma := f(t_1\sigma, \cdots, t_m\sigma)$.
End of explanation
def apply_set(Ts, σ):
return { apply(t, σ) for t in Ts }
Explanation: Given a set of terms or equations Ts and a substitution σ, the function apply_set(Ts, σ) applies the substitution σ to all elements in Ts.
End of explanation
def compose(σ, τ):
Result = { x: apply(s, τ) for (x, s) in σ.items() }
Result.update(τ)
return Result
def test():
t1 = parse_term('i(y)')
t2 = parse_term('a * b')
t3 = parse_term('i(b)')
σ = { 'x': t1 }
τ = { 'y': t2, 'z': t3 }
print(f'compose({to_str(σ)}, {to_str(τ)}) = {to_str(compose(σ, τ))}')
test()
Explanation: If $\sigma = \big{ x_1 \mapsto s_1, \cdots, x_m \mapsto s_m \big}$ and
$\tau = \big{ y_1 \mapsto t_1, \cdots, y_n \mapsto t_n \big}$
are two substitutions that are non-overlapping, i.e. such that ${x_1,\cdots, x_m} \cap {y_1,\cdots,y_n} = {}$ holds,
then we define the composition $\sigma\tau$ of $\sigma$ and $\tau$ as follows:
$$\sigma\tau := \big{ x_1 \mapsto s_1\tau, \cdots, x_m \mapsto s_m\tau,\; y_1 \mapsto t_1, \cdots, y_n \mapsto t_n \big}$$
This definition implies that the following associative law is valid:
$$ s(\sigma\tau) = (s\sigma)\tau $$
The function $\texttt{compose}(\sigma, \tau)$ takes two non-overlapping substitutions and computes their composition $\sigma\tau$.
End of explanation
from string import ascii_lowercase
ascii_lowercase
Explanation: Back to top
Term Rewriting
End of explanation
def rename_variables(s, Vars):
assert len(Vars) <= 13, f'Error: too many variables in {Vars}.'
NewVars = set(ascii_lowercase) - Vars
NewVars = sorted(list(NewVars))
σ = { x: make_var(NewVars[i]) for (i, x) in enumerate(Vars) }
return apply(s, σ)
def test():
t = parse_equation('x * y * z = x * (y * z)')
V = find_variables(t)
print(f'rename_variables({to_str(t)}, {V}) = {to_str(rename_variables(t, V))}')
test()
Explanation: Given a term s and a set of variables V, the function rename_variables(s, V) renames the variables in s so that they differ from the variables in the set V. This will only work if the number of variables occurring in V times two is less than the number of letters in the latin alphabet, i.e. less than 26. Therefore, the set V must have at most 13 variables. For our examples, this is not a restriction.
End of explanation
def simplify_step(t, Equations):
if is_var(t):
return None # variables can't be simplified
for eq in Equations:
_, lhs, rhs = eq
σ = {}
if match_pattern(lhs, t, σ):
return apply(rhs, σ)
f, *args = t
simpleArgs = []
change = False
for arg in args:
simple = simplify_step(arg, Equations)
if simple != None:
simpleArgs += [simple]
change = True
else:
simpleArgs += [arg]
if change:
return (f,) + tuple(simpleArgs)
return None
def test():
E = { parse_equation('(x * y) * z = x * (y * z)') }
t = parse_term('(a * b) * i(b)')
print(f'simplify_step({to_str(t)}, {to_str(E)}) = {to_str(simplify_step(t, E))}')
test()
Explanation: The function simplify_step(t, E) takes two arguments:
- t is a term,
- E is a set of equations of the form ('=', l, r).
The function tries to find an equation l = r in E and a subterm s in the term t such that the left hand side l of the equation matches the subterm s using some substitution $\sigma$, i.e. we have $s = l\sigma$. Then the term t is simplified by replacing the subterm s in t by $r\sigma$. More formally, if u is the position of s in t, i.e. t/u = s then t is simplified into the term
$$ t = t[u \mapsto l\sigma] \rightarrow_{{l=r}} t[u \mapsto r\sigma]. $$
If an appropriate subterm s is found, the simplified term is returned. Otherwise, the function returns None.
If multiple subterms of t can simplified, then the function simplify_step(t, E) simplifies all subterms.
End of explanation
def normal_form(t, E):
Vars = find_variables(t) | find_variables(E)
NewE = []
for eq in E:
NewE += [ rename_variables(eq, Vars) ]
while True:
s = simplify_step(t, NewE)
if s == None:
return t
t = s
!cat Examples/group-theory-1.eqn || type Examples\group-theory-1.eqn
def test():
E = parse_file('Examples/group-theory-1.eqn')
t = parse_term('1 * (b * i(a)) * a')
print(f'E = {to_str(E)}')
print(f'normal_form({to_str(t)}, E) = {to_str(normal_form(t, E))}')
test()
Explanation: The function normal_form(t, E) takes a term t and a list (or set) of equations E and tries to simplify the term t as much as possible using the equations from E.
In the implementation, we have to be careful to rename the variables occurring in E so that they are different from the variables occurring in t. Furthermore, we have to take care that we don't identify different variables in E by accident. Therefore, we rename the variables in E so that they are both different from the variables in t and from the old variables occurring in E.
End of explanation
def occurs(x, t):
if is_var(t):
_, var = t
return x == var
return any(occurs(x, arg) for arg in t[1:])
Explanation: Back to top
Unification
In this section, we implement the unification algorithm of Martelli and Montanari.
Given a variable name x and a term t, the function occurs(x, t) checks whether x occurs in t.
End of explanation
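A quick check of occurs, using the parser loaded above:
occurs('x', parse_term('i(x) * y'))   # True
occurs('z', parse_term('i(x) * y'))   # False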
def unify(s, t):
return solve({('≐', s, t)}, {})
Explanation: The algorithm implemented below takes a pair (E, σ) as its input. Here E is a set of syntactical equations that need to be solved and σ is a substitution that is initially empty. The pair (E, σ) is then transformed using the rules of Martelli and Montanari. The transformation is successful if the pair (E, σ) can be transformed into a pair of the form ({}, μ). Then μ is the solution to the system of equations E and hence μ is a most general unifier of E.
The rules that can be used to solve a system of syntactical equations are as follows:
- If $y\in\mathcal{V}$ is a variable that does not occur in the term $t$,
then we perform the following reduction:
$$ \Big\langle E \cup \big{ y \doteq t \big}, \sigma \Big\rangle \quad\leadsto \quad
\Big\langle E[y \mapsto t], \sigma\big[ y \mapsto t \big] \Big\rangle
$$
- If the variable $y$ occurs in the term $t$ and $y$ is different from $t$, then the system of
syntactical equations
$E \cup \big{ y \doteq t \big}$ is not solvable:
$$ \Big\langle E \cup \big{ y \doteq t \big}, \sigma \Big\rangle\;\leadsto\; \texttt{None} \quad
\mbox{if $y \in \textrm{Var}(t)$ and $y \not=t$.}$$
- If $y\in\mathcal{V}$ is a variable and $t$ is no variable, then we use the following rule:
$$ \Big\langle E \cup \big{ t \doteq y \big}, \sigma \Big\rangle \quad\leadsto \quad
\Big\langle E \cup \big{ y \doteq t \big}, \sigma \Big\rangle.
$$
- Trivial syntactical equations of variables can be dropped:
$$ \Big\langle E \cup \big{ x \doteq x \big}, \sigma \Big\rangle \quad\leadsto \quad
\Big\langle E, \sigma \Big\rangle.
$$
- If $f$ is an $n$-ary function symbol, then we have:
$$ \Big\langle E \cup \big{ f(s_1,\cdots,s_n) \doteq f(t_1,\cdots,t_n) \big}, \sigma \Big\rangle
\;\leadsto\;
\Big\langle E \cup \big{ s_1 \doteq t_1, \cdots, s_n \doteq t_n}, \sigma \Big\rangle.
$$
- The system of syntactical equations $E \cup \big{ f(s_1,\cdots,s_m) \doteq g(t_1,\cdots,t_n) \big}$
has no solution if the function symbols $f$ and $g$ are different:
$$ \Big\langle E \cup \big{ f(s_1,\cdots,s_m) \doteq g(t_1,\cdots,t_n) \big},
\sigma \Big\rangle \;\leadsto\; \texttt{None} \qquad \mbox{if $f \not= g$}.
$$
Given two terms $s$ and $t$, the function $\texttt{unify}(s, t)$ computes the <em style="color:blue;">most general unifier</em> of $s$ and $t$.
End of explanation
def solve(E, σ):
while E != set():
_, s, t = E.pop()
if s == t: # remove trivial equations
continue
if is_var(s):
_, x = s
if occurs(x, t):
return None
else: # set x to t
E = apply_set(E, { x: t })
σ = compose(σ, { x: t })
elif is_var(t):
E.add(('≐', t, s))
else:
f , g = s[0] , t[0]
sArgs, tArgs = s[1:] , t[1:]
m , n = len(sArgs), len(tArgs)
if f != g or m != n:
return None
else:
E |= { ('≐', sArgs[i], tArgs[i]) for i in range(m) }
return σ
def test():
s = parse_term('x * i(x) * (y * z)')
t = parse_term('a * i(1) * b')
print(f'unify({to_str(s)}, {to_str(t)}) = {to_str(unify(s, t))}')
test()
Explanation: Given a set of <em style="color:blue;">syntactical equations</em> $E$ and a substitution $\sigma$, the function $\texttt{solve}(E, \sigma)$ applies the rules of Martelli and Montanari to solve $E$.
End of explanation
def count(t, x):
match t:
case '$var', y:
return 1 if x == y else 0
case _, *Ts:
return sum(count(arg, x) for arg in Ts)
def test():
t = parse_term('x * (i(x) * y)')
print(f'count({to_str(t)}, "x") = {count(t, "x")}')
test()
Explanation: Back to top
The Knuth-Bendix Ordering
In order to turn an equation $s = t$ into a rewrite rule, we have to check whether the term $s$ is more complex than the term $t$, so that $s$ should be simplified to $t$, or whether $t$ is more complex than $s$ and we should rewrite $t$ into $s$. To this end, we implement the Knuth-Bendix ordering, which is a method to compare terms.
Given a term t and a variable name x, the function count(t, x) computes the number of times that x occurs in t.
End of explanation
WEIGHT = { '1': 1, '*': 1, '/': 1, '\\': 1, 'i': 0 }
ORDERING = { '1': 0, '*': 1, '/': 2, '\\': 3, 'i': 5 }
max_fct = lambda: 'i'
Explanation: In order to define the Knuth-Bendix ordering on terms, three prerequisites need to be satisfied:
1. We need to assign a weight $w(f)$ to every function symbol $f$. These weights are
natural numbers. There must be at most one function symbol $g$ such that $w(g) = 0$.
Furthermore, if $w(g) = 0$, then $g$ has to be unary.
We define the weights via the dictionary Weight, i.e. we have $w(f) = \texttt{Weight}[f]$.
2. We need to define a strict order $<$ on the set of function symbols.
This ordering is implemented via the dictionary Ordering. We define
$$ f < g \;\stackrel{_\textrm{def}}{\Longleftrightarrow}\; \texttt{Ordering}[f] < \texttt{Ordering}[g]. $$
3. The order $<$ on the function symbols has to be admissible with respect to the weight function $w$, i.e. the following
condition needs to be satisfied:
$$ w(f) = 0 \rightarrow \forall g: \bigl(g \not=f \rightarrow g < f\bigr). $$
To put this in words: If the function symbol $f$ has a weight of $0$, then
all other function symbols $g$ have to be smaller than $f$ w.r.t. the strict order $<$.
Note that this implies that there can be at most one function symbol with $f$ such that $w(f) = 0$.
This function symbol $f$ is then the maximum w.r.t. the order $<$.
Below, for efficiency reasons, the function max_fct returns the function symbol $f$ that is maximal w.r.t. the strict order $<$.
End of explanation
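A small sanity check, only for illustration: the order encoded in ORDERING is admissible w.r.t. WEIGHT, because 'i' is the only function symbol of weight 0 and it is the maximum of the order.
def is_admissible(Weight, Ordering):
    ZeroSymbols = [f for f in Weight if Weight[f] == 0]
    if len(ZeroSymbols) > 1:
        return False
    return all(Ordering[g] < Ordering[f] for f in ZeroSymbols
                                         for g in Ordering
                                         if g != f)
is_admissible(WEIGHT, ORDERING)   # True for the dictionaries defined above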
def weight(t):
match t:
case '$var', _:
return 1
case f, *Ts:
return WEIGHT[f] + sum(weight(arg) for arg in Ts)
def test():
t = parse_term('x * (i(x) * 1)')
print(f'weight({to_str(t)}) = {weight(t)}')
test()
Explanation: Given a term t the function weight(t) computes the weight $w(t)$, where $w(t)$ is defined by induction on $t$:
- $w(x) := 1$ for all variables $x$,
- $w\bigl(f(t_1,\cdots,t_n)\bigr) := \texttt{Weight}[f] + \sum\limits_{i=1}^n w(t_i)$.
End of explanation
def is_tower(s, t):
if len(t) != 2: # f is not unary
return False
f, t1 = t
if f != max_fct():
return False
if t1 == s:
return True
return is_tower(s, t1)
def test():
t = parse_term('i(a)')
s = parse_term('i(i(a))')
print(f'is_tower({to_str(s)}, {to_str(t)}) = {is_tower(s, t)}')
test()
Explanation: Given a term s and a term t, the function is_tower(s, t) returns True iff the following is true:
$$ \exists n\in\mathbb{N}:\bigl( n > 0 \wedge t = f^{n}(s) \wedge f = \texttt{max_fct}()\bigr). $$
Here the expression $f^n(s)$ is the $n$-fold application of $f$ to $s$, e.g. we have $f^1(s) = f(s)$, $f^2(s) = f(f(s))$, and in general $f^{n+1}(s) = f\bigl(f^{n}(s)\bigr)$.
End of explanation
def is_simpler(s, t):
if is_var(t):
return False
if is_var(s):
_, x = s
return occurs(x, t)
Vs = find_variables(s)
for x in Vs:
if count(t, x) < count(s, x):
return False
ws = weight(s)
wt = weight(t)
if ws < wt:
return True
if ws > wt:
return False
# ws == wt
if is_tower(s, t):
return True
f, *Ss = s
g, *Ts = t
if ORDERING[f] < ORDERING[g]:
return True
if ORDERING[f] > ORDERING[g]:
return False
return is_simpler_list(Ss, Ts)
Explanation: The Knuth-Bendix order $s \prec_{\textrm{kbo}} t$ is defined for terms $s$ and $t$. We have $s \prec_{\textrm{kbo}} t$ iff one of the following two conditions hold:
1. $w(s) < w(t)$ and $\texttt{count}(s, x) \leq \texttt{count}(t, x)$ for all variables $x$ occurring in $s$ .
2. $w(s) = w(t)$, $\texttt{count}(s, x) \leq \texttt{count}(t, x)$ for all variables $x$ occurring in $s$, and
one of the following subconditions holds:
* $t = f^n(s)$ where $n \geq 1$ and $f$ is the maximum w.r.t. the order $<$ on function symbols,
i.e. we have $f = \texttt{max_fct}()$.
* $s = f(s_1,\cdots,s_m)$, $t=g(t_1,\cdots,t_n)$, and $f<g$.
* $s = f(s_1,\cdots,s_m)$, $t=f(t_1,\cdots,t_m)$, and
$[s_1,\cdots,s_m] \prec_{\textrm{lex}} [t_1,\cdots,t_m]$.
Here, $\prec_{\textrm{lex}}$ denotes the *lexicographic extension* of the ordering $\prec_{\textrm{kbo}}$ to
lists of terms. It is defined as follows:
$$ [x] + R_1 \prec_{\textrm{lex}} [y] + R_2 \;\stackrel{_\textrm{def}}{\Longleftrightarrow}\;
x \prec_{\textrm{kbo}} y \,\vee\, \bigl(x = y \wedge R_1 \prec_{\textrm{lex}} R_2\bigr)
$$
Given two terms s and t the function is_simpler(s, t) returns True if $s \prec_{\textrm{kbo}} t$.
End of explanation
def is_simpler_list(S, T):
if S == [] == T:
return False
if is_simpler(S[0], T[0]):
return True
if S[0] == T[0]:
return is_simpler_list(S[1:], T[1:])
return False
def test():
#l = parse_term('(x * y) * z')
#r = parse_term('x * (y * z)')
l = parse_term('i(a)')
r = parse_term('i(i(a))')
print(f'is_simpler({to_str(r)}, {to_str(l)}) = {is_simpler(r, l)}')
print(f'is_simpler({to_str(l)}, {to_str(r)}) = {is_simpler(l, r)}')
test()
Explanation: Given two lists S and T of terms, the function is_simpler_list(S, T) checks whether S is lexicographically simpler than T if the elements of S and T are compared with the Knuth-Bendix ordering $\prec_{\textrm{kbo}}$. It is assumed that S and T have the same length.
End of explanation
class OrderException(Exception):
pass
Explanation: We define the class OrderException to be able to deal with equations that can't be ordered into a rewrite rule.
End of explanation
def order_equation(eq):
_, s, t = eq
if is_simpler(t, s):
return ('=', s, t)
elif is_simpler(s, t):
return ('=', t, s)
else:
Msg = f'Knuth-Bendix algorithm failed: Could not order {to_str(s)} = {to_str(t)}'
raise OrderException(Msg)
def test():
equation = 'i(i(a)) = i(i(i(i(a))))'
eq = parse_equation(equation)
print(f'order_equation({to_str(eq)}) = {to_str(order_equation(eq))}')
test()
Explanation: Given an equation eq and an Ordering of the function symbols occurring eq, the function order_equation orders the equation eq with respect to the Knuth-Bendix ordering, i.e. in the ordered equation, the right hand side is simpler than the left hand side. If the left hand side and the right hand side are incomparable, the function raises an OrderException.
End of explanation
def non_triv_positions(t):
if is_var(t):
return set()
_, *args = t
Result = { () }
for i, arg in enumerate(args):
Result |= { (i,) + a for a in non_triv_positions(arg) }
return Result
def test():
t = parse_term('x * i(x) * 1')
print(f'non_triv_positions({to_str(t)}) = {non_triv_positions(t)}')
test()
Explanation: Back to top
Critical Pairs
The central notion of the Knuth-Bendix algorithm is the notion of a critical pair.
Given two equations lhs1 = rhs1 and lhs2 = rhs2, a pair of terms (s, t) is a critical pair of these equations if we have the following:
- u is a non-trivial position in lhs1, i.e. lhs1/u is not a variable,
- The subterm lhs1/u is unifiable with lhs2, i.e.
$$\mu = \texttt{mgu}(\texttt{lhs}_1 / \texttt{u}, \texttt{lhs}_2) \not= \texttt{None},$$
- $s = \texttt{lhs}_1\mu[\texttt{u} \mapsto \texttt{rhs}_2\mu]$ and $t = \texttt{rhs}_1\mu$.
The idea is then that the term $\texttt{lhs1}\mu$ can be rewritten into different ways:
- $\texttt{lhs1}\mu \rightarrow \texttt{rhs1}\mu = t$,
- $\texttt{lhs1}\mu \rightarrow \texttt{lhs}_1\mu[\texttt{u} \mapsto \texttt{rhs}_2\mu] = s$.
The function critical_pairs implemented in this section computes the critical pairs between two rewrite rules.
Given a term t, the function non_triv_positions computes the set $\mathcal{P}os(t)$ of all positions in t that do not point to variables. Such positions are called non-trivial positions. Given a term t, the set $\mathcal{P}os(t)$ of all positions in $t$ is defined by induction on t.
1. $\mathcal{P}os(v) := \bigl{()\bigr} \quad \mbox{if $v$ is a variable} $
2. $\mathcal{P}os\bigl(f(t_0,\cdots,t_{n-1})\bigr) :=
\bigl{()\bigr} \cup
\bigl{ (i,) + u \mid i \in{0,\cdots,n-1} \wedge u \in \mathcal{P}os(t_i) \bigr}
$
Note that since we are programming in Python, positions are zero-based. Given a position $v$ in a term $t$, we define $t/v$ as the subterm of $t$ at position $v$ by induction on $t$:
1. $t/() := t$,
2. $f(t_0,\cdots,t_{n-1})/u := t_{u\texttt{[0]}}/u\texttt{[1:]}$.
Given a term $s$, a term $t$, and a position $u \in \mathcal{P}os(t)$, we also define the replacement of the subterm at position $u$ by $t$, written $s[u \mapsto t]$ by induction on $u$:
1. $s\bigl[() \mapsto t\bigr] := t$.
2. $f(s_0,\cdots,s_{n-1})\bigl[\bigl((i,) + u\bigr) \mapsto t\bigr] := f\bigl(s_0,\cdots,s_i[u \mapsto t],\cdots,s_{n-1}\bigr)$.
End of explanation
def subterm(t, u):
if len(u) == 0:
return t
_, *args = t
i, *ur = u
return subterm(args[i], ur)
def test():
t = parse_term('(x * i(x)) * 1')
print(f'subterm({to_str(t)}, (0,1)) = {to_str(subterm(t, (0,1)))}')
test()
Explanation: Given a term t and a position u in t, the function subterm(t, u) extracts the subterm that is located at position u, i.e. it computes t/u. The position u is zero-based.
End of explanation
def replace_at(t, u, s):
if len(u) == 0:
return s
i, *ur = u
f, *Args = t
NewArgs = []
for j, arg in enumerate(Args):
if j == i:
NewArgs.append(replace_at(arg, ur, s))
else:
NewArgs.append(arg)
return (f,) + tuple(NewArgs)
def test():
t = parse_term('(x * i(x)) * 1')
s = parse_term('a * b')
print(f'replace_at({to_str(t)}, (0,1), {to_str(s)}) = {to_str(replace_at(t, (0,1), s))}')
test()
Explanation: Given a term t, a position u in t and a term s, the function replace_at(t, u, s) replaces the subterm at position u with t. The position u uses zero-based indexing. Hence it returns the term
$$ t[u \mapsto s]. $$
End of explanation
def critical_pairs(eq1, eq2):
Vars = find_variables(eq1) | find_variables(eq2)
eq2 = rename_variables(eq2, Vars)
_, lhs1, rhs1 = eq1
_, lhs2, rhs2 = eq2
Result = set()
Positions = non_triv_positions(lhs1)
for u in Positions:
𝜇 = unify(subterm(lhs1, u), lhs2)
if 𝜇 != None:
lhs1_new = apply(replace_at(lhs1, u, rhs2), 𝜇)
rhs1_new = apply(rhs1, 𝜇)
Result.add( (('=', lhs1_new, rhs1_new), eq1, eq2))
return Result
def test():
eq1 = parse_equation('(x * y) * z = x * (y * z)')
eq2 = parse_equation('i(x) * x = 1')
for ((_, s, t), _, _) in critical_pairs(eq1, eq2):
print(f'critical_pairs({to_str(eq1)}, {to_str(eq2)}) = ' + '{' + f'{to_str(s)} = {to_str(t)}' + '}')
test()
Explanation: Given two equations eq1 and eq2, the function critical_pairs(eq1, eq2) computes the set of all critical pairs between these equations. A pair of terms (s, t) is a critical pair of eq1 and eq2 if we have
- eq1 has the form lhs1 = rhs1,
- eq2 has the form lhs2 = rhs2,
- u is a non-trivial position in lhs1,
- $\mu = \texttt{mgu}(\texttt{lhs}_1/u, \texttt{lhs}_2) \not= \texttt{None}$,
- $s = \texttt{lhs}_1\mu[u \mapsto \texttt{rhs}_2\mu]$ and $t = \texttt{rhs}_1\mu$.
End of explanation
def simplify_rules(RewriteRules, rule):
UnusedRules = [ rule ]
while UnusedRules != []:
UnchangedRules = set()
r = UnusedRules.pop()
for eq in RewriteRules:
simple = normal_form(eq, { r })
if simple != eq:
simple = normal_form(simple, RewriteRules | { r })
if simple[1] != simple[2]:
simple = order_equation(simple)
UnusedRules.append(simple)
print('simplified:')
print(f'old: {to_str(eq)}')
print(f'new: {to_str(simple)}')
else:
print(f'removed: {to_str(eq)}')
else:
UnchangedRules.add(eq)
RewriteRules = UnchangedRules | { r }
return RewriteRules
Explanation: Back to top
The Completion Algorithm
Given a set of RewriteRules and a newly derived rewrite rule, the function simplify_rules(RewriteRules, rule) adds rule to the set RewriteRules. When the function returns, every equation in the set RewriteRules is in normal form with respect to all other equations in RewriteRules.
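A minimal usage sketch (the concrete rules are made up purely for illustration and assume the parse_equation and order_equation functions defined earlier):
R = { order_equation(parse_equation('1 * x = x')),
      order_equation(parse_equation('(x * y) * z = x * (y * z)')) }
new_rule = order_equation(parse_equation('i(x) * x = 1'))
R = simplify_rules(R, new_rule)  # afterwards every rule in R is in normal form w.r.t. the other rules in R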
End of explanation
def print_equations(Equations):
cnt = 1
for _, l, r in Equations:
print(f'{cnt}. {to_str(l)} = {to_str(r)}')
cnt += 1
Explanation: The function print_equations prints the set of Equations one by one and numbers them.
End of explanation
def complexity(eq):
return len(to_str(eq))
Explanation: Given an equation eq of the form eq = ('=', lhs, rhs), the function complexity(eq) computes a measure of complexity for the given equation, namely the length of the string that represents the equation. This measure is later used to choose between equations: less complex equations are more interesting and should be considered first when computing critical pairs.
End of explanation
def all_critical_pairs(RewriteRules, eq):
Result = set()
for eq1 in RewriteRules:
Result |= { cp for cp in critical_pairs(eq1, eq) }
Result |= { cp for cp in critical_pairs(eq, eq1) }
return Result
Explanation: Given a set of equations RewriteRules and a single rewrite rule eq, the function all_critical_pairs(RewriteRules, eq) computes the set of all critical pairs that can be built between an equation from RewriteRules and the equation eq. It is assumed that eq is already an element of RewriteRules.
End of explanation
import heapq as hq
Explanation: The module heapq provides heap-based priority queues, which are implemented as lists.
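A minimal illustration of the part of the heapq API used below (entries are (priority, item) pairs and heappop always returns the entry with the smallest priority):
queue = []
hq.heappush(queue, (3, 'c'))
hq.heappush(queue, (1, 'a'))
hq.heappush(queue, (2, 'b'))
print(hq.heappop(queue))   # prints (1, 'a')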
End of explanation
def knuth_bendix_algorithm(file):
Equations = set()
Axioms = set(parse_file(file))
RewriteRules = set()
try:
for eq in Axioms:
ordered_eq = order_equation(eq)
Equations.add(ordered_eq)
print(f'given: {to_str(ordered_eq)}')
EquationQueue = []
for eq in Equations:
hq.heappush(EquationQueue, (complexity(eq), eq))
while EquationQueue != []:
_, eq = hq.heappop(EquationQueue)
eq = normal_form(eq, RewriteRules)
if eq[1] != eq[2]:
lr = order_equation(eq)
print(f'added: {to_str(lr)}')
Pairs = all_critical_pairs(RewriteRules | { lr }, lr)
for eq, r1, r2 in Pairs:
new_eq = normal_form(eq, RewriteRules)
if new_eq[1] != new_eq[2]:
print(f'found: {to_str(eq)} from {to_str(r1)}, {to_str(r2)}')
hq.heappush(EquationQueue, (complexity(new_eq), new_eq))
RewriteRules = simplify_rules(RewriteRules, lr)
except OrderException as e:
print(e)
print()
print_equations(RewriteRules)
return RewriteRules
Explanation: Given the name of a file that contains a set of equations, the function knuth_bendix_algorithm implements the Knuth-Bendix algorithm:
1. The equations read from the file are oriented into rewrite rules.
2. These oriented equations are pushed onto the priority queue EquationQueue according to their complexity.
3. The set RewriteRules is initialized as the empty set. The idea is that all critical pairs between
equations in RewriteRules have already been computed and that the resulting new equations have been added
to the priority queue EquationQueue.
4. As long as the priority queue EquationQueue is not empty, the least complex equation eq is removed from the
priority queue and simplified using the known RewriteRules.
5. If the simplified version of eq is not trivial, all critical pairs between eq and the
existing RewriteRules are computed. The resulting equations are pushed onto the priority queue EquationQueue.
6. When no new critical pairs can be found, the set of RewriteRules is returned.
This set is then guaranteed to be a confluent set of rewrite rules.
End of explanation
!cat Examples/group-theory-1.eqn || type Examples\group-theory-1.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/group-theory-1.eqn')
Explanation: Back to top
Examples
In this section we present a number of examples where the Knuth-Bendix completion algorithm is able to produce a confluent system of equations. In detail, we discuss the following examples:
1. Group Theory
2. Central Groupoids
3. Quasigroups
4. Quasigroups with Idempotence
5. Quasigroups with Unipotence
6. Loops
Group Theory
A structure $\mathcal{G} = \langle G, 1, *, i \rangle$ is a group iff
1. $G$ is a set.
2. $1 \in G$,
where $1$ is called the left-neutral element.
3. $*: G \times G \rightarrow G$,
   where $*$ is called the multiplication of $\mathcal{G}$.
4. $i: G \rightarrow G$,
where for any $x \in G$ the element $i(x)$ is called the left-inverse of $x$.
5. The following equations hold for all $x,y,z \in G$:
* $1 * x = x$, i.e. $1$ is a left-neutral element.
* $i(x) * x = 1$, i.e. $i(x)$ is a left-inverse of $x$.
* $(x * y) * z = x * (y * z)$, i.e. the multiplication is associative.
A typical example of a group is the set of invertible $n \times n$ matrices.
Given the axioms defining a group, the Knuth-Bendix completion algorithm is able to prove the following:
1. The left neutral element is also a right neutral element, we have:
$$ x * 1 = x \quad \mbox{for all $x\in G$.} $$
2. The left inverse is also a right inverse, we have:
$$ x * i(x) = 1 \quad \mbox{for all $x\in G$.} $$
3. The operations $i$ and $*$ commute as follows:
$$ i(x * y) = i(y) * i(x) \quad \mbox{for all $x,y\in G$.}$$
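For reference, the completion of these three axioms is the classical example from the original paper of Knuth and Bendix: it terminates with a confluent system of ten rewrite rules, which (up to the orientation induced by the chosen ordering and the naming of the variables) reads as follows:
1. $1 * x \rightarrow x$
2. $x * 1 \rightarrow x$
3. $i(x) * x \rightarrow 1$
4. $x * i(x) \rightarrow 1$
5. $i(1) \rightarrow 1$
6. $i(i(x)) \rightarrow x$
7. $(x * y) * z \rightarrow x * (y * z)$
8. $i(x) * (x * y) \rightarrow y$
9. $x * (i(x) * y) \rightarrow y$
10. $i(x * y) \rightarrow i(y) * i(x)$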
End of explanation
!cat Examples/group-theory-2.eqn || type Examples\group-theory-2.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/group-theory-2.eqn')
Explanation: It is natural to ask whether the axiom describing the left neutral element and the axiom describing the left inverse can be replaced by corresponding axioms that require $1$ to be a right neutral element and $i(x)$ to be a right inverse. The Knuth-Bendix completion algorithm shows that this is indeed the case.
End of explanation
!cat Examples/lr-system.eqn || type Examples\lr-system.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/lr-system.eqn')
Explanation: LR Systems
Next, it is natural to ask what happens if we have a left neutral element and a right inverse. Algebraic Structures of this kind are called LR systems. The Knuth-Bendix completion algorithm shows that, in general, LR systems are different from groups.
End of explanation
!cat Examples/rl-system.eqn || type Examples\rl-system.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/rl-system.eqn')
Explanation: RL Systems
Similarly, if we have a right neutral element and a left inverse, the resulting structure need not be a group. Systems of this kind are called RL systems.
End of explanation
!cat Examples/central-groupoid.eqn || type Examples\central-groupoid.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/central-groupoid.eqn')
Explanation: Central Groupoids
A structure $\mathcal{G} = \langle G, * \rangle$ is a central groupoid iff
1. $G$ is a non-empty set.
2. $*: G \times G \rightarrow G$.
3. The following equation holds for all $x,y,z \in G$:
$$ (x * y) * (y * z) = y $$
Central groupoids were introduced by Trevor Evans in his paper Products of Points—Some Simple Algebras and Their Identities and are also discussed by Donald E. Knuth in his paper Notes on Central Groupoids.
End of explanation
!cat Examples/quasigroups.eqn || type Examples\quasigroups.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/quasigroups.eqn')
Explanation: Back to top
Quasigroups
A structure $\mathcal{G} = \langle G, *, /, \backslash \rangle$ is a quasigroup iff
1. $G$ is a non-empty set.
2. $*: G \times G \rightarrow G$,
   where $*$ is called the multiplication of $\mathcal{G}$.
3. $/: G \times G \rightarrow G$,
where $/$ is called the left division of $\mathcal{G}$.
4. $\backslash: G \times G \rightarrow G$,
where $\backslash$ is called the right division of $\mathcal{G}$.
5. The following equations hold for all $x,y \in G$:
* $x * (x \backslash y) = y$,
* $(x / y) * y = x$,
* $x \backslash (x * y) = y$,
* $(x * y) / y = x$.
End of explanation
!cat Examples/quasigroup-idempotence.eqn || type Examples\quasigroup-idempotence.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/quasigroup-idempotence.eqn')
Explanation: Quasigroups with Idempotence
A quasigroup with idempotence is a quasigroup that additionally satisfies the identity $x * x = x$. Therefore, a structure $\mathcal{G} = \langle G, *, /, \backslash \rangle$ is a quasigroup with idempotence iff
1. $G$ is a set.
2. $*: G \times G \rightarrow G$,
   where $*$ is called the multiplication of $\mathcal{G}$.
3. $/: G \times G \rightarrow G$,
where $/$ is called the left division of $\mathcal{G}$.
4. $\backslash: G \times G \rightarrow G$,
where $\backslash$ is called the right division of $\mathcal{G}$.
5. The following equations hold for all $x,y \in G$:
* $x * (x \backslash y) = y$,
* $(x / y) * y = x$,
* $x \backslash (x * y) = y$,
* $(x * y) / y = x$,
* $x * x = x$.
End of explanation
!cat Examples/quasigroup-unipotence.eqn || type Examples\quasigroup-unipotence.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/quasigroup-unipotence.eqn')
Explanation: Quasigroups with Unipotence
A quasigroup with unipotence is a quasigroup that additionally satisfies the identity $x * x = 1$,
where $1$ is a constant symbol. Therefore, a structure $\mathcal{G} = \langle G, 1, *, /, \backslash \rangle$ is a quasigroup with unipotence iff
1. $G$ is a set.
2. $1 \in G$.
3. $*: G \times G \rightarrow G$,
   where $*$ is called the multiplication of $\mathcal{G}$.
4. $/: G \times G \rightarrow G$,
   where $/$ is called the left division of $\mathcal{G}$.
5. $\backslash: G \times G \rightarrow G$,
   where $\backslash$ is called the right division of $\mathcal{G}$.
6. The following equations hold for all $x,y \in G$:
* $x * (x \backslash y) = y$,
* $(x / y) * y = x$,
* $x \backslash (x * y) = y$,
* $(x * y) / y = x$,
* $x * x = 1$.
End of explanation
!cat Examples/loops.eqn || type Examples\loops.eqn
%%time
Rules = knuth_bendix_algorithm('Examples/loops.eqn')
Explanation: Loops
A loop is a quasigroup that additionally has an identity element. Therefore, a structure $\mathcal{G} = \langle G, 1, *, /, \backslash \rangle$ is a loop iff
1. $G$ is a set.
2. $1 \in G$.
3. $*: G \times G \rightarrow G$,
   where $*$ is called the multiplication of $\mathcal{G}$.
4. $/: G \times G \rightarrow G$,
   where $/$ is called the left division of $\mathcal{G}$.
5. $\backslash: G \times G \rightarrow G$,
   where $\backslash$ is called the right division of $\mathcal{G}$.
6. The following equations hold for all $x,y \in G$:
* $1 * x = x$,
* $x * 1 = x$,
* $x * (x \backslash y) = y$,
* $(x / y) * y = x$,
* $x \backslash (x * y) = y$,
* $(x * y) / y = x$.
End of explanation |
1,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Table
Step2: View Table
Step3: Create New Empty Table
Step4: Copy Contents Of First Table Into Empty Table
Step5: View Previously Empty Table | Python Code:
# Ignore
%load_ext sql
%sql sqlite://
%config SqlMagic.feedback = False
Explanation: Title: Copy Data From One Table To Another
Slug: copy_data_between_tables
Summary: Copy Data From One Table To Another in SQL.
Date: 2016-05-01 12:00
Category: SQL
Tags: Basics
Authors: Chris Albon
Note: This tutorial was written using Catherine Devlin's SQL in Jupyter Notebooks library. If you are not using a Jupyter Notebook, you can ignore the two lines of code below and any line containing %%sql. Furthermore, this tutorial uses SQLite's flavor of SQL; your version might have some differences in syntax.
For more, check out Learning SQL by Alan Beaulieu.
End of explanation
%%sql
-- Create a table of criminals_1
CREATE TABLE criminals_1 (pid, name, age, sex, city, minor);
INSERT INTO criminals_1 VALUES (412, 'James Smith', 15, 'M', 'Santa Rosa', 1);
INSERT INTO criminals_1 VALUES (234, 'Bill James', 22, 'M', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (632, 'Stacy Miller', 23, 'F', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (621, 'Betty Bob', NULL, 'F', 'Petaluma', 1);
INSERT INTO criminals_1 VALUES (162, 'Jaden Ado', 49, 'M', NULL, 0);
INSERT INTO criminals_1 VALUES (901, 'Gordon Ado', 32, 'F', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (512, 'Bill Byson', 21, 'M', 'Santa Rosa', 0);
INSERT INTO criminals_1 VALUES (411, 'Bob Iton', NULL, 'M', 'San Francisco', 0);
Explanation: Create Table
End of explanation
%%sql
-- Select all
SELECT *
-- From the table 'criminals_1'
FROM criminals_1
Explanation: View Table
End of explanation
%%sql
-- Create a table called criminals_2
CREATE TABLE criminals_2 (pid, name, age, sex, city, minor);
Explanation: Create New Empty Table
End of explanation
%%sql
-- Insert into the empty table
INSERT INTO criminals_2
-- Everything
SELECT *
-- From the first table
FROM criminals_1;
Explanation: Copy Contents Of First Table Into Empty Table
End of explanation
%%sql
-- Select everything
SELECT *
-- From the previously empty table
FROM criminals_2
Explanation: View Previously Empty Table
End of explanation |
1,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of Making Sky Plots from BOSS Meta Data
Examples of using the Basemap and healpy packages to make all sky maps of meta data accessed with the bossdata package. We use data from the BOSS quasar catalog. We also use astropy to show the position of the galactic plane.
Package Initialization
Step1: Ignore expected (harmless) warnings.
Step2: Query Database
Get list of ra, dec, z, and warning flags for quasar observations.
Step3: Plot redshift distribution of quasar observations
Step4: Sky Plots
Plot an "all sky map" using healpix binning. We use the Eckert IV projection as it is the best area preserving projection (see this link for discussion of "best" map projections).
Step5: Show distribution on sky
Plot the number of quasars per square degree
Step6: Plot the number of high redshift quasars per square degree
Step7: Show mean redshift on sky
We can show the mean value of a quantity on the sky as well
Step8: Using smaller bins (adjust using the nside parameter), we can see finer details | Python Code:
%pylab inline
from mpl_toolkits.basemap import Basemap
from matplotlib.collections import PolyCollection
import astropy.units as u
from astropy.coordinates import SkyCoord
import healpy as hp
print(hp.version.__version__)
import bossdata.meta
print(bossdata.__version__)
Explanation: Examples of Making Sky Plots from BOSS Meta Data
Examples of using the Basemap and healpy packages to make all sky maps of meta data accessed with the bossdata package. We use data from the BOSS quasar catalog. We also use astropy to show the position of the galactic plane.
Package Initialization
End of explanation
import warnings, matplotlib.cbook
warnings.filterwarnings('ignore', category=matplotlib.cbook.mplDeprecation)
Explanation: Ignore expected (harmless) warnings.
End of explanation
quasar_catalog = bossdata.meta.Database(quasar_catalog=True)
quasar_table = quasar_catalog.select_all(what='RA,DEC,Z_VI,BAL_FLAG_VI,ZWARNING', max_rows=0)
print('Found {0} total quasars'.format(len(quasar_table)))
bal_flagged = quasar_table[quasar_table['BAL_FLAG_VI'] != 0]
print('Found {0} with BAL identified from visual inspection'.format(len(bal_flagged)))
zwarning_flagged = quasar_table[quasar_table['ZWARNING'] != 0]
print('Found {0} with ZWARNING from pipepline'.format(len(zwarning_flagged)))
hiz_quasars = quasar_table[(quasar_table['Z_VI'] > 2.1) & (quasar_table['Z_VI'] < 3.5)]
print('Found {0} in high redshift sample (2.1 < z < 3.5)'.format(len(hiz_quasars)))
np.min(quasar_table['Z_VI']), np.max(quasar_table['Z_VI'])
Explanation: Query Database
Get list of ra, dec, z, and warning flags for quasar observations.
End of explanation
plt.figure(figsize=(8,6))
dr12_survey_area = 10400.0 # square degrees
wgt = 1.0/dr12_survey_area
zbins = np.linspace(0,6.5,66)
plt.hist(quasar_table['Z_VI'], weights=wgt*np.ones(len(quasar_table)), label='DR12Q', bins=zbins, histtype='step')
plt.hist(bal_flagged['Z_VI'], weights=wgt*np.ones(len(bal_flagged)), label='BAL_FLAG_VI != 0', bins=zbins, histtype='step')
plt.hist(zwarning_flagged['Z_VI'], weights=wgt*np.ones(len(zwarning_flagged)), label='ZWARNING != 0', bins=zbins, histtype='step')
plt.xlabel(r'Redshift $z$')
plt.ylabel(r'Number of quasars per $\Delta z = %.1f$ per sq degree' % (zbins[1]-zbins[0]))
plt.axvline(2.1, c='k', ls='--', lw=2)
plt.axvline(3.5, c='k', ls='--', lw=2)
plt.xlim(0,6.5)
plt.grid()
plt.legend()
plt.show()
Explanation: Plot redshift distribution of quasar observations
End of explanation
def plot_sky(ra, dec, data=None, nside=16, label='', projection='eck4', cmap=plt.get_cmap('jet'), norm=None,
hide_galactic_plane=False):
# get pixel area in degrees
pixel_area = hp.pixelfunc.nside2pixarea(nside, degrees=True)
# find healpixels associated with input vectors
pixels = hp.ang2pix(nside, 0.5*np.pi-np.radians(dec), np.radians(ra))
# find unique pixels
unique_pixels = np.unique(pixels)
# count number of points in each pixel
bincounts = np.bincount(pixels)
# if no data provided, show counts per sq degree
# otherwise, show mean per pixel
if data is None:
values = bincounts[unique_pixels]/pixel_area
else:
weighted_counts = np.bincount(pixels, weights=data)
values = weighted_counts[unique_pixels]/bincounts[unique_pixels]
# find pixel boundaries
corners = hp.boundaries(nside, unique_pixels, step=1)
corner_theta, corner_phi = hp.vec2ang(corners.transpose(0,2,1))
corner_ra, corner_dec = np.degrees(corner_phi), np.degrees(np.pi/2-corner_theta)
# set up basemap
m = Basemap(projection=projection, lon_0=90, resolution='l', celestial=True)
m.drawmeridians(np.arange(0, 360, 30), labels=[0,0,1,0], labelstyle='+/-')
m.drawparallels(np.arange(-90, 90, 15), labels=[1,0,0,0], labelstyle='+/-')
m.drawmapboundary()
# convert sky coords to map coords
x,y = m(corner_ra, corner_dec)
# regroup into pixel corners
verts = np.array([x.reshape(-1,4), y.reshape(-1,4)]).transpose(1,2,0)
# Make the collection and add it to the plot.
coll = PolyCollection(verts, array=values, cmap=cmap, norm=norm, edgecolors='none')
plt.gca().add_collection(coll)
plt.gca().autoscale_view()
if not hide_galactic_plane:
# generate vector in galactic coordinates and convert to equatorial coordinates
galactic_l = np.linspace(0, 2*np.pi, 1000)
galactic_plane = SkyCoord(l=galactic_l*u.radian, b=np.zeros_like(galactic_l)*u.radian, frame='galactic').fk5
# project to map coordinates
galactic_x, galactic_y = m(galactic_plane.ra.degree, galactic_plane.dec.degree)
m.scatter(galactic_x, galactic_y, marker='.', s=2, c='k')
# Add a colorbar for the PolyCollection
plt.colorbar(coll, orientation='horizontal', pad=0.01, aspect=40, label=label)
return m
Explanation: Sky Plots
Plot an "all sky map" using healpix binning. We use the Eckert IV projection as it is the best area preserving projection (see this link for discussion of "best" map projections).
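As a tiny illustration of the healpix binning performed by plot_sky (reusing the healpy and numpy imports from the top of this notebook), this is how a single (ra, dec) position in degrees is mapped to its pixel index:
ra, dec = 180.0, 30.0
pixel = hp.ang2pix(16, 0.5 * np.pi - np.radians(dec), np.radians(ra))   # nside = 16
print(pixel)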
End of explanation
plt.figure(figsize=(12,9))
plot_sky(quasar_table['RA'].data, quasar_table['DEC'].data, label='Number of quasars per square degree')
plt.show()
Explanation: Show distribution on sky
Plot the number of quasars per square degree:
End of explanation
plt.figure(figsize=(12,9))
plot_sky(hiz_quasars['RA'].data, hiz_quasars['DEC'].data, label='Number of quasars per square degree')
plt.show()
Explanation: Plot the number of high redshift quasars per square degree:
End of explanation
plt.figure(figsize=(12,9))
plot_sky(quasar_table['RA'].data, quasar_table['DEC'].data, data=quasar_table['Z_VI'].data,
label='Mean redshift', nside=16,
norm=mpl.colors.Normalize(vmin=1, vmax=3))
plt.show()
Explanation: Show mean redshift on sky
We can show the mean value of a quantity on the sky as well:
End of explanation
plt.figure(figsize=(12,9))
plot_sky(quasar_table['RA'].data, quasar_table['DEC'].data, data=quasar_table['Z_VI'].data,
label='Mean redshift', nside=32,
norm=mpl.colors.Normalize(vmin=1, vmax=3))
plt.show()
Explanation: Using smaller bins (adjust using the nside parameter), we can see finer details:
End of explanation |
1,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Defining a Custom Preprocessor and Extrapolator
Here you will be creating a trivial preprocessor and a trivial extrapolator
following the API.
You start by importing the necessary modules.
Step1: Preprocessor
Defining a trivial preprocessor that returns a zeros map for any given input
map.
Step2: Make an input map that we will run the preprocessor on.
This will be changed to using the sample HMI image.
aMap2D = mp.Map('C
Step3: Instansiate the preprocessor and process the input map.
Step4: You can plot the preprocessed map using peek.
Step5: You can also access the metadata of the preprocessor like any map
Step6: Extrapolator
Defining a trivial extrapolator that returns a volume of one vectors.
Step7: Instansiate the preprocessor and extrapolate.
Step10: Testing an extrapolator | Python Code:
# General imports
import sunpy.map as mp
import numpy as np
from mayavi import mlab # Necessary for visulisation
# Module imports
from solarbextrapolation.preprocessors import Preprocessors
from solarbextrapolation.extrapolators import Extrapolators
from solarbextrapolation.map3dclasses import Map3D
from solarbextrapolation.visualisation_functions import visualise
Explanation: Defining a Custom Preprocessor and Extrapolator
Here you will be creating a trivial preprocessor and a trivial extrapolator
following the API.
You start by importing the necessary modules.
End of explanation
class PreZeros(Preprocessors):
def __init__(self, map_magnetogram):
super(PreZeros, self).__init__(map_magnetogram)
def _preprocessor(self):
# Adding in custom parameters to the meta
self.meta['preprocessor_routine'] = 'Zeros Preprocessor'
# Creating the trivial zeros map of the same shape as the input map
map_output = mp.Map((np.zeros(self.map_input.data.shape),
self.meta))
# Outputting the map.
return map_output
Explanation: Preprocessor
Defining a trivial preprocessor that returns a zeros map for any given input
map.
End of explanation
from solarbextrapolation.example_data_generator import generate_example_data, dummyDataToMap
import astropy.units as u
aMap2D = arr_Data = dummyDataToMap(generate_example_data([ 20, 20 ],u.Quantity([ -10.0, 10.0 ] * u.arcsec),u.Quantity([ -10.0, 10.0 ] * u.arcsec)), u.Quantity([ -10.0, 10.0 ] * u.arcsec), u.Quantity([ -10.0, 10.0 ] * u.arcsec))
Explanation: Make an input map that we will run the preprocessor on.
This will be changed to using the sample HMI image.
aMap2D = mp.Map('C://git//solarextrapolation//solarextrapolation//data//example_data_(100x100)__01_hmi.fits')
End of explanation
aPrePro = PreZeros(aMap2D.submap([0, 10]*u.arcsec, [0, 10]*u.arcsec))
aPreProMap = aPrePro.preprocess()
Explanation: Instantiate the preprocessor and process the input map.
End of explanation
aPreProMap.peek()
Explanation: You can plot the preprocessed map using peek.
End of explanation
print("preprocessor_routine: " + str(aPreProMap.meta['preprocessor_routine']))
print("preprocessor_duration: " + str(aPreProMap.meta['preprocessor_duration']))
Explanation: You can also access the metadata of the preprocessor like any map:
End of explanation
class ExtOnes(Extrapolators):
def __init__(self, map_magnetogram, **kwargs):
super(ExtOnes, self).__init__(map_magnetogram, **kwargs)
def _extrapolation(self):
# Adding in custom parameters to the meta
self.meta['extrapolator_routine'] = 'Ones Extrapolator'
#arr_4d = np.ones([self.map_boundary_data.data.shape[0], self.map_boundary_data.data.shape[0], self.z, 3])
arr_4d = np.ones(self.shape.tolist() + [3])
return Map3D(arr_4d, self.meta)
Explanation: Extrapolator
Defining a trivial extrapolator that returns a volume of one vectors.
End of explanation
aExt = ExtOnes(aPreProMap, zshape=10)
aMap3D = aExt.extrapolate()
Explanation: Instantiate the extrapolator and extrapolate.
End of explanation
fig = visualise(aMap3D,
boundary=aPreProMap,
show_boundary_axes=False,
show_volume_axes=False,
debug=False)
mlab.show()
# aPreProData = aMap2D.submap([0,10], [0,10])
# Some checks:
#aPreProData.data # Should be a 2D zeros array.
#aPreProData.meta
#aPreProData.meta['preprocessor_routine']
#aPreProData.meta['preprocessor_start_time']
Explanation: You can visualise the field using MayaVi.
End of explanation
# Define trivial extrapolator
class ExtZeros(Extrapolators):
def __init__(self, map_magnetogram, **kwargs):
super(ExtZeros, self).__init__(map_magnetogram, **kwargs)
def _extrapolation(self):
# Adding in custom parameters to the meta
self.meta['extrapolator_routine'] = 'Zeros Extrapolator'
arr_4d = np.zeros([self.map_boundary_data.data.shape[0],
self.map_boundary_data.data.shape[0], self.z, 3])
return Map3D((arr_4d, self.meta))
aExt = ExtZeros(
aPreProData,
filepath='C://Users/Alex/solarextrapolation/solarextrapolation/3Dmap.m3d')
aMap3D = aExt.extrapolate()
# Some checks:
#aMap3D.data # Should be a 4D zeros array.
#aMap3D.meta
#aMap3D.meta['extrapolator_routine']
#aMap3D.meta['extrapolator_start_time']
# Testing a Map3DCube
aMapCube = Map3DCube(aMap3D, aMap3D)
aMapCube[0]
aMapCube[0].data
aMapCube[0].meta
aMapCube[1].data
aMapCube[1].meta
Explanation: Testing an extrapolator
End of explanation |
1,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 4 Assessment
Step1: Now, let's plot a digit from the dataset
Step5: Before we implement PCA, we will need to do some data preprocessing. In this assessment, some of them
will be implemented by you, others we will take care of. However, when you are working on real world problems, you will need to do all these steps by yourself!
The preprocessing steps we will do are
1. Convert unsigned integer 8 (uint8) encoding of pixels to a floating point number between 0-1.
2. Subtract from each image the mean $\mu$.
3. Scale each dimension of each image by $\frac{1}{\sigma}$ where $\sigma$ is the standard deviation of this dimension across the whole dataset.
The steps above ensure that our images will have zero mean and one variance. These preprocessing
steps are also known as Data Normalization or Feature Scaling.
1. PCA
Now we will implement PCA. Before we do that, let's pause for a moment and
think about the steps for performing PCA. Assume that we are performing PCA on
some dataset $\boldsymbol X$ for $M$ principal components.
We then need to perform the following steps, which we break into parts
Step7: Now, with the help of the functions you have implemented above, let's implement PCA! When you implement PCA, do take advantage of the functions that you have implemented above.
Step8: The greater number of of principal components we use, the smaller will our reconstruction
error be. Now, let's answer the following question
Step9: We can also put these numbers into perspective by plotting them.
Step10: But numbers don't tell us everything! Just what does it mean qualitatively for the loss to decrease from around
$450.0$ to less than $100.0$?
Let's find out! In the next cell, we draw the original eight as the leftmost image. Then we show the reconstruction of the image on the right, in descending number of principal components used.
Step11: We can also browse throught the reconstructions for other digits. Once again, interact becomes handy.
Step13: 2. PCA for high-dimensional datasets
Sometimes, the dimensionality of our dataset may be larger than the number of data points we
have. Then it might be inefficient to perform PCA with the implementation above. Instead,
as mentioned in the lectures, we can implement PCA in a more efficient manner, which we
call PCA for high-dimensional data (PCA_high_dim).
Consider the normalized data matrix $\boldsymbol{\bar{X}}$ of size $N \times D$ where $D > N$. To do PCA we perform the following steps
Step14: Given the same dataset, PCA_high_dim and PCA should give the same output.
Assuming we have implemented PCA correctly, we can then use PCA to test the correctness
of PCA_high_dim.
We can use this invariant
to test our implementation of PCA_high_dim, assuming that we have correctly implemented PCA.
Step15: Now let's compare the running time between PCA and PCA_high_dim.
Tips for running benchmarks or computationally expensive code
Step16: We first benchmark the time taken to compute $\boldsymbol X^T\boldsymbol X$ and $\boldsymbol X\boldsymbol X^T$. Jupyter's magic command %time is quite handy.
Next we benchmark PCA, PCA_high_dim.
Step17: Alternatively, use the time magic command.
Step18: We can also compare the running time for PCA and PCA_high_dim directly. Spend some time and think about what this plot means. We mentioned in lectures that PCA_high_dim are advantageous when
we have dataset size $N$ < data dimension $D$. Although our plot for the two running times does not intersect exactly at $N = D$, it does show the trend.
Step19: Again, with the magic command time. | Python Code:
# PACKAGE: DO NOT EDIT
import numpy as np
import timeit
# PACKAGE: DO NOT EDIT
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
Explanation: Week 4 Assessment: Principal Component Analysis (PCA)
Learning Objective
In this notebook, we will implement PCA. We will implement the two versions of PCA as described in the lectures, which handle the case when the dataset size exceeds the dataset dimensionality, as well as the case when the dimensionality is greater than the size of the dataset.
We will break down the task of implementing PCA into small components and combine them in the end to produce the final algorithm. We will apply PCA to the MNIST dataset and observe how the reconstruction changes as we change the number of principal components used.
If you are having issues with the grader, be sure to checkout the Q&A.
If you are stuck with the programming assignments, you can visit the discussion forum and discuss with your peers.
End of explanation
from ipywidgets import interact
from sklearn.datasets import fetch_mldata
MNIST = fetch_mldata('MNIST original', data_home='./MNIST')
%matplotlib inline
plt.figure(figsize=(4,4))
plt.imshow(MNIST.data[0].reshape(28,28), cmap='gray');
Explanation: Now, let's plot a digit from the dataset:
End of explanation
# GRADED FUNCTION: DO NOT EDIT THIS LINE
# ===YOU SHOULD EDIT THIS FUNCTION===
def normalize(X):
Normalize the given dataset X
Args:
X: ndarray, dataset
Returns:
(Xbar, mean, std): ndarray, Xbar is the normalized dataset
with mean 0 and standard deviation 1; mean and std are the
mean and standard deviation respectively.
Note:
You will encounter dimensions where the standard deviation is
zero, for those when you do normalization the normalized data
will be NaN. Handle this by setting using `std = 1` for those
dimensions when doing normalization.
mu = np.mean(X, axis=0) # EDIT THIS
std = np.std(X, axis=0)
std_filled = std.copy()
std_filled[std==0] = 1.
Xbar = (X - mu) / std_filled # EDIT THIS
return Xbar, mu, std_filled
# GRADED FUNCTION: DO NOT EDIT THIS LINE
# ===YOU SHOULD EDIT THIS FUNCTION===
def eig(S):
Compute the eigenvalues and corresponding eigenvectors
for the covariance matrix S.
Args:
S: ndarray, covariance matrix
Returns:
(eigvals, eigvecs): ndarray, the eigenvalues and eigenvectors
Note:
the eigenvals and eigenvecs SHOULD BE sorted in descending
order of the eigen values
Hint: take a look at np.argsort for how to sort in numpy.
eVals, eVecs = np.linalg.eig(S)
order = np.absolute(eVals).argsort()[::-1]
eVals = eVals[order]
eVecs = eVecs[:,order]
return (eVals, eVecs) # EDIT THIS
# GRADED FUNCTION: DO NOT EDIT THIS LINE
# ===YOU SHOULD EDIT THIS FUNCTION===
def projection_matrix(B):
Compute the projection matrix onto the space spanned by `B`
Args:
B: ndarray of dimension (D, M), the basis for the subspace
Returns:
P: the projection matrix
P = B @ np.linalg.inv(B.T @ B) @ B.T # EDIT THIS
return P
Explanation: Before we implement PCA, we will need to do some data preprocessing. In this assessment, some of them
will be implemented by you, others we will take care of. However, when you are working on real world problems, you will need to do all these steps by yourself!
The preprocessing steps we will do are
1. Convert unsigned integer 8 (uint8) encoding of pixels to a floating point number between 0-1.
2. Subtract from each image the mean $\mu$.
3. Scale each dimension of each image by $\frac{1}{\sigma}$ where $\sigma$ is the standard deviation of this dimension across the whole dataset.
The steps above ensure that our images will have zero mean and one variance. These preprocessing
steps are also known as Data Normalization or Feature Scaling.
1. PCA
Now we will implement PCA. Before we do that, let's pause for a moment and
think about the steps for performing PCA. Assume that we are performing PCA on
some dataset $\boldsymbol X$ for $M$ principal components.
We then need to perform the following steps, which we break into parts:
Data normalization (normalize).
Find eigenvalues and corresponding eigenvectors for the covariance matrix $\boldsymbol S$.
Sort by the largest eigenvalues and the corresponding eigenvectors (eig).
After these steps, we can then compute the projection and reconstruction of the data onto the space spanned by the top $M$ eigenvectors.
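As a quick, illustrative sanity check of the helper functions implemented above (not part of the graded assignment), we can project a 2-D point onto the x-axis:
B = np.array([[1.0], [0.0]])              # basis of the 1-D subspace spanned by the x-axis
P = projection_matrix(B)                  # should equal [[1, 0], [0, 0]]
print(P @ np.array([3.0, 4.0]))           # expected output: [3. 0.]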
End of explanation
# GRADED FUNCTION: DO NOT EDIT THIS LINE
# ===YOU SHOULD EDIT THIS FUNCTION===
def PCA(X, num_components):
Args:
X: ndarray of size (N, D), where D is the dimension of the data,
and N is the number of datapoints
num_components: the number of principal components to use.
Returns:
X_reconstruct: ndarray of size (N, D) the reconstruction
of X from the first `num_components` principal components.
# Compute the data covariance matrix S
S = np.cov(X, rowvar=False, bias=True)
# Next find eigenvalues and corresponding eigenvectors for S by implementing eig().
eig_vals, eig_vecs = eig(S)
# Reconstruct the images from the lowerdimensional representation
# To do this, we first need to find the projection_matrix (which you implemented earlier)
# which projects our input data onto the vector space spanned by the eigenvectors
P = projection_matrix(eig_vecs[:,:num_components]) # projection matrix
# Then for each data point x_i in the dataset X
# we can project the original x_i onto the eigenbasis.
X_reconstruct = (P @ X.T).T
return X_reconstruct
## Some preprocessing of the data
NUM_DATAPOINTS = 1000
X = (MNIST.data.reshape(-1, 28 * 28)[:NUM_DATAPOINTS]) / 255.
Xbar, mu, std = normalize(X)
Explanation: Now, with the help of the functions you have implemented above, let's implement PCA! When you implement PCA, do take advantage of the functions that you have implemented above.
End of explanation
def mse(predict, actual):
return np.square(predict - actual).sum(axis=1).mean()
loss = []
reconstructions = []
for num_component in range(1, 100):
reconst = PCA(Xbar, num_component)
reconst = np.real(reconst)
error = mse(reconst, Xbar)
reconstructions.append(reconst)
# print('n = {:d}, reconstruction_error = {:f}'.format(num_component, error))
loss.append((num_component, error))
reconstructions = np.asarray(reconstructions)
reconstructions = reconstructions * std + mu # "unnormalize" the reconstructed image
loss = np.asarray(loss)
loss
Explanation: The greater the number of principal components we use, the smaller will our reconstruction
error be. Now, let's answer the following question:
How many principal components do we need
in order to reach a Mean Squared Error (MSE) of less than $100$ for our dataset?
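One way to read the answer directly off the loss array computed above (purely illustrative; it picks the first number of components whose MSE drops below $100$):
below_100 = loss[loss[:, 1] < 100.0]
print(int(below_100[0, 0]) if len(below_100) > 0 else 'not reached')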
End of explanation
fig, ax = plt.subplots()
ax.plot(loss[:,0], loss[:,1]);
ax.axhline(100, linestyle='--', color='r', linewidth=2)
ax.xaxis.set_ticks(np.arange(1, 100, 5));
ax.set(xlabel='num_components', ylabel='MSE', title='MSE vs number of principal components');
Explanation: We can also put these numbers into perspective by plotting them.
End of explanation
@interact(image_idx=(0, 1000))
def show_num_components_reconst(image_idx):
fig, ax = plt.subplots(figsize=(20., 20.))
actual = X[image_idx]
x = np.concatenate([actual[np.newaxis, :], reconstructions[:, image_idx]])
ax.imshow(np.hstack(x.reshape(-1, 28, 28)[np.arange(10)]),
cmap='gray');
ax.axvline(28, color='orange', linewidth=2)
Explanation: But numbers don't tell us everything! Just what does it mean qualitatively for the loss to decrease from around
$450.0$ to less than $100.0$?
Let's find out! In the next cell, we draw the original eight as the leftmost image. Then we show the reconstruction of the image on the right, in descending number of principal components used.
End of explanation
@interact(i=(0, 10))
def show_pca_digits(i=1):
plt.figure(figsize=(4,4))
actual_sample = X[i].reshape(28,28)
reconst_sample = (reconst[i, :] * std + mu).reshape(28, 28)
plt.imshow(np.hstack([actual_sample, reconst_sample]), cmap='gray')
plt.show()
Explanation: We can also browse throught the reconstructions for other digits. Once again, interact becomes handy.
End of explanation
# GRADED FUNCTION: DO NOT EDIT THIS LINE
def PCA_high_dim(X, num_components):
Compute PCA for small sample size.
Args:
X: ndarray of size (N, D), where D is the dimension of the data,
and N is the number of data points in the training set. You may assume the input
has been normalized.
num_components: the number of principal components to use.
Returns:
X_reconstruct: (N, D) ndarray. the reconstruction
of X from the first `num_components` principal components.
N, D = X.shape
M = (1/N)*(X @ X.T) # EDIT THIS, compute the matrix \frac{1}{N}XX^T.
eig_vals, eig_vecs = eig(M) # EDIT THIS, compute the eigenvalues.
U = X.T @ eig_vecs # EDIT THIS. Compute the eigenvectors for the original PCA problem.
# Similar to what you would do in PCA, compute the projection matrix,
# then perform the projection.
P = projection_matrix(U[:, 0:num_components]) # projection matrix
X_reconstruct = (P @ X.T).T # EDIT THIS.
return X_reconstruct
Explanation: 2. PCA for high-dimensional datasets
Sometimes, the dimensionality of our dataset may be larger than the number of data points we
have. Then it might be inefficient to perform PCA with the implementation above. Instead,
as mentioned in the lectures, we can implement PCA in a more efficient manner, which we
call PCA for high-dimensional data (PCA_high_dim).
Consider the normalized data matrix $\boldsymbol{\bar{X}}$ of size $N \times D$ where $D > N$. To do PCA we perform the following steps:
We solve the following eigenvalue/eigenvector equation for the matrix $\frac{1}{N} \boldsymbol{\bar{X}} \boldsymbol{\bar{X}}^T$, i.e. we solve for $\lambda_i$, $\boldsymbol c_i$ in
$$\frac{1}{N} \boldsymbol{\bar{X}} \boldsymbol{\bar{X}}^T \boldsymbol c_i = \lambda_i \boldsymbol c_i.$$
We want to recover original eigenvectors $\boldsymbol b_i$ of the data covariance matrix $\boldsymbol S = \frac{1}{N} \boldsymbol{\bar{X}^T} \boldsymbol{\bar{X}}$.
Left-multiplying the eigenvectors $\boldsymbol c_i$ by $\boldsymbol{\bar{X}}^T$ yields
$$\frac{1}{N} \boldsymbol{\bar{X}}^T \boldsymbol{\bar{X}} \boldsymbol{\bar{X}}^T \boldsymbol c_i = \lambda_i \boldsymbol{\bar{X}}^T \boldsymbol c_i$$ and we recover $\boldsymbol b_i=\boldsymbol{\bar{X}}^T \boldsymbol c_i$ as eigenvector of $\boldsymbol S$ with the eigenvalue $\lambda_i$.
End of explanation
np.testing.assert_almost_equal(PCA(Xbar, 2), PCA_high_dim(Xbar, 2))
# In fact, you can generate random input dataset to verify your implementation.
print('correct')
Explanation: Given the same dataset, PCA_high_dim and PCA should give the same output.
Assuming we have implemented PCA correctly, we can then use PCA to test the correctness
of PCA_high_dim.
We can use this invariant
to test our implementation of PCA_high_dim, assuming that we have correctly implemented PCA.
End of explanation
def time(f, repeat=10):
times = []
for _ in range(repeat):
start = timeit.default_timer()
f()
stop = timeit.default_timer()
times.append(stop-start)
return np.mean(times), np.std(times)
times_mm0 = []
times_mm1 = []
for datasetsize in np.arange(4, 784, step=20):
XX = Xbar[:datasetsize]
mu, sigma = time(lambda : XX.T @ XX)
times_mm0.append((datasetsize, mu, sigma))
mu, sigma = time(lambda : XX @ XX.T)
times_mm1.append((datasetsize, mu, sigma))
times_mm0 = np.asarray(times_mm0)
times_mm1 = np.asarray(times_mm1)
fig, ax = plt.subplots()
ax.set(xlabel='size of dataset', ylabel='running time')
bar = ax.errorbar(times_mm0[:, 0], times_mm0[:, 1], times_mm0[:, 2], label="$X^T X$ (PCA)", linewidth=2)
ax.errorbar(times_mm1[:, 0], times_mm1[:, 1], times_mm1[:, 2], label="$X X^T$ (PCA_high_dim)", linewidth=2)
ax.legend();
Explanation: Now let's compare the running time between PCA and PCA_high_dim.
Tips for running benchmarks or computationally expensive code:
When you have some computation that takes up a non-negligible amount of time. Try separating
the code that produces output from the code that analyzes the result (e.g. plot the results, comput statistics of the results). In this way, you don't have to recompute when you want to produce more analysis.
End of explanation
times0 = []
times1 = []
for datasetsize in np.arange(4, 784, step=100):
XX = Xbar[:datasetsize]
npc = 2
mu, sigma = time(lambda : PCA( XX, 2))
times0.append((datasetsize, mu, sigma))
mu, sigma = time(lambda : PCA_high_dim(XX, 2))
times1.append((datasetsize, mu, sigma))
times0 = np.asarray(times0)
times1 = np.asarray(times1)
Explanation: We first benchmark the time taken to compute $\boldsymbol X^T\boldsymbol X$ and $\boldsymbol X\boldsymbol X^T$. Jupyter's magic command %time is quite handy.
Next we benchmark PCA, PCA_high_dim.
End of explanation
%time Xbar.T @ Xbar
%time Xbar @ Xbar.T
pass # Put this here, so that our output does not show the result of computing `Xbar @ Xbar.T`
Explanation: Alternatively, use the time magic command.
End of explanation
fig, ax = plt.subplots()
ax.set(xlabel='number of datapoints', ylabel='run time')
ax.errorbar(times0[:, 0], times0[:, 1], times0[:, 2], label="PCA", linewidth=2)
ax.errorbar(times1[:, 0], times1[:, 1], times1[:, 2], label="PCA_high_dim", linewidth=2)
ax.legend();
Explanation: We can also compare the running time for PCA and PCA_high_dim directly. Spend some time and think about what this plot means. We mentioned in lectures that PCA_high_dim is advantageous when
we have dataset size $N$ < data dimension $D$. Although our plot for the two running times does not intersect exactly at $N = D$, it does show the trend.
End of explanation
%time PCA(Xbar, 2)
%time PCA_high_dim(Xbar, 2)
pass
Explanation: Again, with the magic command time.
End of explanation |
1,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quakebot
PART ONE
Step1: What we want
Step2: PART TWO
Step3: PART THREE
Step4: PART FOUR | Python Code:
earthquake = {
'rms': '1.85',
'updated': '2014-06-11T05:22:21.596Z',
'type': 'earthquake',
'magType': 'mwp',
'longitude': '-136.6561',
'gap': '48',
'depth': '10',
'dmin': '0.811',
'mag': '5.7',
'time': '2014-06-04T11:58:58.200Z',
'latitude': '59.0001',
'place': '73km WSW of Haines, Alaska',
'net': 'us',
'nst': '',
'id': 'usc000rauc'}
Explanation: Quakebot
PART ONE: Write your few tiny functions
End of explanation
def depth_to_words(quake):
if int(quake['depth']) <= 70:
return 'shallow'
elif int(quake['depth']) <= 300:
return 'intermediate'
else:
return 'deep'
print(depth_to_words(earthquake))
def mag_to_words(quake):
if float(quake['mag']) <= 4.0:
return 'minor'
elif float(quake['mag']) <= 5.0:
return 'moderate'
elif float(quake['mag']) <= 6.0:
return 'strong'
elif float(quake['mag']) <= 7.0:
return 'major'
else:
return 'gigantic'
print(mag_to_words(earthquake))
def mag_numbers(quake):
return str(quake['mag'])
print(mag_numbers(earthquake))
#type(mag_numbers(earthquake))
import dateutil.parser
#!pip install dateutils
def day(quake):
timestring = quake['time']
day = dateutil.parser.parse(timestring)
return day.strftime("%A")
print(day(earthquake))
def time(quake):
timestring = quake['time']
time = dateutil.parser.parse(timestring)
#if time < 12:00
if int(time.strftime("%H")) <= 4:
return 'night'
elif int(time.strftime("%H")) <= 12:
return 'morning'
elif int(time.strftime("%H")) <= 18:
return 'afternoon'
elif int(time.strftime("%H")) <= 22:
return 'evening'
else:
return 'night'
print(time(earthquake))
type(time(earthquake))
def date(quake):
datestring = quake['time']
date = dateutil.parser.parse(datestring)
return date.strftime("%B %d")
print(date(earthquake))
def location(quake):
location_string = str(quake['place'])
return location_string
print(location(earthquake))
Explanation: What we want: A DEPTH POWER, MAGNITUDE earthquake was reported DAY TIME_OF_DAY on DATE LOCATION.
Shallow earthquakes are between 0 and 70 km deep; intermediate earthquakes, 70 - 300 km deep; and deep earthquakes, 300 - 700
End of explanation
def eq_to_sentence(quake):
str_depth_to_words = str(depth_to_words(quake))
str_mag_to_words = str(mag_to_words(quake))
str_mag_numbers = str(mag_numbers(quake))
return 'A ' + str_depth_to_words + " " + str_mag_to_words + " " + 'Magnitude' \
+ " " + str_mag_numbers + " earthquake was reported " + day(quake) + " " \
+ time(quake) + " on " + date(quake) + " " + location(quake) + "."
print(eq_to_sentence(earthquake))
Explanation: PART TWO: Write the eq_to_sentence function
End of explanation
#from urllib.request import urlopen
#many_quakes = urlopen("http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/1.0_month.csv").read()
import pandas as pd
#quakes = pd.read_csv('http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/1.0_month.csv')
quakes = pd.read_csv('http://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/1.0_month.csv')
earthquakes = quakes.to_dict('records')
#earthquakes
for x in earthquakes:
if x['mag'] > 4.0:
print(eq_to_sentence(x))
Explanation: PART THREE: Doing it in bulk
End of explanation
def other_mag_to_words(quake):
mag_other = str(quake['mag'])
return mag_other
def type_of_event(quake):
return quake['type']
def date_other_events(quake):
datestring = quake['time']
date = dateutil.parser.parse(datestring)
return date.strftime("%B %d")
def location_other_events(quake):
location_string = str(quake['place'])
return location_string
def other_events_to_sentence(quake):
return "There was also a magnitude " + other_mag_to_words(quake) + \
" " + type_of_event(quake) + " on " + date_other_events(quake) + " " \
+ location_other_events(quake) + "."
for x in earthquakes:
if x['type'] != 'earthquake':
print(other_events_to_sentence(x))
Explanation: PART FOUR: The other bits
End of explanation |
1,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explore U.S. Births
In this project, I am working with the dataset, compiled by FiveThirtyEight [https
Step1: Converting Data Into A List Of Lists
to convert the dataset into a list of lists where each nested list contains integer values (not strings). We also need to remove the header row.
Step2: 3
Step3: Calculating Number Of Births Each Day Of Week
Step4: Creating A More General Function
it's better to create a single function that works for any column and specify the column we want as a parameter each time we call the function. | Python Code:
f = open('US_births_1994-2003_CDC_NCHS.csv', 'r')
data = f.read()
data
data_spl = data.split("\n")
data_spl
data_spl[0:10]
Explanation: Explore U.S. Births
In this project, I am working with the dataset, compiled by FiveThirtyEight [https://raw.githubusercontent.com/fivethirtyeight/data/master/births/US_births_1994-2003_CDC_NCHS.csv]
First things first, let's read in the CSV file and explore it. Split the string on the newline character ("\n").
End of explanation
def read_csv(input_csv):
f = open(input_csv, 'r')
data = f.read()
splited = data.split('\n')
string_list = splited[1:len(splited)]
final_list = []
for each in string_list:
int_fields = []
string_fields = each.split(',')
for each in string_fields:
int_fields.append(int(each))
final_list.append(int_fields)
return final_list
cdc_list = read_csv("US_births_1994-2003_CDC_NCHS.csv")
cdc_list[0:10]
Explanation: Converting Data Into A List Of Lists
to convert the dataset into a list of lists where each nested list contains integer values (not strings). We also need to remove the header row.
End of explanation
def month_births(input_ls):
births_per_month = {}
for each in input_ls:
month = each[1]
births = each[4]
if month in births_per_month:
births_per_month[month] = births_per_month[month] + births
else: births_per_month[month] = births
return births_per_month
cdc_month_births = month_births(cdc_list)
cdc_month_births
Explanation: 3: Calculating Number Of Births Each Month
Now that the data is in a more usable format, we can start to analyze it. Let's calculate the total number of births that occurred in each month, across all of the years in the dataset. We'll create a dictionary where each key is a unique month and each value is the number of births that happened in that month, across all years:
End of explanation
def dow_births(input_ls):
b_per_day = {}
for each in input_ls:
day_of_week = each[3]
births = each[4]
if day_of_week in b_per_day:
b_per_day[day_of_week] = b_per_day[day_of_week] + births
else:
b_per_day[day_of_week] = births
return b_per_day
cdc_day_births = dow_births(cdc_list)
cdc_day_births
Explanation: Calculating Number Of Births Each Day Of Week
End of explanation
def calc_counts(input_ls, column):
dictionary = {}
for each in input_ls:
births = each[4]
key = each[column]
if key in dictionary:
dictionary[key] = dictionary[key] + births
else:
dictionary[key] = births
return dictionary
cdc_year_births = calc_counts(cdc_list, 0)
cdc_year_births
cdc_month_births = calc_counts(cdc_list, 1)
cdc_month_births
cdc_dom_births = calc_counts(cdc_list, 2)
cdc_dom_births
cdc_dow_births = calc_counts(cdc_list, 3)
cdc_dow_births
def calc_min_max(dict_ls):
    # Return the largest and smallest values stored in the dictionary.
    values = dict_ls.values()
    return max(values), min(values)
g = calc_min_max(cdc_dow_births)
g
min_value = min(cdc_dow_births.values())
min_value
Explanation: Creating A More General Function
it's better to create a single function that works for any column and specify the column we want as a parameter each time we call the function.
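An equivalent, more compact implementation (shown purely as an illustration of an alternative design) uses collections.Counter from the standard library instead of a hand-rolled dictionary:
from collections import Counter

def calc_counts_counter(input_ls, column):
    # Sum the births column (index 4), grouped by the requested column.
    counts = Counter()
    for row in input_ls:
        counts[row[column]] += row[4]
    return dict(counts)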
End of explanation |
1,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Categorical Variables in Snorkel
This is a short tutorial on how to use categorical variables (i.e. more values than binary) in Snorkel. We'll use a completely toy scenario with three sentences and two LFs just to demonstrate the mechanics. Please see the main tutorial for a more comprehensive intro!
We'll highlight in bold all parts focusing on the categorical aspect.
Notes on Current Categorical Support
Step1: Step 1
Step2: Step 2
Step3: Now we extract candidates the same as in the Intro Tutorial (simplified here slightly)
Step4: Step 3
Step5: Now we apply the LFs to the candidates to produce our label matrix $L$
Step6: Step 4
Step7: Next, we can save the training marginals
Step8: And then reload (e.g. in another notebook)
Step9: Step 5 | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import numpy as np
from snorkel import SnorkelSession
session = SnorkelSession()
Explanation: Categorical Variables in Snorkel
This is a short tutorial on how to use categorical variables (i.e. more values than binary) in Snorkel. We'll use a completely toy scenario with three sentences and two LFs just to demonstrate the mechanics. Please see the main tutorial for a more comprehensive intro!
We'll highlight in bold all parts focusing on the categorical aspect.
Notes on Current Categorical Support:
The Viewer works in the categorical setting, but labeling Candidates in the Viewer does not.
Instead, you can import test / dev set labels from e.g. BRAT
The LogisticRegression and SparseLogisticRegression end models have been extended to the categorical setting, but other end models in contrib may not have been
Note: It's simple to make this change, so feel free to post an issue with requests for other end models!
End of explanation
from snorkel.parser import TSVDocPreprocessor, CorpusParser
doc_preprocessor = TSVDocPreprocessor('data/categorical_example.tsv')
corpus_parser = CorpusParser()
%time corpus_parser.apply(doc_preprocessor)
Explanation: Step 1: Preprocessing the data
End of explanation
from snorkel.models import candidate_subclass
Relationship = candidate_subclass('Relationship', ['person1', 'person2'], values=['Married', 'Employs', False])
Explanation: Step 2: Defining candidates
We'll define candidate relations between person mentions that now can take on one of three values:
python
['Married', 'Employs', False]
Note the importance of including a value for "not a relation of interest"- here we've used False, but any value could do.
Also note that None is a protected value -- denoting a labeling function abstaining -- so this cannot be used as a value.
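If you want to double-check how the declared values relate to the label space, you can simply inspect the subclass (both attributes are used later in this tutorial):
print(Relationship.values)        # the values declared above
print(Relationship.cardinality)   # 3 in this example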
End of explanation
from snorkel.candidates import Ngrams, CandidateExtractor
from snorkel.matchers import PersonMatcher
from snorkel.models import Sentence
# Define a Person-Person candidate extractor
ngrams = Ngrams(n_max=3)
person_matcher = PersonMatcher(longest_match_only=True)
cand_extractor = CandidateExtractor(
Relationship,
[ngrams, ngrams],
[person_matcher, person_matcher],
symmetric_relations=False
)
# Apply to all (three) of the sentences for this simple example
sents = session.query(Sentence).all()
# Run the candidate extractor
%time cand_extractor.apply(sents, split=0)
train_cands = session.query(Relationship).filter(Relationship.split == 0).all()
print("Number of candidates:", len(train_cands))
from snorkel.viewer import SentenceNgramViewer
# NOTE: This if-then statement is only to avoid opening the viewer during automated testing of this notebook
# You should ignore this!
import os
if 'CI' not in os.environ:
sv = SentenceNgramViewer(train_cands, session)
else:
sv = None
sv
Explanation: Now we extract candidates the same as in the Intro Tutorial (simplified here slightly):
End of explanation
import re
from snorkel.lf_helpers import get_between_tokens
# Getting an example candidate from the Viewer
c = train_cands[0]
# Traversing the context hierarchy...
print(c.get_contexts()[0].get_parent().text)
# Using a helper function
list(get_between_tokens(c))
def LF_married(c):
return 'Married' if 'married' in get_between_tokens(c) else None
WORKPLACE_RGX = r'employ|boss|company'
def LF_workplace(c):
sent = c.get_contexts()[0].get_parent()
matches = re.search(WORKPLACE_RGX, sent.text)
return 'Employs' if matches else None
LFs = [
LF_married,
LF_workplace
]
Explanation: Step 3: Writing Labeling Functions
The categorical labeling functions (LFs) we now write can output the following values:
Abstain: None OR 0
Categorical values: The literal values in Relationship.values OR their integer indices.
We'll write two simple LFs to illustrate.
Tip: we can get a random candidate (see below), or the example highlighted in the viewer above via sv.get_selected(), and then use this to test as we write the LFs!
End of explanation
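# Added illustration (not from the original tutorial): the integer-index form of an LF.
# Assumption: 0 denotes abstain and categorical values map to 1-based positions in
# Relationship.values, so 1 would correspond to 'Married' here. Check your Snorkel
# version's conventions before relying on this mapping.
def LF_married_idx(c):
    return 1 if 'married' in get_between_tokens(c) else 0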
from snorkel.annotations import LabelAnnotator
labeler = LabelAnnotator(lfs=LFs)
%time L_train = labeler.apply(split=0)
L_train
L_train.todense()
Explanation: Now we apply the LFs to the candidates to produce our label matrix $L$:
End of explanation
from snorkel.learning import GenerativeModel
gen_model = GenerativeModel()
# Note: We pass cardinality explicitly here to be safe
# Can usually be inferred, except we have no labels with value=3
gen_model.train(L_train, cardinality=3)
train_marginals = gen_model.marginals(L_train)
assert np.all(train_marginals.sum(axis=1) - np.ones(3) < 1e-10)
train_marginals
Explanation: Step 4: Training the Generative Model
End of explanation
from snorkel.annotations import save_marginals, load_marginals
save_marginals(session, L_train, train_marginals)
Explanation: Next, we can save the training marginals:
End of explanation
load_marginals(session, L_train)
Explanation: And then reload (e.g. in another notebook):
End of explanation
from snorkel.learning.disc_models.rnn import reRNN
train_kwargs = {
'lr': 0.01,
'dim': 50,
'n_epochs': 10,
'dropout': 0.25,
'print_freq': 1,
'max_sentence_length': 100
}
lstm = reRNN(seed=1701, n_threads=None, cardinality=Relationship.cardinality)
lstm.train(train_cands, train_marginals, **train_kwargs)
train_labels = [1, 2, 1]
correct, incorrect = lstm.error_analysis(session, train_cands, train_labels)
print("Accuracy:", lstm.score(train_cands, train_labels))
test_marginals = lstm.marginals(train_cands)
test_marginals
Explanation: Step 5: Training the End Model
Now we train an LSTM--note this is just to demonstrate the mechanics... since we only have three examples, don't expect anything spectacular!
End of explanation |
1,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gender Distinguished Analysis of ADHD v.s. Bipolar - by Yating Jing
Build models for female patients and male patients separately.
Step1: Machine Learning Utilities
K-Means Clustering
Step2: Principal Component Analysis
Step3: Locally Linear Embedding
Step4: Classification
Step5: Gender 1 ADHD v.s. Bipolar Analysis
Step6: Gender 2 ADHD v.s. Bipolar Analysis | Python Code:
import pandas as pd
import numpy as np
df_adhd = pd.read_csv('ADHD_Gender_rCBF.csv')
df_bipolar = pd.read_csv('Bipolar_Gender_rCBF.csv')
n1, n2 = df_adhd.shape[0], df_bipolar.shape[0]
print 'Number of ADHD patients (without Bipolar) is', n1
print 'Number of Bipolar patients (without ADHD) is', n2
print 'Chance before gender separation is', float(n1) / (n1 + n2)
# Separate the genders
adhd1_id, adhd2_id = list(), list()
bipolar1_id, bipolar2_id = list(), list()
for i, g in df_adhd[['Patient_ID', 'Gender_id']].values:
if g == 1:
adhd1_id.append(i)
elif g == 2:
adhd2_id.append(i)
for i, g in df_bipolar[['Patient_ID', 'Gender_id']].values:
if g == 1:
bipolar1_id.append(i)
elif g == 2:
bipolar2_id.append(i)
print 'Number of Gender 1 ADHD patients (without Bipolar) is', len(adhd1_id)
print 'Number of Gender 2 ADHD patients (without Bipolar) is', len(adhd2_id)
print 'Number of Gender 1 Bipolar patients (without ADHD) is', len(bipolar1_id)
print 'Number of Gender 2 Bipolar patients (without ADHD) is', len(bipolar2_id)
# Separate ADHD data gender-wise
df_adhd1 = df_adhd.loc[df_adhd['Patient_ID'].isin(adhd1_id)].drop(['Patient_ID', 'Gender_id'], axis=1)
df_adhd2 = df_adhd.loc[df_adhd['Patient_ID'].isin(adhd2_id)].drop(['Patient_ID', 'Gender_id'], axis=1)
# Separate Bipolar data gender-wise
df_bipolar1 = df_bipolar.loc[df_bipolar['Patient_ID'].isin(bipolar1_id)].drop(['Patient_ID', 'Gender_id'], axis=1)
df_bipolar2 = df_bipolar.loc[df_bipolar['Patient_ID'].isin(bipolar2_id)].drop(['Patient_ID', 'Gender_id'], axis=1)
# Create disorder labels for classification
# ADHD: 0, Bipolar: 1
n1_adhd, n1_bipolar = len(adhd1_id), len(bipolar1_id)
n2_adhd, n2_bipolar = len(adhd2_id), len(bipolar2_id)
# Labels for gender 1
y1 = [0] * n1_adhd + [1] * n1_bipolar
# Labels for gender 2
y2 = [0] * n2_adhd + [1] * n2_bipolar
print 'Shape check:'
print 'ADHD:', df_adhd1.shape, df_adhd2.shape
print 'Bipolar:', df_bipolar1.shape, df_bipolar2.shape
# Gender1 data
df1_all = pd.concat([df_adhd1, df_bipolar1], axis=0)
# Gender2 data
df2_all = pd.concat([df_adhd2, df_bipolar2], axis=0)
print '\nDouble shape check:'
print 'Gender 1:', df1_all.shape, len(y1)
print 'Gender 2:', df2_all.shape, len(y2)
# Compute chances
chance1 = float(n1_adhd)/(n1_adhd + n1_bipolar)
chance2 = float(n2_adhd)/(n2_adhd + n2_bipolar)
print 'Chance for gender 1 is', chance1
print 'Chance for gender 2 is', chance2
Explanation: Gender Distinguished Analysis of ADHD v.s. Bipolar - by Yating Jing
Build models for female patients and male patients separately.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
def kmeans(df, title, k=4):
data = df.values.T
kmeans = KMeans(n_clusters=k)
kmeans.fit(data)
labels = kmeans.labels_
centroids = kmeans.cluster_centers_
fig = plt.figure()
fig.suptitle('K-Means on '+title+' Features' , fontsize=14, fontweight='bold')
# Plot clusters
for i in range(k):
# Extract observations within each cluster
ds = data[np.where(labels==i)]
# Plot the observations with symbol o
plt.plot(ds[:,0], ds[:,1], 'o')
        # Plot the centroids with symbol x
lines = plt.plot(centroids[i,0], centroids[i,1], 'x')
plt.setp(lines, ms=8.0)
plt.setp(lines, mew=2.0)
Explanation: Machine Learning Utilities
K-Means Clustering
End of explanation
from sklearn import preprocessing
from sklearn.decomposition import PCA
# Plot explained variance ratio
def plot_evr(ex_var_ratio):
plt.title('Explained Variance Ratios by PCA')
plt.plot(ex_var_ratio)
plt.ylabel('Explained Variance Ratio')
plt.xlabel('Principal Component')
def pca(df, n=20):
'''
Default number of principal components: 20
'''
# Scale
X = df.values
X_scaled = preprocessing.scale(X)
# PCA
pca = PCA(n_components=n)
pc = pca.fit_transform(X_scaled)
print '\nExplained Variance Ratios:'
print pca.explained_variance_ratio_
print '\nSum of Explained Variance Ratios of the first', n, 'components is',
print np.sum(pca.explained_variance_ratio_)
plot_evr(pca.explained_variance_ratio_)
return pc
Explanation: Principal Component Analysis
End of explanation
from sklearn.manifold import LocallyLinearEmbedding
# Compute explained variance ratio of transformed data
def compute_explained_variance_ratio(transformed_data):
explained_variance = np.var(transformed_data, axis=0)
explained_variance_ratio = explained_variance / np.sum(explained_variance)
explained_variance_ratio = np.sort(explained_variance_ratio)[::-1]
return explained_variance_ratio
def lle(X, n=10):
# Scale
X_scaled = preprocessing.scale(X)
# LLE
lle = LocallyLinearEmbedding(n_neighbors=25, n_components=n, method='ltsa')
pc = lle.fit_transform(X_scaled)
ex_var_ratio = compute_explained_variance_ratio(pc)
print '\nExplained Variance Ratios:'
print ex_var_ratio
# print '\nSum of Explained Variance Ratios of ', n, 'components is',
# print np.sum(ex_var_ratio)
return pc
Explanation: Locally Linear Embedding
End of explanation
from sklearn import cross_validation
from sklearn.cross_validation import KFold
def train_test_clf(clf, clf_name, X, y, k=10):
'''
    Train and test a classifier using K-fold cross-validation
Args:
clf: sklearn classifier
clf_name: classifier name (for printing)
X: training data (2D numpy matrix)
y: labels (1D vector)
k: number of folds (default=10)
'''
kf = KFold(len(X), n_folds=k)
scores = cross_validation.cross_val_score(clf, X, y, cv=kf)
acc, acc_std = scores.mean(), scores.std()
print clf_name + ' accuracy is %0.4f (+/- %0.3f)' % (acc, acc_std)
return acc, acc_std
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC, SVC
from sklearn.lda import LDA
from sklearn.qda import QDA
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import AdaBoostClassifier
def classify(X, y, gender, feature_type):
lg = LogisticRegression(penalty='l2')
knn = KNeighborsClassifier(n_neighbors=7)
svc = LinearSVC()
lda = LDA()
qda = QDA()
rf = RandomForestClassifier(n_estimators=30)
gb = GradientBoostingClassifier(n_estimators=20, max_depth=3)
et = ExtraTreesClassifier(n_estimators=40, max_depth=5)
ada = AdaBoostClassifier()
classifiers = [lg, knn, svc, lda, qda, rf, gb, et, ada]
clf_names = ['Logistic Regression', 'KNN', 'Linear SVM', 'LDA', 'QDA', \
'Random Forest', 'Gradient Boosting', 'Extra Trees', 'AdaBoost']
accuracies = list()
for clf, name in zip(classifiers, clf_names):
acc, acc_std = train_test_clf(clf, name, X, y)
accuracies.append(acc)
# Visualize classifier performance
x = range(len(accuracies))
width = 0.6/1.5
plt.bar(x, accuracies, width)
# Compute chance
n0, n1 = y.count(0), y.count(1)
chance = max(n0, n1) / float(n0 + n1)
fig_title = gender + ' Classifier Performance on ' + feature_type + ' features'
plt.title(fig_title)
plt.xticks(x, clf_names, rotation=50)
plt.xlabel('Classifier')
plt.gca().xaxis.set_label_coords(1.1, -0.025)
plt.ylabel('Accuracy')
plt.axhline(chance, color='red', linestyle='--', label='chance') # plot chance
plt.legend(loc='center left', bbox_to_anchor=(1, 0.85))
Explanation: Classification
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plot = df1_all.plot(kind='hist', alpha=0.5, title='Gender 1 Data Distribution', legend=None)
# Cluster Gender 1 rCBF features
kmeans(df1_all, 'Gender 1')
# PCA
X1_pca = pca(df1_all, 20)
# LLE
X1_lle = lle(df1_all, 20)
# Classification using PCA features
print 'Using PCA features:'
classify(X1_pca, y1, 'Gender 1', 'PCA')
# Classification using LLE features
print 'Using LLE features:'
classify(X1_lle, y1, 'Gender 1', 'LLE')
Explanation: Gender 1 ADHD v.s. Bipolar Analysis
End of explanation
plot = df2_all.plot(kind='hist', alpha=0.5, title='Gender 2 Data Distribution', legend=None)
# Cluster Gender 2 rCBF features
kmeans(df2_all, 'Gender 2')
# PCA
X2_pca = pca(df2_all, 20)
# LLE
X2_lle = lle(df2_all, 20)
# Classification using PCA features
print 'Using PCA features:'
classify(X2_pca, y2, 'Gender 2', 'PCA')
# Classification using LLE features
print 'Using LLE features:'
classify(X2_lle, y2, 'Gender 2', 'LLE')
Explanation: Gender 2 ADHD v.s. Bipolar Analysis
End of explanation |
1,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
All the work you do with pycomlink will be based on the Comlink object, which represents one CML between two sites and with an arbitrary number of channels, i.e. the different connections between the two sites, typically one for each direction.
To get a Comlink object from your raw data, which is probably in a CSV file, do the following
Read in the CSV file into a DataFrame using the Python package pandas
Reformat the DataFrame according to the conventions of pycomlink
Prepare the necessary metadata for the ComlinkChannels and the Comlink object
Build ComlinkChannel objects for each channel, i.e. each pair of TX and RX time series that belong to one CML
Build a Comlink from the channels
Then you are set to go and use all the pycomlink functionality.
Read in CML data from CSV file
Use the fantastic pandas CSV reader. In this case the time stamps are in the first column, hence set index_col=0 and can automatically be parsed to datetime objects, hence set parse_dates=True.
Step1: pycomlink expects a fixed naming convention for the data in the DataFrames. The columns have to be named rx and tx. Hence, rename rsl here to rx and add a column with the constant tx level, which was 20 dBm in this case. Please note that you always have to provide the tx level even if it is constant all the time. You can specify that TX is constant by passing atpc='off'.
Step2: Prepare the necessary metadata
Step3: Build a ComlinkChannel object
Step4: Build a Comlink object with the one channel from above
Step5: Look at the contents of the CML
Step6: In case your CML has several channels, you can pass a list of channels
Step7: Run typical processing
(see other notebooks for more details on this) | Python Code:
df = pd.read_csv('example_data/gap0_gap4_2012.csv', parse_dates=True, index_col=0)
df.head()
Explanation: All the work you do with pycomlink will be based on the Comlink object, which represents one CML between two sites and with an arbitrary number of channels, i.e. the different connections between the two sites, typically one for each direction.
To get a Comlink object from your raw data, which is probably in a CSV file, do the following
Read in the CSV file into a DataFrame using the Python package pandas
Reformat the DataFrame according to the conventions of pycomlink
Prepare the necessary metadata for the ComlinkChannels and the Comlink object
Build ComlinkChannel objects for each channel, i.e. each pair of TX and RX time series that belong to one CML
Build a Comlink from the channels
Then you are set to go and use all the pycomlink functionality.
Read in CML data from CSV file
Use the fantastic pandas CSV reader. In this case the time stamps are in the first column, hence set index_col=0 and can automatically be parsed to datetime objects, hence set parse_dates=True.
End of explanation
# Rename the columns for the RX level
df.columns = ['rx']
df
# Add a constant TX level
df['tx'] = 20
df
Explanation: pycomlink expects a fixed naming convention for the data in the DataFrames. The columns have to be named rx and tx. Hence, rename rsl here to rx and add a column with the constant tx level, which was 20 dBm in this case. Please note that you always have to provide the tx level even if it is constant all the time. You can specify that TX is constant by passing atpc='off'.
End of explanation
ch_metadata = {
'frequency': 18.7 * 1e9, # Frequency in Hz
'polarization': 'V',
'channel_id': 'channel_xy',
'atpc': 'off'} # This means that TX level is constant
cml_metadata = {
'site_a_latitude': 50.50, # Some fake coordinates
'site_a_longitude': 11.11,
'site_b_latitude': 50.59,
'site_b_longitude': 11.112,
'cml_id': 'XY_1234'}
Explanation: Prepare the necessary metadata
End of explanation
cml_ch = pycml.ComlinkChannel(df, metadata=ch_metadata)
cml_ch
Explanation: Build a ComlinkChannel object
End of explanation
cml = pycml.Comlink(channels=cml_ch, metadata=cml_metadata)
Explanation: Build a Comlink object with the one channel from above
End of explanation
cml
cml.plot_data(['rx']);
cml.plot_map()
Explanation: Look at the contents of the CML
End of explanation
cml_ch_1 = pycml.ComlinkChannel(df, metadata=ch_metadata)
df.rx = df.rx - 1.3
cml_ch_2 = pycml.ComlinkChannel(df, metadata=ch_metadata)
cml = pycml.Comlink(channels=[cml_ch_1, cml_ch_2], metadata=cml_metadata)
cml
cml.plot_data();
Explanation: In case your CML has several channels, you can pass a list of channels
End of explanation
cml.process.wet_dry.std_dev(window_length=100, threshold=0.3)
cml.process.baseline.linear()
cml.process.baseline.calc_A()
cml.process.A_R.calc_R()
cml.plot_data(['rx', 'wet', 'R']);
Explanation: Run typical processing
(see other notebooks for more details on this)
End of explanation |
1,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Synthetic dataset generation
Step2: Exercise 1
Step3: Exercise 2 | Python Code:
# Various imports to use
import numpy as np
import matplotlib.pyplot as plt
# Support for operations between matrices and vectors
from numpy import matmul
from numpy import transpose
from numpy.linalg import inv
from numpy.linalg import pinv
def MakeSyntethicData(n=100, ifplot=False):
    """
    Returns a matrix X with 2 covariates and n rows,
    and the vector Y of n rows with the corresponding labels.
    If ifplot=True it draws a scatter plot in the covariate plane.
    """
    # First, generate the covariate sample for the points classified as "blue"
np.random.seed(13)
x1_blue = np.random.normal(2, 0.8, n)
x2_blue = np.random.normal(6, 0.8, n)
    # Then generate the covariate sample for the points classified as "red"
    # so that they are drawn from two different distributions
m = 20
x1_red = np.random.normal(4, 0.5, max(n, n-m))
x2_red = np.random.normal(3, 0.5, max(n, n-m))
if n > m:
x1_red = np.append(x1_red, np.random.normal(10, 0.5, 20))
x2_red = np.append(x2_red, np.random.normal(0, 0.5, 20))
if ifplot:
fig, ax = plt.subplots(figsize=(7, 7))
ax.scatter(x1_blue, x2_blue, alpha=0.5, c='blue')
ax.scatter(x1_red, x2_red, alpha=0.5, c='red')
ax.set_xlabel('Covariata x1')
ax.set_ylabel('Covariata x2')
ax.legend(('Blue=0', 'Red=1'))
plt.show()
    # Prepare the covariate matrix X and the label vector Y
X = []
Y = []
    # Documentation for the zip() function
# https://docs.python.org/3.6/library/functions.html#zip
for x,y in zip(x1_blue,x2_blue):
X.append((x,y))
Y.append(0) # 0 = blue
for x,y in zip(x1_red,x2_red):
X.append((x,y))
Y.append(1) # 1 = red
return X, Y
X,Y = MakeSyntethicData(100, True)
Explanation: Synthetic dataset generation
End of explanation
class RegressioneLineare(object):
def fit(self, x, y):
# Build the matrix with vector (1, x) as rows
X = np.matrix(list(map(lambda row: np.append([1], row), x)))
# Solve the normal equation (what if X is not invertible?)
self.w = matmul(matmul(inv(matmul(transpose(X), X)), transpose(X)), y)
def predict(self, x):
# Build the matrix with vector (1, x) as rows
X = np.matrix(list(map(lambda row: np.append([1], row), x)))
        # Predict values: X has shape (n, d+1), w holds the d+1 fitted weights
        return np.asarray(matmul(X, np.asarray(self.w).reshape(-1, 1))).ravel()
from sklearn.linear_model import LinearRegression
lr = LinearRegression(normalize=False)
lr.fit(X, Y)
print('Scikit LinearRegression, pesi trovati:', lr.intercept_, lr.coef_)
my = RegressioneLineare()
my.fit(X,Y)
print('My Regressione lineare, pesi trovati:', my.w)
Explanation: Exercise 1: Linear Regression
Use the slides from the lecture.
Compare the coefficients $w$ found by your solution with those found by the LinearRegression class of the Scikit-Learn library.
End of explanation
class RegressioneLogistica(object):
def fit(self, x, y):
        # TO BE COMPLETED: NEWTON-RAPHSON METHOD FROM THE SLIDES
pass
def predict(self, x):
        # TO BE COMPLETED: USE THE PARAMETERS w
pass
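# A minimal reference sketch of how fit() could be completed with Newton-Raphson (IRLS).
# This is NOT the official solution from the slides: the class name, the stopping rule and
# the use of plain numpy arrays are assumptions made for illustration only.
class LogisticRegressionNewtonSketch(object):
    def fit(self, x, y, n_iter=25, tol=1e-8):
        X = np.array([np.append([1.0], row) for row in x])   # add intercept column
        t = np.array(y, dtype=float)
        self.w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ self.w))             # predicted probabilities
            gradient = X.T @ (p - t)                          # gradient of the negative log-likelihood
            R = p * (1.0 - p)                                 # diagonal of the IRLS weight matrix
            hessian = (X * R[:, None]).T @ X
            step = np.linalg.solve(hessian, gradient)
            self.w = self.w - step                            # Newton-Raphson update
            if np.max(np.abs(step)) < tol:
                break
        return self

    def predict(self, x):
        X = np.array([np.append([1.0], row) for row in x])
        p = 1.0 / (1.0 + np.exp(-X @ self.w))
        return (p >= 0.5).astype(int)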
Explanation: Exercise 2: Logistic Regression
Use the slides from the lecture.
Compare the coefficients $w$ found by your solution with those found by the LogisticRegression class of the Scikit-Learn library.
End of explanation |
1,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Post processing
Step1: Read the smoothed files
The following files have been filtered, motion-parameter regressed and smoothed
I got the file name using the command
Step2: Check which subjects have volumes > 'vol'
Step3: Subject IDs to be considered -
subjects_refined
Step4: Below are subject IDs to be ignored
Step5: Volume correction
I have already extracted 4 volumes.
Now extract 120 - 4 = 116 volumes from each subject
So define vols = 116
Step6: Define a function to fetch the filenames of a particular subject ID
Step7: Extract volumes
Step8: Datasink
I needed to define the structure of what files are saved and where.
Step9: To create the substitutions I looked at the datasink folder where I was redirecting the output. I manually selected the part of the file/folder name that I wanted to change and copied it below to be substituted.
Step10: Following is a Join Node that collects the preprocessed file paths and saves them in a file
Step11: Create a FC node
This node
Step12: IMPORTANT
Step13: Workflow to do just the pearcoff and func2std transformation | Python Code:
from bids.grabbids import BIDSLayout
from nipype.interfaces.fsl import (BET, ExtractROI, FAST, FLIRT, ImageMaths,
MCFLIRT, SliceTimer, Threshold,Info, ConvertXFM,MotionOutliers)
from nipype.interfaces.afni import Resample
from nipype.interfaces.io import DataSink
from nipype.pipeline import Node, MapNode, Workflow, JoinNode
from nipype.interfaces.utility import IdentityInterface, Function
import os
from os.path import join as opj
from nipype.interfaces import afni
import nibabel as nib
import json
import numpy as np
# Paths
os.chdir('/home1/varunk/Autism-Connectome-Analysis-brain_connectivity/notebooks/')
path_cwd = os.getcwd()
path_split_list = path_cwd.split('/')
s = path_split_list[0:-2] # for getting to the parent dir of pwd
s = opj('/',*s) # *s converts list to path, # very important to add '/' in the begining so it is read as directory later
# json_path = opj(data_directory,'task-rest_bold.json')
json_path = '../scripts/json/paths.json'
with open(json_path, 'rt') as fp:
task_info = json.load(fp)
# base_directory = opj(s,'result')
# parent_wf_directory = 'preprocessPipeline_ABIDE2_GU1_withfloat'
# child_wf_directory = 'coregistrationPipeline'
# data_directory = opj(s,"data/ABIDE2-BIDS/GU1")
# datasink_name = 'datasink_preprocessed_ABIDE2_GU1_withfloat'
base_directory = opj(s,task_info["base_directory_for_results"])
motion_correction_bet_directory = task_info["motion_correction_bet_directory"]
parent_wf_directory = task_info["parent_wf_directory"]
# functional_connectivity_directory = task_info["functional_connectivity_directory"]
functional_connectivity_directory = 'temp_fc'
coreg_reg_directory = task_info["coreg_reg_directory"]
atlas_resize_reg_directory = task_info["atlas_resize_reg_directory"]
data_directory = opj(s,task_info["data_directory"])
datasink_name = task_info["datasink_name"]
# fc_datasink_name = task_info["fc_datasink_name"]
fc_datasink_name = 'temp_dataSink'
atlasPath = opj(s,task_info["atlas_path"])
# mask_file = '/media/varun/LENOVO4/Projects/result/preprocessPipeline/coregistrationPipeline/_subject_id_0050952/skullStrip/sub-0050952_T1w_resample_brain_mask.nii.gz'
# os.chdir(path)
# opj(base_directory,parent_wf_directory,motion_correction_bet_directory,coreg_reg_directory,'resample_mni')
brain_path = opj(base_directory,datasink_name,'preprocessed_brain_paths/brain_file_list.npy')
mask_path = opj(base_directory,datasink_name,'preprocessed_mask_paths/mask_file_list.npy')
atlas_path = opj(base_directory,datasink_name,'atlas_paths/atlas_file_list.npy')
tr_path = opj(base_directory,datasink_name,'tr_paths/tr_list.npy')
motion_params_path = opj(base_directory,datasink_name,'motion_params_paths/motion_params_file_list.npy')
func2std_mat_path = opj(base_directory, datasink_name,'joint_xformation_matrix_paths/joint_xformation_matrix_file_list.npy')
MNI3mm_path = opj(base_directory,parent_wf_directory,motion_correction_bet_directory,coreg_reg_directory,'resample_mni/MNI152_T1_2mm_brain_resample.nii')
# brain_list = np.load('../results_again_again/ABIDE1_Preprocess_Datasink/preprocessed_brain_paths/brain_file_list.npy')
# brain_path,mask_path,atlas_path,tr_path,motion_params_path,func2std_mat_path
brain_path = np.load(brain_path)
mask_path = np.load(mask_path)
atlas_path = np.load(atlas_path)
tr_path = np.load(tr_path)
motion_params_path = np.load(motion_params_path)
func2std_mat_path = np.load(func2std_mat_path)
# for a,b,c,d,e in zip(brain_path,mask_path,atlas_path,tr_path,motion_params_path):
# print (a,b,c,d,e,'\n')
Explanation: Post processing:
Global Signal Regression using orthogonalization
Band Pass filtering 0.1 - 0.01 Hz
Motion regression using GLM
End of explanation
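# Rough standalone illustration of the 0.01-0.1 Hz band-pass step mentioned above.
# The actual filtering was done earlier in the pipeline; the TR value and the filter
# order below are illustrative assumptions, not values taken from this workflow.
from scipy.signal import butter, filtfilt
def bandpass_example(ts, tr=2.0, low=0.01, high=0.1, order=3):
    nyq = 0.5 / tr                                # Nyquist frequency for a sampling period of tr seconds
    b, a = butter(order, [low / nyq, high / nyq], btype='band')
    return filtfilt(b, a, ts)
# bandpass_example(np.random.randn(200))          # example call on a synthetic time series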
smoothed_brains_in_file = '/home1/varunk/results_again_again/smoothed_brains.txt'
import re
smoothed_brains_paths = np.loadtxt(smoothed_brains_in_file, dtype = np.str)
smoothed_brains_subid = []
for path in smoothed_brains_paths:
sub_id_extracted = re.search('.+_subject_id_(\d+)', path).group(1)
print(sub_id_extracted)
smoothed_brains_subid.append(int(sub_id_extracted))
smoothed_brains_paths
base_directory
smoothed_brains_subid
# Build the complete brain paths for the smoothed files
new_brain_paths = []
for path in smoothed_brains_paths:
new_brain_paths.append(opj(base_directory, 'motionRegress1filt1global0smoothing1',path[2:]))
new_brain_paths
Explanation: Read the smoothed files
The following files have been filtered, motion-parameter regressed and smoothed
I got the file name using the command:
find . -name "sub-*_task-rest_run-1_bold_roi_st_mcf_residual_bp_smoothed.nii.gz" > smoothed_brains.txt
Using the above extracted filenames to do the following:
Select the subjects having at least the required fixed number of volumes
Perform correlation using just the fixed number of volumes
Register the file to the standard 3mm template
End of explanation
import pandas as pd
demographics_file_path = '/home1/varunk/Autism-Connectome-Analysis-brain_connectivity/notebooks/demographics.csv'
phenotype_file_path = '/home1/varunk/data/ABIDE1/RawDataBIDs/composite_phenotypic_file.csv'
df_phenotype = pd.read_csv(phenotype_file_path)
df_phenotype = df_phenotype.sort_values(['SUB_ID'])
df_phenotype_sub_id = df_phenotype.as_matrix(['SITE_ID','SUB_ID']).squeeze()
df_demographics = pd.read_csv(demographics_file_path)
df_demographics_volumes = df_demographics.as_matrix(['SITE_NAME','VOLUMES']).squeeze()
# df_phenotype.sort_values(['SUB_ID'])
df_demographics_volumes
# SUB_ID - Volumes Dictionary
site_vol_dict = dict(zip(df_demographics_volumes[:,0], df_demographics_volumes[:,1]))
# for site_subid in df_demographics_volumes:
# subid_site_dict = dict(zip(df_phenotype_sub_id[:,1], df_phenotype_sub_id[:,0]))
subid_vol_dict = dict(zip(df_phenotype_sub_id[:,1],[site_vol_dict[site] for site in df_phenotype_sub_id[:,0]] ))
(subid_vol_dict)
vols = 120
del_idx = []
for idx,df in enumerate(df_demographics_volumes):
# print(idx,df[1])
if df[1] < vols:
del_idx.append(idx)
df_demographics_volumes = np.delete(df_demographics_volumes,del_idx, axis = 0)
df_demographics_sites_refined = df_demographics_volumes[:,0]
df_demographics_sites_refined
df_phenotype_sub_id
Explanation: Check which subjects have volumes > 'vol'
End of explanation
subjects_refined = []
for df in df_phenotype_sub_id:
if df[0] in df_demographics_sites_refined:
# print(df[1])
subjects_refined.append(df[1])
subjects_refined;
subjects_refined = list(set(subjects_refined) - (set(df_phenotype_sub_id[:,1]) - set(smoothed_brains_subid) ) )
len(subjects_refined)
Explanation: Subject IDs to be considered -
subjects_refined
End of explanation
#sanity check
set(df_phenotype_sub_id[:,1]) - set(subjects_refined)
Explanation: Below are subject IDs to be ignored
End of explanation
vols = vols - 4
def vol_correct(sub_id, subid_vol_dict, vols):
sub_vols = subid_vol_dict[sub_id] - 4
if sub_vols > vols:
t_min = sub_vols - vols
elif sub_vols == vols:
t_min = 0
else:
raise Exception('Volumes of Sub ',sub_id,' less than desired!')
return int(t_min)
volCorrect = Node(Function(function=vol_correct, input_names=['sub_id','subid_vol_dict','vols'],
output_names=['t_min']), name='volCorrect')
volCorrect.inputs.subid_vol_dict = subid_vol_dict
volCorrect.inputs.vols = vols
os.chdir('/home1/varunk/results_again_again/temp/')
volCorrect.inputs.sub_id = 51456
res = volCorrect.run()
res.outputs
# 146 - 116
# 0...145
# 145 - 30 + 1
# 146 - 116
# fslroi /home1/varunk/results_again_again/motionRegress1filt1global0smoothing1/_subject_id_0051456/spatialSmooth/sub-0051456_task-rest_run-1_bold_roi_st_mcf_residual_bp_smoothed.nii.gz /home1/varunk/results_again_again/temp_fc/motionRegress1filt1global0smoothing1/_subject_id_51456/extract/sub-0051456_task-rest_run-1_bold_roi_st_mcf_residual_bp_smoothed_roi.nii.gz 35 -1
# # ExtractROI - skip dummy scans
# extract = Node(ExtractROI(t_size=-1),
# output_type='NIFTI',
# name="extract")
# # t_min=4,
# layout = BIDSLayout(data_directory)
number_of_subjects = 2 # Number of subjects you wish to preprocess
# number_of_subjects = len(layout.get_subjects())
Explanation: Volume correction
I have already extracted 4 volumes.
Now extract 120 - 4 = 116 volumes from each subject
So define vols = 116
End of explanation
def get_subject_filenames_old(subject_id,brain_path,mask_path,atlas_path,tr_path,motion_params_path,func2std_mat_path,MNI3mm_path):
import re
for brain,mask,atlas,tr,motion_param,func2std_mat in zip(brain_path,mask_path,atlas_path,tr_path,motion_params_path,func2std_mat_path):
sub_id_extracted = re.search('.+_subject_id_(\d+)', brain)
if str(subject_id) in brain:
# print("Files for subject ",subject_id,brain,mask,atlas,tr,motion_param)
return brain,mask,atlas,tr,motion_param,func2std_mat,MNI3mm_path
print ('Unable to locate Subject: ',subject_id,'extracted: ',sub_id_extracted)
return 0
def get_subject_filenames(subject_id,brain_path,mask_path,atlas_path,tr_path,motion_params_path,func2std_mat_path,MNI3mm_path):
import re
for brain in brain_path:
sub_id_extracted = re.search('.+_subject_id_(\d+)', brain)
print(sub_id_extracted)
if str(subject_id) in brain:
for mask,atlas,tr,motion_param,func2std_mat in zip(mask_path,atlas_path,tr_path,motion_params_path,func2std_mat_path):
if str(subject_id) in mask:
# print("Files for subject ",subject_id,brain,mask,atlas,tr,motion_param)
return brain,mask,atlas,tr,motion_param,func2std_mat,MNI3mm_path
print ('Unable to locate Subject: ',subject_id,'extracted: ',sub_id_extracted)
raise Exception('Unable to locate Subject: ',subject_id,'extracted: ',sub_id_extracted)
return 0
# 555/60 # mask_path,new_brain_paths
len(new_brain_paths),len(mask_path), len(subjects_refined)
# Make a node
getSubjectFilenames = Node(Function(function=get_subject_filenames, input_names=['subject_id','brain_path','mask_path','atlas_path','tr_path','motion_params_path','func2std_mat_path','MNI3mm_path'],
output_names=['brain','mask','atlas','tr','motion_param','func2std_mat', 'MNI3mm_path']), name='getSubjectFilenames')
# getSubjectFilenames.inputs.brain_path = brain_path
getSubjectFilenames.inputs.brain_path = new_brain_paths
getSubjectFilenames.inputs.mask_path = mask_path
getSubjectFilenames.inputs.atlas_path = atlas_path
getSubjectFilenames.inputs.tr_path = tr_path
getSubjectFilenames.inputs.motion_params_path = motion_params_path
getSubjectFilenames.inputs.func2std_mat_path = func2std_mat_path
getSubjectFilenames.inputs.MNI3mm_path = MNI3mm_path
getSubjectFilenames.inputs.subject_id = 51270
res = getSubjectFilenames.run()
# import re
# text = '/home1/varunk/results_again_again/ABIDE1_Preprocess/motion_correction_bet/coreg_reg/atlas_resize_reg_directory/_subject_id_0050004/111std2func_xform/fullbrain_atlas_thr0-2mm_resample_flirt.nii'
# try:
# found = re.search('.+_subject_id_(\d+)', text).group(1)
# except AttributeError:
# # AAA, ZZZ not found in the original string
# found = '' # apply your error handling
# # found: 1234
# found
subject_list = subjects_refined[0:number_of_subjects]
infosource = Node(IdentityInterface(fields=['subject_id']),
name="infosource")
infosource.iterables = [('subject_id',subject_list)]
# ,'brain_path','mask_path','atlas_path','tr_path','motion_params_path'
# infosource.brain_path = brain_path
# infosource.mask_path = mask_path
# infosource.atlas_path = atlas_path
# infosource.tr_path = tr_path
# infosource.motion_params_path = motion_params_path
Explanation: Define a function to fetch the filenames of a particular subject ID
End of explanation
# ExtractROI - skip dummy scans
extract = Node(ExtractROI(t_size=-1),
output_type='NIFTI',
name="extract")
# t_min=4,
Explanation: Extract volumes
End of explanation
# Create DataSink object
dataSink = Node(DataSink(), name='datasink')
# Name of the output folder
dataSink.inputs.base_directory = opj(base_directory,fc_datasink_name)
Explanation: Datasink
I needed to define the structure of what files are saved and where.
End of explanation
# Define substitution strings so that the data is similar to BIDS
substitutions = [('_subject_id_', 'sub-')]
# Feed the substitution strings to the DataSink node
dataSink.inputs.substitutions = substitutions
# ('_resample_brain_flirt.nii_brain', ''),
# ('_roi_st_mcf_flirt.nii_brain_flirt', ''),
base_directory
Explanation: To create the substitutions I looked at the datasink folder where I was redirecting the output. I manually selected the part of the file/folder name that I wanted to change and copied it below to be substituted.
End of explanation
def save_file_list_function(in_fc_map_brain_file):
# Imports
import numpy as np
import os
from os.path import join as opj
file_list = np.asarray(in_fc_map_brain_file)
print('######################## File List ######################: \n',file_list)
np.save('fc_map_brain_file_list',file_list)
file_name = 'fc_map_brain_file_list.npy'
out_fc_map_brain_file = opj(os.getcwd(),file_name) # path
return out_fc_map_brain_file
save_file_list = JoinNode(Function(function=save_file_list_function, input_names=['in_fc_map_brain_file'],
output_names=['out_fc_map_brain_file']),
joinsource="infosource",
joinfield=['in_fc_map_brain_file'],
name="save_file_list")
Explanation: Following is a Join Node that collects the preprocessed file paths and saves them in a file
End of explanation
# Saves the brains instead of FC matrix files
def pear_coff(in_file, atlas_file, mask_file):
# code to find how many voxels are in the brain region using the mask
# imports
import numpy as np
import nibabel as nib
import os
from os.path import join as opj
mask_data = nib.load(mask_file)
mask = mask_data.get_data()
x_dim, y_dim, z_dim = mask_data.shape
atlasPath = atlas_file
# Read the atlas
atlasObject = nib.load(atlasPath)
atlas = atlasObject.get_data()
num_ROIs = int((np.max(atlas) - np.min(atlas) ))
# Read the brain in_file
brain_data = nib.load(in_file)
brain = brain_data.get_data()
x_dim, y_dim, z_dim, num_volumes = brain.shape
num_brain_voxels = 0
x_dim, y_dim, z_dim = mask_data.shape
for i in range(x_dim):
for j in range(y_dim):
for k in range(z_dim):
if mask[i,j,k] == 1:
num_brain_voxels = num_brain_voxels + 1
# Initialize a matrix of ROI time series and voxel time series
ROI_matrix = np.zeros((num_ROIs, num_volumes))
voxel_matrix = np.zeros((num_brain_voxels, num_volumes))
# Fill up the voxel_matrix
voxel_counter = 0
for i in range(x_dim):
for j in range(y_dim):
for k in range(z_dim):
if mask[i,j,k] == 1:
voxel_matrix[voxel_counter,:] = brain[i,j,k,:]
voxel_counter = voxel_counter + 1
# Fill up the ROI_matrix
# Keep track of number of voxels per ROI as well by using an array - num_voxels_in_ROI[]
    num_voxels_in_ROI = np.zeros((num_ROIs,1)) # A column array containing the number of voxels in each ROI
for i in range(x_dim):
for j in range(y_dim):
for k in range(z_dim):
label = int(atlas[i,j,k]) - 1
if label != -1:
ROI_matrix[label,:] = np.add(ROI_matrix[label,:], brain[i,j,k,:])
num_voxels_in_ROI[label,0] = num_voxels_in_ROI[label,0] + 1
ROI_matrix = np.divide(ROI_matrix,num_voxels_in_ROI) # I get nan coz of this! At places where num voxels = 0
X, Y = ROI_matrix, voxel_matrix
# Subtract mean from X and Y
X = np.subtract(X, np.mean(X, axis=1, keepdims=True))
Y = np.subtract(Y, np.mean(Y, axis=1, keepdims=True))
temp1 = np.dot(X,Y.T)
temp2 = np.sqrt(np.sum(np.multiply(X,X), axis=1, keepdims=True))
temp3 = np.sqrt(np.sum(np.multiply(Y,Y), axis=1, keepdims=True))
temp4 = np.dot(temp2,temp3.T)
coff_matrix = np.divide(temp1, (temp4 + 1e-7))
print('Saving X ')
np.save('X',X)
print('saved')
# Check if any ROI is missing and replace the NAN values in coff_matrix by 0
if np.argwhere(np.isnan(coff_matrix)).shape[0] != 0:
print("Some ROIs are not present. Replacing NAN in coff matrix by 0")
np.nan_to_num(coff_matrix, copy=False)
    # TODO: when I have added 1e-7 in the denominator, why did I still feel the need to replace NAN by zeros?
sub_id = in_file.split('/')[-1].split('.')[0].split('_')[0].split('-')[1]
fc_file_name = sub_id + '_fc_map'
print ("Pear Matrix calculated for subject: ",sub_id)
roi_brain_matrix = coff_matrix
brain_file = in_file
x_dim, y_dim, z_dim, t_dim = brain.shape
(brain_data.header).set_data_shape([x_dim,y_dim,z_dim,num_ROIs])
brain_roi_tensor = np.zeros((brain_data.header.get_data_shape()))
print("Creating brain for Subject-",sub_id)
for roi in range(num_ROIs):
brain_voxel_counter = 0
for i in range(x_dim):
for j in range(y_dim):
for k in range(z_dim):
if mask[i,j,k] == 1:
brain_roi_tensor[i,j,k,roi] = roi_brain_matrix[roi,brain_voxel_counter]
brain_voxel_counter = brain_voxel_counter + 1
assert (brain_voxel_counter == len(roi_brain_matrix[roi,:]))
print("Created brain for Subject-",sub_id)
path = os.getcwd()
fc_file_name = fc_file_name + '.nii.gz'
out_file = opj(path,fc_file_name)
brain_with_header = nib.Nifti1Image(brain_roi_tensor, affine=brain_data.affine,header = brain_data.header)
nib.save(brain_with_header,out_file)
fc_map_brain_file = out_file
return fc_map_brain_file
# Again Create the Node and set default values to paths
pearcoff = Node(Function(function=pear_coff, input_names=['in_file','atlas_file','mask_file'],
output_names=['fc_map_brain_file']), name='pearcoff')
# output_names=['fc_map_brain_file']
# pearcoff.inputs.atlas_file = atlasPath
# pearcoff.inputs.num_brain_voxels = num_brain_voxels
# pearcoff.inputs.mask_file = mask_file
Explanation: Create a FC node
This node:
1. Extracts the average time series of the brain ROIs using the atlas and stores
it as a matrix of size [ROIs x Volumes].
2. Extracts the Voxel time series and stores it in matrix of size [Voxels x Volumes]
Saving the Brains instead of FC matrices
End of explanation
func2std_xform = Node(FLIRT(output_type='NIFTI_GZ',
apply_xfm=True), name="func2std_xform")
# %%time
# pearcoff.run()
# motion_param_reg = [True, False]
# global_signal_reg = [True, False]
# band_pass_filt= [True, False]
# for motion_param_regression, global_signal_regression, band_pass_filtering in zip(motion_param_reg, global_signal_reg, band_pass_filt):
# print (motion_param_regression, global_signal_regression, band_pass_filtering)
Explanation: IMPORTANT:
The ROI 255 has been removed due to resampling. Therefore the FC maps will have nan at that row. So don't use that ROI :)
I found out because I kept getting this error: RuntimeWarning: invalid value encountered in true_divide
To debug it, I read the coff matrix and checked its diagonal to discover the nan value.
Node for applying xformation matrix to functional data
End of explanation
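# Quick sanity check along the lines described above: find ROI rows that came out as NaN
# in a saved correlation matrix (the file name below is a placeholder, not an actual output path).
# fc = np.load('some_saved_coff_matrix.npy')
# print('ROI rows containing NaN:', np.where(np.isnan(fc).any(axis=1))[0])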
motion_param_regression = 1
band_pass_filtering = 1
global_signal_regression = 0
smoothing = 1
volcorrect = 1
num_proc = 7
combination = 'motionRegress' + str(int(motion_param_regression)) + 'filt' + \
str(int(band_pass_filtering)) + 'global' + str(int(global_signal_regression)) + \
'smoothing' + str(int(smoothing))
print("Combination: ",combination)
base_dir = opj(base_directory,functional_connectivity_directory)
# wf = Workflow(name=functional_connectivity_directory)
wf = Workflow(name=combination)
wf.base_dir = base_dir # Dir where all the outputs will be stored.
wf.connect([(infosource , getSubjectFilenames, [('subject_id','subject_id')])])
if motion_param_regression == 1 and global_signal_regression == 0 and band_pass_filtering == 1 and smoothing == 1 and volcorrect == 1: # 101
wf.connect([(infosource, volCorrect, [('subject_id','sub_id')])])
wf.connect([( getSubjectFilenames, extract, [('brain','in_file')])])
wf.connect([( volCorrect, extract, [('t_min','t_min')])])
wf.connect([( extract, pearcoff, [('roi_file','in_file')])])
# wf.connect([( bandpass, pearcoff, [('out_file','in_file')])])
wf.connect([( getSubjectFilenames, pearcoff, [('atlas','atlas_file')])])
wf.connect([( getSubjectFilenames, pearcoff, [('mask','mask_file')])])
# ---------------------------------------------------------------------------------------
wf.connect([(pearcoff, func2std_xform, [('fc_map_brain_file','in_file')])])
wf.connect([(getSubjectFilenames, func2std_xform, [('func2std_mat','in_matrix_file')])])
wf.connect([(getSubjectFilenames, func2std_xform, [('MNI3mm_path','reference')])])
# -- send out file to save file list and then save the outputs
folder_name = 'pearcoff_' + combination + '.@fc_map_brain_file'
wf.connect([(func2std_xform, save_file_list, [('out_file','in_fc_map_brain_file')])])
# --------------------------------------------------------------------------------------------
wf.connect([(save_file_list, dataSink, [('out_fc_map_brain_file',folder_name)])])
# wf.write_graph(graph2use='flat', format='png')
from IPython.display import Image
wf.write_graph(graph2use='exec', format='png', simple_form=True)
wf.run('MultiProc', plugin_args={'n_procs': num_proc})
file_name = opj(base_dir,combination,'graph_detailed.dot.png')
Image(filename=file_name)
file_name = opj(base_dir,combination,'graph_detailed.dot.png')
Image(filename=file_name)
X = np.load("../../results_again_again/temp_dataSink/pearcoff_motionRegress1filt1global0smoothing1/fc_map_brain_file_list.npy")
X
!nipypecli show crash-20171228-144509-root-getSubjectFilenames.a1-6a701e42-c600-4b38-b4ba-71c175cbf567.pklz
import numpy as np
X = np.load('../results_again_again/fc_datasink/pearcoff_motionRegress0filt0global1/fc_map_brain_file_list.npy')
X.shape
X_temp4 = np.load('/home1/varunk/results_again_again/temp_fc/motionRegress1filt1global0smoothing1/_subject_id_51456/temp4_again.npy')
np.diag(X_temp4)
X_temp3 = np.load('/home1/varunk/results_again_again/temp_fc/motionRegress1filt1global0smoothing1/_subject_id_51456/temp3.npy')
(X_temp3).shape
X_temp2 = np.load('/home1/varunk/results_again_again/temp_fc/motionRegress1filt1global0smoothing1/_subject_id_51456/temp2.npy')
(X_temp2).shape
np.isnan(X_temp3).sum()
np.isnan(X_temp2).sum()
X_temp2[254]
X = np.load('/home1/varunk/results_again_again/temp_fc/motionRegress1filt1global0smoothing1/_subject_id_51456/X.npy')
X[254,:]
X = np.load('/home1/varunk/results_again_again/fc_datasink_volumes_corrected/pearcoff_motionRegress1filt1global0smoothing1/fc_map_brain_file_list.npy')
X.shape
Explanation: Workflow to do just the pearcoff and func2std transformation
End of explanation |
1,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iteration
One of the most basic operations in programming is iterating over a list of elements to perform some kind of operation.
In Python we use the for statement to iterate. It is easier to use than the same statement in C, C++ or FORTRAN because instead of running over an integer index it takes as input any iterable object and runs over it.
Let's see some examples
Iterating over lists
Lists are ordered. Iteration is done in the same order as the input list.
Step1: Iterating over dictionaries
Dictionaries are not ordered. Iterating over them does not have to produce an ordered sequence.
Step2: Iterating over a sequence
The function range() is useful to generate a sequence of integers that can be used to iterate. | Python Code:
a = [4,5,6,8,10]
for i in a:
print(i)
# A fragment of `One Hundred Years of Solitude`
GGM = 'Many years later, as he faced the firing squad, \
Colonel Aureliano Buendía was to remember that distant \
afternoon when his father took him to discover ice. \
At that time Macondo was a village of twenty adobe houses, \
built on the bank of a river of clear water that ran along \
a bed of polished stones, which were white and enormous, \
like prehistoric eggs.'
print(GGM)
dot = GGM.split() # we create a list where each element is a word
print(dot)
for i in dot:
print(i)
Explanation: Iteration
One of the most basic operations in programming is iterating over a list of elements to perform some kind of operation.
In Python we use the for statement to iterate. It is easier to use than the same statement in C, C++ or FORTRAN because instead of running over an integer index it takes as input any iterable object and runs over it.
Let's see some examples
Iterating over lists
Lists are ordered. Iteration is done in the same order as the input list.
End of explanation
a = {} # empty dictionary
a[1] = 'one'
a[2] = 'two'
a[3] = 'three'
a[4] = 'four'
a[5] = 'five'
print(a)
for k in a.keys(): # iterate over the keys
print(a[k])
for v in a.values(): #iterate over the values
print(v)
Explanation: Iterating over dictionaries
Dictionaries are not ordered. Iterating over them does not have to produce an ordered sequence.
End of explanation
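# The keys and values can also be iterated together using the items() method
for k, v in a.items():
    print(k, v)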
print(range(10)) # range itself returns an iterable object
a = list(range(10)) # this translates that iterable object into a list
print(a) # be careful! the list has 10 objects starting with 0
for i in range(10): # if you give a single argument the iteration starts at 0.
print(i)
for i in range(4,10): # you can also give two arguments: range(start, end).
print(i)
for i in range(0,10,3): # if you give three arguments they are interpreted as range(start, end, step)
print(i)
Explanation: Iterating over a sequence
The function range() is useful to generate a sequence of integers that can be used to iterate.
End of explanation |
1,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h3>Current School Panda</h3>
Working with directory school data
Creative Commons in all schools
This script uses a CSV file from Creative Commons New Zealand and a CSV file from the Ministry of Education.
The CCNZ CSV file contains the names of schools that have a CC licence and the type of licence.
The Ministry of Education csv file contains every public school in New Zealand and info about them.
Standards for website addresses - if the school name ends with "school" or "college", cut it from the name and use it as the domain suffix.
eg Horowhenua College
horowhenua.college.nz
not
horowhenuacollege.school.nz
Auckland Girls Grammar School
aucklandgirlsgrammar.school.nz
not
aucklandgirlsgrammarschool.school.nz
Every school has its own domain name and a Linux server hosting the site. Private/public keys. Static site, git repo. Nikola blog.
What made you choose that particular Creative Commons licence?
I like the CC
Step1: Compare the schools on List of CC schools with list of all public/private schools.
Why shouldn't the default licence for all public schools be a Creative Commons BY license?
Step2: Cycle through only first 89 values - stop when reaching
Step3: Create a RESTful API of schools that have CC and those that don't
Merge two dicts together.
Both are
{name of school | Python Code:
crcom = pd.read_csv('/home/wcmckee/Downloads/List of CC schools - Sheet1.csv', skiprows=5, index_col=0, usecols=[0,1,2])
Explanation: <h3>Current School Panda</h3>
Working with directory school data
Creative Commons in all schools
This script uses a CSV file from Creative Commons New Zealand and a CSV file from the Ministry of Education.
The CCNZ CSV file contains the names of schools that have a CC licence and the type of licence.
The Ministry of Education csv file contains every public school in New Zealand and info about them.
Standards for website addresses - if the school name ends with "school" or "college", cut it from the name and use it as the domain suffix.
eg Horowhenua College
horowhenua.college.nz
not
horowhenuacollege.school.nz
Auckland Girls Grammar School
aucklandgirlsgrammar.school.nz
not
aucklandgirlsgrammarschool.school.nz
Every school has its own domain name and a Linux server hosting the site. Private/public keys. Static site, git repo. Nikola blog.
What made you choose that particular Creative Commons licence?
I like the CC:BY licence because it offers the most freedom to people.
I am not a fan of licenses that restrict commercial use. I believe everyone should be able to do what they like with my work with minimal interference.
If I could I would remove non-commercial licenses.
In the early days of my art blogging I would license under cc nc. This was wrong and I later changed this to a cc by license.
With my photography I once had a photo I took appear in the newspaper. It made the front page. I was offered money and my permission was sought. I was fine with it of course - the license allows this. At the bottom of the photo it read: PHOTO: William Mckee.
Perfect.
The only thing I ask is that they attribute.
I like the idea of ShareAlike but at the end of the day I really don't care and would hate to chase down people who license it wrong. Sure, I don't like it that people could take my stuff and make it not open. I think everything should be open and free.
My art site - artcontrol.me is currently down but when it was up I licensed the site under a cc:by. Elements of the site are still up - such as my YouTube channel.
I attended art school in Wellington - The Learning Connexion. My focus was on drawing and painting. I taught myself programming on the bus to art school. Even when I was drawing on the easel I would be 'drawing' python code. During breaks I would often get my laptop out.
I volunteered at Whaihanga Early Learning Centre. I spent the majority of my time there in the art area doing collaborative works with others. Oil pastel, coloured pencil and pencil were my mediums of choice. Sometimes I would use paint, but it's quite messy.
Copyright shouldn't be default. Apply and pay if you want copyright. CC license by default. That will sort the world.
End of explanation
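# Rough sketch of the website-address rule described above (purely illustrative, not part of the original script)
def school_domain(name):
    words = name.lower().split()
    if words[-1] in ('school', 'college'):
        return ''.join(words[:-1]) + '.' + words[-1] + '.nz'
    return ''.join(words) + '.school.nz'
# school_domain('Horowhenua College') -> 'horowhenua.college.nz'
# school_domain('Auckland Girls Grammar School') -> 'aucklandgirlsgrammar.school.nz'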
#crcom
aqcom = pd.read_csv('/home/wcmckee/Downloads/List of CC schools - Sheet1.csv', skiprows=6, usecols=[0])
aqjsz = aqcom.to_json()
dicthol = json.loads(aqjsz)
dschoz = dicthol['School']
#dicthol
dscv = dschoz.values()
ccschool = list()
for ds in range(87):
#print(dschoz[str(ds)])
ccschool.append((dschoz[str(ds)]))
schccd = dict()
scda = dict({'cc' : True})
sanoc = dict({'cc' : False})
#schccd.update({ccs : scda})
for ccs in ccschool:
#These schools have a cc license. Update the list of all schools with cc and value = true.
#Focus on schools that don't have cc license.
#Filter schools in area that don't have cc license.
#print (ccs)
schccd.update({ccs : scda})
ccschz = list()
for dsc in range(87):
#print (dschoz[str(dsc)])
ccschz.append((dschoz[str(dsc)]))
#Append in names of schools that are missing from this dict.
#Something like
#schccd.update{school that doesnt have cc : {'cc' : False}}
#schccd
Explanation: Compare the schools on List of CC schools with list of all public/private schools.
Why shouldn't the default licence for all public schools be a Creative Commons BY license?
End of explanation
noclist = pd.read_csv('/home/wcmckee/Downloads/Directory-School-current.csv', skiprows=3, usecols=[1])
webskol = pd.read_csv('/home/wcmckee/Downloads/Directory-School-current.csv', skiprows=3, usecols=[6])
websjs = webskol.to_json()
dictscha = json.loads(websjs)
numsweb = dictscha['School website']
lenmuns = len(numsweb)
#for nuran in range(lenmuns):
# print (numsweb[str(nuran)])
#noclist.values[0:10]
aqjaq = noclist.to_json()
jsaqq = json.loads(aqjaq)
najsa = jsaqq['Name']
alsl = len(najsa)
allschlis = list()
for alr in range(alsl):
allschlis.append(najsa[str(alr)])
#allschlis
newlis = list(set(allschlis) - set(ccschool))
empd = dict()
Explanation: Cycle through only the first 89 values - stop when reaching: "These are schools that have expressed an interest in CC, and may have a policy in progress."
New spreadsheet for schools in progress of getting a CC license. Where are they up to? What are the next steps?
Why are schools using a license that isn't CC:BY? They really should be using the same license. CC NC is unacceptable. SA would be OK, but the majority of schools already have CC BY, so it is best to go with what is common so you don't end up with conflicts of licenses.
End of explanation
sstru = json.dumps(schccd)
for newl in newlis:
#print (newl)
empd.update({newl : sanoc})
empdum = json.dumps(empd)
empdum
savjfin = open('/home/wcmckee/ccschool/nocc.json', 'w')
savjfin.write(empdum)
savjfin.close()
savtru = open('/home/wcmckee/ccschool/cctru.json', 'w')
savtru.write(sstru)
savtru.close()
#for naj in najsa.values():
#print (naj)
# for schk in schccd.keys():
#print(schk)
# allschlis.append(schk)
#for i in ccschz[:]:
# if i in allschlis:
# ccschz.remove(i)
# allschlis.remove(i)
#Cycle though some schools rather than everything.
#Cycle though all schools and find schools that have cc
#for naj in range(2543):
#print(najsa[str(naj)])
# for schk in schccd.keys():
# if schk in (najsa[str(naj)]):
#Remove these schools from the list
# print (schk)
Explanation: Create a RESTful API of schools that have CC and those that don't
Merge two dicts together.
Both are
{name of school : 'cc' : 'True'/'False'}
End of explanation |
1,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Download the list of occultation periods from the MOC at Berkeley.
Note that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning.
Step1: Download the NuSTAR TLE archive.
This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.
The times, line1, and line2 elements are now the TLE elements for each epoch.
Step2: Here is where we define the observing window that we want to use.
Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.
Step3: We want to know how to orient NuSTAR for the Sun.
We can more or less pick any angle that we want. But this angle has to be specified a little in advance so that the NuSTAR SOC can plan the "slew in" maneuvers. Below puts DET0 in the top left corner (north-east with respect to RA/Dec coordinates).
This is what you tell the SOC you want the "Sky PA angle" to be.
Step4: Set up the offset you want to use here
Step5: Loop over each orbit and correct the pointing for the same heliocentric pointing position.
Note that you may want to update the pointing for solar rotation. That's up to the user to decide and is not done here. | Python Code:
fname = io.download_occultation_times(outdir='./data/')
print(fname)
Explanation: Download the list of occultation periods from the MOC at Berkeley.
Note that the occultation periods typically only are stored at Berkeley for the future and not for the past. So this is only really useful for observation planning.
End of explanation
tlefile = io.download_tle(outdir='./data')
print(tlefile)
times, line1, line2 = io.read_tle_file(tlefile)
Explanation: Download the NuSTAR TLE archive.
This contains every two-line element (TLE) that we've received for the whole mission. We'll expand on how to use this later.
The times, line1, and line2 elements are now the TLE elements for each epoch.
End of explanation
tstart = '2019-01-12T12:00:00'
tend = '2019-01-12T23:00:00'
orbits = planning.sunlight_periods(fname, tstart, tend)
Explanation: Here is where we define the observing window that we want to use.
Note that tstart and tend must be in the future otherwise you won't find any occultation times and sunlight_periods will return an error.
End of explanation
pa = planning.get_nustar_roll(tstart, 0)
print("NuSTAR Roll angle for Det0 in NE quadrant: {}".format(pa))
Explanation: We want to know how to orient NuSTAR for the Sun.
We can more or less pick any angle that we want. But this angle has to be specified a little in advance so that the NuSTAR SOC can plan the "slew in" maneuvers. Below puts DET0 in the top left corner (north-east with respect to RA/Dec coordinates).
This is what you tell the SOC you want the "Sky PA angle" to be.
End of explanation
offset = [-190., -47.]*u.arcsec
Explanation: Set up the offset you want to use here:
The first element is the direction +WEST of the center of the Sun, the second is the offset +NORTH of the center of the Sun.
If you want multiple pointing locations you can either specify an array of offsets or do this "by hand" below.
End of explanation
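# If you want several pointing locations, an array of offsets could look like this
# (placeholder values, +West and +North of Sun center):
# offsets = [[-190., -47.] * u.arcsec, [150., 300.] * u.arcsec]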
for ind, orbit in enumerate(orbits):
midTime = (0.5*(orbit[1] - orbit[0]) + orbit[0])
sky_pos = planning.get_skyfield_position(midTime, offset, load_path='./data', parallax_correction=True)
print("Orbit: {}".format(ind))
print("Orbit start: {} Orbit end: {}".format(orbit[0].isoformat(), orbit[1].isoformat()))
print('Aim time: {} RA (deg): {} Dec (deg): {}'.format(midTime.isoformat(), sky_pos[0], sky_pos[1]))
print("")
Explanation: Loop over each orbit and correct the pointing for the same heliocentric pointing position.
Note that you may want to update the pointing for solar rotation. That's up to the user to decide and is not done here.
End of explanation |
1,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dénes Csala
MCC, 2022
Based on Elements of Data Science (Allen B. Downey, 2021) and Python Data Science Handbook (Jake VanderPlas, 2018)
License
Step1: Validating Models
One of the most important pieces of machine learning is model validation
Step2: Let's fit a K-neighbors classifier
Step3: Now we'll use this classifier to predict labels for the data
Step4: Finally, we can check how well our prediction did
Step5: It seems we have a perfect classifier!
Question
Step6: Now we train on the training data, and validate on the test data
Step7: This gives us a more reliable estimate of how our model is doing.
The metric we're using here, comparing the number of matches to the total number of samples, is known as the accuracy score, and can be computed using the following routine
Step8: This can also be computed directly from the model.score method
Step9: Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors
Step10: We see that in this case, a small number of neighbors seems to be the best option.
Cross-Validation
One problem with validation sets is that you "lose" some of the data. Above, we've only used 3/4 of the data for the training, and used 1/4 for the validation. Another option is to use 2-fold cross-validation, where we split the sample in half and perform the validation twice
Step11: Thus a two-fold cross-validation gives us two estimates of the score for that parameter.
Because this is a bit of a pain to do by hand, scikit-learn has a utility routine to help
Step12: K-fold Cross-Validation
Here we've used 2-fold cross-validation. This is just one specialization of $K$-fold cross-validation, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set.
We can do this by changing the cv parameter above. Let's do 10-fold cross-validation
Step13: This gives us an even better idea of how well our model is doing.
Overfitting, Underfitting and Model Selection
Now that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question
Step14: Now let's create a realization of this dataset
Step15: Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit
Step16: We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.
Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the PolynomialFeatures preprocessor, which can be pipelined with a linear regression.
Let's make a convenience routine to do this
Step17: Now we'll use this to fit a quadratic curve to the data.
Step18: This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
Step19: When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data.
Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively
Step20: Detecting Over-fitting with Validation Curves
Clearly, computing the error on the training data is not enough (we saw this previously). As above, we can use cross-validation to get a better handle on how the model fit is working.
Let's do this here, again using the validation_curve utility. To make things more clear, we'll use a slightly larger dataset
Step21: Now let's plot the validation curves
Step22: Notice the trend here, which is common for this type of plot.
For a small model complexity, the training error and validation error are very similar. This indicates that the model is under-fitting the data
Step23: Detecting Data Sufficiency with Learning Curves
As you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of learning curves, which display this property.
The idea is to plot the mean-squared-error for the training and test set as a function of Number of Training Points
Step24: Let's see what the learning curves look like for a linear model
Step25: This shows a typical learning curve
Step26: Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!
What if we get even more complex? | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn')
Explanation: Dénes Csala
MCC, 2022
Based on Elements of Data Science (Allen B. Downey, 2021) and Python Data Science Handbook (Jake VanderPlas, 2018)
License: MIT
Validation and Model Selection
In this section, we'll look at model evaluation and the tuning of hyperparameters, which are parameters that define the model.
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
Explanation: Validating Models
One of the most important pieces of machine learning is model validation: that is, checking how well your model fits a given dataset. But there are some pitfalls you need to watch out for.
Consider the digits example we've been looking at previously. How might we check how well our model fits the data?
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
Explanation: Let's fit a K-neighbors classifier
End of explanation
y_pred = knn.predict(X)
Explanation: Now we'll use this classifier to predict labels for the data
End of explanation
print("{0} / {1} correct".format(np.sum(y == y_pred), len(y)))
Explanation: Finally, we can check how well our prediction did:
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape
Explanation: It seems we have a perfect classifier!
Question: what's wrong with this?
Validation Sets
Above we made the mistake of testing our data on the same set of data that was used for training. This is not generally a good idea. If we optimize our estimator this way, we will tend to over-fit the data: that is, we learn the noise.
A better way to test a model is to use a hold-out set which doesn't enter the training. We've seen this before using scikit-learn's train/test split utility:
End of explanation
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print("{0} / {1} correct".format(np.sum(y_test == y_pred), len(y_test)))
Explanation: Now we train on the training data, and validate on the test data:
End of explanation
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
Explanation: This gives us a more reliable estimate of how our model is doing.
The metric we're using here, comparing the number of matches to the total number of samples, is known as the accuracy score, and can be computed using the following routine:
End of explanation
knn.score(X_test, y_test)
Explanation: This can also be computed directly from the model.score method:
End of explanation
for n_neighbors in [1, 5, 10, 20, 30]:
knn = KNeighborsClassifier(n_neighbors)
knn.fit(X_train, y_train)
print(n_neighbors, knn.score(X_test, y_test))
Explanation: Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors:
End of explanation
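As a supplementary sketch (not part of the original notebook), the same sweep over n_neighbors can be automated with scikit-learn's GridSearchCV, which combines the parameter search with cross-validation:
from sklearn.model_selection import GridSearchCV
# Supplementary sketch: cross-validated search over the number of neighbors.
param_grid = {'n_neighbors': [1, 5, 10, 20, 30]}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
print(grid.score(X_test, y_test))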
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)
X1.shape, X2.shape
print(KNeighborsClassifier(1).fit(X2, y2).score(X1, y1))
print(KNeighborsClassifier(1).fit(X1, y1).score(X2, y2))
Explanation: We see that in this case, a small number of neighbors seems to be the best option.
Cross-Validation
One problem with validation sets is that you "lose" some of the data. Above, we've only used 3/4 of the data for the training, and used 1/4 for the validation. Another option is to use 2-fold cross-validation, where we split the sample in half and perform the validation twice:
End of explanation
from sklearn.model_selection import cross_val_score
cv = cross_val_score(KNeighborsClassifier(1), X, y, cv=2)
cv
Explanation: Thus a two-fold cross-validation gives us two estimates of the score for that parameter.
Because this is a bit of a pain to do by hand, scikit-learn has a utility routine to help:
End of explanation
cross_val_score(KNeighborsClassifier(1), X, y, cv=10)
Explanation: K-fold Cross-Validation
Here we've used 2-fold cross-validation. This is just one specialization of $K$-fold cross-validation, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set.
We can do this by changing the cv parameter above. Let's do 10-fold cross-validation:
End of explanation
def test_func(x, err=0.5):
y = 10 - 1. / (x + 0.1)
if err > 0:
y = np.random.normal(y, err)
return y
Explanation: This gives us an even better idea of how well our model is doing.
Overfitting, Underfitting and Model Selection
Now that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
Use simpler or more complicated model?
Add more features to each observed data point?
Add more training samples?
The answer is often counter-intuitive. In particular, sometimes using a
more complicated model will give worse results. Also, sometimes adding
training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
Illustration of the Bias-Variance Tradeoff
For this section, we'll work with a simple 1D regression problem. This will help us to
easily visualize the data and the model, and the results generalize easily to higher-dimensional
datasets. We'll explore a simple linear regression problem.
This can be accomplished within scikit-learn with the sklearn.linear_model module.
We'll create a simple nonlinear function that we'd like to fit
End of explanation
def make_data(N=40, error=1.0, random_seed=1):
# randomly sample the data
    np.random.seed(random_seed)
X = np.random.random(N)[:, np.newaxis]
y = test_func(X.ravel(), error)
return X, y
X, y = make_data(40, error=1)
plt.scatter(X.ravel(), y);
Explanation: Now let's create a realization of this dataset:
End of explanation
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
model = LinearRegression()
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
Explanation: Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit:
End of explanation
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
Explanation: We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.
Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the PolynomialFeatures preprocessor, which can be pipelined with a linear regression.
Let's make a convenience routine to do this:
End of explanation
model = PolynomialRegression(2)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
Explanation: Now we'll use this to fit a quadratic curve to the data.
End of explanation
model = PolynomialRegression(30)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)))
plt.ylim(-4, 14);
Explanation: This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
End of explanation
from ipywidgets import interact
def plot_fit(degree=1, Npts=50):
X, y = make_data(Npts, error=1)
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
model = PolynomialRegression(degree=degree)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.ylim(-4, 14)
plt.title("mean squared error: {0:.2f}".format(mean_squared_error(model.predict(X), y)))
interact(plot_fit, degree=[1, 30], Npts=[2, 100]);
Explanation: When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data.
Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively:
End of explanation
X, y = make_data(120, error=1.0)
plt.scatter(X, y);
from sklearn.model_selection import validation_curve
def rms_error(model, X, y):
y_pred = model.predict(X)
return np.sqrt(np.mean((y - y_pred) ** 2))
degree = np.arange(0, 18)
val_train, val_test = validation_curve(PolynomialRegression(), X, y,
                                       param_name='polynomialfeatures__degree',
                                       param_range=degree, cv=7,
                                       scoring=rms_error)
Explanation: Detecting Over-fitting with Validation Curves
Clearly, computing the error on the training data is not enough (we saw this previously). As above, we can use cross-validation to get a better handle on how the model fit is working.
Let's do this here, again using the validation_curve utility. To make things more clear, we'll use a slightly larger dataset:
End of explanation
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
plot_with_err(degree, val_train, label='training scores')
plot_with_err(degree, val_test, label='validation scores')
plt.xlabel('degree'); plt.ylabel('rms error')
plt.legend();
Explanation: Now let's plot the validation curves:
End of explanation
model = PolynomialRegression(4).fit(X, y)
plt.scatter(X, y)
plt.plot(X_test, model.predict(X_test));
Explanation: Notice the trend here, which is common for this type of plot.
For a small model complexity, the training error and validation error are very similar. This indicates that the model is under-fitting the data: it doesn't have enough complexity to represent the data. Another way of putting it is that this is a high-bias model.
As the model complexity grows, the training and validation scores diverge. This indicates that the model is over-fitting the data: it has so much flexibility, that it fits the noise rather than the underlying trend. Another way of putting it is that this is a high-variance model.
Note that the training score (nearly) always improves with model complexity. This is because a more complicated model can fit the noise better, so the model improves. The validation data generally has a sweet spot, which here is around 5 terms.
Here's our best-fit model according to the cross-validation:
End of explanation
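As a small supplementary sketch (not in the original), the sweet spot can also be read off programmatically by taking the polynomial degree with the lowest mean validation error:
# Supplementary sketch: pick the degree that minimizes the mean cross-validated rms error.
best_degree = degree[np.argmin(val_test.mean(axis=1))]
print("best degree according to the validation curve:", best_degree)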
from sklearn.model_selection import learning_curve
def plot_learning_curve(degree=3):
train_sizes = np.linspace(0.05, 1, 120)
    N_train, val_train, val_test = learning_curve(PolynomialRegression(degree),
                                                  X, y, train_sizes=train_sizes,
                                                  cv=5, scoring=rms_error)
plot_with_err(N_train, val_train, label='training scores')
plot_with_err(N_train, val_test, label='validation scores')
plt.xlabel('Training Set Size'); plt.ylabel('rms error')
plt.ylim(0, 3)
plt.xlim(5, 80)
plt.legend()
Explanation: Detecting Data Sufficiency with Learning Curves
As you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of learning curves, which display this property.
The idea is to plot the mean-squared-error for the training and test set as a function of Number of Training Points
End of explanation
plot_learning_curve(1)
Explanation: Let's see what the learning curves look like for a linear model:
End of explanation
plot_learning_curve(3)
Explanation: This shows a typical learning curve: for very few training points, there is a large separation between the training and test error, which indicates over-fitting. Given the same model, for a large number of training points, the training and testing errors converge, which indicates potential under-fitting.
As you add more data points, the training error will never decrease, and the testing error will never increase (why do you think this is?)
It is easy to see that, in this plot, if you'd like to reduce the MSE down to the nominal value of 1.0 (which is the magnitude of the scatter we put in when constructing the data), then adding more samples will never get you there. For $d=1$, the two curves have converged and cannot move lower. What about for a larger value of $d$?
End of explanation
plot_learning_curve(10)
Explanation: Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!
What if we get even more complex?
End of explanation |
1,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring precision and recall
The goal of this second notebook is to understand precision-recall in the context of classifiers.
Use Amazon review data in its entirety.
Train a logistic regression model.
Explore various evaluation metrics
Step1: Load amazon review dataset
Step2: Extract word counts and sentiments
As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following
Step3: Now, let's remember what the dataset looks like by taking a quick peek
Step4: Split data into training and test sets
We split the data into an 80/20 split, where 80% of the data is in the training set and 20% is in the test set.
Step5: Train a logistic regression classifier
We will now train a logistic regression classifier with sentiment as the target and word_count as the features. We will set validation_set=None to make sure everyone gets exactly the same results.
Remember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall.
Step6: Model Evaluation
We will explore the advanced model evaluation concepts that were discussed in the lectures.
Accuracy
One performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
To obtain the accuracy of our trained models using GraphLab Create, simply pass the option metric='accuracy' to the evaluate function. We compute the accuracy of our logistic regression model on the test_data as follows
Step7: Baseline
Step8: Quiz Question
Step9: Quiz Question
Step10: Quiz Question
Step11: Quiz Question
Step12: Run prediction with output_type='probability' to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.
Step13: Quiz Question
Step14: Quiz Question (variant 1)
Step15: For each of the values of threshold, we compute the precision and recall scores.
Step16: Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.
Step17: Quiz Question
Step18: Now, let's predict the probability of classifying these reviews as positive
Step19: Let's plot the precision-recall curve for the baby_reviews dataset.
First, let's consider the following threshold_values ranging from 0.5 to 1
Step20: Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below.
Step21: Quiz Question | Python Code:
import graphlab
from __future__ import division
import numpy as np
graphlab.canvas.set_target('ipynb')
Explanation: Exploring precision and recall
The goal of this second notebook is to understand precision-recall in the context of classifiers.
Use Amazon review data in its entirety.
Train a logistic regression model.
Explore various evaluation metrics: accuracy, confusion matrix, precision, recall.
Explore how various metrics can be combined to produce a cost of making an error.
Explore precision and recall curves.
Because we are using the full Amazon review dataset (not a subset of words or reviews), in this assignment we return to using GraphLab Create for its efficiency. As usual, let's start by firing up GraphLab Create.
Make sure you have the latest version of GraphLab Create (1.8.3 or later). If you don't find the decision tree module, then you would need to upgrade graphlab-create using
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby.gl/')
Explanation: Load amazon review dataset
End of explanation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
review_clean = products['review'].apply(remove_punctuation)
# Count words
products['word_count'] = graphlab.text_analytics.count_words(review_clean)
# Drop neutral sentiment reviews.
products = products[products['rating'] != 3]
# Positive sentiment to +1 and negative sentiment to -1
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
Explanation: Extract word counts and sentiments
As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following:
Remove punctuation.
Remove reviews with "neutral" sentiment (rating 3).
Set reviews with rating 4 or more to be positive and those with 2 or less to be negative.
End of explanation
products
Explanation: Now, let's remember what the dataset looks like by taking a quick peek:
End of explanation
train_data, test_data = products.random_split(.8, seed=1)
Explanation: Split data into training and test sets
We split the data into an 80/20 split, where 80% of the data is in the training set and 20% is in the test set.
End of explanation
model = graphlab.logistic_classifier.create(train_data, target='sentiment',
features=['word_count'],
validation_set=None)
Explanation: Train a logistic regression classifier
We will now train a logistic regression classifier with sentiment as the target and word_count as the features. We will set validation_set=None to make sure everyone gets exactly the same results.
Remember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall.
End of explanation
accuracy= model.evaluate(test_data, metric='accuracy')['accuracy']
print "Test Accuracy: %s" % accuracy
Explanation: Model Evaluation
We will explore the advanced model evaluation concepts that were discussed in the lectures.
Accuracy
One performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
To obtain the accuracy of our trained models using GraphLab Create, simply pass the option metric='accuracy' to the evaluate function. We compute the accuracy of our logistic regression model on the test_data as follows:
End of explanation
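As a supplementary cross-check (not part of the original assignment), the same number can be computed directly from the accuracy definition above by comparing predicted and true labels:
# Supplementary sketch: accuracy computed directly from the definition.
y_pred = model.predict(test_data)
manual_accuracy = (test_data['sentiment'] == y_pred).sum() / len(test_data)
print "Manually computed accuracy: %s" % manual_accuracy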
baseline = len(test_data[test_data['sentiment'] == 1])/len(test_data)
print "Baseline accuracy (majority class classifier): %s" % baseline
Explanation: Baseline: Majority class prediction
Recall from an earlier assignment that we used the majority class classifier as a baseline (i.e., reference) model for a point of comparison with a more sophisticated classifier. The majority classifier model predicts the majority class for all data points.
Typically, a good model should beat the majority class classifier. Since the majority class in this dataset is the positive class (i.e., there are more positive than negative reviews), the accuracy of the majority class classifier can be computed as follows:
End of explanation
confusion_matrix = model.evaluate(test_data, metric='confusion_matrix')['confusion_matrix']
confusion_matrix
Explanation: Quiz Question: Using accuracy as the evaluation metric, was our logistic regression model better than the baseline (majority class classifier)?
Confusion Matrix
The accuracy, while convenient, does not tell the whole story. For a fuller picture, we turn to the confusion matrix. In the case of binary classification, the confusion matrix is a 2-by-2 matrix laying out correct and incorrect predictions made in each label as follows:
+---------------------------------------------+
| Predicted label |
+----------------------+----------------------+
| (+1) | (-1) |
+-------+-----+----------------------+----------------------+
| True |(+1) | # of true positives | # of false negatives |
| label +-----+----------------------+----------------------+
| |(-1) | # of false positives | # of true negatives |
+-------+-----+----------------------+----------------------+
To print out the confusion matrix for a classifier, use metric='confusion_matrix':
End of explanation
precision = model.evaluate(test_data, metric='precision')['precision']
print "Precision on test data: %s" % precision
Explanation: Quiz Question: How many predicted values in the test set are false positives?
Computing the cost of mistakes
Put yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, false positives cost more than false negatives. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.)
Suppose you know the costs involved in each kind of mistake:
1. \$100 for each false positive.
2. \$1 for each false negative.
3. Correctly classified reviews incur no cost.
Quiz Question: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the test set?
Precision and Recall
You may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. This is where precision comes in:
$$
[\text{precision}] = \frac{[\text{# positive data points with positive predicitions}]}{\text{[# all data points with positive predictions]}} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false positives}]}
$$
So to keep the percentage of false positives below 3.5% of positive predictions, we must raise the precision to 96.5% or higher.
First, let us compute the precision of the logistic regression classifier on the test_data.
End of explanation
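The dollar amounts above can be turned into a single cost using the counts in the confusion matrix. The sketch below is supplementary, not part of the original assignment, and assumes the confusion-matrix SFrame has the columns target_label, predicted_label and count (the format returned by evaluate above).
# Supplementary sketch: cost of mistakes ($100 per false positive, $1 per false negative).
# Assumes the columns 'target_label', 'predicted_label' and 'count' exist.
false_positives = confusion_matrix[(confusion_matrix['target_label'] == -1) &
                                   (confusion_matrix['predicted_label'] == +1)]['count'].sum()
false_negatives = confusion_matrix[(confusion_matrix['target_label'] == +1) &
                                   (confusion_matrix['predicted_label'] == -1)]['count'].sum()
cost = 100 * false_positives + 1 * false_negatives
print "Approximate cost of mistakes: $%s" % cost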
recall = model.evaluate(test_data, metric='recall')['recall']
print "Recall on test data: %s" % recall
Explanation: Quiz Question: Out of all reviews in the test set that are predicted to be positive, what fraction of them are false positives? (Round to the second decimal place e.g. 0.25)
Quiz Question: Based on what we learned in lecture, if we wanted to reduce this fraction of false positives to be below 3.5%, we would: (see the quiz)
A complementary metric is recall, which measures the ratio between the number of true positives and that of (ground-truth) positive reviews:
$$
[\text{recall}] = \frac{[\text{# positive data points with positive predictions}]}{[\text{# all positive data points}]} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false negatives}]}
$$
Let us compute the recall on the test_data.
End of explanation
def apply_threshold(probabilities, threshold):
    # Predict +1 if the probability is at least `threshold`, and -1 otherwise.
    return probabilities.apply(lambda prob: +1 if prob >= threshold else -1)
Explanation: Quiz Question: What fraction of the positive reviews in the test_set were correctly predicted as positive by the classifier?
Quiz Question: What is the recall value for a classifier that predicts +1 for all data points in the test_data?
Precision-recall tradeoff
In this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve.
Varying the threshold
False positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold.
Write a function called apply_threshold that accepts two things
* probabilities (an SArray of probability values)
* threshold (a float between 0 and 1).
The function should return an SArray, where each element is set to +1 or -1 depending on whether the corresponding probability exceeds threshold.
End of explanation
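A quick supplementary sanity check on a tiny SArray shows the expected +1/-1 behaviour of apply_threshold:
# Supplementary sanity check for apply_threshold on a small hand-made example.
example_probs = graphlab.SArray([0.2, 0.6, 0.95])
print apply_threshold(example_probs, 0.5)  # expected: [-1, 1, 1]
print apply_threshold(example_probs, 0.9)  # expected: [-1, -1, 1]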
probabilities = model.predict(test_data, output_type='probability')
predictions_with_default_threshold = apply_threshold(probabilities, 0.5)
predictions_with_high_threshold = apply_threshold(probabilities, 0.9)
print "Number of positive predicted reviews (threshold = 0.5): %s" % (predictions_with_default_threshold == 1).sum()
print "Number of positive predicted reviews (threshold = 0.9): %s" % (predictions_with_high_threshold == 1).sum()
Explanation: Run prediction with output_type='probability' to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.
End of explanation
# Threshold = 0.5
precision_with_default_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_default_threshold)
recall_with_default_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_default_threshold)
# Threshold = 0.9
precision_with_high_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_high_threshold)
recall_with_high_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_high_threshold)
print "Precision (threshold = 0.5): %s" % precision_with_default_threshold
print "Recall (threshold = 0.5) : %s" % recall_with_default_threshold
print "Precision (threshold = 0.9): %s" % precision_with_high_threshold
print "Recall (threshold = 0.9) : %s" % recall_with_high_threshold
Explanation: Quiz Question: What happens to the number of positive predicted reviews as the threshold increased from 0.5 to 0.9?
Exploring the associated precision and recall as the threshold varies
By changing the probability threshold, it is possible to influence precision and recall. We can explore this as follows:
End of explanation
threshold_values = np.linspace(0.5, 1, num=100)
print threshold_values
Explanation: Quiz Question (variant 1): Does the precision increase with a higher threshold?
Quiz Question (variant 2): Does the recall increase with a higher threshold?
Precision-recall curve
Now, we will explore various different values of tresholds, compute the precision and recall scores, and then plot the precision-recall curve.
End of explanation
precision_all = []
recall_all = []
probabilities = model.predict(test_data, output_type='probability')
for threshold in threshold_values:
predictions = apply_threshold(probabilities, threshold)
precision = graphlab.evaluation.precision(test_data['sentiment'], predictions)
recall = graphlab.evaluation.recall(test_data['sentiment'], predictions)
precision_all.append(precision)
recall_all.append(recall)
Explanation: For each of the values of threshold, we compute the precision and recall scores.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
def plot_pr_curve(precision, recall, title):
plt.rcParams['figure.figsize'] = 7, 5
plt.locator_params(axis = 'x', nbins = 5)
plt.plot(precision, recall, 'b-', linewidth=4.0, color = '#B0017F')
plt.title(title)
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.rcParams.update({'font.size': 16})
plot_pr_curve(precision_all, recall_all, 'Precision recall curve (all)')
Explanation: Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.
End of explanation
baby_reviews = test_data[test_data['name'].apply(lambda x: 'baby' in x.lower())]
Explanation: Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places.
Quiz Question: Using threshold = 0.98, how many false negatives do we get on the test_data? (Hint: You may use the graphlab.evaluation.confusion_matrix function implemented in GraphLab Create.)
This is the number of false negatives (i.e., the number of reviews we end up looking at when it is not needed) that we have to deal with using this classifier.
Evaluating specific search terms
So far, we looked at the number of false positives for the entire test set. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon.
Precision-Recall on all baby related items
From the test set, select all the reviews for all products with the word 'baby' in them.
End of explanation
probabilities = model.predict(baby_reviews, output_type='probability')
Explanation: Now, let's predict the probability of classifying these reviews as positive:
End of explanation
threshold_values = np.linspace(0.5, 1, num=100)
Explanation: Let's plot the precision-recall curve for the baby_reviews dataset.
First, let's consider the following threshold_values ranging from 0.5 to 1:
End of explanation
precision_all = []
recall_all = []
for threshold in threshold_values:
# Make predictions. Use the `apply_threshold` function
    predictions = apply_threshold(probabilities, threshold)
    # Calculate the precision.
    precision = graphlab.evaluation.precision(baby_reviews['sentiment'], predictions)
    # Calculate the recall.
    recall = graphlab.evaluation.recall(baby_reviews['sentiment'], predictions)
    # Append the precision and recall scores.
    precision_all.append(precision)
    recall_all.append(recall)
Explanation: Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below.
End of explanation
plot_pr_curve(precision_all, recall_all, "Precision-Recall (Baby)")
Explanation: Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better for the reviews of data in baby_reviews? Round your answer to 3 decimal places.
Quiz Question: Is this threshold value smaller or larger than the threshold used for the entire dataset to achieve the same specified precision of 96.5%?
Finally, let's plot the precision recall curve.
End of explanation |
1,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using ChoiceRank to understand network traffic
This notebook provides a quick example on how to use ChoiceRank to estimate transitions along the edges of a network based only on the marginal traffic at the nodes.
Step1: 1. Generating sample data
First, we will generate sample data.
This includes
generating a network,
generating a parameter for each node of the network,
generating samples of choices in the network.
Step2: The network looks as follows
Step3: Now we aggregate the all the transitions into incoming traffic and outgoing traffic.
Step4: 2. Estimating transitions using ChoiceRank
ChoiceRank can be used to recover the transitions on the network based only on
Step5: We can attempt to reconstruct the transition matrix using the marginal traffic data and the parameters. | Python Code:
import choix
import networkx as nx
import numpy as np
%matplotlib inline
Explanation: Using ChoiceRank to understand network traffic
This notebook provides a quick example on how to use ChoiceRank to estimate transitions along the edges of a network based only on the marginal traffic at the nodes.
End of explanation
n_items = 8
p_edge = 0.3
n_samples = 3000
# 1. Generate a network.
graph = nx.erdos_renyi_graph(n_items, p_edge, directed=True)
# 2. Generate a parameter for each node.
params = choix.generate_params(n_items, interval=2.0)
# 3. Generate samples of choices in the network.
transitions = np.zeros((n_items, n_items))
for _ in range(n_samples):
src = np.random.choice(n_items)
neighbors = list(graph.successors(src))
if len(neighbors) == 0:
continue
dst = choix.compare(neighbors, params)
transitions[src, dst] += 1
Explanation: 1. Generating sample data
First, we will generate sample data.
This includes
generating a network,
generating a parameter for each node of the network,
generating samples of choices in the network.
End of explanation
nx.draw(graph, with_labels=True)
Explanation: The network looks as follows
End of explanation
traffic_in = transitions.sum(axis=0)
traffic_out = transitions.sum(axis=1)
print("incoming traffic:", traffic_in)
print("outgoing traffic:", traffic_out)
Explanation: Now we aggregate the all the transitions into incoming traffic and outgoing traffic.
End of explanation
params = choix.choicerank(graph, traffic_in, traffic_out)
Explanation: 2. Estimating transitions using ChoiceRank
ChoiceRank can be used to recover the transitions on the network based only on:
information about the structure of the network, and
the (marginal) incoming and outgoing traffic at each node.
ChoiceRank works under the assumption that each node has a latent "preference" score, and that transitions follow Luce's choice model.
End of explanation
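To make the modelling assumption concrete, here is a supplementary sketch that computes the Luce choice probabilities for one node by hand (a softmax over the neighbours' estimated parameters); it should match what choix.probabilities returns, but it is not part of the original example.
# Supplementary sketch: Luce choice probabilities for a single source node, by hand.
src = 0
neighbors = list(graph.successors(src))
if len(neighbors) > 0:
    weights = np.exp([params[j] for j in neighbors])
    manual_probs = weights / weights.sum()
    print("neighbors of node {}: {}".format(src, neighbors))
    print("manual softmax probabilities:", manual_probs)
    print("choix.probabilities:         ", choix.probabilities(neighbors, params))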
est = np.zeros((n_items, n_items))
for src in range(n_items):
neighbors = list(graph.successors(src))
if len(neighbors) == 0:
continue
probs = choix.probabilities(neighbors, params)
est[src,neighbors] = traffic_out[src] * probs
print("True transition matrix:")
print(transitions)
print("\nEstimated transition matrix:")
print(np.round_(est))
print("\nDifference:")
print(np.round_(transitions - est))
Explanation: We can attempt to reconstruct the transition matrix using the marginal traffic data and the parameters.
End of explanation |
1,533 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Feature Engineering for XGBoost
| Python Code::
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

important_values = values.merge(labels, on="building_id")
important_values.drop(columns=["building_id"], inplace = True)
important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category")
important_values
X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'), important_values['damage_grade'], test_size = 0.2, random_state = 123)
#OneHotEncoding
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type","position", "ground_floor_type", "other_floor_type","plan_configuration", "legal_ownership_status"]
for feature in features_to_encode:
X_train = encode_and_bind(X_train, feature)
X_test = encode_and_bind(X_test, feature)
X_train
import time
def my_grid_search():
    print(time.gmtime())
    i = 1
    df = pd.DataFrame({'subsample': [], 'gamma': [], 'learning_rate': [],
                       'max_depth': [], 'score': []})
    for subsample in [0.75, 0.885, 0.95]:
        for gamma in [0.75, 1, 1.25]:
            for learning_rate in [0.4375, 0.45, 0.4625]:
                for max_depth in [5, 6, 7]:
                    model = XGBClassifier(n_estimators=350, booster='gbtree',
                                          subsample=subsample, gamma=gamma,
                                          max_depth=max_depth, learning_rate=learning_rate,
                                          label_encoder=False, verbosity=0)
                    model.fit(X_train, y_train)
                    y_preds = model.predict(X_test)
                    score = f1_score(y_test, y_preds, average='micro')
                    df = df.append(pd.Series(data={'subsample': subsample, 'gamma': gamma,
                                                   'learning_rate': learning_rate,
                                                   'max_depth': max_depth, 'score': score},
                                             name=i))
                    print(i, time.gmtime())
                    i += 1
    return df.sort_values('score', ascending=False)
current_df = my_grid_search()
df = pd.read_csv('grid-search/res-feature-engineering.csv')
df = df.append(current_df)
df.to_csv('grid-search/res-feature-engineering.csv')
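A hedged follow-up sketch (not part of the original snippet): once the grid search has been run and the results accumulated in df, the best-scoring row can be used to refit a final model. The column names below are the ones created in my_grid_search.
# Hedged sketch: refit a final model using the best hyper-parameters found so far.
best = df.sort_values('score', ascending=False).iloc[0]
final_model = XGBClassifier(n_estimators=350,
                            booster='gbtree',
                            subsample=best['subsample'],
                            gamma=best['gamma'],
                            max_depth=int(best['max_depth']),
                            learning_rate=best['learning_rate'],
                            verbosity=0)
final_model.fit(X_train, y_train)
print(f1_score(y_test, final_model.predict(X_test), average='micro'))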
|
1,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step8: Vertex constants
Setup up the following constants for Vertex
Step9: AutoML constants
Set constants unique to AutoML datasets and training
Step10: Tutorial
Now you are ready to start creating your own AutoML text sentiment analysis model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
Step11: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following
Step12: Now save the unique dataset identifier for the Dataset resource instance you created.
Step13: Data preparation
The Vertex Dataset resource for text has a couple of requirements for your text data.
Text examples must be stored in a CSV or JSONL file.
CSV
For text sentiment analysis, the CSV file has a few requirements
Step14: Quick peek at your data
You will use a version of the Crowdflower Claritin-Twitter dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
Step15: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following
Step16: Train the model
Now train an AutoML text sentiment analysis model using your Vertex Dataset resource. To train the model, do the following steps
Step17: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields we need to specify are
Step18: Now save the unique identifier of the training pipeline you created.
Step19: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter
Step20: Deployment
Training the above model may take upwards of 180 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step21: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter
Step22: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps
Step23: Now get the unique identifier for the Endpoint resource you created.
Step24: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step25: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step26: Make a online prediction request
Now do a online prediction to your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
Step27: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters
Step28: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters
Step29: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: AutoML text sentiment analysis model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_sentiment_analysis_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_text_sentiment_analysis_online.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create text sentiment analysis models and do online prediction using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the Crowdflower Claritin-Twitter dataset from data.world Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Objective
In this tutorial, you create an AutoML text sentiment analysis model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Set up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
# Text Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/text_1.0.0.yaml"
# Text Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/text_sentiment_io_format_1.0.0.yaml"
# Text Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_text_sentiment_1.0.0.yaml"
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own AutoML text sentiment analysis model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=timeout)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("claritin-" + TIMESTAMP, DATA_SCHEMA)
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates a Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
End of explanation
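For illustration, here is a minimal sketch of polling an operation object with the methods listed in the table above, assuming operation is the object returned by a client call such as create_dataset:
import time

while not operation.done():
    print("still running:", operation.running())
    time.sleep(10)
print("done:", operation.done(), "cancelled:", operation.cancelled())
result = operation.result()  # returns the completed resource in JSON format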
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
IMPORT_FILE = "gs://cloud-samples-data/language/claritin.csv"
SENTIMENT_MAX = 4
Explanation: Data preparation
The Vertex Dataset resource for text has a couple of requirements for your text data.
Text examples must be stored in a CSV or JSONL file.
CSV
For text sentiment analysis, the CSV file has a few requirements:
No heading.
First column is the text example or Cloud Storage path to text file.
Second column the label (i.e., sentiment).
Third column is the maximum sentiment value. For example, if the range is 0 to 3, then the maximum value is 3.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
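To make the CSV layout above concrete, here is a hedged sketch that writes a tiny index file in this format; the rows and local filename are made-up examples, not part of the dataset used below:
import csv

example_rows = [
    ("this medicine worked great for my allergies", 4, 4),  # text, sentiment label, max sentiment
    ("no noticeable effect after a week", 0, 4),
]
with open("sentiment_index.csv", "w", newline="") as f:
    csv.writer(f).writerows(example_rows)  # note: no header row, per the requirements above
# The file would then be staged to Cloud Storage, e.g.:
# ! gsutil cp sentiment_index.csv gs://<your-bucket>/sentiment_index.csv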
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
Explanation: Quick peek at your data
You will use a version of the Crowdflower Claritin-Twitter dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset_id, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
Explanation: Import data
Now, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following:
Uses the Dataset client.
Calls the client method import_data, with the following parameters:
name: The Vertex fully qualified identifier for the Dataset resource to import the data into.
import_configs: The import configuration.
import_configs: A Python list containing a dictionary, with the key/value entries:
gcs_sources: A list of URIs to the paths of the one or more index files.
import_schema_uri: The schema identifying the labeling type.
The import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
End of explanation
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
Explanation: Train the model
Now train an AutoML text sentiment analysis model using your Vertex Dataset resource. To train the model, do the following steps:
Create an Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and run as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's now look deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
PIPE_NAME = "claritin_pipe-" + TIMESTAMP
MODEL_NAME = "claritin_model-" + TIMESTAMP
task = json_format.ParseDict(
{
"sentiment_max": SENTIMENT_MAX,
},
Value(),
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields we need to specify are:
sentiment_max: The maximum value for the sentiment (e.g., 4).
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
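As a quick sanity check on this conversion — a sketch using only the json_format helpers imported earlier in this notebook:
task_dict = {"sentiment_max": SENTIMENT_MAX}
task_value = json_format.ParseDict(task_dict, Value())
print(task_value)                             # protobuf Value wrapping the dictionary
print(json_format.MessageToDict(task_value))  # converts back to a plain Python dict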
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function retrieves it by calling the pipeline client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
Explanation: Deployment
Training the above model may take upwards of 180 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("meanAbsoluteError", metrics["meanAbsoluteError"])
print("precision", metrics["precision"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation -- you probably have only one -- the function prints all the metric key names, and for a small subset (meanAbsoluteError and precision) it prints the values.
End of explanation
ENDPOINT_NAME = "claritin_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision down to, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
End of explanation
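To make the three scaling choices concrete, here is a hedged sketch of the corresponding resource dictionaries using the min_replica_count and max_replica_count fields named above; the node counts are arbitrary examples:
# Single instance: exactly one node.
single_instance = {"min_replica_count": 1, "max_replica_count": 1}
# Manual scaling: a fixed pool of, say, three nodes.
manual_scaling = {"min_replica_count": 3, "max_replica_count": 3}
# Auto scaling: start at one node and allow growth up to, say, five under load.
auto_scaling = {"min_replica_count": 1, "max_replica_count": 5}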
DEPLOYED_NAME = "claritin_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"automatic_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
automatic_resources: This refers to how many redundant compute instances (replicas). For this example, we set it to one (no replication).
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain, you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but it only gets, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
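As a concrete illustration of the traffic_split argument discussed above — a sketch with a made-up deployed model ID, not a value from this tutorial:
# Only one model on the endpoint: it receives all traffic.
traffic_split_single = {"0": 100}
# Canary-style split: the new model ("0") gets 10%, an existing deployed model
# (hypothetical ID "1234567890") keeps 90%. The percentages must sum to 100.
traffic_split_canary = {"0": 10, "1234567890": 90}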
test_item = ! gsutil cat $IMPORT_FILE | head -n1
# The index row has either 3 fields (text, label, max) or 4 (with a leading column).
if len(str(test_item[0]).split(",")) == 4:
    _, test_item, test_label, max_sentiment = str(test_item[0]).split(",")
else:
    test_item, test_label, max_sentiment = str(test_item[0]).split(",")
print(test_item, test_label)
Explanation: Make an online prediction request
Now do an online prediction with your deployed model.
Get test item
You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
def predict_item(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{"content": data}]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
return response
response = predict_item(test_item, endpoint_id, None)
Explanation: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters:
filename: The Cloud Storage path to the test item.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (text files) to predict.
parameters: Additional filtering parameters for serving prediction results. Note, text models do not support additional parameters.
Request
The format of each instance is:
{ 'content': text_item }
Since the predict() method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what you pass to the predict() method.
Response
The response object returns a list, where each element corresponds to one text item in the request. You will see in the output for each prediction -- in our case there is just one:
The sentiment rating
End of explanation
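Since predict() accepts a list of instances, here is a hedged sketch of batching several made-up text items into a single request, following the same call pattern as predict_item above:
batch_items = [
    "claritin cleared up my symptoms overnight",
    "still sneezing after a week of taking it",
]
instances = [json_format.ParseDict({"content": item}, Value()) for item in batch_items]
batch_response = clients["prediction"].predict(
    endpoint=endpoint_id,
    instances=instances,
    parameters=json_format.ParseDict(None, Value()),  # text models take no extra parameters
)
for prediction in batch_response.predictions:
    print(dict(prediction))  # one sentiment prediction per input item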
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you can simply leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
1,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notes
Development and evaluation of imaginet and related models.
Step1: Image retrieval evaluation
Models
Step2: Models | Python Code:
%pylab inline
from ggplot import *
import pandas as pd
data = pd.DataFrame(
dict(epoch=range(1,11)+range(1,11)+range(1,11)+range(1,8)+range(1,11)+range(1,11),
model=hstack([repeat("char-3-grow", 10),
repeat("char-1", 10),
repeat("char-3", 10),
repeat("visual", 7),
repeat("multitask",10),
repeat("sum", 10)]),
recall=[#char-3-grow lw0222.uvt.nl:/home/gchrupala/reimaginet/run-110-phon
0.097281087565,
0.140863654538,
0.161015593762,
0.173410635746,
0.176969212315,
0.175529788085,
0.175089964014,
0.174010395842,
0.173370651739,
0.173050779688,
# char-1 yellow.uvt.nl:/home/gchrupala/repos/reimagine/run-200-phon
0.100919632147,
0.127588964414,
0.140583766493,
0.148300679728,
0.150739704118,
0.153338664534,
0.156657337065,
0.159016393443,
0.159056377449,
0.160655737705,
# char-3 yellow.uvt.nl:/home/gchrupala/repos/reimagine/run-201-phon
0.078368652539,
0.125789684126,
0.148140743703,
0.158216713315,
0.163694522191,
0.168612554978,
0.172570971611,
0.17181127549,
0.171531387445,
0.170611755298,
# visual
0.160015993603,
0.184406237505,
0.193202718912,
0.19956017593,
0.201079568173,
0.201719312275,
0.19944022391,
# multitask
0.16093562575,
0.185525789684,
0.194482207117,
0.202758896441,
0.203558576569,
0.20243902439,
0.199240303878,
0.195361855258,
0.193242702919,
0.189924030388,
# sum
0.137984806078,
0.145581767293,
0.149340263894,
0.151819272291,
0.152898840464,
0.154218312675,
0.155257896841,
0.155697720912,
0.15637744902,
0.156657337065
]))
def standardize(x):
return (x-numpy.mean(x))/numpy.std(x)
Explanation: Notes
Development and evaluation of imaginet and related models.
End of explanation
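The standardize helper defined above is not used in the cells that follow; here is a minimal, purely illustrative usage sketch on the recall values (the recall_z column name is made up):
# Z-score recall within each model so the training curves are directly comparable.
data["recall_z"] = data.groupby("model")["recall"].transform(standardize)
data.head()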
ggplot(data.loc[data['model'].isin(['sum','char-1','char-3','char-3-grow','multitask'])],
aes(x='epoch', y='recall', color='model')) + geom_line(size=3) + theme()
ggplot(data.loc[data['model'].isin(['visual','multitask','sum'])],
aes(x='epoch', y='recall', color='model')) + geom_line(size=3) + theme()
data_grow = pd.DataFrame(dict(epoch=range(1,11)+range(1,11),
model=hstack([repeat("gru-2-grow", 10),repeat("gru-1", 10)]),
recall=[#gru-1
0.170971611355,
0.192163134746,
0.206797281088,
0.211355457817,
0.21331467413,
0.218992403039,
0.214674130348,
0.214634146341,
0.214434226309,
0.212115153938,
# gru-2-grow
0.173730507797,
0.198320671731,
0.206117552979,
0.211715313874,
0.212914834066,
0.211915233906,
0.209956017593,
0.210795681727,
0.209076369452,
0.208996401439
]))
Explanation: Image retrieval evaluation
Models:
- Sum - additively composed word embeddings (1024 dimensions)
- Visual - Imaginet with disabled textual pathway (1024 embeddings + 1 x 1024 hidden)
- Multitask - Full Imaginet model (1024 embeddings + 1 x 1024 hidden)
- Char-1 - Model similar to imaginet, but trained on the character level. Captions are lowercased, with spaces removed. The model has 256-dimensional character embeddings + 1 recurrent layer of 1024 hidden units.
- Char-3 - Like above, but with 3 GRU layers
- Char-3-grow - Like above, but layers >1 initialized to pre-trained approximate identity
Remarks:
- Models NOT trained on extra train data (restval)
End of explanation
ggplot(data_grow, aes(x='epoch', y='recall', color='model')) + geom_line(size=3) + theme()
Explanation: Models:
- GRU-1 - Imaginet (1024 emb + 1 x 1024 hidden)
- GRU-2 grow - Imaginet (1024 emb + 2 x 1024 hidden)
Remarks:
- Models trained on extra train data (restval)
- Layers >1 initialized to pre-trained approximate identity
End of explanation |
1,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collapsed Gibbs sampler for Generalized Relational Topic Models with Data Augmentation
<div style="display
Step1: Generate topics
We assume a vocabulary of 25 terms, and create ten "topics", where each topic assigns exactly 5 consecutive terms equal probability.
Step2: Generate documents from topics
We generate 1,000 documents from these 10 topics by sampling 1,000 topic distributions, one for each document, from a Dirichlet distribution with parameter $\alpha = (1, \ldots, 1)$.
Step3: Generate document network
Create a document network from the documents by applying $\psi$ and applying a threshold $\psi_0$.
Step4: Estimate parameters
Step5: Predict edges on pairs of test documents
Create 1,000 test documents using the same generative process as our training documents.
Step6: Learn their topic distributions using the model trained on the training documents, then calculate the actual and predicted values of $\psi$. For predicted $\psi$, estimate $\eta$ as the mean of our samples of $\eta$ after burn-in.
Step7: Measure the goodness of our prediction by the area under the associated ROC curve.
Step8: Learn topics, then learn classifier
Step9: Compute Kronecker products between learned topic distributions for training and test documents.
Step10: Logistic regression
Train logistic regression on training data
calculate probability of an edge for each pair of test documents, and
measure the goodness of our prediction by computing the area under the ROC curve.
Step11: Gradient boosted trees
Train gradient boosted trees on training data
calculate probability of an edge for each pair of test documents, and
measure the goodness of our prediction by computing the area under the ROC curve.
Step12: Use GRTM topics | Python Code:
%matplotlib inline
from modules.helpers import plot_images
from functools import partial
from sklearn.metrics import (roc_auc_score, roc_curve)
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
imshow = partial(plt.imshow, cmap='gray', interpolation='nearest', aspect='auto')
sns.set(style='white')
Explanation: Collapsed Gibbs sampler for Generalized Relational Topic Models with Data Augmentation
<div style="display:none">
$
\DeclareMathOperator{\dir}{Dirichlet}
\DeclareMathOperator{\dis}{Discrete}
\DeclareMathOperator{\normal}{Normal}
\DeclareMathOperator{\ber}{Bernoulli}
\DeclareMathOperator{\diag}{diag}
\DeclareMathOperator{\Betaf}{B}
\DeclareMathOperator{\Gammaf}{\Gamma}
\DeclareMathOperator{\PG}{PG}
\DeclareMathOperator{\v}{vec}
\newcommand{\norm}[1]{\left\| #1 \right\|}
\newcommand{\cp}[2]{p \left( #1 \middle| #2 \right)}
\newcommand{\cN}[2]{\mathscr{N} \left( #1 \middle| #2 \right)}
\newcommand{\cpsi}[2]{\psi \left( #1 \middle| #2 \right)}
\newcommand{\cPsi}[2]{\Psi \left( #1 \middle| #2 \right)}
\newcommand{\etd}[1]{\mathbf{z}^{(#1)}}
\newcommand{\etdT}[1]{\left. \mathbf{z}^{(#1)} \right.^T}
\newcommand{\Etd}[2]{\mathbf{z}^{(#1, #2)}}
\newcommand{\sumetd}{\mathbf{z}}
\newcommand{\one}{\mathbf{1}}
\newcommand{\Eta}{H}
\newcommand{\eHe}{\etdT{d} \Eta \etd{d'}}
$
</div>
Here is the collapsed Gibbs sampler for Chen et al.'s generalized relational topic models with data augmentation. I am building on the collapsed Gibbs sampler I wrote for binary logistic supervised latent Dirichlet allocation.
The generative model for RTMs is as follows:
$$\begin{align}
\theta^{(d)} &\sim \dir(\alpha) &\text{(topic distribution for document $d \in {1, \ldots, D}$)} \
\phi^{(k)} &\sim \dir(\beta) &\text{(term distribution for topic $k \in {1, \ldots, K}$)} \
z_n^{(d)} \mid \theta^{(d)} &\sim \dis \left( \theta^{(d)} \right) &\text{(topic of $n$th token of document $d$, $n \in {1, \ldots, N^{(d)}}$)} \
w_n^{(d)} \mid \phi^{(z_n^{(d)})} &\sim \dis \left( \phi^{(z_n^{(d)})} \right) &\text{(term of $n$th token of document $d$, $n \in {1, \ldots, N^{(d)}}$)} \
\Eta_{k, k'} &\sim \normal \left( \mu, \nu^2 \right) &\text{(regression coefficients for topic pairs $k, k' \in {1, \ldots, K}$)} \
y^{(d, d')} \mid \Eta, \etd{d}, \etd{d'} &\sim \ber \left(
\frac{ \exp \left( \eHe \right) }{ 1 + \exp \left( \eHe \right) } \right)
&\text{(link indicator for documents $d, d' \in {1, \ldots, D}$)}
\end{align}$$
where each token can be any one of $V$ terms in our vocabulary, $\etd{d}$ is the empirical topic distribution of document $d$, and $\circ$ is the Hadamard (element-wise) product.
<img src="http://yosinski.com/mlss12/media/slides/MLSS-2012-Blei-Probabilistic-Topic-Models_084.png" width="600">
<p style='text-align: center; font-style: italic;'>
Plate notation for relational topic models.
<br/>
This diagram should replace $\beta_k$ with $\phi^{(k)}$, and each $\phi^{(k)}$ should be dependent on a single $\beta$.
</p>
Following Chen et al. 2013, the regularized pseudo-likelihood for the link variable $y^{(d, d')}$, with regularization parameter $b \ge 0$, can be written
$$\begin{align}
\cpsi{y^{(d, d')}}{\Eta, \etd{d}, \etd{d'}, b}
&= \cp{y^{(d, d')}}{\Eta, \etd{d}, \etd{d'}}^b
\ &= \left( \frac{\exp \left( \eHe \right)^{y^{(d, d')}}}{ 1 + \exp \left( \eHe \right)} \right)^b
\ &= \frac{\exp \left( b y^{(d, d')} \eHe \right)}
{ \left( \exp \left( -\frac{\eHe}{2} \right) + \exp \left( \frac{\eHe}{2} \right) \right)^b \exp \left( \frac{b}{2} \eHe \right) }
\ &= 2^{-b} \exp \left( b \left( y^{(d, d')} - \frac{1}{2} \right) \left( \eHe \right) \right) \cosh \left( \frac{ \eHe }{2} \right)^{-b}
\ &= 2^{-b} \exp \left( b \left( y^{(d, d')} - \frac{1}{2} \right) \left( \eHe \right) \right)
\int_0^\infty \exp \left( -\frac{ \left( \eHe \right)^2 }{2} \omega^{(d, d')} \right)
\cp{\omega^{(d, d')}}{b, 0} d\omega^{(d, d')}
\end{align}$$
where $\omega^{(d, d')}$ is a Polya-Gamma distributed variable with parameters $b = b$ and $c = 0$ (see Polson et al. 2012 for details). This means that, for each pair of documents $d$ and $d'$, the pseudo-likelihood of $y^{(d, d')}$ is actually a mixture of Gaussians with respect to the Polya-Gamma distribution $\PG(b, 0)$. Therefore, the joint pseudo-likelihood of $y^{(d, d')}$ and $\omega^{(d, d')}$ can be written
$$\cPsi{y^{(d, d')}, \omega^{(d, d')}}{\Eta, \etd{d}, \etd{d'}, b}
= 2^{-b} \exp \left( \kappa^{(d, d')} \zeta^{(d, d')} - \frac{ \omega^{(d, d')} }{2} (\zeta^{(d, d')})^2 \right) \cp{\omega^{(d, d')}}{b, 0}.$$
where $\kappa^{(d, d')} = b(y^{(d, d')} - 1/2)$ and $\zeta^{(d, d')} = \eHe$. The joint probability distribution can therefore be factored as follows:
$$\begin{align}
\cp{\theta, \phi, z, w, \Eta, y, \omega}{\alpha, \beta, \mu, \nu^2, b}
&=
\prod_{k=1}^{K} \cp{\phi^{(k)}}{\beta}
\prod_{d=1}^{D} \cp{\theta^{(d)}}{\alpha}
\prod_{n=1}^{N^{(d)}} \cp{z_n^{(d)}}{\theta^{(d)}} \cp{w_n^{(d)}}{\phi^{(z_n^{(d)})}}
\ & \quad \times \prod_{k_1=1}^{K} \prod_{k_2=1}^{K} \cp{\Eta_{k_1, k_2}}{\mu, \nu^2}
\prod_{d_1=1}^D \prod_{\substack{d_2=1 \ d_2 \neq d_1}}^D \cPsi{y^{(d_1, d_2)}, \omega^{(d_1, d_2)}}{\Eta, \etd{d_1}, \etd{d_2}, b}
\ &=
\prod_{k=1}^{K} \frac{\Betaf(b^{(k)} + \beta)}{\Betaf(\beta)} \cp{\phi^{(k)}}{b^{(k)} + \beta}
\prod_{d=1}^{D} \frac{\Betaf(a^{(d)} + \alpha)}{\Betaf(\alpha)} \cp{\theta^{(d)}}{a^{(d)} + \alpha}
\ &\quad \times
\prod_{k_1=1}^{K} \prod_{k_2=1}^{K} \cN{\Eta_{k_1, k_2}}{\mu, \nu^2}
\prod_{d_1=1}^D \prod_{\substack{d_2=1 \ d_2 \neq d_1}}^D 2^{-b} \exp \left( \kappa^{(d_1, d_2)} \zeta^{(d_1, d_2)} - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right) \cp{\omega^{(d_1, d_2)}}{b, 0}
\end{align}$$
where $a_k^{(d)}$ is the number of tokens in document $d$ assigned to topic $k$, $b_v^{(k)}$ is the number of tokens equal to term $v$ and assigned to topic $k$, and $\Betaf$ is the multivariate Beta function. Marginalizing out $\theta$ and $\phi$ by integrating with respect to each $\theta^{(d)}$ and $\phi^{(k)}$ over their respective sample spaces yields
$$\begin{align}
\cp{z, w, \Eta, y, \omega}{\alpha, \beta, \mu, \nu^2, b} &=
\prod_{k=1}^{K} \frac{\Betaf(b^{(k)} + \beta)}{\Betaf(\beta)}
\prod_{d=1}^{D} \frac{\Betaf(a^{(d)} + \alpha)}{\Betaf(\alpha)}
\ &\quad\quad \times \prod_{k_1=1}^{K} \prod_{k_2=1}^{K} \cN{\Eta_{k_1, k_2}}{\mu, \nu^2}
\prod_{d_1=1}^D \prod_{\substack{d_2=1 \ d_2 \neq d_1}}^D 2^{-b} \exp \left( \kappa^{(d_1, d_2)} \zeta^{(d_1, d_2)} - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right) \cp{\omega^{(d_1, d_2)}}{b, 0}
\ &=
\cp{w}{z, \beta} \cp{z}{\alpha} \cp{\Eta}{\mu, \nu^2} \cPsi{y, \omega}{\Eta, z, b}.
\end{align}$$
See my LDA notebook for step-by-step details of the previous two calculations.
Our goal is to calculate the posterior distribution
$$\cp{z, \Eta, \omega}{w, y, \alpha, \beta, \mu, \nu^2, b} =
\frac{\cp{z, w, \Eta, y, \omega}{\alpha, \beta, \mu, \nu^2, b}}
{\sum_{z'} \iint \cp{z', w, \Eta', y, \omega{'}}{\alpha, \beta, \mu, \nu^2, b} d\Eta' d\omega{'}}$$
in order to infer the topic assignments $z$ and regression coefficients $\Eta$ from the given term assignments $w$ and link data $y$. Since calculating this directly is infeasible, we resort to collapsed Gibbs sampling. The sampler is "collapsed" because we marginalized out $\theta$ and $\phi$, and will estimate them from the topic assignments $z$:
$$\hat\theta_k^{(d)} = \frac{a_k^{(d)} + \alpha_k}{\sum_{k'=1}^K \left(a_{k'}^{(d)} + \alpha_{k'} \right)},\quad
\hat\phi_v^{(k)} = \frac{b_v^{(k)} + \beta_v}{\sum_{v'=1}^V \left(b_{v'}^{(k)} + \beta_{v'} \right)}.$$
Gibbs sampling requires us to compute the full conditionals for each $z_n^{(d)}$, $\omega^{(d, d')}$ and $\Eta_{k, k'}$. For example, we need to calculate, for all $n$, $d$ and $k$,
$$\begin{align}
\cp{z_n^{(d)} = k}{z \setminus z_n^{(d)}, w, H, y, \omega, \alpha, \beta, \mu, \nu^2, b}
&\propto
\cp{z_n^{(d)} = k, z \setminus z_n^{(d)}, w, H, y, \omega}{\alpha, \beta, \mu, \nu^2, b}
\ &\propto
\frac{b_{w_n^{(d)}}^{(k)} \setminus z_n^{(d)} + \beta_{w_n^{(d)}}}{ \sum_{v=1}^V \left( b_v^{(k)} \setminus z_n^{(d)} + \beta_v\right)}
\left( a_k^{(d)} \setminus z_n^{(d)} + \alpha_k \right)
\prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \ d_2 \neq d_1}}^{D} \exp \left( \kappa^{(d_1, d_2)} \zeta^{(d_1, d_2)} - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right)
\ &\propto
\frac{b_{w_n^{(d)}}^{(k)} \setminus z_n^{(d)} + \beta_{w_n^{(d)}}}{ \sum_{v=1}^V \left( b_v^{(k)} \setminus z_n^{(d)} + \beta_v\right)}
\left( a_k^{(d)} \setminus z_n^{(d)} + \alpha_k \right)
\ &\quad\quad\times
\exp \left( \sum_{\substack{d_1=1 \ d_1 \neq d}}^{D} \left[ \left( \kappa^{(d_1, d)} - \omega^{(d_1, d)} ( \zeta^{(d_1, d)} \setminus z_n^{(d)}) \right) \frac{H_{:, k}^T \etd{d_1}}{N^{(d)}}
- \frac{ \omega^{(d_1, d)} }{2} \left( \frac{H_{:, k}^T \etd{d_1}}{N^{(d)}} \right)^2 \right] \right.
\ &\quad\quad\quad\quad +
\left. \sum_{\substack{d_2=1 \ d_2 \neq d}}^{D} \left[ \left( \kappa^{(d, d_2)} - \omega^{(d, d_2)} ( \zeta^{(d, d_2)} \setminus z_n^{(d)}) \right) \frac{H_{k, :} \etd{d_2}}{N^{(d)}}
- \frac{ \omega^{(d, d_2)} }{2} \left( \frac{H_{k, :} \etd{d_2}}{N^{(d)}} \right)^2 \right] \right)
\end{align}$$
where the "set-minus" notation $\cdot \setminus z_n^{(d)}$ denotes the variable the notation is applied to with the entry $z_n^{(d)}$ removed (again, see my LDA notebook for details). This final proportionality is true since
$$\begin{align}
\prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \ d_2 \neq d_1}}^{D} \exp \left( \kappa^{(d_1, d_2)} \zeta^{(d_1, d_2)} - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right)
&=
\prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \ d_2 \neq d_1}}^{D} \exp \left( \kappa^{(d_1, d_2)} \left( \zeta^{(d_1, d_2)} \setminus z_n^{(d)} + \Delta_{d, d_1, d_2}^{(k)} \right)
- \frac{ \omega^{(d_1, d_2)} }{2} \left( \zeta^{(d_1, d_2)} \setminus z_n^{(d)} + \Delta_{d, d_1, d_2}^{(k)} \right)^2 \right)
\ &\propto
\prod_{\substack{d_1=1 \ d_1 \neq d}}^{D} \exp \left( \kappa^{(d_1, d)} \left( \zeta^{(d_1, d)} \setminus z_n^{(d)} + \Delta_{d, d_1, d}^{(k)} \right)
- \frac{ \omega^{(d_1, d)} }{2} \left( \zeta^{(d_1, d)} \setminus z_n^{(d)} + \Delta_{d, d_1, d}^{(k)} \right)^2 \right)
\ &\quad\quad\times
\prod_{\substack{d_2=1 \ d_2 \neq d}}^{D} \exp \left( \kappa^{(d, d_2)} \left( \zeta^{(d, d_2)} \setminus z_n^{(d)} + \Delta_{d, d, d_2}^{(k)} \right)
- \frac{ \omega^{(d, d_2)} }{2} \left( \zeta^{(d, d_2)} \setminus z_n^{(d)} + \Delta_{d, d, d_2}^{(k)} \right)^2 \right)
\ &\propto
\exp \left( \sum_{\substack{d_1=1 \ d_1 \neq d}}^{D} \left[ \left( \kappa^{(d_1, d)} - \omega^{(d_1, d)} ( \zeta^{(d_1, d)} \setminus z_n^{(d)}) \right) \frac{H_{:, k}^T (\etd{d_1} \setminus z_n^{(d)})}{N^{(d)}}
- \frac{ \omega^{(d_1, d)} }{2} \left( \frac{H_{:, k}^T (\etd{d_1} \setminus z_n^{(d)})}{N^{(d)}} \right)^2 \right] \right.
\ &\quad\quad +
\left. \sum_{\substack{d_2=1 \ d_2 \neq d}}^{D} \left[ \left( \kappa^{(d, d_2)} - \omega^{(d, d_2)} ( \zeta^{(d, d_2)} \setminus z_n^{(d)}) \right) \frac{H_{k, :} (\etd{d_2} \setminus z_n^{(d)})}{N^{(d)}}
- \frac{ \omega^{(d, d_2)} }{2} \left( \frac{H_{k, :} (\etd{d_2} \setminus z_n^{(d)})}{N^{(d)}} \right)^2 \right] \right)
\ &=
\exp \left( \sum_{\substack{d_1=1 \ d_1 \neq d}}^{D} \left[ \left( \kappa^{(d_1, d)} - \omega^{(d_1, d)} ( \zeta^{(d_1, d)} \setminus z_n^{(d)}) \right) \frac{H_{:, k}^T \etd{d_1}}{N^{(d)}}
- \frac{ \omega^{(d_1, d)} }{2} \left( \frac{H_{:, k}^T \etd{d_1}}{N^{(d)}} \right)^2 \right] \right.
\ &\quad\quad +
\left. \sum_{\substack{d_2=1 \ d_2 \neq d}}^{D} \left[ \left( \kappa^{(d, d_2)} - \omega^{(d, d_2)} ( \zeta^{(d, d_2)} \setminus z_n^{(d)}) \right) \frac{H_{k, :} \etd{d_2}}{N^{(d)}}
- \frac{ \omega^{(d, d_2)} }{2} \left( \frac{H_{k, :} \etd{d_2}}{N^{(d)}} \right)^2 \right] \right)
\end{align}$$
where
$$\Delta_{d, d_1, d_2}^{(k)} = \delta_{d, d_1} \frac{H_{k, :} (\etd{d_2} \setminus z_n^{(d)})}{N^{(d)}} + \delta_{d, d_2} \frac{H_{:, k}^T (\etd{d_1} \setminus z_n^{(d)})}{N^{(d)}},$$
$\delta_{d, d'}$ is the Kronecker delta, and $H_{k, :}$ and $H_{:, k}$ are the $k$th row and column of $H$, respectively. The first proportionality is a result of the fact that $\Delta_{d, d_1, d_2}^{(k)}$ is nonzero only when $d = d_1$ or $d = d_2$. The last equality follows from the fact that $d \neq d_1$ in the first summation and $d \neq d_2$ in the second.
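In the sampler, $z_n^{(d)}$ is then redrawn from this unnormalized conditional over the $K$ topics. A hedged sketch of that generic resampling step, where log_weights stands for the $K$ unnormalized log-probabilities assembled from the expression above:
import numpy as np

def sample_topic(log_weights, rng=np.random):
    # Subtract the max for numerical stability, normalize, and draw a topic index.
    w = np.exp(log_weights - np.max(log_weights))
    return rng.choice(len(w), p=w / w.sum())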
In order to calculate the full conditional for $H$, let $\eta = (H_{:,1}^T \cdots H_{:, K}^T)^T$ be the vector of concatenated columns of $H$, $Z = (\etd{1, 1} \cdots \etd{D, D})$ be the matrix whose columns are the vectors $\etd{d, d'} = \etd{d'} \otimes \etd{d}$, where $\otimes$ is the Kronecker product, $\Omega = \diag(\omega^{(1,1)}, \ldots, \omega^{(D,D)})$ be the diagonal matrix whose diagonal entries are $\omega^{(d, d')}$, $I$ be the identity matrix, and $\one$ be the vector of ones, and note that
$$\prod_{k_1=1}^{K} \prod_{k_2=1}^{K} \cN{H_{k_1, k_2}}{\mu, \nu^2} = \cN{\eta}{\mu \one, \nu^2 I}$$
$$\prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \ d_2 \neq d_1}}^{D} \exp \left( \kappa^{(d_1, d_2)} \zeta^{(d_1, d_2)} - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right)
= \exp \left( \eta^T Z \kappa - \frac{1}{2} \eta^T Z \Omega Z^T \eta \right)
\propto \cN{\eta}{(Z \Omega Z^T)^{-1} Z \kappa, (Z \Omega Z^T)^{-1}}.$$
Therefore
$$\begin{align}
\cp{\eta}{z, w, y, \omega, \alpha, \beta, \mu, \nu^2, b}
&\propto
\cp{z, w, \eta, y, \omega}{\alpha, \beta, \mu, \nu^2, b}
\ &\propto
\cN{\eta}{\mu \one, \nu^2 I} \cN{\eta}{(Z \Omega Z^T)^{-1} Z \kappa, (Z \Omega Z^T)^{-1}}
\ &\propto
\cN{\eta}{\Sigma \left( \frac{\mu}{\nu^2} \one + Z \kappa \right), \Sigma}
\end{align}$$
where $\Sigma^{-1} = \nu^{-2} I + Z \Omega Z^T$ (see Section 8.1.8 of the Matrix Cookbook).
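A hedged numpy sketch of one draw from this Gaussian full conditional, assuming Z is the $K^2 \times D^2$ matrix of Kronecker features, omega the vector of Polya-Gamma draws, and kappa the vector of $b(y - 1/2)$ values defined above (a Cholesky-based solve would be preferred over an explicit inverse in practice):
import numpy as np

def sample_eta(Z, omega, kappa, mu, nu2, rng=np.random):
    K2 = Z.shape[0]
    precision = np.eye(K2) / nu2 + (Z * omega) @ Z.T  # Sigma^{-1} = nu^{-2} I + Z diag(omega) Z^T
    Sigma = np.linalg.inv(precision)
    mean = Sigma @ (mu / nu2 * np.ones(K2) + Z @ kappa)
    return rng.multivariate_normal(mean, Sigma)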
We also need to calculate the full conditional for $\omega$. We calculate
$$\begin{align}
\cp{\omega}{z, w, H, y, \alpha, \beta, \mu, \nu^2, b}
&\propto
\cp{z, w, H, y, \omega}{\alpha, \beta, \mu, \nu^2, b}
\ &\propto
\prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \ d_2 \neq d_1}}^{D} \exp \left( - \frac{ \omega^{(d_1, d_2)} }{2} (\zeta^{(d_1, d_2)})^2 \right) \cp{\omega^{(d_1, d_2)}}{b, 0}
\ &=
\prod_{d_1=1}^{D} \prod_{\substack{d_2=1 \ d_2 \neq d_1}}^{D} \cp{\omega^{(d_1, d_2)}}{b, \zeta^{(d_1, d_2)}}
\end{align}$$
that is, $\omega^{(d_1, d_2)} \sim \PG(b, \eHe)$ for each pair of documents $d_1$ and $d_2$. We sample from the Polya-Gamma distribution according to the method of Polson et al. 2012, implemented for Python 3 in this code repo.
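For intuition only, a hedged sketch of a naive draw from $\PG(b, c)$ by truncating the infinite-sum representation in Polson et al. 2012; the dedicated sampler in the linked repository is far more efficient and is what the sampler above would actually rely on:
import numpy as np

def sample_pg_naive(b, c, n_terms=200, rng=np.random):
    # omega = (1 / (2 pi^2)) * sum_k g_k / ((k - 1/2)^2 + c^2 / (4 pi^2)), with g_k ~ Gamma(b, 1)
    k = np.arange(1, n_terms + 1)
    g = rng.gamma(shape=b, scale=1.0, size=n_terms)
    return (g / ((k - 0.5) ** 2 + c ** 2 / (4 * np.pi ** 2))).sum() / (2 * np.pi ** 2)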
Graphical test
End of explanation
V = 25
K = 10
N = 100
D = 1000
topics = []
topic_base = np.concatenate((np.ones((1, 5)) * 0.2, np.zeros((4, 5))), axis=0).ravel()
for i in range(5):
topics.append(np.roll(topic_base, i * 5))
topic_base = np.concatenate((np.ones((5, 1)) * 0.2, np.zeros((5, 4))), axis=1).ravel()
for i in range(5):
topics.append(np.roll(topic_base, i))
topics = np.array(topics)
plt.figure(figsize=(10, 5))
plot_images(plt, topics, (5, 5), layout=(2, 5), figsize=(10, 5))
Explanation: Generate topics
We assume a vocabulary of 25 terms, and create ten "topics", where each topic assigns exactly 5 consecutive terms equal probability.
End of explanation
alpha = np.ones(K)
np.random.seed(42)
thetas = np.random.dirichlet(alpha, size=D)
topic_assignments = np.array([np.random.choice(range(K), size=100, p=theta)
for theta in thetas])
word_assignments = np.array([[np.random.choice(range(V), size=1, p=topics[topic_assignments[d, n]])[0]
for n in range(N)] for d in range(D)])
doc_term_matrix = np.array([np.histogram(word_assignments[d], bins=V, range=(0, V - 1))[0] for d in range(D)])
imshow(doc_term_matrix)
Explanation: Generate documents from topics
We generate 1,000 documents from these 10 topics by sampling 1,000 topic distributions, one for each document, from a Dirichlet distribution with parameter $\alpha = (1, \ldots, 1)$.
End of explanation
from itertools import product
from sklearn.cross_validation import StratifiedKFold
# choose parameter values
mu = 0.
nu2 = 1.
np.random.seed(14)
H = np.random.normal(loc=mu, scale=nu2, size=(K, K))
zeta = pd.DataFrame([(i, j, np.dot(np.dot(thetas[i], H), thetas[j])) for i, j in product(range(D), repeat=2)],
columns=('tail', 'head', 'zeta'))
_ = zeta.zeta.hist(bins=50)
# choose parameter values
zeta['y'] = (zeta.zeta >= 0).astype(int)
# plot histogram of responses
print('positive examples {} ({:.1f}%)'.format(zeta.y.sum(), zeta.y.sum() / D / D * 100))
_ = zeta.y.hist()
y = zeta[['tail', 'head', 'y']].values
skf = StratifiedKFold(y[:, 2], n_folds=100)
_, train_idx = next(iter(skf))
train_idx.shape
Explanation: Generate document network
Create a document network from the documents by applying $\psi$ and thresholding at $\psi_0$.
End of explanation
from slda.topic_models import GRTM
_K = 10
_alpha = alpha[:_K]
_beta = np.repeat(0.01, V)
_mu = mu
_nu2 = nu2
_b = 1.
n_iter = 500
grtm = GRTM(_K, _alpha, _beta, _mu, _nu2, _b, n_iter, seed=42)
%%time
grtm.fit(doc_term_matrix, y[train_idx])
plot_images(plt, grtm.phi, (5, 5), (2, 5), figsize=(10, 5))
topic_order = [4, 7, 3, 1, 0, 9, 5, 2, 8]
plot_images(plt, grtm.phi[topic_order], (5, 5), (2, 5), figsize=(10, 5))
burnin = -1
mean_final_lL = grtm.loglikelihoods[burnin:].mean()
print(mean_final_lL)
plt.plot(grtm.loglikelihoods, label='mean final LL {:.2f}'.format(mean_final_lL))
_ = plt.legend()
imshow(grtm.theta)
H_pred = grtm.H[burnin:].mean(axis=0)
_ = plt.hist(H_pred.ravel(), bins=20)
_ = plt.hist(H.ravel(), bins=20)
Explanation: Estimate parameters
End of explanation
np.random.seed(42^2)
thetas_test = np.random.dirichlet(alpha, size=D)
topic_assignments_test = np.array([np.random.choice(range(K), size=100, p=theta)
for theta in thetas_test])
word_assignments_test = np.array([[np.random.choice(range(V), size=1, p=topics[topic_assignments_test[d, n]])[0]
for n in range(N)] for d in range(D)])
doc_term_matrix_test = np.array([np.histogram(word_assignments_test[d], bins=V, range=(0, V - 1))[0] for d in range(D)])
imshow(doc_term_matrix_test)
Explanation: Predict edges on pairs of test documents
Create 1,000 test documents using the same generative process as our training documents.
End of explanation
def bern_param(theta1, theta2, H):
zeta = np.dot(np.dot(theta1, H), theta2)
return np.exp(zeta) / (1 + np.exp(zeta))
thetas_test_grtm = grtm.transform(doc_term_matrix_test)
p_test = np.zeros(D * D)
p_test_grtm = np.zeros(D * D)
for n, i in enumerate(product(range(D), range(D))):
p_test[n] = bern_param(thetas_test[i[0]], thetas_test[i[1]], H)
p_test_grtm[n] = bern_param(thetas_test_grtm[i[0]], thetas_test_grtm[i[1]], H_pred)
Explanation: Learn their topic distributions using the model trained on the training documents, then calculate the actual and predicted values of $\psi$. For predicted $\psi$, estimate $\eta$ as the mean of our samples of $\eta$ after burn-in.
End of explanation
y_test = (p_test > 0.5).astype(int)
y_grtm = p_test_grtm
fpr, tpr, _ = roc_curve(y_test, y_grtm)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_grtm))))
_ = plt.legend(loc='best')
Explanation: Measure the goodness of our prediction by the area under the associated ROC curve.
End of explanation
from slda.topic_models import LDA
lda = LDA(_K, _alpha, _beta, n_iter, seed=42)
%%time
lda.fit(doc_term_matrix)
plot_images(plt, lda.phi, (5, 5), (1, 5), figsize=(10, 5))
plt.plot(lda.loglikelihoods)
imshow(lda.theta)
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
Explanation: Learn topics, then learn classifier
End of explanation
%%time
thetas_test_lda = lda.transform(doc_term_matrix_test)
lda_train = np.zeros((D * D, _K * _K))
lda_test = np.zeros((D * D, _K * _K))
for n, i in enumerate(product(range(D), range(D))):
lda_train[n] = np.kron(lda.theta[i[0]], lda.theta[i[1]])
lda_test[n] = np.kron(thetas_test_lda[i[0]], thetas_test_lda[i[1]])
Explanation: Compute Kronecker products between learned topic distributions for training and test documents.
End of explanation
_C_grid = np.arange(1, 202, 10)
roc_auc_scores = []
for _C in _C_grid:
print('Training Logistic Regression with C = {}'.format(_C))
_lr = LogisticRegression(fit_intercept=False, C=_C)
_lr.fit(lda_train, zeta.y)
_y_lr = _lr.predict_proba(lda_test)[:, 1]
roc_auc_scores.append(roc_auc_score(y_test, _y_lr))
print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_C_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_C_grid, roc_auc_scores)
lr = LogisticRegression(fit_intercept=False, C=11)
lr.fit(lda_train, zeta.y)
y_lr = lr.predict_proba(lda_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, y_lr)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_lr))))
_ = plt.legend(loc='best')
Explanation: Logistic regression
Train logistic regression on training data
calculate probability of an edge for each pair of test documents, and
measure the goodness of our prediction by computing the area under the ROC curve.
End of explanation
_C_grid = np.arange(1, 4)
roc_auc_scores = []
for _C in _C_grid:
print('Training Gradient Boosting with max_depth = {}'.format(_C))
_gbc = GradientBoostingClassifier(max_depth=_C)
_gbc.fit(lda_train, zeta.y)
_y_gbc = _gbc.predict_proba(lda_test)[:, 1]
roc_auc_scores.append(roc_auc_score(y_test, _y_gbc))
print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_C_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_C_grid, roc_auc_scores)
gbc = GradientBoostingClassifier(max_depth=3)
gbc.fit(lda_train, zeta.y)
y_gbc = gbc.predict_proba(lda_test)[:, 1]
fpr_gbc, tpr_gbc, _ = roc_curve(y_test, y_gbc)
plt.plot(fpr_gbc, tpr_gbc, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_gbc))))
plt.legend(loc='best')
Explanation: Gradient boosted trees
Train gradient boosted trees on training data
calculate probability of an edge for each pair of test documents, and
measure the goodness of our prediction by computing the area under the ROC curve.
End of explanation
grtm_train = np.zeros((D * D, _K * _K))
grtm_test = np.zeros((D * D, _K * _K))
for n, i in enumerate(product(range(D), range(D))):
grtm_train[n] = np.kron(grtm.theta[i[0]], grtm.theta[i[1]])
grtm_test[n] = np.kron(thetas_test_grtm[i[0]], thetas_test_grtm[i[1]])
_C_grid = np.arange(1, 52, 10)
roc_auc_scores = []
for _C in _C_grid:
print('Training Logistic Regression with C = {}'.format(_C))
_lr = LogisticRegression(fit_intercept=False, C=_C)
_lr.fit(grtm_train, zeta.y)
_y_lr = _lr.predict_proba(grtm_test)[:, 1]
roc_auc_scores.append(roc_auc_score(y_test, _y_lr))
print(' roc_auc_score = {}'.format(roc_auc_scores[-1]))
print(_C_grid[np.argmax(roc_auc_scores)], np.max(roc_auc_scores))
plt.plot(_C_grid, roc_auc_scores)
lr = LogisticRegression(fit_intercept=False, C=51)
lr.fit(grtm_train, zeta.y)
y_lr = lr.predict_proba(grtm_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, y_lr)
plt.plot(fpr, tpr, label=('AUC = {:.3f}'.format(roc_auc_score(y_test, y_lr))))
_ = plt.legend(loc='best')
Explanation: Use GRTM topics
End of explanation |
1,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This demo shows how to use the Bayesian Representational Similarity Analysis method in brainiak with a simulated dataset.
The brainiak.reprsimil.brsa module has two estimators named BRSA and GBRSA. Both of them can be used to estimate representational similarity from a single participant, but with some differences in the assumptions of the models and fitting procedure. The basic usages are similar. We now generally recommend using GBRSA over BRSA for most of the cases. This document shows how to use BRSA for most of it. At the end of the document, the usage of GBRSA is shown as well. You are encouraged to go through the example and try both estimators for your data.
The group_brsa_example.ipynb in the same directory demonstrates how to use GBRSA to estimate shared representational structure from multiple participants.
Please note that the model assumes that the covariance matrix U which all $\beta_i$ follow describe a multi-variate Gaussian distribution that is zero-meaned. This assumption does not imply that there must be both positive and negative responses across voxels.
However, it means that (Group) Bayesian RSA treats the task-evoked activity against baseline BOLD level as signal, while in other RSA tools the deviation of task-evoked activity in each voxel from the average task-evoked activity level across voxels may be considered as signal of interest.
Due to this assumption in (G)BRSA, relatively high degree of similarity may be expected when the activity patterns of two task conditions share a strong sensory driven components. When two task conditions elicit exactly the same activity pattern but only differ in their global magnitudes, under the assumption in (G)BRSA, their similarity is 1; under the assumption that only deviation of pattern from average patterns is signal of interest (which is currently not supported by (G)BRSA), their similarity would be -1 because the deviations of the two patterns from their average pattern are exactly opposite.
Load some package which we will use in this demo.
If you see error related to loading any package, you can install that package. For example, if you use Anaconda, you can use "conda install matplotlib" to install matplotlib.
Notice that due to current implementation, you need to import either prior_GP_var_inv_gamma or prior_GP_var_half_cauchy from brsa module, in order to use the smooth prior imposed onto SNR in BRSA (see below). They are forms of priors imposed on the variance of Gaussian Process prior on log(SNR). (If you think these sentences are confusing, just import them like below and forget about this).
Step1: You might want to keep a log of the output.
Step2: We want to simulate some data in which each voxel responds to different task conditions differently, but following a common covariance structure
Load an example design matrix.
The user should prepare their design matrix with their favorite software, such as using 3ddeconvolve of AFNI, or using SPM or FSL.
The design matrix reflects your belief of how fMRI signal should respond to a task (if a voxel does respond).
The common assumption is that a neural event that you are interested in will elicit a slow hemodynamic response in some voxels. The response peaks around 4-6 seconds after the event onset and dies down more than 12 seconds after the event. Therefore, typically you convolve a time series A, composed of delta (stem) functions reflecting the time of each neural event belonging to the same category (e.g. all trials in which a participant sees a face), with a hemodynamic response function B, to form the hypothetic response of any voxel to such type of neural event.
For each type of event, such a convoluted time course can be generated. These time courses, put together, are called design matrix, reflecting what we believe a temporal signal would look like, if it exists in any voxel.
Our goal is to figure out how the (spatial) response pattern of a population of voxels (in a Region of Interest, ROI) is similar or dissimilar across different types of tasks (e.g., watching face vs. house, watching different categories of animals, different conditions of a cognitive task). So we need the design matrix in order to estimate the similarity matrix we are interested in.
We can use the utility called ReadDesign in brainiak.utils to read a design matrix generated from AFNI. For design matrix saved as Matlab data file by SPM or or other toolbox, you can use scipy.io.loadmat('YOURFILENAME') and extract the design matrix from the dictionary returned. Basically, the Bayesian RSA in this toolkit just needs a numpy array which is in size of {time points} * {condition}
You can also generate design matrix using the function gen_design which is in brainiak.utils. It takes in (name of) event timing files in AFNI or FSL format (denoting onsets, duration, and weight for each event belongning to the same condition) and outputs the design matrix as numpy array.
In typical fMRI analysis, some nuisance regressors such as head motion, baseline time series and slow drift are also entered into regression. In using our method, you should not include such regressors into the design matrix, because the spatial spread of such nuisance regressors might be quite different from the spatial spread of task related signal. Including such nuisance regressors in design matrix might influence the pseudo-SNR map, which in turn influence the estimation of the shared covariance matrix.
We concatenate the design matrix by 2 times, mimicking 2 runs of identical timing
Step3: simulate data
Step4: Then, we simulate signals, assuming the magnitude of response to each condition follows a common covariance matrix.
Our model allows to impose a Gaussian Process prior on the log(SNR) of each voxels.
What this means is that SNR turn to be smooth and local, but betas (response amplitudes of each voxel to each condition) are not necessarily correlated in space. Intuitively, this is based on the assumption that voxels coding for related aspects of a task turn to be clustered (instead of isolated)
Our Gaussian Process are defined on both the coordinate of a voxel and its mean intensity.
This means that voxels close together AND have similar intensity should have similar SNR level. Therefore, voxels of white matter but adjacent to gray matters do not necessarily have high SNR level.
If you have an ROI saved as a binary Nifti file, say, with name 'ROI.nii'
Then you can use the nibabel package to load the ROI and the following example code to retrieve the coordinates of voxels.
Note
Step5: Let's keep in mind of the pattern of the ideal covariance / correlation below and see how well BRSA can recover their patterns.
Step6: In the following, pseudo-SNR is generated from a Gaussian Process defined on a "rectangular" ROI, just for simplicity of code
Step7: The reason that the pseudo-SNRs in the example voxels are not too small, while the signal looks much smaller is because we happen to have low amplitudes in our design matrix. The true SNR depends on both the amplitudes in design matrix and the pseudo-SNR. Therefore, be aware that pseudo-SNR does not directly reflects how much signal the data have, but rather a map indicating the relative strength of signal in differerent voxels.
When you have multiple runs, the noise won't be correlated between runs. Therefore, you should tell BRSA when is the onset of each scan.
Note that the data (variable Y above) you feed to BRSA is the concatenation of data from all runs along the time dimension, as a 2-D matrix of time x space
Step8: Fit Bayesian RSA to our simulated data
The nuisance regressors in typical fMRI analysis (such as head motion signal) are replaced by principal components estimated from residuals after subtracting task-related response. n_nureg tells the model how many principal components to keep from the residual as nuisance regressors, in order to account for spatial correlation in noise.
If you prefer not using this approach based on principal components of residuals, you can set auto_nuisance=False, and optionally provide your own nuisance regressors as nuisance argument to BRSA.fit(). In practice, we find that the result is much better with auto_nuisance=True.
Step9: We can have a look at the estimated similarity in matrix brsa.C_.
We can also compare the ideal covariance above with the one recovered, brsa.U_
Step10: In contrast, we can have a look of the similarity matrix based on Pearson correlation between point estimates of betas of different conditions.
This is what vanilla RSA might give
Step11: We can make a comparison between the estimated SNR map and the true SNR map (normalized)
Step12: Empirically, the smoothness turns to be over-estimated when signal is weak.
We can also look at how other parameters are recovered.
Step13: Even though the variation is reduced in the estimated pseudo-SNR (due to overestimation of the smoothness of the GP prior in low-SNR situations), the betas recovered by the model have a higher correlation with the true betas than simple regression does, as shown below. Obviously there is shrinkage of the estimated betas, as a result of the variance-bias tradeoff. But we think such shrinkage does preserve the patterns of the betas, and therefore the result is suitable to be further used for decoding purposes.
Step14: The singular decomposition of noise, and the comparison between the first two principal component of noise and the patterns of the first two nuisance regressors, returned by the model.
The principal components may not look exactly the same. The first principal components both capture the baseline image intensities (although they may sometimes appear counter-phase)
Apparently one can imagine that the choice of the number of principal components used as nuisance regressors can influence the result. If you just choose 1 or 2, perhaps only the global drift would be captured. But including too many nuisance regressors would slow the fitting speed and might have risk of overfitting. The users might consider starting in the range of 5-20. We do not have automatic cross-validation built in. But you can use the score() function to do cross-validation and select the appropriate number. The idea here is similar to that in GLMdenoise (http
Step15: "Decoding" from new data
Now we generate a new data set, assuming signal is the same but noise is regenerated. We want to use the transform() function of brsa to estimate the "design matrix" in this new dataset.
Step16: Model selection by cross-validation
Step17: As can be seen above, the model with the correct design matrix explains new data with signals generated from the true model better than the null model, but explains pure noise worse than the null model.
We can also try the version which marginalize SNR and rho for each voxel.
This version is intended for analyzing data of a group of participants and estimating their shared similarity matrix. But it also allows analyzing single participant.
Step18: We can also do "decoding" and cross-validating using the marginalized version in GBRSA | Python Code:
%matplotlib inline
import scipy.stats
import scipy.spatial.distance as spdist
import numpy as np
from brainiak.reprsimil.brsa import BRSA, prior_GP_var_inv_gamma, prior_GP_var_half_cauchy
from brainiak.reprsimil.brsa import GBRSA
import brainiak.utils.utils as utils
import matplotlib.pyplot as plt
import logging
np.random.seed(10)
Explanation: This demo shows how to use the Bayesian Representational Similarity Analysis method in brainiak with a simulated dataset.
The brainiak.reprsimil.brsa module has two estimators named BRSA and GBRSA. Both of them can be used to estimate representational similarity from a single participant, but with some differences in the assumptions of the models and fitting procedure. The basic usages are similar. We now generally recommend using GBRSA over BRSA for most of the cases. This document shows how to use BRSA for most of it. At the end of the document, the usage of GBRSA is shown as well. You are encouraged to go through the example and try both estimators for your data.
The group_brsa_example.ipynb in the same directory demonstrates how to use GBRSA to estimate shared representational structure from multiple participants.
Please note that the model assumes that the covariance matrix U which all $\beta_i$ follow describes a zero-mean multivariate Gaussian distribution. This assumption does not imply that there must be both positive and negative responses across voxels.
However, it means that (Group) Bayesian RSA treats the task-evoked activity against baseline BOLD level as signal, while in other RSA tools the deviation of task-evoked activity in each voxel from the average task-evoked activity level across voxels may be considered as signal of interest.
Due to this assumption in (G)BRSA, relatively high degree of similarity may be expected when the activity patterns of two task conditions share a strong sensory driven components. When two task conditions elicit exactly the same activity pattern but only differ in their global magnitudes, under the assumption in (G)BRSA, their similarity is 1; under the assumption that only deviation of pattern from average patterns is signal of interest (which is currently not supported by (G)BRSA), their similarity would be -1 because the deviations of the two patterns from their average pattern are exactly opposite.
Load some package which we will use in this demo.
If you see error related to loading any package, you can install that package. For example, if you use Anaconda, you can use "conda install matplotlib" to install matplotlib.
Notice that due to current implementation, you need to import either prior_GP_var_inv_gamma or prior_GP_var_half_cauchy from brsa module, in order to use the smooth prior imposed onto SNR in BRSA (see below). They are forms of priors imposed on the variance of Gaussian Process prior on log(SNR). (If you think these sentences are confusing, just import them like below and forget about this).
End of explanation
logging.basicConfig(
level=logging.DEBUG,
filename='brsa_example.log',
format='%(relativeCreated)6d %(threadName)s %(message)s')
Explanation: You might want to keep a log of the output.
End of explanation
design = utils.ReadDesign(fname="example_design.1D")
n_run = 3
design.n_TR = design.n_TR * n_run
design.design_task = np.tile(design.design_task[:,:-1],
[n_run, 1])
# The last "condition" in design matrix
# codes for trials subjects made and error.
# We ignore it here.
fig = plt.figure(num=None, figsize=(12, 3),
dpi=150, facecolor='w', edgecolor='k')
plt.plot(design.design_task)
plt.ylim([-0.2, 0.4])
plt.title('hypothetic fMRI response time courses '
'of all conditions\n'
'(design matrix)')
plt.xlabel('time')
plt.show()
n_C = np.size(design.design_task, axis=1)
# The total number of conditions.
ROI_edge = 15
# We simulate "ROI" of a rectangular shape
n_V = ROI_edge**2 * 2
# The total number of simulated voxels
n_T = design.n_TR
# The total number of time points,
# after concatenating all fMRI runs
Explanation: We want to simulate some data in which each voxel responds to different task conditions differently, but following a common covariance structure
Load an example design matrix.
The user should prepare their design matrix with their favorite software, such as using 3ddeconvolve of AFNI, or using SPM or FSL.
The design matrix reflects your belief of how fMRI signal should respond to a task (if a voxel does respond).
The common assumption is that a neural event that you are interested in will elicit a slow hemodynamic response in some voxels. The response peaks around 4-6 seconds after the event onset and dies down more than 12 seconds after the event. Therefore, typically you convolve a time series A, composed of delta (stem) functions reflecting the time of each neural event belonging to the same category (e.g. all trials in which a participant sees a face), with a hemodynamic response function B, to form the hypothetic response of any voxel to such type of neural event.
For each type of event, such a convoluted time course can be generated. These time courses, put together, are called design matrix, reflecting what we believe a temporal signal would look like, if it exists in any voxel.
Our goal is to figure out how the (spatial) response pattern of a population of voxels (in a Region of Interest, ROI) is similar or dissimilar across different types of tasks (e.g., watching face vs. house, watching different categories of animals, different conditions of a cognitive task). So we need the design matrix in order to estimate the similarity matrix we are interested in.
We can use the utility called ReadDesign in brainiak.utils to read a design matrix generated from AFNI. For a design matrix saved as a Matlab data file by SPM or another toolbox, you can use scipy.io.loadmat('YOURFILENAME') and extract the design matrix from the dictionary returned. Basically, the Bayesian RSA in this toolkit just needs a numpy array of size {time points} * {conditions}.
You can also generate a design matrix using the function gen_design which is in brainiak.utils. It takes in (names of) event timing files in AFNI or FSL format (denoting onset, duration, and weight for each event belonging to the same condition) and outputs the design matrix as a numpy array.
In typical fMRI analysis, some nuisance regressors such as head motion, baseline time series and slow drift are also entered into regression. In using our method, you should not include such regressors into the design matrix, because the spatial spread of such nuisance regressors might be quite different from the spatial spread of task related signal. Including such nuisance regressors in design matrix might influence the pseudo-SNR map, which in turn influence the estimation of the shared covariance matrix.
We concatenate the design matrix by 2 times, mimicking 2 runs of identical timing
End of explanation
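# Sketch (not used below): how a single design-matrix column can be built by hand, by
# convolving a binary event time series with a canonical double-gamma HRF. The notebook
# itself reads a pre-made AFNI design instead, and the TR and onsets here are made up.
from scipy.stats import gamma as gamma_dist
TR_example = 2.0                                   # assumed repetition time, seconds
t_hrf = np.arange(0, 32, TR_example)
hrf = gamma_dist.pdf(t_hrf, 6) - 0.35 * gamma_dist.pdf(t_hrf, 12)   # peak ~5 s, late undershoot
hrf /= np.max(hrf)
events = np.zeros(200)
events[[10, 50, 90, 140]] = 1                      # hypothetical event onsets, in TR units
one_regressor = np.convolve(events, hrf)[:events.size]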
noise_bot = 0.5
noise_top = 5.0
noise_level = np.random.rand(n_V) * \
(noise_top - noise_bot) + noise_bot
# The standard deviation of the noise is in the range of [noise_bot, noise_top]
# In fact, we simulate autocorrelated noise with AR(1) model. So the noise_level reflects
# the independent additive noise at each time point (the "fresh" noise)
# AR(1) coefficient
rho1_top = 0.8
rho1_bot = -0.2
rho1 = np.random.rand(n_V) \
* (rho1_top - rho1_bot) + rho1_bot
noise_smooth_width = 10.0
coords = np.mgrid[0:ROI_edge, 0:ROI_edge*2, 0:1]
coords_flat = np.reshape(coords,[3, n_V]).T
dist2 = spdist.squareform(spdist.pdist(coords_flat, 'sqeuclidean'))
# generating noise
K_noise = noise_level[:, np.newaxis] \
* (np.exp(-dist2 / noise_smooth_width**2 / 2.0) \
+ np.eye(n_V) * 0.1) * noise_level
# We make spatially correlated noise by generating
# noise at each time point from a Gaussian Process
# defined over the coordinates.
plt.pcolor(K_noise)
plt.colorbar()
plt.xlim([0, n_V])
plt.ylim([0, n_V])
plt.title('Spatial covariance matrix of noise')
plt.show()
L_noise = np.linalg.cholesky(K_noise)
noise = np.zeros([n_T, n_V])
noise[0, :] = np.dot(L_noise, np.random.randn(n_V))\
/ np.sqrt(1 - rho1**2)
for i_t in range(1, n_T):
noise[i_t, :] = noise[i_t - 1, :] * rho1 \
+ np.dot(L_noise,np.random.randn(n_V))
# For each voxel, the noise follows AR(1) process:
# fresh noise plus a dampened version of noise at
# the previous time point.
# In this simulation, we also introduced spatial smoothness resembling a Gaussian Process.
# Notice that we simulated in this way only to introduce spatial noise correlation.
# This does not represent the assumption of the form of spatial noise correlation in the model.
# Instead, the model is designed to capture structured noise correlation manifested
# as a few spatial maps each modulated by a time course, which appears as spatial noise correlation.
fig = plt.figure(num=None, figsize=(12, 2), dpi=150,
facecolor='w', edgecolor='k')
plt.plot(noise[:, 0])
plt.title('noise in an example voxel')
plt.show()
Explanation: simulate data: noise + signal
First, we start with noise, which is Gaussian Process in space and AR(1) in time
End of explanation
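# For intuition, the same AR(1) recursion for a single voxel in isolation: each time point is
# rho times the previous value plus fresh Gaussian noise (sigma and rho here are made up).
sigma_example, rho_example = 1.0, 0.5
x_ar1 = np.zeros(n_T)
x_ar1[0] = np.random.randn() * sigma_example / np.sqrt(1 - rho_example**2)  # stationary start
for i_t in range(1, n_T):
    x_ar1[i_t] = rho_example * x_ar1[i_t - 1] + np.random.randn() * sigma_example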
# import nibabel
# ROI = nibabel.load('ROI.nii')
# I,J,K = ROI.shape
# all_coords = np.zeros((I, J, K, 3))
# all_coords[...,0] = np.arange(I)[:, np.newaxis, np.newaxis]
# all_coords[...,1] = np.arange(J)[np.newaxis, :, np.newaxis]
# all_coords[...,2] = np.arange(K)[np.newaxis, np.newaxis, :]
# ROI_coords = nibabel.affines.apply_affine(
# ROI.affine, all_coords[ROI.get_data().astype(bool)])
Explanation: Then, we simulate signals, assuming the magnitude of response to each condition follows a common covariance matrix.
Our model allows imposing a Gaussian Process prior on the log(SNR) of each voxel.
What this means is that the SNR tends to be smooth and local, but the betas (response amplitudes of each voxel to each condition) are not necessarily correlated in space. Intuitively, this is based on the assumption that voxels coding for related aspects of a task tend to be clustered (instead of isolated).
Our Gaussian Process is defined on both the coordinates of a voxel and its mean intensity.
This means that voxels that are close together AND have similar intensity should have similar SNR levels. Therefore, voxels of white matter adjacent to gray matter do not necessarily have high SNR levels.
If you have an ROI saved as a binary Nifti file, say, with name 'ROI.nii'
Then you can use the nibabel package to load the ROI and the following example code to retrieve the coordinates of voxels.
Note: the following code won't work if you just installed Brainiak and try this demo because ROI.nii does not exist. It just serves as an example for you to retrieve coordinates of voxels in an ROI. You can use the ROI_coords for the argument coords in BRSA.fit()
End of explanation
# ideal covariance matrix
ideal_cov = np.zeros([n_C, n_C])
ideal_cov = np.eye(n_C) * 0.6
ideal_cov[8:12, 8:12] = 0.6
for cond in range(8, 12):
ideal_cov[cond,cond] = 1
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(ideal_cov)
plt.colorbar()
plt.xlim([0, 16])
plt.ylim([0, 16])
ax = plt.gca()
ax.set_aspect(1)
plt.title('ideal covariance matrix')
plt.show()
std_diag = np.diag(ideal_cov)**0.5
ideal_corr = ideal_cov / std_diag / std_diag[:, None]
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(ideal_corr)
plt.colorbar()
plt.xlim([0, 16])
plt.ylim([0, 16])
ax = plt.gca()
ax.set_aspect(1)
plt.title('ideal correlation matrix')
plt.show()
Explanation: Let's keep in mind of the pattern of the ideal covariance / correlation below and see how well BRSA can recover their patterns.
End of explanation
L_full = np.linalg.cholesky(ideal_cov)
# generating signal
snr_level = 1.0
# Notice that accurately speaking this is not SNR.
# The magnitude of signal depends not only on beta but also on x.
# (noise_level*snr_level)**2 is the factor multiplied
# with ideal_cov to form the covariance matrix from which
# the response amplitudes (beta) of a voxel are drawn from.
tau = 1.0
# magnitude of Gaussian Process from which the log(SNR) is drawn
smooth_width = 3.0
# spatial length scale of the Gaussian Process, unit: voxel
inten_kernel = 4.0
# intensity length scale of the Gaussian Process
# Slightly counter-intuitively, if this parameter is very large,
# say, much larger than the range of intensities of the voxels,
# then the smoothness has much small dependency on the intensity.
inten = np.random.rand(n_V) * 20.0
# For simplicity, we just assume that the intensity
# of all voxels are uniform distributed between 0 and 20
# parameters of Gaussian process to generate pseuso SNR
# For curious user, you can also try the following commond
# to see what an example snr map might look like if the intensity
# grows linearly in one spatial direction
# inten = coords_flat[:,0] * 2
inten_tile = np.tile(inten, [n_V, 1])
inten_diff2 = (inten_tile - inten_tile.T)**2
K = np.exp(-dist2 / smooth_width**2 / 2.0
- inten_diff2 / inten_kernel**2 / 2.0) * tau**2 \
+ np.eye(n_V) * tau**2 * 0.001
# A tiny amount is added to the diagonal of
# the GP covariance matrix to make sure it can be inverted
L = np.linalg.cholesky(K)
snr = np.abs(np.dot(L, np.random.randn(n_V))) * snr_level
sqrt_v = noise_level * snr
betas_simulated = np.dot(L_full, np.random.randn(n_C, n_V)) * sqrt_v
signal = np.dot(design.design_task, betas_simulated)
Y = signal + noise + inten
# The data to be fed to the program.
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(np.reshape(snr, [ROI_edge, ROI_edge*2]))
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('pseudo-SNR in a rectangular "ROI"')
plt.show()
idx = np.argmin(np.abs(snr - np.median(snr)))
# choose a voxel of medium level SNR.
fig = plt.figure(num=None, figsize=(12, 4), dpi=150,
facecolor='w', edgecolor='k')
noise_plot, = plt.plot(noise[:,idx],'g')
signal_plot, = plt.plot(signal[:,idx],'b')
plt.legend([noise_plot, signal_plot], ['noise', 'signal'])
plt.title('simulated data in an example voxel'
' with pseudo-SNR of {}'.format(snr[idx]))
plt.xlabel('time')
plt.show()
fig = plt.figure(num=None, figsize=(12, 4), dpi=150,
facecolor='w', edgecolor='k')
data_plot, = plt.plot(Y[:,idx],'r')
plt.legend([data_plot], ['observed data of the voxel'])
plt.xlabel('time')
plt.show()
idx = np.argmin(np.abs(snr - np.max(snr)))
# display the voxel of the highest level SNR.
fig = plt.figure(num=None, figsize=(12, 4), dpi=150,
facecolor='w', edgecolor='k')
noise_plot, = plt.plot(noise[:,idx],'g')
signal_plot, = plt.plot(signal[:,idx],'b')
plt.legend([noise_plot, signal_plot], ['noise', 'signal'])
plt.title('simulated data in the voxel with the highest'
' pseudo-SNR of {}'.format(snr[idx]))
plt.xlabel('time')
plt.show()
fig = plt.figure(num=None, figsize=(12, 4), dpi=150,
facecolor='w', edgecolor='k')
data_plot, = plt.plot(Y[:,idx],'r')
plt.legend([data_plot], ['observed data of the voxel'])
plt.xlabel('time')
plt.show()
Explanation: In the following, pseudo-SNR is generated from a Gaussian Process defined on a "rectangular" ROI, just for simplicity of code
End of explanation
scan_onsets = np.int32(np.linspace(0, design.n_TR,num=n_run + 1)[: -1])
print('scan onsets: {}'.format(scan_onsets))
Explanation: The reason that the pseudo-SNRs in the example voxels are not too small, while the signal looks much smaller, is because we happen to have low amplitudes in our design matrix. The true SNR depends on both the amplitudes in the design matrix and the pseudo-SNR. Therefore, be aware that pseudo-SNR does not directly reflect how much signal the data have, but is rather a map indicating the relative strength of signal in different voxels.
When you have multiple runs, the noise won't be correlated between runs. Therefore, you should tell BRSA when is the onset of each scan.
Note that the data (variable Y above) you feed to BRSA is the concatenation of data from all runs along the time dimension, as a 2-D matrix of time x space
End of explanation
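# Sketch with synthetic shapes (not the notebook's data): if each run were a separate
# (time x voxel) array, the input to BRSA would be their concatenation along time, and
# scan_onsets would hold the starting index of each run in the concatenated array.
runs_example = [np.zeros((150, 20)), np.zeros((150, 20)), np.zeros((120, 20))]
Y_example = np.concatenate(runs_example, axis=0)
onsets_example = np.cumsum([0] + [r.shape[0] for r in runs_example])[:-1]
print(Y_example.shape, onsets_example)   # (420, 20) and [0 150 300]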
brsa = BRSA(GP_space=True, GP_inten=True)
# Initiate an instance, telling it
# that we want to impose Gaussian Process prior
# over both space and intensity.
brsa.fit(X=Y, design=design.design_task,
coords=coords_flat, inten=inten, scan_onsets=scan_onsets)
# The data to fit should be given to the argument X.
# Design matrix goes to design. And so on.
Explanation: Fit Bayesian RSA to our simulated data
The nuisance regressors in typical fMRI analysis (such as head motion signal) are replaced by principal components estimated from residuals after subtracting task-related response. n_nureg tells the model how many principal components to keep from the residual as nuisance regressors, in order to account for spatial correlation in noise.
If you prefer not using this approach based on principal components of residuals, you can set auto_nuisance=False, and optionally provide your own nuisance regressors as nuisance argument to BRSA.fit(). In practice, we find that the result is much better with auto_nuisance=True.
End of explanation
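# Alternative mentioned above (sketch only, not run here): turn off the automatic nuisance
# estimation and pass your own regressors (e.g. motion parameters) through the `nuisance`
# argument of fit(). `my_nuisance` is a hypothetical (time x n_regressor) array.
# brsa_manual = BRSA(GP_space=True, GP_inten=True, auto_nuisance=False)
# brsa_manual.fit(X=Y, design=design.design_task, nuisance=my_nuisance,
#                 coords=coords_flat, inten=inten, scan_onsets=scan_onsets)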
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(brsa.C_, vmin=-0.1, vmax=1)
plt.xlim([0, n_C])
plt.ylim([0, n_C])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Estimated correlation structure\n shared between voxels\n'
'This constitutes the output of Bayesian RSA\n')
plt.show()
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(brsa.U_)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Estimated covariance structure\n shared between voxels\n')
plt.show()
Explanation: We can have a look at the estimated similarity in matrix brsa.C_.
We can also compare the ideal covariance above with the one recovered, brsa.U_
End of explanation
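# Quick consistency check: if brsa.C_ is the correlation matrix derived from brsa.U_
# (as the attribute names suggest), then normalizing U_ by the square roots of its
# diagonal should reproduce C_ up to numerical precision.
std_U = np.sqrt(np.diag(brsa.U_))
C_from_U = brsa.U_ / std_U / std_U[:, None]
print(np.max(np.abs(C_from_U - brsa.C_)))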
regressor = np.insert(design.design_task,
0, 1, axis=1)
betas_point = np.linalg.lstsq(regressor, Y)[0]
point_corr = np.corrcoef(betas_point[1:, :])
point_cov = np.cov(betas_point[1:, :])
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(point_corr, vmin=-0.1, vmax=1)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Correlation structure estimated\n'
'based on point estimates of betas\n')
plt.show()
fig = plt.figure(num=None, figsize=(4, 4), dpi=100)
plt.pcolor(point_cov)
plt.xlim([0, 16])
plt.ylim([0, 16])
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('Covariance structure of\n'
'point estimates of betas\n')
plt.show()
Explanation: In contrast, we can have a look at the similarity matrix based on Pearson correlation between point estimates of the betas of different conditions.
This is what vanilla RSA might give
End of explanation
fig = plt.figure(num=None, figsize=(5, 5), dpi=100)
plt.pcolor(np.reshape(brsa.nSNR_, [ROI_edge, ROI_edge*2]))
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
ax.set_title('estimated pseudo-SNR')
plt.show()
fig = plt.figure(num=None, figsize=(5, 5), dpi=100)
plt.pcolor(np.reshape(snr / np.exp(np.mean(np.log(snr))),
[ROI_edge, ROI_edge*2]))
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
ax.set_title('true normalized pseudo-SNR')
plt.show()
RMS_BRSA = np.mean((brsa.C_ - ideal_corr)**2)**0.5
RMS_RSA = np.mean((point_corr - ideal_corr)**2)**0.5
print('RMS error of Bayesian RSA: {}'.format(RMS_BRSA))
print('RMS error of standard RSA: {}'.format(RMS_RSA))
print('Recovered spatial smoothness length scale: '
'{}, vs. true value: {}'.format(brsa.lGPspace_, smooth_width))
print('Recovered intensity smoothness length scale: '
'{}, vs. true value: {}'.format(brsa.lGPinten_, inten_kernel))
print('Recovered standard deviation of GP prior: '
'{}, vs. true value: {}'.format(brsa.bGP_, tau))
Explanation: We can make a comparison between the estimated SNR map and the true SNR map (normalized)
End of explanation
plt.scatter(rho1, brsa.rho_)
plt.xlabel('true AR(1) coefficients')
plt.ylabel('recovered AR(1) coefficients')
ax = plt.gca()
ax.set_aspect(1)
plt.show()
plt.scatter(np.log(snr) - np.mean(np.log(snr)),
np.log(brsa.nSNR_))
plt.xlabel('true normalized log SNR')
plt.ylabel('recovered log pseudo-SNR')
ax = plt.gca()
ax.set_aspect(1)
plt.show()
Explanation: Empirically, the smoothness tends to be overestimated when the signal is weak.
We can also look at how other parameters are recovered.
End of explanation
plt.scatter(betas_simulated, brsa.beta_)
plt.xlabel('true betas (response amplitudes)')
plt.ylabel('recovered betas by Bayesian RSA')
ax = plt.gca()
ax.set_aspect(1)
plt.show()
plt.scatter(betas_simulated, betas_point[1:, :])
plt.xlabel('true betas (response amplitudes)')
plt.ylabel('recovered betas by simple regression')
ax = plt.gca()
ax.set_aspect(1)
plt.show()
Explanation: Even though the variation is reduced in the estimated pseudo-SNR (due to overestimation of the smoothness of the GP prior in low-SNR situations), the betas recovered by the model have a higher correlation with the true betas than simple regression does, as shown below. Obviously there is shrinkage of the estimated betas, as a result of the variance-bias tradeoff. But we think such shrinkage does preserve the patterns of the betas, and therefore the result is suitable to be further used for decoding purposes.
End of explanation
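# To put a number on the scatter plots above: correlation between the simulated betas and
# each set of recovered betas (the regression betas skip the intercept row).
r_brsa = np.corrcoef(betas_simulated.ravel(), brsa.beta_.ravel())[0, 1]
r_ols = np.corrcoef(betas_simulated.ravel(), betas_point[1:, :].ravel())[0, 1]
print('correlation with true betas: BRSA {:.3f}, simple regression {:.3f}'.format(r_brsa, r_ols))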
u, s, v = np.linalg.svd(noise + inten)
plt.plot(s)
plt.xlabel('principal component')
plt.ylabel('singular value of unnormalized noise')
plt.show()
plt.pcolor(np.reshape(v[0,:], [ROI_edge, ROI_edge*2]))
ax = plt.gca()
ax.set_aspect(1)
plt.title('Weights of the first principal component in unnormalized noise')
plt.colorbar()
plt.show()
plt.pcolor(np.reshape(brsa.beta0_[0,:], [ROI_edge, ROI_edge*2]))
ax = plt.gca()
ax.set_aspect(1)
plt.title('Weights of the DC component in noise')
plt.colorbar()
plt.show()
plt.pcolor(np.reshape(inten, [ROI_edge, ROI_edge*2]))
ax = plt.gca()
ax.set_aspect(1)
plt.title('The baseline intensity of the ROI')
plt.colorbar()
plt.show()
plt.pcolor(np.reshape(v[1,:], [ROI_edge, ROI_edge*2]))
ax = plt.gca()
ax.set_aspect(1)
plt.title('Weights of the second principal component in unnormalized noise')
plt.colorbar()
plt.show()
plt.pcolor(np.reshape(brsa.beta0_[1,:], [ROI_edge, ROI_edge*2]))
ax = plt.gca()
ax.set_aspect(1)
plt.title('Weights of the first recovered noise pattern\n not related to DC component in noise')
plt.colorbar()
plt.show()
Explanation: The singular value decomposition of the noise, and the comparison between the first two principal components of the noise and the patterns of the first two nuisance regressors returned by the model.
The principal components may not look exactly the same. The first principal components both capture the baseline image intensities (although they may sometimes appear counter-phase).
One can imagine that the choice of the number of principal components used as nuisance regressors can influence the result. If you just choose 1 or 2, perhaps only the global drift would be captured, while including too many nuisance regressors would slow down the fitting and risk overfitting. Users might consider starting in the range of 5-20. We do not have automatic cross-validation built in, but you can use the score() function to do cross-validation and select the appropriate number. The idea here is similar to that in GLMdenoise (http://kendrickkay.net/GLMdenoise/)
End of explanation
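# Sketch of the selection idea described above (not executed here, since each refit is slow):
# refit with a few candidate numbers of nuisance regressors and keep the one whose
# cross-validated score() on held-out data (e.g. the new dataset generated below) is highest.
# The parameter name n_nureg is the one referred to in the text above.
# for n in (5, 10, 20):
#     candidate = BRSA(GP_space=True, GP_inten=True, n_nureg=n)
#     candidate.fit(X=Y, design=design.design_task,
#                   coords=coords_flat, inten=inten, scan_onsets=scan_onsets)
#     s, s_null = candidate.score(X=Y_new, design=design.design_task, scan_onsets=scan_onsets)
#     print(n, s)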
noise_new = np.zeros([n_T, n_V])
noise_new[0, :] = np.dot(L_noise, np.random.randn(n_V))\
/ np.sqrt(1 - rho1**2)
for i_t in range(1, n_T):
noise_new[i_t, :] = noise_new[i_t - 1, :] * rho1 \
+ np.dot(L_noise,np.random.randn(n_V))
Y_new = signal + noise_new + inten
ts, ts0 = brsa.transform(Y_new,scan_onsets=scan_onsets)
# ts, ts0 = brsa.transform(Y_new,scan_onsets=scan_onsets)
recovered_plot, = plt.plot(ts[:200, 8], 'b')
design_plot, = plt.plot(design.design_task[:200, 8], 'g')
plt.legend([design_plot, recovered_plot],
['design matrix for one condition', 'recovered time course for the condition'])
plt.show()
# We did not plot the whole time series for the purpose of seeing closely how much the two
# time series overlap
c = np.corrcoef(design.design_task.T, ts.T)
# plt.pcolor(c[0:n_C, n_C:],vmin=-0.5,vmax=1)
plt.pcolor(c[0:16, 16:],vmin=-0.5,vmax=1)
ax = plt.gca()
ax.set_aspect(1)
plt.title('correlation between true design matrix \nand the recovered task-related activity')
plt.colorbar()
plt.xlabel('recovered task-related activity')
plt.ylabel('true design matrix')
plt.show()
# plt.pcolor(c[n_C:, n_C:],vmin=-0.5,vmax=1)
plt.pcolor(c[16:, 16:],vmin=-0.5,vmax=1)
ax = plt.gca()
ax.set_aspect(1)
plt.title('correlation within the recovered task-related activity')
plt.colorbar()
plt.show()
Explanation: "Decoding" from new data
Now we generate a new data set, assuming signal is the same but noise is regenerated. We want to use the transform() function of brsa to estimate the "design matrix" in this new dataset.
End of explanation
[score, score_null] = brsa.score(X=Y_new, design=design.design_task, scan_onsets=scan_onsets)
print("Score of full model based on the correct design matrix, assuming {} nuisance"
" components in the noise: {}".format(brsa.n_nureg_, score))
print("Score of a null model with the same assumption except that there is no task-related response: {}".format(
score_null))
plt.bar([0,1],[score, score_null], width=0.5)
plt.ylim(np.min([score, score_null])-100, np.max([score, score_null])+100)
plt.xticks([0,1],['Model','Null model'])
plt.ylabel('cross-validated log likelihood')
plt.title('cross validation on new data')
plt.show()
[score_noise, score_noise_null] = brsa.score(X=noise_new+inten, design=design.design_task, scan_onsets=scan_onsets)
print("Score of full model for noise, based on the correct design matrix, assuming {} nuisance"
" components in the noise: {}".format(brsa.n_nureg_, score_noise))
print("Score of a null model for noise: {}".format(
score_noise_null))
plt.bar([0,1],[score_noise, score_noise_null], width=0.5)
plt.ylim(np.min([score_noise, score_noise_null])-100, np.max([score_noise, score_noise_null])+100)
plt.xticks([0,1],['Model','Null model'])
plt.ylabel('cross-validated log likelihood')
plt.title('cross validation on noise')
plt.show()
Explanation: Model selection by cross-validation:
You can compare different models by cross-validating the parameters of one model learnt from some training data
on some testing data. BRSA provides a score() function, which provides you a pair of cross-validated log likelihood
for testing data. The first value is the cross-validated log likelihood of the model you have specified. The second
value is a null model which assumes everything else the same except that there is no task-related activity.
Notice that comparing the score of your model of interest against its corresponding null model is not the single way to compare models. You might also want to compare against a model using the same set of design matrix, but a different rank (especially rank 1, which means all task conditions have the same response pattern, only differing in the magnitude).
In general, in the context of BRSA, a model means the timing of each event and the way these events are grouped, together with other trivial parameters such as the rank of the covariance matrix and the number of nuisance regressors. All these parameters can influence model performance.
In future, we will provide interface to test the performance of a model with predefined similarity matrix or covariance matrix.
End of explanation
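# Sketch of one such comparison mentioned above (not executed here): a rank-1 model, in which
# all conditions share a single response pattern up to scaling, can be fit and scored the same
# way, assuming the estimator exposes a `rank` argument for the covariance matrix.
# brsa_rank1 = BRSA(GP_space=True, GP_inten=True, rank=1)
# brsa_rank1.fit(X=Y, design=design.design_task,
#                coords=coords_flat, inten=inten, scan_onsets=scan_onsets)
# print(brsa_rank1.score(X=Y_new, design=design.design_task, scan_onsets=scan_onsets))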
gbrsa = GBRSA(nureg_method='PCA', auto_nuisance=True, logS_range=1,
anneal_speed=20, n_iter=50)
# Initiate an instance of GBRSA, the version that marginalizes the
# pseudo-SNR and AR(1) coefficient of each voxel instead of imposing
# a Gaussian Process prior over space and intensity.
gbrsa.fit(X=Y, design=design.design_task,scan_onsets=scan_onsets)
# The data to fit should be given to the argument X.
# Design matrix goes to design. And so on.
plt.pcolor(np.reshape(gbrsa.nSNR_, (ROI_edge, ROI_edge*2)))
plt.colorbar()
ax = plt.gca()
ax.set_aspect(1)
plt.title('SNR map estimated by marginalized BRSA')
plt.show()
plt.pcolor(np.reshape(snr, (ROI_edge, ROI_edge*2)))
ax = plt.gca()
ax.set_aspect(1)
plt.colorbar()
plt.title('true SNR map')
plt.show()
plt.scatter(snr, gbrsa.nSNR_)
ax = plt.gca()
ax.set_aspect(1)
plt.xlabel('simulated pseudo-SNR')
plt.ylabel('estimated pseudo-SNR')
plt.show()
plt.scatter(np.log(snr), np.log(gbrsa.nSNR_))
ax = plt.gca()
ax.set_aspect(1)
plt.xlabel('simulated log(pseudo-SNR)')
plt.ylabel('estimated log(pseudo-SNR)')
plt.show()
plt.pcolor(gbrsa.U_)
plt.colorbar()
plt.title('covariance matrix estimated by marginalized BRSA')
plt.show()
plt.pcolor(ideal_cov)
plt.colorbar()
plt.title('true covariance matrix')
plt.show()
plt.scatter(betas_simulated, gbrsa.beta_)
ax = plt.gca()
ax.set_aspect(1)
plt.xlabel('simulated betas')
plt.ylabel('betas estimated by marginalized BRSA')
plt.show()
plt.scatter(rho1, gbrsa.rho_)
ax = plt.gca()
ax.set_aspect(1)
plt.xlabel('simulated AR(1) coefficients')
plt.ylabel('AR(1) coefficients estimated by marginalized BRSA')
plt.show()
Explanation: As can be seen above, the model with the correct design matrix explains new data with signals generated from the true model better than the null model, but explains pure noise worse than the null model.
We can also try the version which marginalizes SNR and rho for each voxel.
This version is intended for analyzing data of a group of participants and estimating their shared similarity matrix. But it also allows analyzing a single participant.
End of explanation
# "Decoding"
ts, ts0 = gbrsa.transform([Y_new],scan_onsets=[scan_onsets])
recovered_plot, = plt.plot(ts[0][:200, 8], 'b')
design_plot, = plt.plot(design.design_task[:200, 8], 'g')
plt.legend([design_plot, recovered_plot],
['design matrix for one condition', 'recovered time course for the condition'])
plt.show()
# We did not plot the whole time series for the purpose of seeing closely how much the two
# time series overlap
c = np.corrcoef(design.design_task.T, ts[0].T)
plt.pcolor(c[0:n_C, n_C:],vmin=-0.5,vmax=1)
ax = plt.gca()
ax.set_aspect(1)
plt.title('correlation between true design matrix \nand the recovered task-related activity')
plt.colorbar()
plt.xlabel('recovered task-related activity')
plt.ylabel('true design matrix')
plt.show()
plt.pcolor(c[n_C:, n_C:],vmin=-0.5,vmax=1)
ax = plt.gca()
ax.set_aspect(1)
plt.title('correlation within the recovered task-related activity')
plt.colorbar()
plt.show()
# cross-validataion
[score, score_null] = gbrsa.score(X=[Y_new], design=[design.design_task], scan_onsets=[scan_onsets])
print("Score of full model based on the correct design matrix, assuming {} nuisance"
" components in the noise: {}".format(gbrsa.n_nureg_, score))
print("Score of a null model with the same assumption except that there is no task-related response: {}".format(
score_null))
plt.bar([0,1],[score[0], score_null[0]], width=0.5)
plt.ylim(np.min([score[0], score_null[0]])-100, np.max([score[0], score_null[0]])+100)
plt.xticks([0,1],['Model','Null model'])
plt.ylabel('cross-validated log likelihood')
plt.title('cross validation on new data')
plt.show()
[score_noise, score_noise_null] = gbrsa.score(X=[noise_new+inten], design=[design.design_task],
scan_onsets=[scan_onsets])
print("Score of full model for noise, based on the correct design matrix, assuming {} nuisance"
" components in the noise: {}".format(gbrsa.n_nureg_, score_noise))
print("Score of a null model for noise: {}".format(
score_noise_null))
plt.bar([0,1],[score_noise[0], score_noise_null[0]], width=0.5)
plt.ylim(np.min([score_noise[0], score_noise_null[0]])-100, np.max([score_noise[0], score_noise_null[0]])+100)
plt.xticks([0,1],['Model','Null model'])
plt.ylabel('cross-validated log likelihood')
plt.title('cross validation on noise')
plt.show()
Explanation: We can also do "decoding" and cross-validating using the marginalized version in GBRSA
End of explanation |
1,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tech - maps
Making a map is always complicated. It seems simple until you realize that you need to retrieve the description of a country's administrative areas, which are sometimes provided in coordinates other than longitude and latitude. A few useful modules
Step1: Overview
We download hospital data by department.
COVID data
Step2: Department data
Next, we retrieve the geographic definition of the departments.
Step3: We should also merge in the population of each department. That will be for another time.
Map
Step4: We remove all the departments with three-digit codes.
Step5: COVID map
Step6: The most populated regions probably have the largest hospital capacity. We would need to divide by this capacity to get a map that makes a bit more sense. Since the idea here is simply to draw the map, we will not compute that ratio.
Step7: Creating maps has always been more or less complicated. The first notebooks I wrote on the subject were much more complex. geopandas has simplified things. Its development started around 2013 and has evolved a lot since, and I had to spend a few hours retrieving the department boundaries five years ago.
We can also retrieve the maximum capacity of each department by looking at past data. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
Explanation: Tech - maps
Making a map is always complicated. It seems simple until you realize that you need to retrieve the description of a country's administrative areas, which are sometimes provided in coordinates other than longitude and latitude. A few useful modules:
cartopy: a layer on top of matplotlib for drawing with geographic coordinates
bokeh: for drawing interactive maps
pyproj: conversion between coordinate systems
shapely: manipulating geographic polygons (union, intersection, ...)
pyshp: reading or writing geographic polygons
geopandas: DataFrame manipulation with geographic coordinates
A few interesting notebooks (in French):
* Tracer une carte en Python avec bokeh
* Tracer une carte en Python
* Données carroyées et OpenStreetMap
* Carte de France avec les départements
* Carte de France avec les départements (2)
End of explanation
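# Side note on the coordinate issue mentioned above: French administrative shapefiles are often
# distributed in Lambert-93 (EPSG:2154) rather than longitude/latitude (EPSG:4326). A conversion
# with pyproj looks like this (the coordinates below are only illustrative, roughly central Paris).
from pyproj import Transformer
transformer = Transformer.from_crs("EPSG:2154", "EPSG:4326", always_xy=True)
lon, lat = transformer.transform(652469.0, 6862035.0)
print(lon, lat)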
# https://www.data.gouv.fr/fr/datasets/donnees-hospitalieres-relatives-a-lepidemie-de-covid-19/
from pandas import read_csv
url = "https://www.data.gouv.fr/fr/datasets/r/63352e38-d353-4b54-bfd1-f1b3ee1cabd7"
covid = read_csv(url, sep=";")
covid.tail()
last_day = covid.loc[covid.index[-1], "jour"]
last_day
last_data = covid[covid.jour == last_day].groupby("dep").sum()
last_data.shape
last_data.describe()
last_data.head()
last_data.tail()
Explanation: Overview
We download hospital data by department.
COVID data
End of explanation
import geopandas
# last link on the page (shapefile format)
url = "https://www.data.gouv.fr/en/datasets/r/ed02b655-4307-4db4-b1ca-7939145dc20f"
geo = geopandas.read_file(url)
geo.tail()
Explanation: Department data
Next, we retrieve the geographic definition of the departments.
End of explanation
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1, figsize=(5, 4))
geo.plot(ax=ax, color='white', edgecolor='black');
Explanation: We should also merge in the population of each department. That will be for another time.
Map
End of explanation
codes = [_ for _ in set(geo.code_depart) if len(_) < 3]
metropole = geo[geo.code_depart.isin(codes)]
metropole.tail()
fig, ax = plt.subplots(1, 1, figsize=(5, 4))
metropole.plot(ax=ax, color='white', edgecolor='black')
ax.set_title("%s départements" % metropole.shape[0]);
Explanation: We remove all the departments with three-digit codes.
End of explanation
merged = last_data.reset_index(drop=False).merge(metropole, left_on="dep", right_on="code_depart")
merged.shape
merged.tail()
fig, ax = plt.subplots(1, 1, figsize=(5, 4))
merged.hist('rea', bins=20, ax=ax)
ax.set_title("Distribution rea");
Explanation: COVID map
End of explanation
merged.sort_values('rea').tail()
geomerged = geopandas.GeoDataFrame(merged)
from mpl_toolkits.axes_grid1 import make_axes_locatable
fig, ax = plt.subplots(1, 1)
# line to add so that the legend is sized to match the plot
cax = make_axes_locatable(ax).append_axes("right", size="5%", pad=0.1)
geomerged.plot(column="rea", ax=ax, edgecolor='black', legend=True, cax=cax)
ax.set_title("Réanimations pour les %d départements" % metropole.shape[0]);
Explanation: The most populated regions probably have the largest hospital capacity. We would need to divide by this capacity to get a map that makes a bit more sense. Since the idea here is simply to draw the map, we will not compute that ratio.
End of explanation
capacite = covid.groupby(["jour", "dep"]).sum().groupby("dep").max()
capacite.head()
capa_merged = merged.merge(capacite, left_on="dep", right_on="dep")
capa_merged["occupation"] = capa_merged["rea_x"] / capa_merged["rea_y"]
capa_merged.head(n=2).T
geocapa = geopandas.GeoDataFrame(capa_merged)
fig, ax = plt.subplots(1, 1)
# line to add so that the legend is sized to match the plot
cax = make_axes_locatable(ax).append_axes("right", size="5%", pad=0.1)
geocapa.plot(column="occupation", ax=ax, edgecolor='black', legend=True, cax=cax)
ax.set_title("Occupations en réanimations pour les %d départements" % metropole.shape[0]);
Explanation: Creating maps has always been more or less complicated. The first notebooks I wrote on the subject were much more complex. geopandas has simplified things. Its development started around 2013 and has evolved a lot since, and I had to spend a few hours retrieving the department boundaries five years ago.
We can also retrieve the maximum capacity of each department by looking at past data.
End of explanation |
1,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Layout viewport
Use the Layout class to create a variety of map views for comparison.
For more information, run help(Layout).
The first example sets a common viewport for all maps while the second sets two different viewports for maps.
Step1: Same viewport
Step2: Different viewports | Python Code:
from cartoframes.auth import set_default_credentials
set_default_credentials('cartoframes')
Explanation: Layout viewport
Use the Layout class to create a variety of map views for comparison.
For more information, run help(Layout).
The first example sets a common viewport for all maps while the second sets two different viewports for maps.
End of explanation
from cartoframes.viz import Map, Layer, Layout, basic_style
Layout([
Map(Layer('select * from drought_wk_1 where dm = 3', basic_style(color='#e15383'))),
Map(Layer('select * from drought_wk_2 where dm = 3', basic_style(color='#e15383'))),
Map(Layer('select * from drought_wk_3 where dm = 3', basic_style(color='#e15383'))),
Map(Layer('select * from drought_wk_4 where dm = 3', basic_style(color='#e15383'))),
], is_static=True, viewport={'zoom': 3, 'lat': 33.4706, 'lng': -98.3457})
Explanation: Same viewport
End of explanation
from cartoframes.viz import Map, Layer, Layout, basic_style
Layout([
Map(Layer('drought_wk_1'), viewport={ 'zoom': 0.5 }),
Map(Layer('select * from drought_wk_1 where dm = 1', basic_style(color='#ffc285'))),
Map(Layer('select * from drought_wk_1 where dm = 2', basic_style(color='#fa8a76'))),
Map(Layer('select * from drought_wk_1 where dm = 3', basic_style(color='#e15383'))),
], is_static=True, viewport={'zoom': 3, 'lat': 33.4706, 'lng': -98.3457})
Explanation: Different viewports
End of explanation |
1,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Light 2 Numerical and Data Analysis Answers
Step1: 1. Identify Balmer absorption lines in a star
Author
Step2: 2. Identify Balmer emission lines in a galaxy
Author
Step3: Balmer Series
The Balmer series are lines due to transitions to the n=2 level of hydrogen. The wavelengths of the first few lines are given below.
The next line, H_epsilon, is outside of the region of our spectrum.
Step4: Find the wavelength at which the lines are observed, due to redshifting.
Step5: The H$\alpha$ line is clear, but the others are hard to see when looking at the full spectrum.
Step6: Zooming in
We see that the H$\alpha$ line is very strong, and the H$\beta$ line also has a clear emission peak.
H$\gamma$ and H$\delta$ do not appear to have emission that is significant relative to the noise. The black
lines in these plots are the model fit by the spectroscopic pipeline in SDSS, so it does not necessarily
faithfully represent the true galaxy spectrum.
Step7: Absorption to Emission
Zooming in further on the H$\beta$ line to visually inspect it, the model (black) has clear emission and clear absorption. The absorption is in the underlying stellar continuum spectrum and reflects the presence of neutral, but excited, hydrogen gas in the stellar atmospheres. The absorption feature is believable in the data itself (blue), but it is less obviously real, because of the noise.
Step8: 6. Estimate dust extinction
Author
Step9: Let's construct arrays with the rest-frame wavelength and the flux. We will not concern ourselves with the overall normalization of the flux in this step.
Step10: We can plot both, and we see that for UGC 10227, which is seen edge-on, it is a much redder spectrum than MCG -01-53-020, which is seen face-on. But many of the small scale features of the spectra are similar
Step11: We want to put these functions on the same wavelength grid. For our purposes, a simple 3rd-order spline interpolation scheme will be sufficient. Note that for more demanding purposes, a more accurate interpolation, or avoiding interpolation altogether, could be necessary. Whenever you interpolate, you usually cause error covariances between the output pixels and a loss of information.
Step12: Let's just check that the interpolation didn't do anything silly.
Step13: Now we can just divide the two arrays on the same wavelength grid to get some estimate of the extinction (here quantified in magnitude units).
Step14: Now we will estimate the total dust extinction under the assumption that the extinction follows the law | Python Code:
import numpy as np
import scipy.interpolate as interpolate
import astropy.io.fits as fits
import matplotlib.pyplot as plt
import requests
Explanation: Light 2 Numerical and Data Analysis Answers
End of explanation
def find_nearest(array, value):
index = (np.abs(array - value)).argmin()
return index
def find_local_min(array, index):
min_index = np.argmin(array[index-25:index+26])
return min_index + index - 25
balmer_series = np.array((6562.79, 4861.35, 4340.472, 4101.734, 3970.075, 3889.064, 3835.397))
balmer_labels = [r'H$\alpha$', r'H$\beta$', r'H$\gamma$', r'H$\delta$', r'H$\epsilon$', r'H$\zeta$', r'H$\eta$']
hdul = fits.open('A0.fits')
data = hdul[1].data
loglam = data['Loglam']
lam = 10**loglam
flux = data['Flux']
mask = lam < 8000
plt.figure(figsize=(15,8))
plt.plot(lam[mask],flux[mask])
for i in range(len(balmer_series)):
index = find_nearest(lam, balmer_series[i]) # finds the closest wavelength index to current balmer series
min_index = find_local_min(flux, index) # finds the local minimum near current index
plt.text(lam[min_index]-30,flux[min_index]-0.3, balmer_labels[i], fontsize=10) # puts the appropriate label near each local minimum
plt.xlabel('Wavelength (Angstroms)', fontsize=14)
plt.ylabel('Normalized Flux', fontsize=14)
plt.title('Balmer Absorption Lines for an A star', fontsize=14)
plt.savefig('balmer.png', dpi=300)
Explanation: 1. Identify Balmer absorption lines in a star
Author: Nicholas Faucher
Download an optical spectrum of an A star. Identify
all Balmer absorption lines that are apparent in that spectrum.
Data downloaded from https://doi.org/10.5281/zenodo.321394
Referenced in https://iopscience.iop.org/article/10.3847/1538-4365/aa656d/pdf
Fluxes are normalized to the flux at 8000 Å
End of explanation
request_template = 'https://dr13.sdss.org/optical/spectrum/view/data/format=fits/spec=lite?plateid={plate}&mjd={mjd}&fiberid={fiberid}'
request = request_template.format(plate=2214, fiberid=6, mjd=53794)
r = requests.get(request)
fp = open('spec-2214-53794-0006.fits', 'wb')
fp.write(r.content)
fp.close()
hdu = fits.open('spec-2214-53794-0006.fits')
header = hdu[0].header
data = hdu[1].data
z = 0.0657799 #Redshift at link above
wl = 10**data['loglam']
flux = data['flux']
model = data['model']
Explanation: 2. Identify Balmer emission lines in a galaxy
Author: Kate Storey-Fisher
Download an optical spectrum of a star forming galaxy. Identify all Balmer emission lines that are apparent in the spectrum. Zooming in on Hα or Hβ, visually compare the Balmer absorption (in the stellar continuum) to the emission.
Data
This is an optical spectrum of a galaxy in SDSS. The data and more info can be found here: https://dr12.sdss.org/spectrumDetail?mjd=53794&fiber=6&plateid=2214
End of explanation
#Balmer series
halpha = 6564.5377
hbeta = 4861.3615
hgamma = 4340.462
hdelta = 4101.74
lines = [halpha, hbeta, hgamma, hdelta]
labels = [r'H$_{\alpha}$', r'H$_{\beta}$', r'H$_{\gamma}$', r'H$_{\delta}$']
Explanation: Balmer Series
The Balmer series are lines due to transitions to the n=2 level of hydrogen. The wavelengths of the first few lines are given below.
The next line, H_epsilon, is outside of the region of our spectrum.
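As a quick consistency check (not part of the original notebook), the listed wavelengths can be reproduced from the Rydberg formula; the constant value below is the standard hydrogen Rydberg constant and is an assumption of this sketch.
R_H = 1.09678e-3                               # Rydberg constant for hydrogen, in 1/Angstrom (assumed value)
n_upper = np.arange(3, 8)                      # upper levels for H-alpha ... H-epsilon
balmer_vacuum = 1.0 / (R_H * (1.0 / 2**2 - 1.0 / n_upper**2))
print(np.round(balmer_vacuum, 1))              # ~6564.7, 4862.7, 4341.7, 4102.9, 3971.2 (vacuum; air values are ~1-2 A shorter)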
End of explanation
#Shifted
lines_shifted = np.empty(len(lines))
for i in range(len(lines)):
lines_shifted[i] = lines[i]*(1+z)
Explanation: Find the wavelength at which the lines are observed, due to redshifting.
End of explanation
fig = plt.figure(figsize=(13, 7))
plt.plot(wl, flux)
plt.plot(wl, model, color='black')
plt.xlabel('Wavelength $\lambda$ ($\AA$)')
plt.ylabel('Flux $f_\lambda$ ($10^{-17}$ erg cm$^{-2}$ s$^{-1}$ $\AA$)')
for line, label in zip(lines_shifted, labels):
plt.axvline(line, color='red', alpha=0.7)
plt.annotate(label, xy=(line, 25), xytext=(line, 25), size=16)
Explanation: The H$\alpha$ line is clear, but the others are hard to see when looking at the full spectrum.
End of explanation
# Zooms
width = 100
fig, axarr = plt.subplots(2,2, figsize=(15, 10))
plt.subplots_adjust(hspace=0.3)
count = 0
for i in range(2):
for j in range(2):
line = lines_shifted[count]
wf = [(w, f, m) for w, f, m in zip(wl, flux, model) if (w<line+width) and (w>line-width)]
wlcut = [tup[0] for tup in wf]
fluxcut = [tup[1] for tup in wf]
modelcut = [tup[2] for tup in wf]
axarr[i,j].set_title(labels[count], size=20)
axarr[i,j].plot(wlcut, fluxcut)
axarr[i,j].plot(wlcut, modelcut, color='black')
axarr[i,j].axvline(line, color='red', alpha=0.7)
axarr[i,j].set_xlabel('Wavelength $\lambda$ ($\AA$)')
axarr[i,j].set_ylabel('Flux $f_\lambda$ ($10^{-17}$ erg cm$^{-2}$ s$^{-1}$ $\AA$)')
count += 1
Explanation: Zooming in
We see that the H$\alpha$ line is very strong, and the H$\beta$ line also has a clear emission peak.
H$\gamma$ and H$\delta$ do not appear to have emission that is significant relative to the noise. The black
lines in these plots are the model fit by the spectroscopic pipeline in SDSS, so it does not necessarily
faithfully represent the true galaxy spectrum.
End of explanation
width = 30
fig = plt.figure(figsize=(10, 7))
count = 1
line = lines_shifted[count] #H_beta
wf = [(w, f, m) for w, f, m in zip(wl, flux, model) if (w<line+width) and (w>line-width)]
wlcut = [tup[0] for tup in wf]
fluxcut = [tup[1] for tup in wf]
modelcut = [tup[2] for tup in wf]
plt.title(labels[count], size=20)
plt.plot(wlcut, fluxcut)
plt.plot(wlcut, modelcut, color='black')
plt.axvline(line, color='red', alpha=0.7)
plt.xlabel('Wavelength $\lambda$ ($\AA$)')
plt.ylabel('Flux $f_\lambda$ ($10^{-17}$ erg cm$^{-2}$ s$^{-1}$ $\AA$)')
Explanation: Absorption to Emission
Zooming in further on the H$\beta$ line to visually inspect it, the model (black) has clear emission and clear absorption. The absorption is in the underlying stellar continuum spectrum and reflects the presence of neutral, but excited, hydrogen gas in the stellar atmospheres. The absorption feature is believable in the data itself (blue), but it is less obviously real, because of the noise.
End of explanation
UG = fits.open('https://dr16.sdss.org/sas/dr16/sdss/spectro/redux/26/spectra/lite/1056/spec-1056-52764-0308.fits')
MCG = fits.open('https://dr16.sdss.org/sas/dr16/sdss/spectro/redux/26/spectra/lite/0637/spec-0637-52174-0403.fits')
Explanation: 6. Estimate dust extinction
Author: Jiarong Zhu
Find the SDSS optical spectra and images for the two galaxies
UGC 10227 (a typical-looking disk galaxy observed at high inclination)
and MCG -01-53-020 (a typical-looking disk galaxy observed at low
inclination). A major difference in observing galaxies at these
inclinations is the resulting amount of dust extinction. For a
standard reddening law, how much extinction do you need to explain
the first galaxy spectrum as a reddened version of the second?
First we open the appropriate spectra, which can be found using the search facilities on SkyServer. Using the plate, MJD, and fiber numbers there, we construct the URL to download the data:
End of explanation
z_UG = UG[2].data['Z'][0]
z_MCG = MCG[2].data['Z'][0]
lam_UG = UG[1].data['loglam'] - np.log10(1. + z_UG)
lam_MCG = MCG[1].data['loglam'] - np.log10(1. + z_MCG)
f_UG = UG[1].data['flux']
f_MCG = MCG[1].data['flux']
Explanation: Let's construct arrays with the rest-frame wavelength and the flux. We will not concern ourselves with the overall normalization of the flux in this step.
End of explanation
plt.figure()
plt.plot(10.**lam_UG, f_UG, label='UGC 10227 (edge-on)')
plt.plot(10.**lam_MCG, f_MCG, label='MCG -01-53-020 (face-on)')
plt.xlabel('wavelength')
plt.ylabel('flux')
plt.legend()
plt.show()
Explanation: We can plot both, and we see that UGC 10227, which is seen edge-on, has a much redder spectrum than MCG -01-53-020, which is seen face-on. But many of the small-scale features of the spectra are similar: the 4000 Angstrom break with its Calcium H and K lines, the G band features redward of 4000 Angstroms, the Na D line, and the TiO bands. Not all the features are quite the same: MCG -01-53-020 has a weaker Mg b line and does not have evident H$\alpha$ emission.
End of explanation
f_MCG_interp_func = interpolate.interp1d(lam_MCG, f_MCG, kind='cubic',
fill_value='extrapolate')
f_MCG_interp = f_MCG_interp_func(lam_UG)
Explanation: We want to put these functions on the same wavelength grid. For our purposes, a simple 3rd-order spline interpolation scheme will be sufficient. Note that for more demanding purposes, a more accurate interpolation, or avoiding interpolation altogether, could be necessary. Whenever you interpolate, you usually cause error covariances between the output pixels and a loss of information.
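As an additional sanity check (not in the original analysis), one could compare the spline against plain linear interpolation; a large disagreement would flag problems, for example near the array edges.
f_MCG_linear = np.interp(lam_UG, lam_MCG, f_MCG)    # assumes lam_MCG is monotonically increasing
print('max |cubic - linear| =', np.max(np.abs(f_MCG_interp - f_MCG_linear)))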
End of explanation
plt.figure()
plt.plot(10.**lam_UG, f_UG, label='UGC 10227 (edge-on)')
plt.plot(10.**lam_UG, f_MCG_interp, label='MCG -01-53-020 (face-on)')
plt.xlabel('wavelength')
plt.ylabel('flux')
plt.legend()
plt.show()
Explanation: Let's just check that the interpolation didn't do anything silly.
End of explanation
A = - 2.5 * np.log10(np.abs(f_UG / f_MCG_interp)) # abs() is used here to avoid invalid(negative) points
plt.figure()
plt.plot(10.**lam_UG, A)
plt.xlabel('$\lambda$ in Angstroms')
plt.ylabel('extinction $A_{\lambda}$ in mag')
#plt.plot(lam,lam*A)
plt.show()
Explanation: Now we can just divide the two arrays on the same wavelength grid to get some estimate of the extinction (here quantified in magnitude units).
End of explanation
AV = 1.
Amodel_10 = AV * (5500. / 10.**lam_UG)
AV = 0.5
Amodel_05 = AV * (5500. / 10.**lam_UG)
AV = 2.0
Amodel_20 = AV * (5500. / 10.**lam_UG)
plt.figure()
plt.plot(10.**lam_UG, A - Amodel_05, label='Residuals from A_V = 0.5')
plt.plot(10.**lam_UG, A - Amodel_10, label='Residuals from A_V = 1.0')
plt.plot(10.**lam_UG, A - Amodel_20, label='Residuals from A_V = 2.0')
plt.xlabel('$\lambda$ in Angstroms')
plt.ylabel('extinction $A_{\lambda}$ in mag')
plt.legend()
plt.show()
Explanation: Now we will estimate the total dust extinction under the assumption that the extinction follows the law:
$$\frac{A(\lambda)}{A_V} = \left(\frac {\lambda} {5500 \mathrm{~Angstrom}} \right)^{-1}$$
This is an approximation of more detailed extinction laws estimated from stellar absorption studies; e.g. Cardelli, Clayton, and Mathis (1989).
It is important to realize that $A(\lambda)$, despite being a logarithmic measure of extinction, is multiplicatively related to $A_V$, due to the fact that the extinction is exponentially related to optical depth. It is this property that allows us to use the shape of the spectrum to determine the absolute level of extinction.
We will take a crude approach and just bracket $A_V$ with three values (0.5, 1, and 2), showing that $A_V \sim 1$ reproduces the shape of the ratio between the spectra, and that therefore $A_V \sim 1$ is about the actual level of extinction in UGC 10227.
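A slightly more quantitative alternative (a sketch, not part of the original bracketing approach) is a linear least-squares estimate of $A_V$ under the same $1/\lambda$ law; emission and absorption features will bias it somewhat, but it should land near the bracketed value.
x = 5500. / 10.**lam_UG                        # model shape: A(lambda) = A_V * x
good = np.isfinite(A)                          # ignore pixels where the flux ratio was invalid
AV_fit = np.sum(A[good] * x[good]) / np.sum(x[good]**2)
print('least-squares A_V ~', AV_fit)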
End of explanation |
1,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repairing artifacts with ICA
This tutorial covers the basics of independent components analysis (ICA) and
shows how ICA can be used for artifact repair; an extended example illustrates
repair of ocular and heartbeat artifacts.
Step1: <div class="alert alert-info"><h4>Note</h4><p>Before applying ICA (or any artifact repair strategy), be sure to observe
the artifacts in your data to make sure you choose the right repair tool.
Sometimes the right tool is no tool at all — if the artifacts are small
enough you may not even need to repair them to get good analysis results.
See `tut-artifact-overview` for guidance on detecting and
visualizing various types of artifact.</p></div>
What is ICA?
^^^^^^^^^^^^
Independent components analysis (ICA) is a technique for estimating
independent source signals from a set of recordings in which the source
signals were mixed together in unknown ratios. A common example of this is
the problem of blind source separation_
Step2: We can get a summary of how the ocular artifact manifests across each channel type using create_eog_epochs
Step3: Now we'll do the same for the heartbeat artifacts, using create_ecg_epochs
Step4: Filtering to remove slow drifts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we run the ICA, an important step is filtering the data to remove
low-frequency drifts, which can negatively affect the quality of the ICA fit.
The slow drifts are problematic because they reduce the independence of the
assumed-to-be-independent sources (e.g., during a slow upward drift, the
neural, heartbeat, blink, and other muscular sources will all tend to have
higher values), making it harder for the algorithm to find an accurate
solution. A high-pass filter with 1 Hz cutoff frequency is recommended.
However, because filtering is a linear operation, the ICA solution found from
the filtered signal can be applied to the unfiltered signal (see [2]_ for
more information), so we'll keep a copy of the unfiltered
Step5: Fitting and plotting the ICA solution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. sidebar
Step6: Some optional parameters that we could have passed to the fit method
Step7: Here we can pretty clearly see that the first component (ICA000) captures
the EOG signal quite well, and the second component (ICA001) looks a lot
like a heartbeat (for more info on visually identifying Independent
Components, this EEGLAB tutorial is a good resource). We can also
visualize the scalp field distribution of each component using plot_components
Step8: <div class="alert alert-info"><h4>Note</h4><p>
Step9: We can also plot some diagnostics of each IC using plot_properties
Step10: In the remaining sections, we'll look at different ways of choosing which ICs
to exclude prior to reconstructing the sensor signals.
Selecting ICA components manually
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once we're certain which components we want to exclude, we can specify that
manually by setting the ica.exclude attribute. Similar to marking bad
channels, merely setting ica.exclude doesn't do anything immediately (it
just adds the excluded ICs to a list that will get used later when it's
needed). Once the exclusions have been set, ICA methods like
Step11: Now that the exclusions have been set, we can reconstruct the sensor signals with artifacts removed using the apply method
Step12: Using an EOG channel to select ICA components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It may have seemed easy to review the plots and manually select which ICs to
exclude, but when processing dozens or hundreds of subjects this can become
a tedious, rate-limiting step in the analysis pipeline. One alternative is to
use dedicated EOG or ECG sensors as a "pattern" to check the ICs against, and
automatically mark for exclusion any ICs that match the EOG/ECG pattern. Here
we'll use
Step13: Note that above we used plot_sources on both the original Raw data and the extracted EOG artifacts
Step14: The last of these plots is especially useful
Step15: Much better! Now we've captured both ICs that are reflecting the heartbeat
artifact (and as a result, we got two diagnostic plots: one for each IC that reflects the heartbeat)
Step16: Selecting ICA components using template matching
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When dealing with multiple subjects, it is also possible to manually select
an IC for exclusion on one subject, and then use that component as a
template for selecting which ICs to exclude from other subjects' data,
using corrmap.
Step17: Now let's run corrmap
Step18: The first figure shows the template map, while the second figure shows all
the maps that were considered a "match" for the template (including the
template itself). There were only three matches from the four subjects;
notice the output message No maps selected for subject(s) 1, consider a
more liberal threshold. By default the threshold is set automatically by
trying several values; here it may have chosen a threshold that is too high.
Let's take a look at the ICA sources for each subject
Step19: Notice that subject 1 does seem to have an IC that looks like it reflects
blink artifacts (component ICA000). Notice also that subject 3 appears to
have two components that are reflecting ocular artifacts (ICA000 and
ICA002), but only one was caught by corrmap.
Step20: Now we get the message At least 1 IC detected for each subject (which is
good). At this point we'll re-run corrmap.
Step21: Notice that the first subject has 3 different labels for the IC at index 0
Step22: As a final note, it is possible to extract ICs numerically using the | Python Code:
import os
import mne
from mne.preprocessing import (ICA, create_eog_epochs, create_ecg_epochs,
corrmap)
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60.)
Explanation: Repairing artifacts with ICA
This tutorial covers the basics of independent components analysis (ICA) and
shows how ICA can be used for artifact repair; an extended example illustrates
repair of ocular and heartbeat artifacts.
:depth: 2
We begin as always by importing the necessary Python modules and loading some
example data <sample-dataset>. Because ICA can be computationally
intense, we'll also crop the data to 60 seconds; and to save ourselves from
repeatedly typing mne.preprocessing we'll directly import a few functions
and classes from that submodule:
End of explanation
# pick some channels that clearly show heartbeats and blinks
regexp = r'(MEG [12][45][123]1|EEG 00.)'
artifact_picks = mne.pick_channels_regexp(raw.ch_names, regexp=regexp)
raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Before applying ICA (or any artifact repair strategy), be sure to observe
the artifacts in your data to make sure you choose the right repair tool.
Sometimes the right tool is no tool at all — if the artifacts are small
enough you may not even need to repair them to get good analysis results.
See `tut-artifact-overview` for guidance on detecting and
visualizing various types of artifact.</p></div>
What is ICA?
^^^^^^^^^^^^
Independent components analysis (ICA) is a technique for estimating
independent source signals from a set of recordings in which the source
signals were mixed together in unknown ratios. A common example of this is
the problem of blind source separation_: with 3 musical instruments playing
in the same room, and 3 microphones recording the performance (each picking
up all 3 instruments, but at varying levels), can you somehow "unmix" the
signals recorded by the 3 microphones so that you end up with a separate
"recording" isolating the sound of each instrument?
It is not hard to see how this analogy applies to EEG/MEG analysis: there are
many "microphones" (sensor channels) simultaneously recording many
"instruments" (blinks, heartbeats, activity in different areas of the brain,
muscular activity from jaw clenching or swallowing, etc). As long as these
various source signals are statistically independent_ and non-gaussian, it
is usually possible to separate the sources using ICA, and then re-construct
the sensor signals after excluding the sources that are unwanted.
ICA in MNE-Python
~~~~~~~~~~~~~~~~~
MNE-Python implements three different ICA algorithms: fastica (the
default), picard, and infomax. FastICA and Infomax are both in fairly
widespread use; Picard is a newer (2017) algorithm that is expected to
converge faster than FastICA and Infomax, and is more robust than other
algorithms in cases where the sources are not completely independent, which
typically happens with real EEG/MEG data. See [1]_ for more information.
The ICA interface in MNE-Python is similar to the interface in
scikit-learn_: some general parameters are specified when creating an
:class:~mne.preprocessing.ICA object, then the
:class:~mne.preprocessing.ICA object is fit to the data using its
:meth:~mne.preprocessing.ICA.fit method. The results of the fitting are
added to the :class:~mne.preprocessing.ICA object as attributes that end in
an underscore (_), such as ica.mixing_matrix_ and
ica.unmixing_matrix_. After fitting, the ICA component(s) that you want
to remove must be chosen, and the ICA fit must then be applied to the
:class:~mne.io.Raw or :class:~mne.Epochs object using the
:class:~mne.preprocessing.ICA object's :meth:~mne.preprocessing.ICA.apply
method.
.. sidebar:: ICA and dimensionality reduction
If you want to perform ICA with no dimensionality reduction (other than
the number of Independent Components (ICs) given in ``n_components``, and
any subsequent exclusion of ICs you specify in ``ICA.exclude``), pass
``max_pca_components=None`` and ``n_pca_components=None`` (these are the
default values). If you want to reduce dimensionality, consider this
example: if you have 300 sensor channels and you set
``max_pca_components=200``, ``n_components=50`` and
``n_pca_components=None``, then the PCA step yields 200 PCs, the first 50
PCs are sent to the ICA algorithm (yielding 50 ICs), and during
reconstruction :meth:`~mne.preprocessing.ICA.apply` will use the 50 ICs
plus PCs number 51-200 (the full PCA residual). If instead you specify
``n_pca_components=120`` then :meth:`~mne.preprocessing.ICA.apply` will
reconstruct using the 50 ICs plus the first 70 PCs in the PCA residual
(numbers 51-120).
As is typically done with ICA, the data are first scaled to unit variance and
whitened using principal components analysis (PCA) before performing the ICA
decomposition. You can impose an optional dimensionality reduction at this
step by specifying max_pca_components. From the retained Principal
Components (PCs), the first n_components are then passed to the ICA
algorithm (n_components may be an integer number of components to use, or
a fraction of explained variance that used components should capture).
After visualizing the Independent Components (ICs) and excluding any that
capture artifacts you want to repair, the sensor signal can be reconstructed
using the :class:~mne.preprocessing.ICA object's
:meth:~mne.preprocessing.ICA.apply method. By default, signal
reconstruction uses all of the ICs (less any ICs listed in ICA.exclude)
plus all of the PCs that were not included in the ICA decomposition (i.e.,
the "PCA residual"). If you want to reduce the number of components used at
the reconstruction stage, it is controlled by the n_pca_components
parameter (which will in turn reduce the rank of your data; by default
n_pca_components = max_pca_components resulting in no additional
dimensionality reduction). The fitting and reconstruction procedures and the
parameters that control dimensionality at various stages are summarized in
the diagram below:
.. graphviz:: ../../_static/diagrams/ica.dot
:alt: Diagram of ICA procedure in MNE-Python
:align: left
See the Notes section of the :class:~mne.preprocessing.ICA documentation
for further details. Next we'll walk through an extended example that
illustrates each of these steps in greater detail.
Example: EOG and ECG artifact repair
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Visualizing the artifacts
~~~~~~~~~~~~~~~~~~~~~~~~~
Let's begin by visualizing the artifacts that we want to repair. In this
dataset they are big enough to see easily in the raw data:
End of explanation
eog_evoked = create_eog_epochs(raw).average()
eog_evoked.plot_joint()
Explanation: We can get a summary of how the ocular artifact manifests across each channel
type using :func:~mne.preprocessing.create_eog_epochs like we did in the
tut-artifact-overview tutorial:
End of explanation
ecg_evoked = create_ecg_epochs(raw).average()
ecg_evoked.plot_joint()
Explanation: Now we'll do the same for the heartbeat artifacts, using
:func:~mne.preprocessing.create_ecg_epochs:
End of explanation
filt_raw = raw.copy()
filt_raw.load_data().filter(l_freq=1., h_freq=None)
Explanation: Filtering to remove slow drifts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Before we run the ICA, an important step is filtering the data to remove
low-frequency drifts, which can negatively affect the quality of the ICA fit.
The slow drifts are problematic because they reduce the independence of the
assumed-to-be-independent sources (e.g., during a slow upward drift, the
neural, heartbeat, blink, and other muscular sources will all tend to have
higher values), making it harder for the algorithm to find an accurate
solution. A high-pass filter with 1 Hz cutoff frequency is recommended.
However, because filtering is a linear operation, the ICA solution found from
the filtered signal can be applied to the unfiltered signal (see [2]_ for
more information), so we'll keep a copy of the unfiltered
:class:~mne.io.Raw object around so we can apply the ICA solution to it
later.
End of explanation
ica = ICA(n_components=15, random_state=97)
ica.fit(filt_raw)
Explanation: Fitting and plotting the ICA solution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. sidebar:: Ignoring the time domain
The ICA algorithms implemented in MNE-Python find patterns across
channels, but ignore the time domain. This means you can compute ICA on
discontinuous :class:`~mne.Epochs` or :class:`~mne.Evoked` objects (not
just continuous :class:`~mne.io.Raw` objects), or only use every Nth
sample by passing the ``decim`` parameter to ``ICA.fit()``.
Now we're ready to set up and fit the ICA. Since we know (from observing our
raw data) that the EOG and ECG artifacts are fairly strong, we would expect
those artifacts to be captured in the first few dimensions of the PCA
decomposition that happens before the ICA. Therefore, we probably don't need
a huge number of components to do a good job of isolating our artifacts
(though it is usually preferable to include more components for a more
accurate solution). As a first guess, we'll run ICA with n_components=15
(use only the first 15 PCA components to compute the ICA decomposition) — a
very small number given that our data has over 300 channels, but with the
advantage that it will run quickly and we will able to tell easily whether it
worked or not (because we already know what the EOG / ECG artifacts should
look like).
ICA fitting is not deterministic (e.g., the components may get a sign
flip on different runs, or may not always be returned in the same order), so
we'll also specify a random seed_ so that we get identical results each
time this tutorial is built by our web servers.
End of explanation
raw.load_data()
ica.plot_sources(raw)
Explanation: Some optional parameters that we could have passed to the
:meth:~mne.preprocessing.ICA.fit method include decim (to use only
every Nth sample in computing the ICs, which can yield a considerable
speed-up) and reject (for providing a rejection dictionary for maximum
acceptable peak-to-peak amplitudes for each channel type, just like we used
when creating epoched data in the tut-overview tutorial).
Now we can examine the ICs to see what they captured.
:meth:~mne.preprocessing.ICA.plot_sources will show the time series of the
ICs. Note that in our call to :meth:~mne.preprocessing.ICA.plot_sources we
can use the original, unfiltered :class:~mne.io.Raw object:
End of explanation
ica.plot_components()
Explanation: Here we can pretty clearly see that the first component (ICA000) captures
the EOG signal quite well, and the second component (ICA001) looks a lot
like a heartbeat (for more info on visually identifying Independent
Components, this EEGLAB tutorial is a good resource). We can also
visualize the scalp field distribution of each component using
:meth:~mne.preprocessing.ICA.plot_components. These are interpolated based
on the values in the ICA unmixing matrix:
End of explanation
# blinks
ica.plot_overlay(raw, exclude=[0], picks='eeg')
# heartbeats
ica.plot_overlay(raw, exclude=[1], picks='mag')
Explanation: <div class="alert alert-info"><h4>Note</h4><p>:meth:`~mne.preprocessing.ICA.plot_components` (which plots the scalp
field topographies for each component) has an optional ``inst`` parameter
that takes an instance of :class:`~mne.io.Raw` or :class:`~mne.Epochs`.
Passing ``inst`` makes the scalp topographies interactive: clicking one
will bring up a diagnostic :meth:`~mne.preprocessing.ICA.plot_properties`
window (see below) for that component.</p></div>
In the plots above it's fairly obvious which ICs are capturing our EOG and
ECG artifacts, but there are additional ways visualize them anyway just to
be sure. First, we can plot an overlay of the original signal against the
reconstructed signal with the artifactual ICs excluded, using
:meth:~mne.preprocessing.ICA.plot_overlay:
End of explanation
ica.plot_properties(raw, picks=[0, 1])
Explanation: We can also plot some diagnostics of each IC using
:meth:~mne.preprocessing.ICA.plot_properties:
End of explanation
ica.exclude = [0, 1] # indices chosen based on various plots above
Explanation: In the remaining sections, we'll look at different ways of choosing which ICs
to exclude prior to reconstructing the sensor signals.
Selecting ICA components manually
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once we're certain which components we want to exclude, we can specify that
manually by setting the ica.exclude attribute. Similar to marking bad
channels, merely setting ica.exclude doesn't do anything immediately (it
just adds the excluded ICs to a list that will get used later when it's
needed). Once the exclusions have been set, ICA methods like
:meth:~mne.preprocessing.ICA.plot_overlay will exclude those component(s)
even if no exclude parameter is passed, and the list of excluded
components will be preserved when using :meth:mne.preprocessing.ICA.save
and :func:mne.preprocessing.read_ica.
End of explanation
# ica.apply() changes the Raw object in-place, so let's make a copy first:
reconst_raw = raw.copy()
ica.apply(reconst_raw)
raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
reconst_raw.plot(order=artifact_picks, n_channels=len(artifact_picks))
del reconst_raw
Explanation: Now that the exclusions have been set, we can reconstruct the sensor signals
with artifacts removed using the :meth:~mne.preprocessing.ICA.apply method
(remember, we're applying the ICA solution from the filtered data to the
original unfiltered signal). Plotting the original raw data alongside the
reconstructed data shows that the heartbeat and blink artifacts are repaired.
End of explanation
ica.exclude = []
# find which ICs match the EOG pattern
eog_indices, eog_scores = ica.find_bads_eog(raw)
ica.exclude = eog_indices
# barplot of ICA component "EOG match" scores
ica.plot_scores(eog_scores)
# plot diagnostics
ica.plot_properties(raw, picks=eog_indices)
# plot ICs applied to raw data, with EOG matches highlighted
ica.plot_sources(raw)
# plot ICs applied to the averaged EOG epochs, with EOG matches highlighted
ica.plot_sources(eog_evoked)
Explanation: Using an EOG channel to select ICA components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It may have seemed easy to review the plots and manually select which ICs to
exclude, but when processing dozens or hundreds of subjects this can become
a tedious, rate-limiting step in the analysis pipeline. One alternative is to
use dedicated EOG or ECG sensors as a "pattern" to check the ICs against, and
automatically mark for exclusion any ICs that match the EOG/ECG pattern. Here
we'll use :meth:~mne.preprocessing.ICA.find_bads_eog to automatically find
the ICs that best match the EOG signal, then use
:meth:~mne.preprocessing.ICA.plot_scores along with our other plotting
functions to see which ICs it picked. We'll start by resetting
ica.exclude back to an empty list:
End of explanation
ica.exclude = []
# find which ICs match the ECG pattern
ecg_indices, ecg_scores = ica.find_bads_ecg(raw, method='correlation')
ica.exclude = ecg_indices
# barplot of ICA component "ECG match" scores
ica.plot_scores(ecg_scores)
# plot diagnostics
ica.plot_properties(raw, picks=ecg_indices)
# plot ICs applied to raw data, with ECG matches highlighted
ica.plot_sources(raw)
# plot ICs applied to the averaged ECG epochs, with ECG matches highlighted
ica.plot_sources(ecg_evoked)
Explanation: Note that above we used :meth:~mne.preprocessing.ICA.plot_sources on both
the original :class:~mne.io.Raw instance and also on an
:class:~mne.Evoked instance of the extracted EOG artifacts. This can be
another way to confirm that :meth:~mne.preprocessing.ICA.find_bads_eog has
identified the correct components.
Using a simulated channel to select ICA components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you don't have an EOG channel,
:meth:~mne.preprocessing.ICA.find_bads_eog has a ch_name parameter that
you can use as a proxy for EOG. You can use a single channel, or create a
bipolar reference from frontal EEG sensors and use that as virtual EOG
channel. This carries a risk however: you must hope that the frontal EEG
channels only reflect EOG and not brain dynamics in the prefrontal cortex (or
you must not care about those prefrontal signals).
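A minimal sketch of that proxy-channel idea follows; the anode/cathode picks are illustrative frontal channels from this dataset, not a recommendation.
raw_bipolar = mne.set_bipolar_reference(raw.copy(), anode='EEG 001', cathode='EEG 002',
                                        ch_name='virtual_eog')
eog_idx, eog_scores = ica.find_bads_eog(raw_bipolar, ch_name='virtual_eog')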
For ECG, it is easier: :meth:~mne.preprocessing.ICA.find_bads_ecg can use
cross-channel averaging of magnetometer or gradiometer channels to construct
a virtual ECG channel, so if you have MEG channels it is usually not
necessary to pass a specific channel name.
:meth:~mne.preprocessing.ICA.find_bads_ecg also has two options for its
method parameter: 'ctps' (cross-trial phase statistics [3]_) and
'correlation' (Pearson correlation between data and ECG channel).
End of explanation
# refit the ICA with 30 components this time
new_ica = ICA(n_components=30, random_state=97)
new_ica.fit(filt_raw)
# find which ICs match the ECG pattern
ecg_indices, ecg_scores = new_ica.find_bads_ecg(raw, method='correlation')
new_ica.exclude = ecg_indices
# barplot of ICA component "ECG match" scores
new_ica.plot_scores(ecg_scores)
# plot diagnostics
new_ica.plot_properties(raw, picks=ecg_indices)
# plot ICs applied to raw data, with ECG matches highlighted
new_ica.plot_sources(raw)
# plot ICs applied to the averaged ECG epochs, with ECG matches highlighted
new_ica.plot_sources(ecg_evoked)
Explanation: The last of these plots is especially useful: it shows us that the heartbeat
artifact is coming through on two ICs, and we've only caught one of them.
In fact, if we look closely at the output of
:meth:~mne.preprocessing.ICA.plot_sources (online, you can right-click →
"view image" to zoom in), it looks like ICA014 has a weak periodic
component that is in-phase with ICA001. It might be worthwhile to re-run
the ICA with more components to see if that second heartbeat artifact
resolves out a little better:
End of explanation
# clean up memory before moving on
del raw, filt_raw, ica, new_ica
Explanation: Much better! Now we've captured both ICs that are reflecting the heartbeat
artifact (and as a result, we got two diagnostic plots: one for each IC that
reflects the heartbeat). This demonstrates the value of checking the results
of automated approaches like :meth:~mne.preprocessing.ICA.find_bads_ecg
before accepting them.
End of explanation
mapping = {
'Fc5.': 'FC5', 'Fc3.': 'FC3', 'Fc1.': 'FC1', 'Fcz.': 'FCz', 'Fc2.': 'FC2',
'Fc4.': 'FC4', 'Fc6.': 'FC6', 'C5..': 'C5', 'C3..': 'C3', 'C1..': 'C1',
'Cz..': 'Cz', 'C2..': 'C2', 'C4..': 'C4', 'C6..': 'C6', 'Cp5.': 'CP5',
'Cp3.': 'CP3', 'Cp1.': 'CP1', 'Cpz.': 'CPz', 'Cp2.': 'CP2', 'Cp4.': 'CP4',
'Cp6.': 'CP6', 'Fp1.': 'Fp1', 'Fpz.': 'Fpz', 'Fp2.': 'Fp2', 'Af7.': 'AF7',
'Af3.': 'AF3', 'Afz.': 'AFz', 'Af4.': 'AF4', 'Af8.': 'AF8', 'F7..': 'F7',
'F5..': 'F5', 'F3..': 'F3', 'F1..': 'F1', 'Fz..': 'Fz', 'F2..': 'F2',
'F4..': 'F4', 'F6..': 'F6', 'F8..': 'F8', 'Ft7.': 'FT7', 'Ft8.': 'FT8',
'T7..': 'T7', 'T8..': 'T8', 'T9..': 'T9', 'T10.': 'T10', 'Tp7.': 'TP7',
'Tp8.': 'TP8', 'P7..': 'P7', 'P5..': 'P5', 'P3..': 'P3', 'P1..': 'P1',
'Pz..': 'Pz', 'P2..': 'P2', 'P4..': 'P4', 'P6..': 'P6', 'P8..': 'P8',
'Po7.': 'PO7', 'Po3.': 'PO3', 'Poz.': 'POz', 'Po4.': 'PO4', 'Po8.': 'PO8',
'O1..': 'O1', 'Oz..': 'Oz', 'O2..': 'O2', 'Iz..': 'Iz'
}
raws = list()
icas = list()
for subj in range(4):
# EEGBCI subjects are 1-indexed; run 3 is a left/right hand movement task
fname = mne.datasets.eegbci.load_data(subj + 1, runs=[3])[0]
raw = mne.io.read_raw_edf(fname)
# remove trailing `.` from channel names so we can set montage
raw.rename_channels(mapping)
raw.set_montage('standard_1005')
# fit ICA
ica = ICA(n_components=30, random_state=97)
ica.fit(raw)
raws.append(raw)
icas.append(ica)
Explanation: Selecting ICA components using template matching
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When dealing with multiple subjects, it is also possible to manually select
an IC for exclusion on one subject, and then use that component as a
template for selecting which ICs to exclude from other subjects' data,
using :func:mne.preprocessing.corrmap [4]_. The idea behind
:func:~mne.preprocessing.corrmap is that the artifact patterns are similar
enough across subjects that corresponding ICs can be identified by
correlating the ICs from each ICA solution with a common template, and
picking the ICs with the highest correlation strength.
:func:~mne.preprocessing.corrmap takes a list of ICA solutions, and a
template parameter that specifies which ICA object and which component
within it to use as a template.
Since our sample dataset only contains data from one subject, we'll use a
different dataset with multiple subjects: the EEGBCI dataset [5] [6]. The
dataset has 109 subjects, we'll just download one run (a left/right hand
movement task) from each of the first 4 subjects:
End of explanation
# use the first subject as template; use Fpz as proxy for EOG
raw = raws[0]
ica = icas[0]
eog_inds, eog_scores = ica.find_bads_eog(raw, ch_name='Fpz')
corrmap(icas, template=(0, eog_inds[0]))
Explanation: Now let's run :func:~mne.preprocessing.corrmap:
End of explanation
for index, (ica, raw) in enumerate(zip(icas, raws)):
fig = ica.plot_sources(raw)
fig.suptitle('Subject {}'.format(index))
Explanation: The first figure shows the template map, while the second figure shows all
the maps that were considered a "match" for the template (including the
template itself). There were only three matches from the four subjects;
notice the output message No maps selected for subject(s) 1, consider a
more liberal threshold. By default the threshold is set automatically by
trying several values; here it may have chosen a threshold that is too high.
Let's take a look at the ICA sources for each subject:
End of explanation
corrmap(icas, template=(0, eog_inds[0]), threshold=0.9)
Explanation: Notice that subject 1 does seem to have an IC that looks like it reflects
blink artifacts (component ICA000). Notice also that subject 3 appears to
have two components that are reflecting ocular artifacts (ICA000 and
ICA002), but only one was caught by :func:~mne.preprocessing.corrmap.
Let's try setting the threshold manually:
End of explanation
corrmap(icas, template=(0, eog_inds[0]), threshold=0.9, label='blink',
plot=False)
print([ica.labels_ for ica in icas])
Explanation: Now we get the message At least 1 IC detected for each subject (which is
good). At this point we'll re-run :func:~mne.preprocessing.corrmap with
parameters label='blink', plot=False to label the ICs from each subject
that capture the blink artifacts (without plotting them again).
End of explanation
icas[3].plot_components(picks=icas[3].labels_['blink'])
icas[3].exclude = icas[3].labels_['blink']
icas[3].plot_sources(raws[3])
Explanation: Notice that the first subject has 3 different labels for the IC at index 0:
"eog/0/Fpz", "eog", and "blink". The first two were added by
:meth:~mne.preprocessing.ICA.find_bads_eog; the "blink" label was added by
the last call to :func:~mne.preprocessing.corrmap. Notice also that each
subject has at least one IC index labelled "blink", and subject 3 has two
components (0 and 2) labelled "blink" (consistent with the plot of IC sources
above). The labels_ attribute of :class:~mne.preprocessing.ICA objects
can also be manually edited to annotate the ICs with custom labels. They also
come in handy when plotting:
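For example, a custom annotation could be added by hand; the component index here is purely hypothetical.
icas[0].labels_['line_noise'] = [5]   # hypothetical: tag IC 5 of subject 0 as line noise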
End of explanation
template_eog_component = icas[0].get_components()[:, eog_inds[0]]
corrmap(icas, template=template_eog_component, threshold=0.9)
print(template_eog_component)
Explanation: As a final note, it is possible to extract ICs numerically using the
:meth:~mne.preprocessing.ICA.get_components method of
:class:~mne.preprocessing.ICA objects. This will return a :class:NumPy
array <numpy.ndarray> that can be passed to
:func:~mne.preprocessing.corrmap instead of the :class:tuple of
(subject_index, component_index) we passed before, and will yield the
same result:
End of explanation |
1,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification of Organisms
Using a Digital Dichotomous Key
Step 1 - Creating a Checkpoint
Create a checkpoint by clicking <b>File</b> ==> <b>Save and Checkpoint</b>. If you make a major mistake, you can click <u>File</u> ==> <u>Revert to Checkpoint</u> to reset the Jupyter Notebook online on Binder.org.
Importing the Data
The next 2 blocks of code imports the data that we will need to examine the caracteristics of many different organisms. You can begin to execute the cells using <b> Shift + Enter </b> to import the data set and continue.
Step1: Pre-Questions
A Dichotomous Key is....
a tool that allows scienctists to identify and classify organisms in the natural world. Based on their characterists, scienctists can narrow down species into groups such as trees, flowers, mammals, reptiles, rocks, and fish. A Dichotomous Key can help to understand how scientists have classified organisms using Bionomial Nomenclature.
<b><span style="color
Step2: PART 1
Step3: Use and modify the section of code below to answer questions 3-5.
Step4: PART 1
Step5: PART 2
Step6: PART 3 | Python Code:
# Import modules that contain functions we need
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Our data is the dichotomous key table and is defined as the word 'key'.
# key is set equal to the .csv file that is read by pandas.
# The .csv file must be in the same directory as the program.
#If the data is being pulled locally use the code that is commented out below
#key = pd.read_csv("Classification of Organisms- Jupyter Data.csv")
#key2 = pd.read_csv("Classification of Organisms- Jupyter Data KEY 2.csv")
key = pd.read_csv("https://gist.githubusercontent.com/GoodmanSciences/f4d51945a169ef3125234c57b878e058/raw/bebeaae8038f0b418ed37c2a98b82aa9d3cc38d1/Classification%2520of%2520Organisms-Jupyter%2520Data.csv")
key2 = pd.read_csv("https://gist.githubusercontent.com/GoodmanSciences/4060d993635e90cdcc46fe637c92ee37/raw/d9031747855b9762b239dea07a60254eaa6051f7/Classification%2520of%2520Organisms-%2520Jupyter%2520Data%2520KEY%25202.csv")
# This sets Organism as the index instead of numbers
#key = data.set_index("organism")
Explanation: Classification of Organisms
Using a Digital Dichotomous Key
Step 1 - Creating a Checkpoint
Create a checkpoint by clicking <b>File</b> ==> <b>Save and Checkpoint</b>. If you make a major mistake, you can click <u>File</u> ==> <u>Revert to Checkpoint</u> to reset the Jupyter Notebook online on Binder.org.
Importing the Data
The next 2 blocks of code import the data that we will need to examine the characteristics of many different organisms. You can begin to execute the cells using <b> Shift + Enter </b> to import the data set and continue.
End of explanation
# Here is a helpful image of a sample Dichotomous Key!
from IPython.display import Image
from IPython.core.display import HTML
Image(url= 'http://biology-igcse.weebly.com/uploads/1/5/0/7/15070316/8196495_orig.gif')
Explanation: Pre-Questions
A Dichotomous Key is....
a tool that allows scientists to identify and classify organisms in the natural world. Based on their characteristics, scientists can narrow down species into groups such as trees, flowers, mammals, reptiles, rocks, and fish. A Dichotomous Key can help to understand how scientists have classified organisms using Binomial Nomenclature.
<b><span style="color:green">You can find out more about Dichotomous Keys by watching this video.</span></b> Dichotomous Key Video
Pre-Questions
1.After watching the video and reading, what is a dichotomous key and why is it useful for scientists who study organisms.
2.Why do scientists classify organisms? How does this help with research?
End of explanation
# Animal options in Dichotomous Key
# Displays all row titles as an array
key.organism
# Conditions/Questions for finding the correct animal
# Displays all column titles as an array
key.columns
Explanation: PART 1: Sorting Organisms by One Characteristic
We will be looking at the characteristics of 75 unique organisms in our Dichotomous Key. The input below will provide us with some of the possible organisms you may discover and the different Organism Characteristics/Conditions in our data set.
End of explanation
key[(key['fur'] == 'yes')]
Explanation: Use and modify the section of code below to answer questions 3-5.
End of explanation
# This conditional allows us to query a column and if the data within that cell matches it will display the animal(s).
#if you are unsure of what to put try making that column a comment by adding # in front of it.
key[
#physical characteristics
(key['fur'] == 'yes') & \
(key['feathers'] == 'no') & \
(key['poisonous'] == 'no') & \
(key['scales'] == 'no') & \
(key['multicellular'] == 'yes') & \
(key['fins'] == 'no') & \
(key['wings'] == 'no') & \
(key['vertebrate'] == 'yes') & \
#environmental characteristics
(key['marine'] == 'no') & \
(key['terrestrial'] == 'yes') & \
#feeding characteristics
#decomposers get their food by breaking down decaying organisms
(key['decomposer'] == 'no') & \
#carnivores get their food by eating animals
(key['carnivore'] == 'no') & \
#herbivores get their food by eating plants
(key['herbivore'] == 'yes') & \
#omnivores get their food by eating both plants and animals
(key['omnivore'] == 'no') & \
#photosynthesis is the process of making food using energy from sunlight
(key['photosynthesis'] == 'no') & \
#autotrophs are organisms that generate their own food inside themselves
(key['autotroph'] == 'no') & \
#possible kingdoms include: animalia, plantae, fungi
(key['kingdom'] == 'animalia') & \
#cell type
(key['eukaryotic'] == 'yes') & \
(key['prokaryotic'] == 'no')
]
Explanation: PART 1: Sorting Organisms by One Characteristic
3.Organisms are classified by shared characteristics. Is it possible for something to be eukaryotic and prokaryotic at the same time? Why might this kind of trait be helpful for scientists?
4.How many different organisms in our list of 75 have wings? Are they all similar? Are “wings” a good characteristic to use for classification?
5.Which characteristic gave you the largest category? Which gave you the smallest? Why might this be the case?
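For the counting questions above, pandas aggregation on the key table can help (a sketch):
key['wings'].value_counts()           # how many organisms do / don't have wings (question 4)
(key == 'yes').sum().sort_values()    # rough count of 'yes' answers per characteristic (question 5)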
PART 2: Sorting Organisms by Many Characteristics
These are the conditions or characteristics by which certain answers are categorized for certain organisms. Each characteristic/condition has a yes/no answer except for the Kingdoms. Change the conditionals in the code below to change what organism(s) are displayed. For most, the only change needed is the 'yes' or 'no'.
<span style="color:red">Capitalization matters so be careful. You also must put in only allowed answers in every condition or the code will break!</span>
Use and modify the section of code below to answer questions 6-8 in your coding booklet.
End of explanation
#sort your organisms by their taxonomical classification
# This conditional allows us to query a column and if the data within that cell matches,
# it will display the corresponding animal(s)
key2[(key2['kingdom'] == 'animalia')]
Explanation: PART 2: Sorting Organisms by Many Characteristics
6.Set the list to the characteristics of a cow. What are some other organisms that are sorted this way? What new traits would make the cow the only result?
7.How would the list of characteristics differ between a whale and dolphin?
8.A zoologist is exploring the jungle when she spots a small, hairy animal. As she follows the animal she sees it eat nuts from a tree and some insects off the ground. She also observes that even though it appears to have wings, it seems to prefer to move along the ground. According to your key, what organism is it most similar to, how do the observed characteristics differ from what you know about this animal?
Part 3 & 4: Scientific Classification of Organisms & Unstructured Coding
Use and modify the section of code below to answer questions 9-13 in your coding booklet.
End of explanation
#Done?? Insert a image for one of the organisms you found using the dichotomous key.
from IPython.display import Image
from IPython.core.display import HTML
Image(url= 'https://lms.mrc.ac.uk/wp-content/uploads/insert-pretty-picture-here1.jpg')
Explanation: PART 3: Scientific Classification of Organisms
9.Are all organisms in the same kingdom classified in the same phylum?
10.If organisms are in the same order (like rodentia), describe how their kingdom, phylum, and classes compare. Explain.
PART 4: Unstructured Coding
11.What are two examples of organisms in kingdom plantae?
12.What are two examples of organisms in kingdom fungi?
13.Find a few poisonous animals by changing the conditional statements.
End of explanation |
1,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Single Replica TIS
This notebook shows how to run single replica TIS move scheme. This assumes you can load engine, network, and initial sample from a previous calculation.
Step2: Open the storage and load things from it.
Step3: One of the points of SRTIS is that we use a bias (which comes from an estimate of the crossing probability) in order to improve our sampling.
Step4: Here we actually set up the SRTIS move scheme for the given network. It only requires one line
Step5: Now we'll visualize the SRTIS move scheme.
Step6: Next we need to set up an appropriate single-replica initial sampleset. We'll take the last version of from one of the outer TIS ensembles.
Step7: Finally, we set up the new storage file and the new simulation.
Step8: From here, we'll be doing the analysis of the SRTIS run. | Python Code:
%matplotlib inline
import openpathsampling as paths
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from openpathsampling.visualize import PathTreeBuilder, PathTreeBuilder
from IPython.display import SVG, HTML
def ipynb_visualize(movevis):
    '''Default settings to show a movevis in an ipynb.'''
view = movevis.renderer
view.zoom = 1.5
view.scale_y = 18
view.scale_th = 20
view.font_size = 0.4
return view
Explanation: Single Replica TIS
This notebook shows how to run single replica TIS move scheme. This assumes you can load engine, network, and initial sample from a previous calculation.
End of explanation
old_store = paths.AnalysisStorage("mstis.nc")
#old_store = paths.Storage("mstis.nc") # if not actually doing analysis, but loading network, etc
network = old_store.networks[0]
engine = old_store.engines[0]
template = old_store.snapshots[0]
Explanation: Open the storage and load things from it.
End of explanation
# this is how we would get it out of a simulation (although the actual simulation here has bad stats)
# first, we need the crossing probabilities, which we get when we calculate the rate
network.hist_args['max_lambda'] = { 'bin_width' : 0.02, 'bin_range' : (0.0, 0.5) }
network.hist_args['pathlength'] = { 'bin_width' : 5, 'bin_range' : (0, 150) }
rates = network.rate_matrix(old_store.steps)
# just use the analyzed network to make the bias
bias = paths.SRTISBiasFromNetwork(network)
bias.df
# For better stats, use the results that I got from a 20k MC step run
# We can create fake TCPs and force them on the network.
tcp_A = paths.numerics.LookupFunction.from_dict(
{0.2: 1.0,
0.3: 0.13293104100673198,
0.4: 0.044370838092911397,
0.5: 0.021975696374764188}
)
tcp_B = paths.numerics.LookupFunction.from_dict(
{0.2: 1.0,
0.3: 0.13293104100673198,
0.4: 0.044370838092911397,
0.5: 0.021975696374764188}
)
tcp_C = paths.numerics.LookupFunction.from_dict(
{0.2: 1.0,
0.3: 0.19485705066078274,
0.4: 0.053373003923696649,
0.5: 0.029175949467020165}
)
# load states for identification purposes
stateA = old_store.volumes['A']
stateB = old_store.volumes['B']
stateC = old_store.volumes['C']
# use the sampling transitions; in MSTIS, these are also stored in from_state
network.from_state[stateA].tcp = tcp_A
network.from_state[stateB].tcp = tcp_B
network.from_state[stateC].tcp = tcp_C
bias = paths.SRTISBiasFromNetwork(network)
bias.df
Explanation: One of the points of SRTIS is that we use a bias (which comes from an estimate of the crossing probability) in order to improve our sampling.
End of explanation
scheme = paths.SRTISScheme(network, bias=bias, engine=engine)
Explanation: Here we actually set up the SRTIS move scheme for the given network. It only requires one line:
End of explanation
movevis = paths.visualize.MoveTreeBuilder()
#movevis.mover(scheme.move_decision_tree(), network.all_ensembles)
#SVG(ipynb_visualize(movevis).to_svg())
Explanation: Now we'll visualize the SRTIS move scheme.
End of explanation
final_samp0 = old_store.steps[len(old_store.steps)-1].active[network.sampling_ensembles[-1]]
sset = paths.SampleSet([final_samp0])
Explanation: Next we need to set up an appropriate single-replica initial sampleset. We'll take the last version of from one of the outer TIS ensembles.
End of explanation
storage = paths.Storage("srtis.nc", "w")
storage.save(template)
srtis = paths.PathSampling(
storage=storage,
sample_set=sset,
move_scheme=scheme
)
n_steps_to_run = int(scheme.n_steps_for_trials(
mover=scheme.movers['minus'][0],
n_attempts=1
))
print(n_steps_to_run)
# logging creates ops_output.log file with details of what the calculation is doing
#import logging.config
#logging.config.fileConfig("logging.conf", disable_existing_loggers=False)
%%time
multiplier = 2
srtis.run_until(multiplier*n_steps_to_run)
#storage.close()
Explanation: Finally, we set up the new storage file and the new simulation.
End of explanation
%%time
#storage = paths.AnalysisStorage("srtis.nc")
#scheme = storage.schemes[0]
scheme.move_summary(storage.steps)
scheme.move_summary(storage.steps, 'shooting')
scheme.move_summary(storage.steps, 'minus')
scheme.move_summary(storage.steps, 'repex')
scheme.move_summary(storage.steps, 'pathreversal')
replica = storage.samplesets[0].samples[0].replica
ensemble_trace = paths.trace_ensembles_for_replica(replica, storage.steps)
print len(ensemble_trace)
srtis_ensembles = scheme.network.sampling_ensembles+scheme.network.special_ensembles['ms_outer'].keys()
srtis_ensemble_numbers = {e : srtis_ensembles.index(e) for e in srtis_ensembles}
# this next is just for pretty printing
srtis_numbers_ensemble = {srtis_ensemble_numbers[e] : e for e in srtis_ensemble_numbers}
for k in sorted(srtis_numbers_ensemble.keys()):
print k, ":", srtis_numbers_ensemble[k].name
plt.plot([srtis_ensemble_numbers[e] for e in ensemble_trace])
count = 0
for i in range(len(ensemble_trace)-1):
[this_val, next_val] = [srtis_ensemble_numbers[ensemble_trace[k]] for k in [i,i+1]]
if this_val == 1 and next_val == 0:
count += 1
count
hist_numbers = [srtis_ensemble_numbers[e] for e in ensemble_trace]
bins = [i-0.5 for i in range(len(srtis_ensembles)+1)]
plt.hist(hist_numbers, bins=bins);
import pandas as pd
hist = paths.analysis.Histogram(bin_width=1.0, bin_range=[-0.5,9.5])
colnames = {i : srtis_numbers_ensemble[i].name for i in range(len(srtis_ensembles))}
df = pd.DataFrame(columns=[colnames[i] for i in colnames])
for i in range(len(hist_numbers)):
hist.add_data_to_histogram([hist_numbers[i]])
if i % 100 == 0:
normalized = hist.normalized()
local_df = pd.DataFrame([normalized.values()], index=[i], columns=[colnames[k] for k in normalized.keys()])
df = df.append(local_df)
plt.pcolormesh(df.fillna(0.0), cmap="bwr", vmin=0.0, vmax=0.2);
plt.gca().invert_yaxis()
plt.colorbar()
Explanation: From here, we'll be doing the analysis of the SRTIS run.
End of explanation |
1,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy
Why NumPy ?
NumPy is an acronym for "Numeric Python" or "Numerical Python"
NumPy is the fundamental package for scientific computing with Python. It contains among other things
Step1: numpy array (ndarray)
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers
ndarray.ndim - the number of axes (dimensions) of the array. In the Python world, the number of dimensions is referred to as rank.
ndarray.shape - the dimensions of the array. This is a tuple of integers indicating the size of the array in each dimension.
ndarray.size - the total number of elements of the array. This is equal to the product of the elements of shape.
ndarray.dtype - an object describing the type of the elements in the array.
<img src="images/fig_numpy_axes.png " alt="NumPy axes" height="300" width="300" align="left">
Step2: Array basic operations
Step3: Array Slicing
Step4: Broadcasting
The term Broadcasting describes how numpy treats arrays with different shapes during arithmetic operations.
Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes.
Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python.
Step5: <img src="images/fig_broadcast_visual_1.png" alt="Broadcasting" height="500" width="500", align="left">
Exercises
Step6: <img src="images/fig_numpy_indexing_q.png" alt="Array Slicing" height="300" width="300" align="left">
Step7: NumPy functions used for performing computations
np.sum
np.std
np.mean
np.max
np.min | Python Code:
# import NumPy library
# This library is bundled along with anaconda distribution
# np alias is the standard convention
import numpy as np
Explanation: NumPy
Why NumPy ?
NumPy is an acronym for "Numeric Python" or "Numerical Python"
NumPy is the fundamental package for scientific computing with Python. It contains among other things:
A powerful N-dimensional array object (ndarray) - efficiently implemented multi-dimensional arrays
Array oriented computing - sophisticated (broadcasting) functions
Tools for integrating C/C++ and Fortran code
Designed for scientific computation - useful linear algebra, Fourier transform, and random number capabilities
End of explanation
%%timeit
temp_list = range(100000)
temp_list1 = [ x*2 for x in temp_list]
%%timeit
temp_array = np.arange(100000)
temp_array1 = temp_array*2
# ndarray can be created from a regular python list or tuple
mylist = [2,5,8,15,25]
array = np.array(mylist)
type(array)
array.shape
array[0]
array[0:3]
array.dtype
array.ndim
# dtype can be mentioned while creating an array
array2 = np.array(mylist, dtype=np.float64)
array2
array2.dtype
# creating a 5 X 3 multi dimensional array
marray = np.arange(15).reshape(5,3)
marray
marray.ndim
marray.shape
marray.size
# ravel flattens the array into 1-D (returns a view when possible)
marray.ravel()
# reshape can be used to change the shape of an array
marray.ravel().reshape(3,5)
marray.shape
Explanation: numpy array (ndarray)
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers
ndarray.ndim - the number of axes (dimensions) of the array. In the Python world, the number of dimensions is referred to as rank.
ndarray.shape - the dimensions of the array. This is a tuple of integers indicating the size of the array in each dimension.
ndarray.size - the total number of elements of the array. This is equal to the product of the elements of shape.
ndarray.dtype - an object describing the type of the elements in the array.
<img src="images/fig_numpy_axes.png " alt="NumPy axes" height="300" width="300" align="left">
End of explanation
# multiplying a scalar and an ndarray (elementwise)
print(marray*2)
marray
# inplace change
# there are certain operations that will modify the object inplace like one below
marray += 10
marray
# Guess - what would be the result of the following
marray > 15
arr_A = np.array( [ [2,3], [4,5] ] )
arr_B = np.array( [ [1,1], [2,1] ] )
# * operates element wise
arr_A * arr_B
# dot is used for matrix multiplication
# np.dot(arr_A,arr_B) also works
arr_A.dot(arr_B)
Explanation: Array basic operations
End of explanation
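A small addendum (not in the original notebook): since Python 3.5 the @ operator performs the same matrix multiplication as dot:
arr_A @ arr_B   # identical result to arr_A.dot(arr_B)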
marray[0]
marray[0,1]
marray
marray[:,1:3]
marray[0:3,:]
marray[1:3,1:]
Explanation: Array Slicing
End of explanation
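One extra point worth noting here (an added remark): basic slicing returns a view rather than a copy, so a slice shares memory with the original array:
marray[0:2, 0:2].base is marray   # True - the slice is a view onto marray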
marray
marray + 5
Explanation: Broadcasting
The term Broadcasting describes how numpy treats arrays with different shapes during arithmetic operations.
Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes.
Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python.
End of explanation
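For example (an added sketch, not part of the original cells), a shape-(3,) array is broadcast across every row of a shape-(4, 3) array:
row = np.array([0, 10, 20])
grid = np.ones((4, 3))
grid + row    # row is virtually stretched to shape (4, 3) before the addition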
# Exercise - 1
# Construct 3 by 3 ndarray with 5 as the diagonal element and 1 as the remaining elements
# [[5, 1, 1][1,5,1][1,1,5]]
# Tip : explore np.ones and np.eye functions
# the dtype should be int
# Exercise
# try following array slicing
a = np.arange(0,60).reshape(6,10)[0:6,0:6]
Explanation: <img src="images/fig_broadcast_visual_1.png" alt="Broadcasting" height="500" width="500", align="left">
Exercises
End of explanation
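One possible solution to Exercise 1 (added as a hint; it follows the np.ones / np.eye tip, and other constructions work too):
ex1 = (np.ones((3, 3)) + 4*np.eye(3)).astype(int)   # 5 on the diagonal, 1 elsewhere
ex1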
a
Explanation: <img src="images/fig_numpy_indexing_q.png" alt="Array Slicing" height="300" width="300" align="left">
End of explanation
# np.NaN is a special floating-point value meaning Not a Number (it is not a separate datatype)
np.NaN?
np.random.seed(0)
arr_c = np.random.random(15).reshape((5,3))
arr_c
arr_c.sum()
arr_c.min()
arr_c.max()
arr_c.mean()
arr_c.mean(axis=0)
arr_c.mean(axis=1)
Explanation: NumPy functions used for performing computations
np.sum
np.std
np.mean
np.max
np.min
End of explanation |
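Since np.NaN appeared above, one added note: the plain reductions propagate NaN, while the nan-aware variants ignore it:
arr_d = np.array([1.0, 2.0, np.NaN])
arr_d.mean()        # nan
np.nanmean(arr_d)   # 1.5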
1,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Here is the library of functions
Step1: Everything after here is the script that runs the simulation
Step2: Regression
Step3: HMC
Step4: HMC - Unscaled
nsample = 1000
m = 20
eps = .008
theta = np.zeros(p+1)
theta = beta_true_unscale.copy()
phi = 5
M = np.identity(p+1)
samples, accept, rho, H = run_hmc(Y, X, U_logistic, gradU_logistic, M, eps, m, theta, phi, nsample)
np.mean(samples, axis=0) - beta_true_unscale
plt.plot((samples - beta_true_unscale)[:,3])
Step5: HMC - Unscaled (no intercept)
nsample = 1000
m = 20
eps = .00001
theta = np.zeros(p+1)
theta = beta_true_scale.copy()
phi = 5
nbatch = 500
C = 1 * np.identity(p+1)
V = 0 * np.identity(p+1)
M = np.identity(p+1)
samples, H = run_sghmc(Y, X, U_logistic, gradU_logistic, M, eps, m, theta, C, V, phi, nsample)
print(np.mean(samples, axis=0) - beta_true_unscale)
plt.plot((samples - beta_true_unscale)[ | Python Code:
def logistic(x):
'''Logistic (sigmoid) function: 1/(1 + exp(-x)).'''
return 1/(1+np.exp(-x))
def U_logistic(theta, Y, X, phi):
'''Potential energy: negative log-posterior of Bayesian logistic regression with a Gaussian prior of precision phi.'''
return - (Y.T @ X @ theta - np.sum(np.log(1+np.exp(X @ theta))) - 0.5 * phi * np.sum(theta**2))
def gradU_logistic(theta, Y, X, phi):
'''Gradient of U_logistic with respect to theta, divided by the number of observations (returned as a column vector).'''
n = X.shape[0]
Y_pred = logistic(X @ theta)
epsilon = (Y[:,np.newaxis] - Y_pred[:,np.newaxis])
grad = X.T @ epsilon - phi * theta[:, np.newaxis]
return -grad/n
def hmc(Y, X, U, gradU, M, eps, m, theta0, phi):
'''Single HMC iteration: m leapfrog steps of size eps followed by a Metropolis-Hastings accept/reject step.'''
theta = theta0.copy()
n, p = X.shape
# Precompute
Minv = np.linalg.inv(M)
# Randomly sample momentum
r = np.random.multivariate_normal(np.zeros(p),M)[:,np.newaxis]
# Initial energy
H0 = U(theta0, Y, X, phi) + 0.5 * np.asscalar(r.T @ Minv @ r)
# Hamiltonian dynamics
r -= (eps/2)*gradU(theta, Y, X, phi)
for i in range(m):
theta += (eps*Minv@r).ravel()
r -= eps*gradU(theta, Y, X, phi)
r -= (eps/2)*gradU(theta, Y, X, phi)
# Final energy
H1 = U(theta, Y, X, phi) + np.asscalar(0.5 * r.T @ Minv @ r)
# MH step
u = np.random.uniform()
rho = np.exp(H0 - H1) # Acceptance probability
if u < np.min((1, rho)):
# accept
accept = True
H = H1
else:
# reject
theta = theta0
accept = False
H = H0
return theta, accept, rho, H
def run_hmc(Y, X, U, gradU, M, eps, m, theta, phi, nsample):
n, p = X.shape
# Allocate space
samples = np.zeros((nsample, p))
accept = np.zeros(nsample)
rho = np.zeros(nsample)
H = np.zeros(nsample)
# Run hmc
for i in range(nsample):
theta, accept[i], rho[i], H[i] = hmc(Y, X, U, gradU, M, eps, m, theta, phi)
samples[i] = theta
return samples, accept, rho, H
def stogradU(theta, Y, X, nbatch, phi):
'''A function that returns the stochastic gradient. Adapted from Eq. 5.
Inputs are:
theta, the parameters
Y, the response
X, the covariates
nbatch, the number of samples to take from the full data
'''
n, p = X.shape
# Sample minibatch
batch_id = np.random.choice(np.arange(n),nbatch,replace=False)
Y_pred = logistic(X[batch_id,:] @ theta[:,np.newaxis])
epsilon = (Y[batch_id,np.newaxis] - Y_pred)
grad = n/nbatch * X[batch_id,:].T @ epsilon - phi * theta[:, np.newaxis]
#return -grad/n
return -grad
def sghmc(Y, X, U, gradU, M, Minv, eps, m, theta, B, D, phi):
n, p = X.shape
# Randomly sample momentum
r = np.random.multivariate_normal(np.zeros(p),M)[:,np.newaxis]
# Hamiltonian dynamics with friction and injected noise (note: C and nbatch are read from the enclosing script's globals)
for i in range(m):
theta += (eps*Minv@r).ravel()
r -= eps*stogradU(theta, Y, X, nbatch,phi) - eps*C @ Minv @ r \
+ np.random.multivariate_normal(np.zeros(p),D)[:,np.newaxis]
# Record the energy
H = U(theta, Y, X, phi) + np.asscalar(0.5 * r.T @ Minv @ r)
return theta, H
def run_sghmc(Y, X, U, gradU, M, eps, m, theta, C, V, phi, nsample):
n, p = X.shape
# Precompute
Minv = np.linalg.inv(M)
B = 0.5 * V * eps
D = 2*(C-B)*eps
# Allocate space
samples = np.zeros((nsample, p))
H = np.zeros(nsample)
# Run sghmc
for i in range(nsample):
theta, H[i] = sghmc(Y, X, U, gradU, M, Minv, eps, m, theta, B, D, phi)
samples[i] = theta
return samples, H
def gd(Y, X, gradU, eps, m, theta, phi):
'''Plain gradient descent on U: m steps of size eps; returns the final theta.'''
samples = np.zeros((nsample, p))
for i in range(m):
theta -= eps*gradU(theta, Y, X, phi).ravel()
return theta
Explanation: Here is the library of functions:
End of explanation
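A quick sanity check one could run after defining the library (added here; it is not part of the original script) — compare gradU_logistic against a finite-difference gradient of U_logistic on a tiny synthetic problem:
import numpy as np
rng = np.random.RandomState(0)
X_chk = rng.normal(size=(40, 3))
Y_chk = rng.binomial(1, logistic(X_chk @ np.array([0.5, -1.0, 2.0])))
theta_chk, phi_chk, h = np.zeros(3), 1.0, 1e-6
num_grad = np.array([(U_logistic(theta_chk + h*e, Y_chk, X_chk, phi_chk) -
                      U_logistic(theta_chk - h*e, Y_chk, X_chk, phi_chk)) / (2*h)
                     for e in np.eye(3)])
# gradU_logistic returns the gradient divided by n, so rescale before comparing
print(np.allclose(len(Y_chk) * gradU_logistic(theta_chk, Y_chk, X_chk, phi_chk).ravel(), num_grad, atol=1e-4))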
import numpy as np
import matplotlib.pyplot as plt
n = 500
p = 50
beta = np.random.normal(0, 1, p+1)
Sigma = np.zeros((p, p))
Sigma_diags = np.array([25, 5, 0.2**2])
distribution = np.random.multinomial(p, pvals=[.05, .05, .9], size=1).tolist()
np.fill_diagonal(Sigma, np.repeat(Sigma_diags, distribution[0], axis=0))
X = np.random.multivariate_normal(np.zeros(p), Sigma, n)
X = np.hstack((np.ones((n, 1)), X))
p = np.exp(X @ beta)/(1 + np.exp(X @ beta))  # P(Y=1|X) under the logistic model
Y = np.random.binomial(1, p, n)
Xs = (X - np.mean(X, axis=0))/np.concatenate((np.ones(1),np.std(X[:,1:], axis=0)))
Xs = Xs[:,1:]
p = Xs.shape[1]
Explanation: Everything after here is the script that runs the simulation:
End of explanation
from sklearn.linear_model import LogisticRegression
# Unscaled
mod_logis = LogisticRegression(fit_intercept=False, C=1e50)
mod_logis.fit(X,Y)
beta_true_unscale = mod_logis.coef_.ravel()
beta_true_unscale
# Scaled
mod_logis = LogisticRegression(fit_intercept=False, C=1e50)
mod_logis.fit(Xs,Y)
beta_true_scale = mod_logis.coef_.ravel()
beta_true_scale
Explanation: Regression
End of explanation
# HMC - Scaled
nsample = 1000
m = 20
eps = .0005
theta = np.zeros(p)
#theta = beta_true_scale.copy()
phi = 5
M = np.identity(p)
samples, accept, rho, H = run_hmc(Y, Xs, U_logistic, gradU_logistic, M, eps, m, theta, phi, nsample)
hmc_mean = np.mean(samples, axis=0)
np.mean(samples, axis=0) - beta_true_scale
plt.plot((samples - beta_true_scale)[:,3])
plt.show()
plt.plot(H)
plt.show()
Explanation: HMC
End of explanation
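One additional diagnostic worth printing for the chain above (added, not in the original notebook) is the Metropolis acceptance rate returned by run_hmc:
print('acceptance rate: {:.2f}'.format(np.mean(accept)))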
# HMC - Scaled (no intercept)
nsample = 1000
m = 20
eps = .01
theta = np.zeros(p)
#theta = beta_true_scale.copy()
phi = 5
nbatch = 500
C = 1 * np.identity(p)
V = 0 * np.identity(p)
M = np.identity(p)
samples, H = run_sghmc(Y, Xs, U_logistic, gradU_logistic, M, eps, m, theta, C, V, phi, nsample)
print(np.mean(samples, axis=0) - beta_true_scale)
plt.plot((samples - beta_true_scale)[:,0])
plt.show()
plt.plot(H)
plt.show()
Explanation: HMC - Unscaled
nsample = 1000
m = 20
eps = .008
theta = np.zeros(p+1)
theta = beta_true_unscale.copy()
phi = 5
M = np.identity(p+1)
samples, accept, rho, H = run_hmc(Y, X, U_logistic, gradU_logistic, M, eps, m, theta, phi, nsample)
np.mean(samples, axis=0) - beta_true_unscale
plt.plot((samples - beta_true_unscale)[:,3])
plt.show()
plt.plot(H)
plt.show()
SGHMC
End of explanation
# Gradient descent - Scaled
np.random.seed(2)
phi = .1
res = gd(Y, Xs, gradU_logistic, .1, 20000, np.zeros(p), phi)
res - beta_true_scale
Explanation: HMC - Unscaled (no intercept)
nsample = 1000
m = 20
eps = .00001
theta = np.zeros(p+1)
theta = beta_true_scale.copy()
phi = 5
nbatch = 500
C = 1 * np.identity(p+1)
V = 0 * np.identity(p+1)
M = np.identity(p+1)
samples, H = run_sghmc(Y, X, U_logistic, gradU_logistic, M, eps, m, theta, C, V, phi, nsample)
print(np.mean(samples, axis=0) - beta_true_unscale)
plt.plot((samples - beta_true_unscale)[:,0])
plt.show()
plt.plot(H)
plt.show()
Gradient Descent
End of explanation |
1,546 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have fitted a k-means algorithm on more than 400 samples using the python scikit-learn library. I want to have the 100 samples closest (data, not just index) to a cluster center "p" (e.g. p=2) as an output, here "p" means the p^th center. How do I perform this task? | Problem:
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
p, X = load_data()
assert type(X) == np.ndarray
km = KMeans()
km.fit(X)
d = km.transform(X)[:, p]  # distance of every sample to the p-th cluster center
indexes = np.argsort(d)[:100]  # 100 smallest distances = 100 closest samples
closest_100_samples = X[indexes]
1,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Transient Model
In this example the transient model is created from an ASCII file. Alternatively you could use the built-in SN models of sncosmo or the Blackbody model provided in simsurvey.
Step2: Transient distribution
You need to define a function that draws the model parameters (except z and t0) from a random distribution.
In this case only host extinction is random, but in addition the amplitude of each MN must be scaled for its luminosity distance.
Step3: TransientGenerator
The transient generator combines model and distribution, and randomly draws all parameters needed to simulate the lightcurves.
(Note that here we also set the volumetric rate as a function of z. For macronovae, a good guess would be $5\cdot 10^{-7}~\textrm{Mpc}^{-3}~\textrm{yr}^{-1}$ but this would only result in a couple of observed macronovae. For this example we'll use a 100 times larger rate.)
Step4: SimulSurvey
Lastly, all parts are combined in a SimulSurvey object that will generate the lightcurves.
(This may take about a minute or two.)
Step5: Analysing the output
The output of get_lightcurves() is a LightcurveCollection object. Lightcurves are automatically filtered, so only those that would be detected in the survey are kept.
You can save the lightcurves in a pickle file and load them again later without rerunning the simulation.
Step6: You can inspect the lightcurves manually. This example should return the lightcurve with the most points with S/N > 5.
Step7: The two figures below show how early the MNe are detected and at what redshifts. The simulation input parameters of transients that were not detected are also kept, so you can check completeness.
import os
home_dir = os.environ.get('HOME')
# Please enter the filename of the ztf_sim output file you would like to use. The example first determines
# your home directory and then uses a relative path (useful if working on several machines with different usernames)
survey_file = os.path.join(home_dir, 'data/ZTF/test_schedule_v6.db')
# Please enter the path to where you have placed the Schlegel, Finkbeiner & Davis (1998) dust map files
# You can also set the environment variable SFD_DIR to this path (in that case the variable below should be None)
sfd98_dir = os.path.join(home_dir, 'data/sfd98')
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import simsurvey
import sncosmo
from astropy.cosmology import Planck15
import simsurvey_tools as sst
# Load the ZTF CCD corners and filters
ccds = sst.load_ztf_ccds()
sst.load_ztf_filters()
# Load simulated survey from file (download from ftp://ftp.astro.caltech.edu/users/ebellm/one_year_sim_incomplete.db)
# Currently DES filters are used as proxies for ZTF filters
plan = simsurvey.SurveyPlan(load_opsim=survey_file, band_dict={'g': 'ztfg', 'r': 'ztfr', 'i': 'desi'}, ccds=ccds)
mjd_range = (plan.cadence['time'].min() - 30, plan.cadence['time'].max() + 30)
# To review the pointing schedule, you can use this table
plan.pointings
Explanation: Tutorial: Generating Macronova lightcurves based on ztf_sim output
This notebook shows how to load the output for Eric's survey simulator ztf_sim and generate Macronova lightcurves for it using an SED from Rosswog et al. (2016). (Check out the other notebooks for examples how to simulate other transients.)
Note: You need to download Eric's newest sample output here. The link was also included in Eric's email, so you will likely only need to change the path below.
Furthermore you'll require the dust map from Schlegel, Finkbeiner & Davis (1998) for full functionality. It can be found here.
End of explanation
# Load phase, wavelengths and flux from file
phase, wave, flux = sncosmo.read_griddata_ascii('data/macronova_sed_wind20.dat')
# Create a time series source
source = sncosmo.TimeSeriesSource(phase, wave, flux)
# Create the model that combines SED and propagation effects
dust = sncosmo.CCM89Dust()
model = sncosmo.Model(source=source,
effects=[dust],
effect_names=['host'],
effect_frames=['rest'])
Explanation: Transient Model
In this example the transient model is created from an ASCII file. Alternatively you could use the built-in SN models of sncosmo or the Blackbody model provided in simsurvey.
End of explanation
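For reference (an added aside, not from the tutorial itself), the sncosmo alternative mentioned above would be built the same way, e.g. using the built-in SALT2 source:
model_ia = sncosmo.Model(source='salt2',
                         effects=[sncosmo.CCM89Dust()],
                         effect_names=['host'],
                         effect_frames=['rest'])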
def random_parameters(redshifts, model,
r_v=2., ebv_rate=0.11,
**kwargs):
cosmo = Planck15
# Amplitude
amp = []
for z in redshifts:
d_l = cosmo.luminosity_distance(z).value * 1e5
amp.append(d_l**-2)
return {
'amplitude': np.array(amp),
'hostr_v': r_v * np.ones(len(redshifts)),
'hostebv': np.random.exponential(ebv_rate, len(redshifts))
}
Explanation: Transient distribution
You need to define a function that draws the model parameters (except z and t0) from a random distribution.
In this case only host extinction is random, but in addition the amplitude of each MN must be scaled for its luminosity distance.
End of explanation
transientprop = dict(lcmodel=model,
lcsimul_func=random_parameters)
tr = simsurvey.get_transient_generator((0.0, 0.05),
ratefunc=lambda z: 5e-5,
dec_range=(-30,90),
mjd_range=(mjd_range[0],
mjd_range[1]),
transientprop=transientprop,
sfd98_dir=sfd98_dir)
Explanation: TransientGenerator
The transient generator combines model and distribution, and randomly draws all parameters needed to simulate the lightcurves.
(Note that here we also set the volumetric rate as a function of z. For macronovae, a good guess would be $5\cdot 10^{-7}~\textrm{Mpc}^{-3}~\textrm{yr}^{-1}$ but this would only result in a couple of observed macronovae. For this example we'll use a 100 times larger rate.)
End of explanation
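As a rough, assumption-laden cross-check (added here, not in the original notebook): the expected number of transients is of order rate × comoving volume × survey duration, ignoring sky coverage and (1+z) time dilation:
vol = Planck15.comoving_volume(0.05).value           # comoving volume to z = 0.05 in Mpc^3
duration = (mjd_range[1] - mjd_range[0]) / 365.25    # survey length in years
print('rough expectation: ~{:.0f} transients'.format(5e-5 * vol * duration))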
survey = simsurvey.SimulSurvey(generator=tr, plan=plan)
lcs = survey.get_lightcurves(
#progress_bar=True, notebook=True # If you get an error because of the progress_bar, delete this line.
)
len(lcs.lcs)
lcs[0]
Explanation: SimulSurvey
Lastly, all parts are combined in a SimulSurvey object that will generate the lightcurves.
(This may take about a minute or two.)
End of explanation
lcs.save('lcs_tutorial_mne.pkl')
lcs = simsurvey.LightcurveCollection(load='lcs_tutorial_mne.pkl')
Explanation: Analysing the output
The output of get_lightcurves() is a LightcurveCollection object. Lightcurves are automatically filtered, so only those that would be detected in the survey are kept.
You can save the lightcurves in a pickle file and load them again later without rerunning the simulation.
End of explanation
_ = sncosmo.plot_lc(lcs[0])
Explanation: You can inspect the lightcurves manually. This example should return the lightcurve with the most points with S/N > 5.
End of explanation
plt.hist(lcs.stats['p_det'], lw=2, histtype='step', range=(0,10), bins=20)
plt.xlabel('Detection phase (observer-frame)', fontsize='x-large')
_ = plt.ylabel(r'$N_{MNe}$', fontsize='x-large')
plt.hist(lcs.meta_full['z'], lw=1, histtype='step', range=(0,0.05), bins=20, label='all')
plt.hist(lcs.meta['z'], lw=2, histtype='step', range=(0,0.05), bins=20, label='detected')
plt.xlabel('Redshift', fontsize='x-large')
plt.ylabel(r'$N_{MNe}$', fontsize='x-large')
plt.xlim((0, 0.05))
plt.legend()
Explanation: The two figures below show how early the MNe are detected and at what redshifts. The simulation input parameters of transients that were not detected are also kept, so you can check completeness.
End of explanation |
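One further summary that could be added at this point (an extra, using only the attributes already shown above):
n_det, n_sim = len(lcs.meta['z']), len(lcs.meta_full['z'])
print('detected {} of {} simulated MNe ({:.1%})'.format(n_det, n_sim, float(n_det) / n_sim))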
1,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LFC Data Analysis
Step1: Notebook Change Log
| Date | Change Description |
|
Step2: Print version numbers.
Step3: Data Load
Data description
The data files are located in the data sub-directory.
Match data
The E0_<season>.csv files were downloaded from english premiership stats. These files contain the premier league match data from season 2004-5 to season 2014-15. The csv structure is described in the notes.
LFC top scorers data
The LFC_PL_scorers_2004-05_2014-15.csv file was constructed from data held at the excellent lfchistory.net. This file contains the LFC top scorers in the premier league from 2004-5 to 2014-15.
LFC top scorers appearance data
The LFC_PL_top_apps.csv file was also constructed from data held at lfchistory.net. This file contains the premier league appearances of the LFC top 5 scorers from 2004-5 to 2014-15.
Rich list data
The Rich_list_2015.csv file was extracted from Forbes' list of the most valuable football clubs on wikipedia. This file contains the list of the 10 richest clubs in the world, as ranked by Forbes magazine.
LFC champion hotshots
The LFC_champ_hotshots.csv file was also constructed from data held at lfchistory.net. This file contains the LFC top scoring partnerships for the 18 title winning seasons.
Read the match data into a pandas dataframe for each season.
Step4: Check the number of rows and columns for each season's dataframe.
Step5: 11 seasons, each with 380 games per season - as expected.
Let's check the data - display the final 5 matches from the most recent season.
Step6: Stoke 6 (SIX!), what a debacle.
Read the LFC scorers data into a dataframe.
Step7: Let's check the data - show the top 3 scorers for the most recent season.
Step8: Read the LFC top scorer appearances data into a dataframe.
Step9: Let's check the data - show Suarez' appearances.
Step10: Read the Forbes rich list into a dataframe.
Note that the data is restricted to the top 10 in 2015.
Step11: Let's check the data - show the top 3.
Step12: Read the LFC title winning scoring partnerships into a dataframe.
Step13: Let's show the hotshots from season 1900-01, Liverpool's first title winning season.
Step16: Data Munge
Let's munge the dataframes to provide a reworked view for the LFC match results only.
Step17: Test the munge with the most recent LFC season - display the first 5 matches
Step18: Create dictionary to hold the new LFC dataframes, with key of season and value of dataframe.
Step19: Let's display the last 5 rows of munged dataframe for the most recent season.
Step20: Check the number of rows and columns.
Step21: As expected each season's dataframe contains 38 matches.
Step22: Data Analysis
Let's now analyse the data... Think of a question and produce the answer!
Compare key stats (totals)
Step23: Let's look at goal difference.
Step24: Plot points per season
Step25: Plot goals per season
Step26: Plot goal difference
Step27: Rafa achieved the best defensive performance, in particular from 2005-06 to 2008-09. Let's find the average goals conceded across these 4 season.
Step28: what is best goal difference?
Step30: Let's add the LFC top goal scorers to this LFC Goals per Season chart.
Step31: Let's find the top scorers across all seasons
Step32: Let's add the appearance data for top 5 players.
Step33: Let's now examine the shots data.
Step34: Compare key stats (average per game)
Step35: Compare results (wins, draws, losses)
Step37: What does the graph comparing performance look like?
Create event data structure using a dictionary with key of season and value of a tuple with match number and event description. This is used for plotting annotations.
Step38: Create a dictionary to hold the season specific matplotlib plot options.
Step39: Plot the match vs cumlative points for all seasons, with annotations
Step40: What was the best winning run and worst losing run?
Let's find best winning run
Step41: Let's find worst losing run
Step43: How has league position changed?
Let's produce a dataframe that shows league position for each season.
data
Step45: Plot position performance.
Step46: Blackburn was his final game - what joy
Who are the richest English clubs?
Step47: What are the title winning top scorer partnerships?
Let's analyse the champion hotshot data.
Step48: What is the highest scoring partnership?
Step49: What is the lowest scoring partnership?
Step50: What is the average total goals of title winning partnership? And what is highest and lowest?
Step51: What is the average total goals of title winning top striker?
Step52: What is the average total goals of title winning partner?
Step53: Which top striker scored most goals?
Step54: Which top striker scored least goals?
Step55: Which partner striker scored most goals?
Step56: That's high, let's look at the partnership.
Step57: Which partner striker scored least goals?
Step58: Who were hotshots in 70s and 80s? | Python Code:
%%html
<!-- left align the change log table in next cell -->
<style>
table {float:left}
</style>
Explanation: LFC Data Analysis: From Rafa to Rodgers
Lies, Damn Lies and Statistics
See Terry's blog LFC: From Rafa To Rodgers for a discussion of of the data generated by this analysis.
This notebook analyses Liverpool FC's premier league performance from Rafael Benitez to Brendan Rodgers, a period of 11 years covering season 2004-5 to 2014-15. The analysis uses IPython Notebook, python, pandas and matplotlib to analyse the data sets.
End of explanation
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import sys
from datetime import datetime
from __future__ import division
# enable inline plotting
%matplotlib inline
Explanation: Notebook Change Log
| Date | Change Description |
| :------------ | :----------------- |
| 25th May 2015 | Initial baseline |
| 28th May 2015 | Fixed typos in text and graphs |
| 5th June 2015 | Fixed draw and loss % on LFC points per game graph |
Set-up
Let's import the modules needed for the analysis.
End of explanation
print 'python version: {}'.format(sys.version)
print 'pandas version: {}'.format(pd.__version__)
print 'matplotlib version: {}'.format(mpl.__version__)
print 'numpy version: {}'.format(np.__version__)
Explanation: Print version numbers.
End of explanation
# define list of seasons to analyse, corresponding to the csv files
SEASON_LIST = ['2004-05', '2005-06', '2006-07', '2007-08', '2008-09', '2009-10',
'2010-11', '2011-12', '2012-13', '2013-14', '2014-15']
MOST_RECENT_SEASON = SEASON_LIST[-1]
# defines the selected columns from the csv file to keep
COLUMNS_FILTER = ['Date', 'HomeTeam','AwayTeam', 'FTHG', 'FTAG', 'FTR', 'HS',
'AS', 'HST', 'AST']
# define dictionary to hold premier league dataframes
# key is the season and value is the dataframe
df_dict = {}
# read the selected data in each csv into the dictionary
for season in SEASON_LIST:
df_dict[season] = pd.read_csv('data/E0_{}.csv'.format(season), usecols=COLUMNS_FILTER)
Explanation: Data Load
Data description
The data files are located in the data sub-directory.
Match data
The E0_<season>.csv files were downloaded from english premiership stats. These files contain the premier league match data from season 2004-5 to season 2014-15. The csv structure is described in the notes.
LFC top scorers data
The LFC_PL_scorers_2004-05_2014-15.csv file was constructed from data held at the excellent lfchistory.net. This file contains the LFC top scorers in the premier league from 2004-5 to 2014-15.
LFC top scorers appearance data
The LFC_PL_top_apps.csv file was also constructed from data held at lfchistory.net. This file contains the premier league appearances of the LFC top 5 scorers from 2004-5 to 2014-15.
Rich list data
The Rich_list_2015.csv file was extracted from Forbes' list of the most valuable football clubs on wikipedia. This file contains the list of the 10 richest clubs in the world, as ranked by Forbes magazine.
LFC champion hotshots
The LFC_champ_hotshots.csv file was also constructed from data held at lfchistory.net. This file contains the LFC top scoring partnerships for the 18 title winning seasons.
Read the match data into a pandas dataframe for each season.
End of explanation
for season, df in sorted(df_dict.items()):
print 'season={}, shape={}'.format(season, df.shape)
Explanation: Check the number of rows and columns for each season's dataframe.
End of explanation
print 'season: {}'.format(MOST_RECENT_SEASON)
df_dict[MOST_RECENT_SEASON].tail()
Explanation: 11 seasons, each with 380 games per season - as expected.
Let's check the data - display the final 5 matches from the most recent season.
End of explanation
dflfc_scorers = pd.read_csv('data/LFC_PL_scorers_2004-05_2014-15.csv')
Explanation: Stoke 6 (SIX!), what a debacle.
Read the LFC scorers data into a dataframe.
End of explanation
print dflfc_scorers[dflfc_scorers['Season'] == MOST_RECENT_SEASON].head(3)
Explanation: Let's check the data - show the top 3 scorers for the most recent season.
End of explanation
dflfc_apps = pd.read_csv('data/LFC_PL_top_apps.csv')
Explanation: Read the LFC top scorer appearances data into a dataframe.
End of explanation
dflfc_apps[dflfc_apps['Player'] == 'Luis Suarez']
Explanation: Let's check the data - show Suarez' appearances.
End of explanation
rich_list_2015 = pd.read_csv('data/Rich_list_2015.csv')
Explanation: Read the Forbes rich list into a dataframe.
Note that the data is restricted to the top 10 in 2015.
End of explanation
rich_list_2015.head(3)
Explanation: Let's check the data - show the top 3.
End of explanation
dflfc_champ_hotshots = pd.read_csv('data/LFC_champ_hotshots.csv')
Explanation: Read the LFC title winning scoring partnerships into a dataframe.
End of explanation
dflfc_champ_hotshots[dflfc_champ_hotshots['Season'] == '1900-01']
Explanation: Let's show the hotshots from season 1900-01, Liverpool's first title winning season.
End of explanation
def assign_points(row):
Return the points associated with given win, draw or loss result.
POINTS_MAPPER = {'W': 3, 'D': 1, 'L': 0}
return POINTS_MAPPER[row['R']]
def prem_munge(df, team='Liverpool'):
Return new dataframe for given team.
input dataframe columns: see http://www.football-data.co.uk/englandm.php
with
output dataframe columns:
'Date', 'Opponent', 'R', 'HA', GF', 'GA', SF', 'SA', 'SFT', 'SAT', 'PTS', 'CUMPTS'
Date is match Date (datetime), Opponent is opponent (str)
HA is Home or Away (str: 'H' or 'A')
R is Result (str: 'W' or 'D' or 'L')
GF is Goals For (int), GA is Goals Against (int)
SF is Shots For (int), SA is Shots Against (int)
SFT is Shots For on Target (int), SAT is Shots Against on Target (int)
PTS is PoinTS (int), CUMPTS is CUMlative PoinTS (int)
DATE_FORMAT = '%d/%m/%y' # input dataframe's Date column format
ALT_DATE_FORMAT = '%d/%m/%Y' # used for 2004-5 dataframe
# define column order for the output dataframe
COLUMN_ORDER = ['Date', 'Opponent', 'HA', 'R', 'GF', 'GA',
'SF', 'SA', 'SFT', 'SAT', 'PTS', 'CUMPTS']
# filter dataframe for home team
df_home = df[(df['HomeTeam'] == team)].copy()
df_home['HA'] = 'Home'
df_home.drop('HomeTeam', inplace=True, axis=1)
########################
# rebuild home dataframe
df_home.rename(columns={'AwayTeam': 'Opponent', 'FTHG': 'GF', 'FTAG': 'GA', 'FTR': 'R',
'HS': 'SF', 'AS': 'SA', 'HST': 'SFT', 'AST': 'SAT'}, inplace=True)
# rework home result and assign points
# define mapping dictionary, Home win is win for home team, Away win is loss
HOME_MAPPER = {'H': 'W', 'D': 'D', 'A': 'L'}
f_home = lambda x: HOME_MAPPER[x]
df_home['R'] = df_home['R'].map(f_home)
df_home['PTS'] = df_home.apply(assign_points, axis=1)
# filter dataframe for away team
df_away = df[(df['AwayTeam'] == team)].copy()
df_away['HA'] = 'Away'
########################
# rebuild away dataframe
df_away.rename(columns={'HomeTeam': 'Opponent', 'FTHG': 'GA', 'FTAG': 'GF', 'FTR': 'R',
'HS': 'SA', 'AS': 'SF', 'HST': 'SAT', 'AST': 'SFT'}, inplace=True)
# rework away result and assign points
# define mapping dictionary, Away win is win for away team, Home win is loss
AWAY_MAPPER = {'H': 'L', 'D': 'D', 'A': 'W'}
f_away = lambda x: AWAY_MAPPER[x]
df_away['R'] = df_away['R'].map(f_away)
df_away['PTS'] = df_away.apply(assign_points, axis=1)
df_away.drop('AwayTeam', inplace=True, axis=1)
########################
# create new dataframe by combining home and away dataframes
df_new = pd.concat([df_home, df_away])
# convert Date column to datetime (uses numpy datetime64) and sort by date
try:
df_new['Date'] = pd.to_datetime(df_new['Date'], format=DATE_FORMAT)
except ValueError:
df_new['Date'] = pd.to_datetime(df_new['Date'], format=ALT_DATE_FORMAT)
df_new.sort(columns='Date', inplace=True)
# add new CUMPTS column of cumulative points
df_new['CUMPTS'] = df_new['PTS'].cumsum()
# reset index to give match number, starting at 0
df_new.reset_index(inplace=True, drop=True)
# re-work columns to match required column order
df_new = df_new[COLUMN_ORDER]
return df_new
Explanation: Data Munge
Let's munge the dataframes to provide a reworked view for the LFC match results only.
End of explanation
df = prem_munge(df_dict[MOST_RECENT_SEASON])
df.head()
Explanation: Test the munge with the most recent LFC season - display the first 5 matches
End of explanation
dflfc_dict = {}
for season, df in sorted(df_dict.items()):
dflfc_dict[season] = prem_munge(df)
Explanation: Create dictionary to hold the new LFC dataframes, with key of season and value of dataframe.
End of explanation
dflfc_dict[MOST_RECENT_SEASON].tail()
Explanation: Let's display the last 5 rows of munged dataframe for the most recent season.
End of explanation
for season, df in sorted(dflfc_dict.items()):
print 'season={}, shape={}'.format(season, df.shape)
Explanation: Check the number of rows and columns.
End of explanation
print 'total LFC matches analysed: {}'.format(sum(dflfc_dict[season].shape[0] for season in SEASON_LIST))
Explanation: As expected each season's dataframe contains 38 matches.
End of explanation
dflfc_seasons = pd.DataFrame()
for season, dflfc in sorted(dflfc_dict.items()):
dflfc_summary = pd.DataFrame(dflfc.sum()).transpose()
dflfc_summary.drop('CUMPTS', axis=1, inplace=True)
dflfc_summary['Season'] = season
dflfc_summary['GD'] = (dflfc_summary['GF'] - dflfc_summary['GA']) # goal difference
dflfc_summary['SPG'] = (dflfc_summary['SF']/dflfc_summary['GF']).round(2) # shots per goal
dflfc_seasons = pd.concat([dflfc_seasons, dflfc_summary], axis=0)
dflfc_seasons.set_index('Season', inplace=True)
dflfc_seasons.columns.name = 'Total'
dflfc_seasons
Explanation: Data Analysis
Let's now analyse the data... Think of a question and produce the answer!
Compare key stats (totals)
End of explanation
dflfc_seasons.sort(['GD'], ascending=False)
Explanation: Let's look at goal difference.
End of explanation
FIG_SIZE = (12, 8)
fig = plt.figure(figsize=FIG_SIZE)
ax = dflfc_seasons['PTS'].plot(kind='bar', title='LFC Total Points per Season', color='red')
ax.set_ylabel("Total Points")
plt.show()
Explanation: Plot points per season
End of explanation
FIG_SIZE = (12, 8)
fig = plt.figure(figsize=FIG_SIZE)
ax = dflfc_seasons['SPG'].plot(kind='bar', title='LFC Shots per Goal per Season', color='red')
ax.set_ylabel("Shots per Goal")
plt.show()
Explanation: Plot goals per season
End of explanation
FIG_SIZE = (12, 8)
fig = plt.figure(figsize=FIG_SIZE)
ax = (dflfc_seasons['GF'] - dflfc_seasons['GA']).plot(kind='bar', title='LFC Goal Difference per Season', color='red')
ax.set_ylabel("Total Goal Difference")
plt.show()
FIG_SIZE = (12, 8)
fig = plt.figure(figsize=FIG_SIZE)
ax = dflfc_seasons['GF'].plot(kind='bar', label='Goals For', title='LFC Goals per Season', color='red')
ax = dflfc_seasons['GA'].plot(kind='bar', stacked=True, color='green', label='Goals Against')
ax.set_ylabel("Total Goals")
ax.legend(loc='upper left', fancybox=True, shadow=True)
plt.show()
Explanation: Plot goal difference
End of explanation
dflfc_seasons['GA']
RAFA_BEST_GA = ['2005-06', '2006-07', '2007-08', '2008-09']
dflfc_seasons['GA'].loc[RAFA_BEST_GA].mean().round(1)
Explanation: Rafa achieved the best defensive performance, in particular from 2005-06 to 2008-09. Let's find the average goals conceded across these 4 seasons.
End of explanation
s = dflfc_seasons['GD'].copy()
s.sort(ascending=False)  # sort the goal-difference Series in place, descending
s
Explanation: what is best goal difference?
End of explanation
def lfc_top_scorers(df, season, n):
Return list of names of top n goal scorers for given season in given dataframe.
Exclude own goals.
If there are multiple scorers on same number of goals then return them all.
Input:
df - pandas dataframe containing cols: 'Season', 'Player', 'LeagueGoals'
season - str containing season e.g. '2014-15'
n - integer containing number of top goal scorers to return
Output
top_scorer_list - list of (player, goals) tuples
target = n # target number of scorers
top_scorer_list = [] # holds top player list, containing (player, goals)
count = 0
prev_tot = None
for player, goal_tot in df[['Player', 'LeagueGoals']][(df['Season'] == season) &
(df['Player'] != 'Own goals')].values:
if goal_tot != prev_tot:
# goal tot not same as before so increment count
count += 1
prev_tot = goal_tot
if count > target:
break
else:
top_scorer_list.append((player, goal_tot))
return top_scorer_list
# test
L = lfc_top_scorers(dflfc_scorers, '2006-07', 3)
print L
# test join
L2 = '\n'.join(['{} ({})'.format(player, goals) for player, goals in L])
print L2
print len(L)
NUM_TOP_SCORERS = 3
FIG_SIZE = (15, 12)
WIDTH = 0.7
fig = plt.figure(figsize=FIG_SIZE)
ax = dflfc_seasons['GF'].plot(kind='bar', label='Goals For', color='red', width=WIDTH)
ax = dflfc_seasons['GA'].plot(kind='bar', label='Goals Against', color='blue', width=WIDTH, stacked=True)
for season, gf in dflfc_seasons['GF'].iteritems():
# determine top goal scorers and form string to print
top_scorer_list = lfc_top_scorers(dflfc_scorers, season, NUM_TOP_SCORERS)
top_scorer_str = '\n'.join(['{} ({})'.format(player, goals) for player, goals in top_scorer_list])
# calculate position of annotation
sidx = SEASON_LIST.index(season)
x, y = (sidx, gf + len(top_scorer_list) - 2)
# annotate above GF bar the names of top scorers and number of goals
ax.annotate(top_scorer_str, xy=(x,y), xytext=(x,y), va="bottom", ha="center", fontsize=8.5)
ax.set_ylabel("Total Goals")
ax.set_title('LFC Goals per Season with Top Goal Scorers')
ax.legend(loc='upper left', fancybox=True, shadow=True)
plt.show()
fig.savefig('SeasonvsGoals.png', bbox_inches='tight')
Explanation: Let's add the LFC top goal scorers to this LFC Goals per Season chart.
End of explanation
TITLE = 'Top 5 Scorers Across Total Goals Scored'
FIG_SIZE = (9, 6)
fig = plt.figure(figsize=FIG_SIZE)
dflfc_scorers_grouped = pd.DataFrame(dflfc_scorers['LeagueGoals'].groupby(dflfc_scorers['Player']).sum())
dflfc_topscorers = dflfc_scorers_grouped.sort('LeagueGoals', ascending=False).head(5)
ax = dflfc_topscorers.plot(kind='bar', legend='False', color='red', figsize=FIG_SIZE)
ax.set_ylabel('Total Goals Scored')
ax.set_title(TITLE)
ax.legend_.remove()
fig = plt.gcf() # save current figure
plt.show()
#fig.savefig('PlayervsGoals.png', bbox_inches='tight')
Explanation: Let's find the top scorers across all seasons
End of explanation
dflfc_apps.head()
# build new dataframe with player, appearances, goals, goals per appearance, appearances per goal
dflfc_apps_grouped = dflfc_apps[['Player', 'Appearances']].groupby(dflfc_apps['Player']).sum()
dflfc_apps_tot = pd.DataFrame(dflfc_apps_grouped)
dflfc_top = dflfc_apps_tot.join(dflfc_topscorers)
dflfc_top['GPA'] = dflfc_top['LeagueGoals']/dflfc_top['Appearances']
dflfc_top['GPA'] = dflfc_top['GPA'].round(3)
dflfc_top.sort('GPA', ascending=False, inplace=True)
dflfc_top.rename(columns={'LeagueGoals': 'PLGoals', 'Appearances': 'PLGames'}, inplace=True)
dflfc_top.index.name='Top Scorer'
dflfc_top['APG'] = dflfc_top['PLGames']/dflfc_top['PLGoals']
dflfc_top
# plot
TITLE = 'Top 5 Scorers Premier League Goal Per Game Ratio \n(2004-05 to 2014-15)'
FIG_SIZE = (9, 6)
ax = dflfc_top['GPA'].plot(kind='bar', color='red', figsize=FIG_SIZE, width=0.8)
# annotate bars with values
for scorer_ix, (PLGames, PLGoals, GPA, APG) in enumerate(dflfc_top.values):
x, y = scorer_ix, GPA+0.02
annotate_str = str(GPA) + '\n\ngames={}\n'.format(str(int(PLGames))) + 'goals={}'.format(str(int(PLGoals)))
ax.annotate(annotate_str, xy=(x,y), xytext=(x,y), va="top", ha="center", fontsize=10)
ax.set_title(TITLE)
ax.set_ylabel('Goals Per Game Ratio')
ax.set_xlabel('Top 5 Scorers')
fig = plt.gcf() # save current figure
plt.show()
fig.savefig('ScorervsGPG.png', bbox_inches='tight')
dflfc_apps_tot.sort('Appearances', ascending=False)
dflfc_topscorers.sort('LeagueGoals', ascending=False)
Explanation: Let's add the appearance data for top 5 players.
End of explanation
FIG_SIZE = (12, 8)
fig = plt.figure(figsize=FIG_SIZE)
ax = dflfc_seasons['SF'].plot(kind='bar', label='Shots For', title='LFC Shots per Season',
ylim=(0, 850), color='red')
ax = dflfc_seasons['SA'].plot(kind='bar', stacked=True, color='green', label='Shots Against')
ax.set_ylabel("Total Shots")
ax.legend(loc='upper left', fancybox=True, shadow=True)
plt.show()
FIG_SIZE = (12, 8)
fig = plt.figure(figsize=FIG_SIZE)
ax = dflfc_seasons['SFT'].plot(kind='bar', label='Shots On Target For', title='LFC Shots On Target per Season',
ylim=(0, dflfc_seasons['SFT'].max()+20), color='red')
ax = dflfc_seasons['SAT'].plot(kind='bar', stacked=True, color='green', label='Shots On Target Against')
ax.set_ylabel("Total Shots on Target")
ax.legend(loc='upper left', fancybox=True, shadow=True)
plt.show()
Explanation: Let's now examine the shots data.
End of explanation
dflfc_seasons_avg = pd.DataFrame()
for season, dflfc in sorted(dflfc_dict.items()):
dflfc_summary = pd.DataFrame(dflfc.sum()).transpose()
dflfc_summary.drop('CUMPTS', axis=1, inplace=True)
tot_games = len(dflfc)
initial_columns = dflfc_summary.columns.values
for col in initial_columns:
dflfc_summary[col+'avg'] = (dflfc_summary[col]/tot_games).round(2)
dflfc_summary['Season'] = season
dflfc_summary.drop(initial_columns, axis=1, inplace=True)
dflfc_seasons_avg = pd.concat([dflfc_seasons_avg, dflfc_summary], axis=0)
dflfc_seasons_avg.set_index('Season', inplace=True)
dflfc_seasons_avg.columns.name = 'Average per game'
dflfc_seasons_avg
FIG_SIZE = (12, 8)
fig = plt.figure(figsize=FIG_SIZE)
ax = dflfc_seasons_avg['PTSavg'].plot(kind='bar', title='LFC Average Points per Match per Season', color='red')
ax.set_ylabel("Average Points per Match")
plt.show()
Explanation: Compare key stats (average per game)
End of explanation
dflfc_result = pd.DataFrame() # new dataframe for results
for season, dflfc in sorted(dflfc_dict.items()):
w = dflfc['R'][dflfc['R'] == 'W'].count()
dflfc_result.set_value(season, 'W', w)
d = dflfc['R'][dflfc['R'] == 'D'].count()
dflfc_result.set_value(season, 'D', d)
l = dflfc['R'][dflfc['R'] == 'L'].count()
dflfc_result.set_value(season, 'L', l)
total_games = len(dflfc_dict[season])
dflfc_result.set_value(season, 'W%', 100*(w/total_games).round(3))
dflfc_result.set_value(season, 'D%', 100*(d/total_games).round(3))
dflfc_result.set_value(season, 'L%', 100*(l/total_games).round(3))
dflfc_result.columns.name = 'Result'
dflfc_result.index.name = 'Season'
dflfc_result
dflfc_result['W']
FIG_SIZE = (12, 8)
dflfc_result[['W%', 'D%', 'L%']].plot(kind='bar', title='Wins, Draws and Losses % per Season',
color=['red', 'green', 'blue'], figsize=FIG_SIZE)
plt.ylabel('Total Result')
plt.legend(loc='upper left', fancybox=True, shadow=True)
plt.show()
Explanation: Compare results (wins, draws, losses)
End of explanation
def key_event(df, event_date):
Return match number on or after given event_date.
input: matches, pandas dataframe in munged format
event_date, string of date in form 'dd/mm/yy'
output: match_number, integer starting at 0 (none if no natch)
DATE_FORMAT = '%d/%m/%y'
# convert event date to numpy datetime64, for comparison
event_date = np.datetime64(datetime.strptime(event_date, DATE_FORMAT))
# find match
for match_date in df['Date'].values:
if match_date >= event_date:
# match found, return match number (the index)
return int(df[df['Date'] == match_date].index.tolist()[0])
# match not found
return None
key_event_dict = {}
# use key_event() function to determine match at which event took place
# dates given are from wikipedia
key_event_dict['2010-11'] = (key_event(dflfc_dict['2010-11'], '08/01/11'),
"Roy Hodgson's final game in season 2010-11, \nhe leaves 8/1/2011 (thank heavens)")
key_event_dict['2013-14'] = (key_event(dflfc_dict['2013-14'], '24/04/14'),
"That game against Chelsea 24/04/14, \nMourinho parks the bus and gets lucky")
key_event_dict
# Roy Hodgson's last game
print dflfc_dict['2010-11'].ix[20-1]
Explanation: What does the graph comparing performance look like?
Create event data structure using a dictionary with key of season and value of a tuple with match number and event description. This is used for plotting annotations.
End of explanation
season_dict = {}
season_dict['2014-15'] = {'label': '2014-15: Brendan Rodgers season 3', 'ls': '-', 'marker': '', 'lw': 2}
season_dict['2013-14'] = {'label': '2013-14: Brendan Rodgers season 2', 'ls': '-', 'marker': '', 'lw': 2}
season_dict['2012-13'] = {'label': '2012-13: Brendan Rodgers season 1', 'ls': '-', 'marker': '', 'lw': 2}
season_dict['2011-12'] = {'label': '2011-12: Kenny Dalglish season 2', 'ls': '-.', 'marker': 'o', 'lw': 1}
season_dict['2010-11'] = {'label': '2010-11: Roy Hodgson / Kenny Dalglish season', 'ls': '-.', 'marker': '*', 'lw': 1}
season_dict['2009-10'] = {'label': '2009-10: Rafa Benitez season 6', 'ls': ':', 'marker': '', 'lw': 1}
season_dict['2008-09'] = {'label': '2008-09: Rafa Benitez season 5', 'ls': ':', 'marker': '', 'lw': 1}
season_dict['2007-08'] = {'label': '2007-08: Rafa Benitez season 4', 'ls': ':', 'marker': '', 'lw': 1}
season_dict['2006-07'] = {'label': '2006-07: Rafa Benitez season 3', 'ls': ':', 'marker': '', 'lw': 1}
season_dict['2005-06'] = {'label': '2005-06: Rafa Benitez season 2', 'ls': ':', 'marker': '', 'lw': 1}
season_dict['2004-05'] = {'label': '2004-05: Rafa Benitez season 1', 'ls': ':', 'marker': '', 'lw': 1}
season_dict['2004-05']['label']
Explanation: Create a dictionary to hold the season specific matplotlib plot options.
End of explanation
FIG_SIZE = (12, 8)
# calculate limits
max_played = 38
max_points = int(dflfc_seasons['PTS'].max())
seasons_analysed = ', '.join(dflfc_seasons.index.values)
# plot
fig = plt.figure(figsize=FIG_SIZE)
for season, dflfc in sorted(dflfc_dict.items()):
team_cum_points_list = dflfc['CUMPTS']
team_match_list = range(1, len(team_cum_points_list)+1)
# plot x vs y, with selected season options
plt.plot(team_match_list, team_cum_points_list, **season_dict[season])
# if there is a key event then annotate
if season in key_event_dict:
# get match number and event description
event_match, event_desc = key_event_dict[season]
# calculate position of annotation
x, y = team_match_list[event_match-1], team_cum_points_list[event_match-1]
if y > 50:
# set text position above point and to left
xtext = x - 8
ytext = y + 10
else:
# set text position below point and to right
xtext = x + 1
ytext = y - 15
# annotate with arrow below event
plt.annotate(event_desc, xy=(x,y), xytext=(xtext, ytext), va="bottom", ha="center",
arrowprops=dict(facecolor='black', width=.5, shrink=.05,
headwidth=4, frac=.05))
plt.xticks(range(1, max_played+1))
plt.yticks(range(0, max_points+1 + 20, 5))
plt.xlabel('Match Number')
plt.ylabel('Cumulative Points')
plt.legend(loc='upper left')
plt.title('LFC Match Number vs Cumulative Points\n(for seasons: {} to {})'.format(SEASON_LIST[0],
SEASON_LIST[-1]),
fontsize=16, fontweight='bold')
plt.show()
fig.savefig('MatchvsPTS.png', bbox_inches='tight')
Explanation: Plot the match vs cumlative points for all seasons, with annotations
End of explanation
for season in SEASON_LIST:
best_run = 0
this_run = 0
prev_pts = 0
for pts in dflfc_dict[season]['PTS']:
if pts == 3:
this_run += 1
else:
if this_run > best_run:
best_run = this_run
this_run = 0 # reset
# include a streak that runs to the end of the season
if this_run > best_run:
best_run = this_run
print 'season={}, best winning run: {} winning games'.format(season, best_run)
Explanation: What was the best winning run and worst losing run?
Let's find best winning run
End of explanation
for season in SEASON_LIST:
worst_run = 0
this_run = 0
prev_pts = 0
for pts in dflfc_dict[season]['PTS']:
if pts == 0:
this_run += 1
else:
if this_run > worst_run:
worst_run = this_run
this_run = 0 # reset
# include a run that continues to the end of the season
if this_run > worst_run:
worst_run = this_run
print 'season={}, worst losing run: {} losing games'.format(season, worst_run)
Explanation: Let's find worst losing run
End of explanation
def prem_table(df, season):
Return premier league table dataframe for given match dataframe for given season.
results = [] # create results list
for team in df['HomeTeam'].unique():
home_results = df[df['HomeTeam'] == team]
home_played = len(home_results.index)
home_win = home_results.FTR[home_results.FTR == 'H'].count()
home_draw = home_results.FTR[home_results.FTR == 'D'].count()
home_lose = home_results.FTR[home_results.FTR == 'A'].count()
home_goals_for = home_results.FTHG.sum()
home_goals_against = home_results.FTAG.sum()
away_results = df[df['AwayTeam'] == team]
away_played = len(away_results.index)
away_win = away_results.FTR[away_results.FTR == 'A'].count()
away_draw = away_results.FTR[away_results.FTR == 'D'].count()
away_lose = away_results.FTR[away_results.FTR == 'H'].count()
away_goals_for = away_results.FTAG.sum()
away_goals_against = away_results.FTHG.sum()
result_d = {} # create dictionary to hold team results
result_d['Team'] = team
result_d['P'] = home_played + away_played
result_d['W'] = home_win + away_win
result_d['D'] = home_draw + away_draw
result_d['L'] = home_lose + away_lose
result_d['GF'] = home_goals_for + away_goals_for
result_d['GA'] = home_goals_against + away_goals_against
result_d['GD'] = result_d['GF'] - result_d['GA']
result_d['PTS'] = result_d['W']*3 + result_d['D']
results.append(result_d) # append team result dictionary to list of results
# create DataFrame from results and sort by points (and then goal difference)
PLtable = pd.DataFrame(results, columns=['Team', 'P', 'W', 'D', 'L', 'GF', 'GA', 'GD', 'PTS'])
PLtable.sort(columns=['PTS', 'GD'], ascending=False, inplace=True)
PLtable['Position'] = range(1, len(PLtable)+1) # add new column for position, with highest points first
PLtable.set_index(['Position'], inplace=True, drop=True)
return PLtable
# create new dataframe for positions
col_names = ['Champions', 'ChampPoints', 'ChampPPG', 'LFCPos', 'LFCPoints', 'LFCPPG']
df_position = pd.DataFrame(columns=col_names)
for season, df in sorted(df_dict.items()):
PLTdf = prem_table(df, season)
champions, champ_pts, champ_games = PLTdf[['Team', 'PTS', 'P']].iloc[0]
champ_ppg = round(champ_pts/champ_games, 2)
lfc_pos = PLTdf[PLTdf['Team'] == 'Liverpool'].index[0]
lfc_pts = PLTdf['PTS'][PLTdf['Team'] == 'Liverpool'].values[0]
lfc_games = PLTdf['P'][PLTdf['Team'] == 'Liverpool'].values[0]
lfc_ppg = round(lfc_pts/lfc_games, 2)
df_position.loc[season] = [champions, champ_pts, champ_ppg, lfc_pos, lfc_pts, lfc_ppg]
df_position.index.name = 'Season'
df_position
Explanation: How has league position changed?
Let's produce a dataframe that shows league position for each season.
data: season | Champions | Champion Points | Champion PPG | LFC Position | LFC Points | LFC PPG
Start by creating premier league table for each season.
End of explanation
# Ref: http://stackoverflow.com/questions/739241/date-ordinal-output
def n_plus_suffix(n):
Return n plus the suffix e.g. 1 becomes 1st, 2 becomes 2nd.
assert isinstance(n, (int, long)), '{} is not an integer'.format(n)
if 10 <= n % 100 < 20:
return str(n) + 'th'
else:
return str(n) + {1 : 'st', 2 : 'nd', 3 : 'rd'}.get(n % 10, "th")
# test
for i in range(1, 32):
print n_plus_suffix(i),
TITLE = 'LFC Season Points Comparison'
FIG_SIZE = (15, 10)
max_points = int(df_position['ChampPoints'].max())
fig = plt.figure(figsize=FIG_SIZE)
ax = df_position['ChampPoints'].plot(kind='bar', color='y', label='Champions', width=0.6)
ax = df_position['LFCPoints'].plot(kind='bar', color='red', label='LFC', width=0.6)
for sidx, (ch, chpts, chppg, lfcpos, lfcpts, lfcppg) in enumerate(df_position.values):
# annotate description of each season, rotated
season = SEASON_LIST[sidx]
season_desc = season_dict[season]['label'][len(season)+1:]
x, y = (sidx, 2) # calculate position of annotation
ax.annotate(season_desc, xy=(x,y), xytext=(x,y), va="bottom", ha="center",
rotation='vertical', style='italic', fontsize='9')
# annotate above champions bar the name of champions and winning points total
x, y = (sidx, chpts)
ax.annotate(str(ch)+'\n'+str(int(chpts)), xy=(x,y), xytext=(x,y),
va="bottom", ha="center")
# annotate below LFC bar the points total and position
x, y = (sidx, lfcpts - 8)
ax.annotate('LFC\n' + str(n_plus_suffix(int(lfcpos))) + '\n' + str(int(lfcpts)),
xy=(x,y), xytext=(x,y), va="bottom", ha="center")
ax.set_ylabel("Total Points")
ax.set_ylim((0, max_points+20))
ax.set_title(TITLE)
plt.legend(loc='upper left', fancybox=True, shadow=True)
plt.show()
fig.savefig('SeasonvsPTS.png', bbox_inches='tight')
print 'Champions max points per game: {}'.format(df_position['ChampPPG'].max())
TITLE = 'LFC Season Points Per Game Comparison with Win Percentage'
FIG_SIZE = (15, 10)
max_ppg = df_position['ChampPPG'].max()
fig = plt.figure(figsize=FIG_SIZE)
ax = df_position['ChampPPG'].plot(kind='bar', color='y', label='Champions', width=0.75)
ax = df_position['LFCPPG'].plot(kind='bar', color='red', label='LFC', width=0.75)
ax.set_ylabel("Points Per Game")
season_lfcppg = [] # to hold tuple of (season, LFC points per game)
for sidx, (ch, chpts, chppg, lfcpos, lfcpts, lfcppg) in enumerate(df_position.values):
# annotate description of each season, rotated
season = SEASON_LIST[sidx]
season_desc = season_dict[season]['label'][len(season)+1:]
x, y = (sidx, .05) # calculate position of annotation
ax.annotate(season_desc, xy=(x,y), xytext=(x,y), va="bottom", ha="center",
rotation='vertical', style='italic', fontsize='11')
# annotate above champions bar the name of champions and points per game
x, y = (sidx, chppg)
ax.annotate(str(ch)+'\n'+ str(chppg), xy=(x,y), xytext=(x,y),
va="bottom", ha="center")
# annotate below LFC bar the points total and position
x, y = (sidx, lfcppg - 0.38)
w, d, l = dflfc_result[['W%', 'D%', 'L%']].ix[sidx].values
lfc_pos_str = str(n_plus_suffix(int(lfcpos)))
lfc_ppg_str = '\n' + str(lfcppg)
result_str = '\n\nW%={}\nD%={}\nL%={}'.format(w, d, l)
ax.annotate('LFC ' + str(n_plus_suffix(int(lfcpos))) + lfc_ppg_str + result_str, xy=(x,y), xytext=(x,y),
va="bottom", ha="center")
# append ppg to list
season_lfcppg.append((season, lfcppg))
ax.set_ylim((0, max_ppg+0.5))
ax.set_title(TITLE)
plt.legend(loc='upper left', fancybox=True, shadow=True)
plt.show()
fig.savefig('SeasonvsPPG.png', bbox_inches='tight')
DARKEST_SEASON = '2010-11'
DARK_GAMES = 19 # number of games that Hodgson was in charge
TOTAL_GAMES = 38
rh_season_desc = DARKEST_SEASON + ' Part 1\n Roy Hodgson'
kd_season_desc = DARKEST_SEASON + ' Part 2\nKenny Dalglish'
# calculate points per game for Roy Hodgson
dflfc_201011_rh = dflfc_dict[DARKEST_SEASON][0:DARK_GAMES+1]
rh_matches = len(dflfc_201011_rh)
rh_points = dflfc_201011_rh['CUMPTS'].values[-1]
rh_ppg = round(rh_points/rh_matches, 2)
rh_fcast_pts = int(rh_points*(TOTAL_GAMES/rh_matches))
print 'RH: matches={}, points={}, ppg={}, fcast_pts={}'.format(rh_matches, rh_points, rh_ppg, rh_fcast_pts)
# calculate points per game for Kenny Dalglish
dflfc_201011_kd = dflfc_dict[DARKEST_SEASON][DARK_GAMES+1:]
kd_matches = len(dflfc_201011_kd)
kd_points = dflfc_201011_kd['CUMPTS'].values[-1] - rh_points
kd_ppg = round(kd_points/kd_matches, 2)
kd_fcast_pts = int(kd_points*(TOTAL_GAMES/kd_matches))
print 'KD: matches={}, points={}, ppg={}, fcast_pts={}'.format(kd_matches, kd_points, kd_ppg, kd_fcast_pts)
# replace DARKEST SEASON list with 2 ppg entries
# one for Hodgson and one for Dalglish
season_lfcppg_new = season_lfcppg[:] # copy
ppg_201011 = [lfcppg for (season, lfcppg) in season_lfcppg_new if season == DARKEST_SEASON][0]
season_lfcppg_new.remove((DARKEST_SEASON, ppg_201011))
season_lfcppg_new.append((rh_season_desc, rh_ppg))
season_lfcppg_new.append((kd_season_desc, kd_ppg))
# plot ppg as bar chart
TITLE = 'LFC Season Points Per Game Comparison\n \
(with 2010-11 split to show individual performance)'
FIG_SIZE = (12, 8)
fig = plt.figure(figsize=FIG_SIZE)
season_lfcppg_new.sort() # sort by season, in place
season_labels = [s for (s, p) in season_lfcppg_new]
x = range(1, len(season_lfcppg_new)+1)
y = [p for (s, p) in season_lfcppg_new]
ax = plt.bar(x, y, align='center', color='r')
# plot ppg as text above bar
for xidx, yt in enumerate(y):
xt = xidx + 1
plt.annotate(str(yt), xy=(xt,yt), xytext=(xt, yt), va="bottom", ha="center")
# highlight the low bar with yellow border in black
# this is Hodgson's first half of 2010-11
season_low, ppg_low = sorted(season_lfcppg_new, key=lambda tup: tup[1], reverse=False)[0]
xlow = season_lfcppg_new.index((season_low, ppg_low))
ax[xlow].set_color('black')
ax[xlow].set_edgecolor('yellow')
ax[xlow].set_hatch('/')
# and highlight second half of this season with yellow border
# this is Dalglish's second half of 2010-11
ax[xlow+1].set_edgecolor('yellow')
ax[xlow+1].set_hatch('/')
# add labels and plot
plt.xticks(x, season_labels, rotation='vertical')
plt.ylabel("Points Per Game")
plt.xlabel("\nSeason")
plt.title(TITLE)
plt.show()
fig.savefig('SeasonvsPTSdark.png', bbox_inches='tight')
dflfc_201011_rh.tail()
Explanation: Plot position performance.
End of explanation
rich_list_2015.head()
eng_rich_list = rich_list_2015[['Rank', 'Team', 'Value($M)', 'Revenue($M)'] ][rich_list_2015['Country'] == 'England']
eng_rich_list.reset_index(inplace=True, drop=True)
eng_rich_list
eng_rich_list = eng_rich_list.merge(prem_table(df_dict['2014-15'], '2014-15')[['Team', 'PTS']])
eng_rich_list['ValperPT'] = eng_rich_list['Value($M)']/eng_rich_list['PTS']
eng_rich_list['RevperPT'] = eng_rich_list['Revenue($M)']/eng_rich_list['PTS']
eng_rich_list.sort('RevperPT', ascending=False)
FIG_SIZE = (9, 6)
fig = plt.figure(figsize=FIG_SIZE)
# create list of colours corresponding to teams
# set LFC to red
bar_colours = ['b' for _ in range(len(eng_rich_list))]
LFC_idx = int(eng_rich_list[eng_rich_list['Team'] == 'Liverpool'].index.tolist()[0])
bar_colours[LFC_idx] = 'r'
ax = eng_rich_list.plot(x='Team', y='Value($M)', kind='bar', legend=False,
figsize=FIG_SIZE, color=bar_colours, title='English Team Value in 2015')
ax.set_xlabel('English Team')
ax.set_ylabel('Value ($M)')
plt.show()
fig.savefig('TeamvsValue.png', bbox_inches='tight')
TITLE = 'English Team Value in 2015 with Final League Position'
FIG_SIZE = (9, 6)
fig = plt.figure(figsize=FIG_SIZE)
# create list of colours corresponding to teams
# set LFC to red
bar_colours = ['b' for _ in range(len(eng_rich_list))]
LFC_idx = int(eng_rich_list[eng_rich_list['Team'] == 'Liverpool'].index.tolist()[0])
bar_colours[LFC_idx] = 'r'
# create list of rich teams
rich_teams = list(eng_rich_list['Team'].values)
# plot the rich teams
ax = eng_rich_list.plot(x='Team', y='Value($M)', kind='bar', legend=False,
figsize=FIG_SIZE, color=bar_colours)
# create new dataframe for positions
col_names = ['Champions', 'ChampPoints', 'ChampPPG', 'LFCPos', 'LFCPoints', 'LFCPPG']
df_position = pd.DataFrame(columns=col_names)
df_PLT2015 = prem_table(df_dict['2014-15'], '2014-15')['Team']
# plot the positions
for pos, team in df_PLT2015.iteritems():
if team in rich_teams:
# annotate team's final position
team_idx = rich_teams.index(team)
team_value = eng_rich_list['Value($M)'][eng_rich_list['Team'] == team].values[0]
x, y = (team_idx, team_value + 20)
ax.annotate(str(n_plus_suffix(int(pos))), xy=(x,y), xytext=(x,y),
va="bottom", ha="center")
ax.set_xlabel('English Team')
ax.set_ylabel('Value ($M)')
ax.set_title(TITLE)
fig = plt.gcf() # save current figure
plt.show()
fig.savefig('TeamvsValue.png', bbox_inches='tight')
Explanation: Blackburn was his final game - what joy
Who are the richest English clubs?
End of explanation
dflfc_champ_hotshots.tail()
Explanation: What are the title winning top scorer partnerships?
Let's analyse the champion hotshot data.
End of explanation
s_partner_goals = dflfc_champ_hotshots['LeagueGoals'].groupby(dflfc_champ_hotshots['Season']).sum()
df_partner_goals = pd.DataFrame(data=s_partner_goals)
df_partner_goals.sort('LeagueGoals', ascending=False).head(1)
dflfc_champ_hotshots[dflfc_champ_hotshots['Season'] == '1963-64']
Explanation: What is the highest scoring partnership?
End of explanation
df_partner_goals.sort('LeagueGoals', ascending=False).tail(1)
dflfc_champ_hotshots[dflfc_champ_hotshots['Season'] == '1976-77']
Explanation: What is the lowest scoring partnership?
End of explanation
print 'average partnership goals: {}'.format(df_partner_goals['LeagueGoals'].mean())
print 'max partnership goals: {}'.format(df_partner_goals['LeagueGoals'].max())
print 'min partnership goals: {}'.format(df_partner_goals['LeagueGoals'].min())
Explanation: What is the average total goals of a title-winning partnership? And what are the highest and lowest?
End of explanation
df_scorer_first = dflfc_champ_hotshots.groupby('Season').first()
print 'top scorer average: {}'.format(df_scorer_first['LeagueGoals'].mean().round(2))
Explanation: What is the average total goals of the title-winning top striker?
End of explanation
df_scorer_second = dflfc_champ_hotshots.groupby('Season').last()
print 'partner scorer average: {}'.format(df_scorer_second['LeagueGoals'].mean().round(2))
Explanation: What is the average total goals of the title-winning partner?
End of explanation
df_scorer_first.sort('LeagueGoals', ascending=False).head(1)
Explanation: Which top striker scored most goals?
End of explanation
df_scorer_first.sort('LeagueGoals', ascending=False).tail(1)
Explanation: Which top striker scored least goals?
End of explanation
df_scorer_second.sort('LeagueGoals', ascending=False).head(1)
Explanation: Which partner striker scored most goals?
End of explanation
dflfc_champ_hotshots[dflfc_champ_hotshots['Season'] == '1946-47']
Explanation: That's high, let's look at the partnership.
End of explanation
df_scorer_second.sort('LeagueGoals', ascending=False).tail(1)
Explanation: Which partner striker scored least goals?
End of explanation
dflfc_champ_hotshots[(dflfc_champ_hotshots['Season'].str.contains('197')) |
(dflfc_champ_hotshots['Season'].str.contains('198'))]
Explanation: Who were the hotshots in the 70s and 80s?
End of explanation |
1,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Binning in physt
Step1: Ideal number of bins
Step2: Binning schemes
Exponential binning
Uses numpy.logscale to create bins.
Step3: Integer binning
Useful for integer values (or something you want to round to integers), creates bins of width=1 around integers (i.e. 0.5-1.5, ...)
Step4: Quantile-based binning
Based on quantiles, this binning results in all bins containing roughly the same amount
of observances.
Step5: Fixed-width bins
This binning is useful if you want "human-friendly" bin intervals.
Step6: "Human" bins
The width and alignment of bins is guessed from the data with an approximate number of bins as (optional) parameter.
Step7: Astropy binning
Astropy includes its histogramming tools. If this package is available, we reuse its binning
methods. These include | Python Code:
# Necessary import evil
from physt import histogram, binnings
import numpy as np
import matplotlib.pyplot as plt
# Some data
np.random.seed(42)
heights1 = np.random.normal(169, 10, 100000)
heights2 = np.random.normal(180, 6, 100000)
numbers = np.random.rand(100000)
Explanation: Binning in physt
End of explanation
X = [int(x) for x in np.logspace(0, 4, 50)]
algos = binnings.bincount_methods
Ys = { algo: [] for algo in algos}
for x in X:
ex_dataset = np.random.exponential(1, x)
for algo in algos:
Ys[algo].append(binnings.ideal_bin_count(ex_dataset, algo))
figure, axis = plt.subplots(figsize=(8, 8))
for algo in algos:
if algo == "default":
axis.plot(X, Ys[algo], ":.", label=algo, alpha=0.5, lw=2)
else:
axis.plot(X, Ys[algo], "-", label=algo, alpha=0.5, lw=2)
axis.set_xscale("log")
axis.set_yscale("log")
axis.set_xlabel("Sample size")
axis.set_ylabel("Bin count")
axis.legend(loc=2);
Explanation: Ideal number of bins
End of explanation
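For orientation, a minimal sketch of one such rule of thumb, assuming the usual Sturges formula is among the methods compared above:
import numpy as np

def sturges_bin_count(data):
    # Sturges' rule of thumb: k = ceil(log2(n)) + 1
    n = len(data)
    return int(np.ceil(np.log2(n))) + 1

print(sturges_bin_count(np.random.rand(1000)))  # 11 bins for n = 1000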
figure, axis = plt.subplots(1, 2, figsize=(10, 4))
hist1 = histogram(numbers, "exponential", bin_count=10, range=(0.0001, 1))
hist1.plot(color="green", ax=axis[0])
hist1.plot(density=True, errors=True, ax=axis[1])
axis[0].set_title("Absolute scale")
axis[1].set_title("Log scale")
axis[1].set_xscale("log");
Explanation: Binning schemes
Exponential binning
Uses numpy.logspace to create bins.
End of explanation
# Sum of two dice (should be triangle, right?)
dice = np.floor(np.random.rand(10000) * 6) + np.floor(np.random.rand(10000) * 6) + 2
histogram(dice, "integer").plot(ticks="center", density=True);
Explanation: Integer binning
Useful for integer values (or something you want to round to integers), creates bins of width=1 around integers (i.e. 0.5-1.5, ...)
End of explanation
figure, axis = plt.subplots(1, 2, figsize=(10, 4))
# bins2 = binning.quantile_bins(heights1, 40)
hist2 = histogram(heights1, "quantile", bin_count=40)
hist2.plot(ax=axis[0]);
hist2.plot(density=True, ax=axis[1]);
axis[0].set_title("Frequencies")
axis[1].set_title("Density");
hist2
figure, axis = plt.subplots()
histogram(heights1, "quantile", bin_count=10).plot(alpha=0.3, density=True, ax=axis, label="Quantile based")
histogram(heights1, 10).plot(alpha=0.3, density=True, ax=axis, color="green", label="Equal spaced")
axis.legend(loc=2);
Explanation: Quantile-based binning
Based on quantiles, this binning results in all bins containing roughly the same amount
of observances.
End of explanation
hist_fixed = histogram(heights1, "fixed_width", bin_width=3)
hist_fixed.plot()
hist_fixed
Explanation: Fixed-width bins
This binning is useful if you want "human-friendly" bin intervals.
End of explanation
human = histogram(heights1, "human", bin_count=15)
human.plot()
human
Explanation: "Human" bins
The width and alignment of bins is guessed from the data with an approximate number of bins as (optional) parameter.
End of explanation
middle_sized = np.random.normal(180, 6, 5000)
for n in ["blocks", "scott", "knuth", "freedman"]:
algo = "{0}".format(n)
hist = histogram(middle_sized, algo, name=algo)
hist.plot(density=True)
Explanation: Astropy binning
Astropy includes its histogramming tools. If this package is available, we reuse its binning
methods. These include:
Bayesian blocks
Knuth
Freedman
Scott
See http://docs.astropy.org/en/stable/visualization/histogram.html for more details.
End of explanation |
1,550 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting
There are many Python plotting libraries depending on your purpose. However, the standard general-purpose library is matplotlib. This is often used through its pyplot interface.
Step1: We have defined two sequences - in this case lists, but tuples would also work. One contains the $x$-axis coordinates, the other the data points to appear on the $y$-axis. A basic plot is produced using the plot command of pyplot. However, this plot will not automatically appear on the screen, as after plotting the data you may wish to add additional information. Nothing will actually happen until you either save the figure to a file (using pyplot.savefig(<filename>)) or explicitly ask for it to be displayed (with the show command). When the plot is displayed the program will typically pause until you dismiss the plot.
If using the notebook you can include the command %matplotlib inline or %matplotlib notebook before plotting to make the plots appear automatically inside the notebook. If code is included in a program which is run inside spyder through an IPython console, the figures may appear in the console automatically. Either way, it is good practice to always include the show command to explicitly display the plot.
This plotting interface is straightforward, but the results are not particularly nice. The following commands illustrate some of the ways of improving the plot
Step2: Whilst most of the commands are self-explanatory, a note should be made of the strings line r'$x$'. These strings are in LaTeX format, which is the standard typesetting method for professional-level mathematics. The $ symbols surround mathematics. The r before the definition of the string is Python notation, not LaTeX. It says that the following string will be "raw" | Python Code:
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['figure.figsize']=(12,9)
from math import sin, pi
x = []
y = []
for i in range(201):
x_point = 0.01*i
x.append(x_point)
y.append(sin(pi*x_point)**2)
pyplot.plot(x, y)
pyplot.show()
Explanation: Plotting
There are many Python plotting libraries depending on your purpose. However, the standard general-purpose library is matplotlib. This is often used through its pyplot interface.
End of explanation
from math import sin, pi
x = []
y = []
for i in range(201):
x_point = 0.01*i
x.append(x_point)
y.append(sin(pi*x_point)**2)
pyplot.plot(x, y, marker='+', markersize=8, linestyle=':',
linewidth=3, color='b', label=r'$\sin^2(\pi x)$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A basic plot')
pyplot.show()
Explanation: We have defined two sequences - in this case lists, but tuples would also work. One contains the $x$-axis coordinates, the other the data points to appear on the $y$-axis. A basic plot is produced using the plot command of pyplot. However, this plot will not automatically appear on the screen, as after plotting the data you may wish to add additional information. Nothing will actually happen until you either save the figure to a file (using pyplot.savefig(<filename>)) or explicitly ask for it to be displayed (with the show command). When the plot is displayed the program will typically pause until you dismiss the plot.
If using the notebook you can include the command %matplotlib inline or %matplotlib notebook before plotting to make the plots appear automatically inside the notebook. If code is included in a program which is run inside spyder through an IPython console, the figures may appear in the console automatically. Either way, it is good practice to always include the show command to explicitly display the plot.
This plotting interface is straightforward, but the results are not particularly nice. The following commands illustrate some of the ways of improving the plot:
End of explanation
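A minimal sketch of the savefig route mentioned above (the output file name is an arbitrary choice):
from math import sin, pi
from matplotlib import pyplot

x = [0.01*i for i in range(201)]
y = [sin(pi*xi)**2 for xi in x]
pyplot.plot(x, y)
pyplot.savefig('sine_squared.png')  # writes the figure to a file instead of relying on show()
pyplot.show()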
from math import sin, pi, exp, log
x = []
y1 = []
y2 = []
for i in range(201):
x_point = 1.0 + 0.01*i
x.append(x_point)
y1.append(exp(sin(pi*x_point)))
y2.append(log(pi+x_point*sin(x_point)))
pyplot.loglog(x, y1, linestyle='--', linewidth=4,
color='k', label=r'$y_1=e^{\sin(\pi x)}$')
pyplot.loglog(x, y2, linestyle='-.', linewidth=4,
color='r', label=r'$y_2=\log(\pi+x\sin(x))$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A basic logarithmic plot')
pyplot.show()
from math import sin, pi, exp, log
x = []
y1 = []
y2 = []
for i in range(201):
x_point = 1.0 + 0.01*i
x.append(x_point)
y1.append(exp(sin(pi*x_point)))
y2.append(log(pi+x_point*sin(x_point)))
pyplot.semilogy(x, y1, linestyle='None', marker='o',
color='g', label=r'$y_1=e^{\sin(\pi x)}$')
pyplot.semilogy(x, y2, linestyle='None', marker='^',
color='r', label=r'$y_2=\log(\pi+x\sin(x))$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A different logarithmic plot')
pyplot.show()
Explanation: Whilst most of the commands are self-explanatory, a note should be made of strings like r'$x$'. These strings are in LaTeX format, which is the standard typesetting method for professional-level mathematics. The $ symbols surround mathematics. The r before the definition of the string is Python notation, not LaTeX. It says that the following string will be "raw": that backslash characters should be left alone. Then, special LaTeX commands have a backslash in front of them: here we use \pi and \sin. Most basic symbols can be easily guessed (eg \theta or \int), but there are useful lists of symbols, and a reverse search site available. We can also use ^ to denote superscripts (used here), _ to denote subscripts, and use {} to group terms.
By combining these basic commands with other plotting types (semilogx and loglog, for example), most simple plots can be produced quickly.
Here are some more examples:
End of explanation |
1,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example xml file
Step1: Traversing the parsed tree
To visit all the children in order, user iter() to create a generator that iterates over the ElementTree instance.
Step2: To print only the groups of names and feed URL for the podcasts, leaving out all of the data in the header section by iterating over only the outline nodes and print the text and xmlURL attributes by looking up the values in the attrib dictionary
Step3: Finding Nodes in a Documents
Walking the entire tree like this, searching for relevant nodes, can be error prone. The previous example had to look at each outline node to determine if it was a group (nodes with only a text attribute) or podcast (with both text and xmlUrl). To produce a simple list of the podcast feed URLs, without names or groups, the logic could be simplified using findall() to look for nodes with more descriptive search characteristics.
As a first pass at converting the first version, an XPath argument can be used to look for all outline nodes.
Step4: It is possible to take advantage of the fact that the outline nodes are only nested two levels deep. Changing the search path to .//outline/outline means the loop will process only the second level of outline nodes.
Step5: Parsed Node Attributes
The items returned by findall() and iter() are Element objects, each representing a node in the XML parse tree. Each Element has attributes for accessing data pulled out of the XML. This can be illustrated with a somewhat more contrived example input file, data.xml.
```
<?xml version="1.0" encoding="UTF-8"?>
<top>
<child>Regular text.</child>
<child_with_tail>Regular text.</child_with_tail>"Tail" text.
<with_attributes name="value" foo="bar" />
<entity_expansion attribute="This & That">
That & This
</entity_expansion>
</top>
```
Step6: The text content of the nodes is available, along with the tail text, which comes after the end of a close tag.
Step7: XML entity references embedded in the document are converted to the appropriate characters before values are returned.
Step8: Watching Events While Parsing
The other API for processing XML documents is event-based. The parser generates start events for opening tags and end events for closing tags. Data can be extracted from the document during the parsing phase by iterating over the event stream, which is convenient if it is not necessary to manipulate the entire document afterwards and there is no need to hold the entire parsed document in memory.
Events can be one of
Step9: The event-style of processing is more natural for some operations, such as converting XML input to some other format. This technique can be used to convert list of podcasts from the earlier examples from an XML file to a CSV file, so they can be loaded into a spreadsheet or database application.
Step10: Parsing Strings
To work with smaller bits of XML text, especially string literals that might be embedded in the source of a program, use XML() and the string containing the XML to be parsed as the only argument.
Step11: For structured XML that uses the id attribute to identify unique nodes of interest, XMLID() is a convenient way to access the parse results.
XMLID() returns the parsed tree as an Element object, along with a dictionary mapping the id attribute strings to the individual nodes in the tree. | Python Code:
from xml.etree import ElementTree
with open('podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
print(tree)
Explanation: Example xml file:
```
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
<head>
<title>My Podcasts</title>
<dateCreated>Sat, 06 Aug 2016 15:53:26 GMT</dateCreated>
<dateModified>Sat, 06 Aug 2016 15:53:26 GMT</dateModified>
</head>
<body>
<outline text="Non-tech">
<outline
text="99% Invisible" type="rss"
xmlUrl="http://feeds.99percentinvisible.org/99percentinvisible"
htmlUrl="http://99percentinvisible.org" />
</outline>
<outline text="Python">
<outline
text="Talk Python to Me" type="rss"
xmlUrl="https://talkpython.fm/episodes/rss"
htmlUrl="https://talkpython.fm" />
<outline
text="Podcast.__init__" type="rss"
xmlUrl="http://podcastinit.podbean.com/feed/"
htmlUrl="http://podcastinit.com" />
</outline>
</body>
</opml>
```
To parse the file, pass an open file handle to parse()
End of explanation
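As a side note based on the standard library behaviour, parse() also accepts a filename directly, so the explicit file handle above is optional:
from xml.etree import ElementTree

tree = ElementTree.parse('podcasts.opml')
print(tree.getroot().tag)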
from xml.etree import ElementTree
import pprint
with open('podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
for node in tree.iter():
print(node.tag)
Explanation: Traversing the parsed tree
To visit all the children in order, use iter() to create a generator that iterates over the ElementTree instance.
End of explanation
from xml.etree import ElementTree
with open('podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
for node in tree.iter('outline'):
name = node.attrib.get('text')
url = node.attrib.get('xmlUrl')
if name and url:
print(' %s' % name)
print(' %s' % url)
else:
print(name)
Explanation: To print only the group names and feed URLs for the podcasts, leaving out all of the data in the header section, iterate over only the outline nodes and print the text and xmlUrl attributes by looking up the values in the attrib dictionary
End of explanation
from xml.etree import ElementTree
with open('podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
for node in tree.findall('.//outline'):
url = node.attrib.get('xmlUrl')
if url:
print(url)
Explanation: Finding Nodes in a Document
Walking the entire tree like this, searching for relevant nodes, can be error prone. The previous example had to look at each outline node to determine if it was a group (nodes with only a text attribute) or podcast (with both text and xmlUrl). To produce a simple list of the podcast feed URLs, without names or groups, the logic could be simplified using findall() to look for nodes with more descriptive search characteristics.
As a first pass at converting the first version, an XPath argument can be used to look for all outline nodes.
End of explanation
from xml.etree import ElementTree
with open('podcasts.opml', 'rt') as f:
tree = ElementTree.parse(f)
for node in tree.findall('.//outline/outline'):
url = node.attrib.get('xmlUrl')
print(url)
Explanation: It is possible to take advantage of the fact that the outline nodes are only nested two levels deep. Changing the search path to .//outline/outline means the loop will process only the second level of outline nodes.
End of explanation
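A slightly more descriptive search, in the spirit of the discussion above, can filter on attributes directly using ElementTree's [@attrib] predicate (a sketch, not part of the original example):
from xml.etree import ElementTree

tree = ElementTree.parse('podcasts.opml')
# only outline nodes that actually carry an xmlUrl attribute
for node in tree.findall('.//outline[@xmlUrl]'):
    print(node.attrib['xmlUrl'])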
from xml.etree import ElementTree
with open('data.xml', 'rt') as f:
tree = ElementTree.parse(f)
node = tree.find('./with_attributes')
print(node.tag)
for name, value in sorted(node.attrib.items()):
print(' %-4s = "%s"' % (name, value))
Explanation: Parsed Node Attributes
The items returned by findall() and iter() are Element objects, each representing a node in the XML parse tree. Each Element has attributes for accessing data pulled out of the XML. This can be illustrated with a somewhat more contrived example input file, data.xml.
```
<?xml version="1.0" encoding="UTF-8"?>
<top>
<child>Regular text.</child>
<child_with_tail>Regular text.</child_with_tail>"Tail" text.
<with_attributes name="value" foo="bar" />
<entity_expansion attribute="This & That">
That & This
</entity_expansion>
</top>
```
End of explanation
from xml.etree import ElementTree
with open('data.xml', 'rt') as f:
tree = ElementTree.parse(f)
for path in ['./child', './child_with_tail']:
node = tree.find(path)
print(node.tag)
print(' child node text:', node.text)
print(' and tail text :', node.tail)
Explanation: The text content of the nodes is available, along with the tail text, which comes after the end of a close tag.
End of explanation
from xml.etree import ElementTree
with open('data.xml', 'rt') as f:
tree = ElementTree.parse(f)
node = tree.find('entity_expansion')
print(node.tag)
print(' in attribute:', node.attrib['attribute'])
print(' in text :', node.text.strip())
Explanation: XML entity references embedded in the document are converted to the appropriate characters before values are returned.
End of explanation
from xml.etree.ElementTree import iterparse
depth = 0
prefix_width = 8
prefix_dots = '.' * prefix_width
line_template = ''.join([
'{prefix:<0.{prefix_len}}',
'{event:<8}',
'{suffix:<{suffix_len}} ',
'{node.tag:<12} ',
'{node_id}',
])
EVENT_NAMES = ['start', 'end', 'start-ns', 'end-ns']
for (event, node) in iterparse('podcasts.opml', EVENT_NAMES):
if event == 'end':
depth -= 1
prefix_len = depth * 2
print(line_template.format(
prefix=prefix_dots,
prefix_len=prefix_len,
suffix='',
suffix_len=(prefix_width - prefix_len),
node=node,
node_id=id(node),
event=event,
))
if event == 'start':
depth += 1
Explanation: Watching Events While Parsing
The other API for processing XML documents is event-based. The parser generates start events for opening tags and end events for closing tags. Data can be extracted from the document during the parsing phase by iterating over the event stream, which is convenient if it is not necessary to manipulate the entire document afterwards and there is no need to hold the entire parsed document in memory.
Events can be one of:
start
A new tag has been encountered. The closing angle bracket of the tag was processed, but not the contents.
end
The closing angle bracket of a closing tag has been processed. All of the children were already processed.
start-ns
Start a namespace declaration.
end-ns
End a namespace declaration.
End of explanation
import csv
from xml.etree.ElementTree import iterparse
import sys
writer = csv.writer(sys.stdout, quoting=csv.QUOTE_NONNUMERIC)
group_name = ''
parsing = iterparse('podcasts.opml', events=['start'])
for (event, node) in parsing:
if node.tag != 'outline':
# Ignore anything not part of the outline
continue
if not node.attrib.get('xmlUrl'):
# Remember the current group
group_name = node.attrib['text']
else:
# Output a podcast entry
writer.writerow(
(group_name, node.attrib['text'],
node.attrib['xmlUrl'],
node.attrib.get('htmlUrl', ''))
)
Explanation: The event-style of processing is more natural for some operations, such as converting XML input to some other format. This technique can be used to convert the list of podcasts from the earlier examples from an XML file to a CSV file, so they can be loaded into a spreadsheet or database application.
End of explanation
from xml.etree.ElementTree import XML
def show_node(node):
print(node.tag)
if node.text is not None and node.text.strip():
print(' text: "%s"' % node.text)
if node.tail is not None and node.tail.strip():
print(' tail: "%s"' % node.tail)
for name, value in sorted(node.attrib.items()):
print(' %-4s = "%s"' % (name, value))
for child in node:
show_node(child)
parsed = XML('''
<root>
<group>
<child id="a">This is child "a".</child>
<child id="b">This is child "b".</child>
</group>
<group>
<child id="c">This is child "c".</child>
</group>
</root>
''')
print('parsed =', parsed)
for elem in parsed:
show_node(elem)
Explanation: Parsing Strings
To work with smaller bits of XML text, especially string literals that might be embedded in the source of a program, use XML() and the string containing the XML to be parsed as the only argument.
End of explanation
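For reference, the standard library also exposes fromstring(), which behaves the same way as XML() for this purpose:
from xml.etree.ElementTree import fromstring

node = fromstring('<root><child id="a">This is child "a".</child></root>')
print(node.tag)              # root
print(node[0].attrib['id'])  # a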
from xml.etree.ElementTree import XMLID
tree, id_map = XMLID('''
<root>
<group>
<child id="a">This is child "a".</child>
<child id="b">This is child "b".</child>
</group>
<group>
<child id="c">This is child "c".</child>
</group>
</root>
''')
for key, value in sorted(id_map.items()):
print('%s = %s' % (key, value))
Explanation: For structured XML that uses the id attribute to identify unique nodes of interest, XMLID() is a convenient way to access the parse results.
XMLID() returns the parsed tree as an Element object, along with a dictionary mapping the id attribute strings to the individual nodes in the tree.
End of explanation |
1,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is the program for a student Bayesian network
Step1: Add nodes and edges
Step2: In a Bayesian network, each node has an associated CPD (conditional probability distribution).
Step3: To check the consistency of the model and associated CPDs
Step4: if an influence can flow in a trail in a network, it is known as an active trail
Step5: You can query the network as follows
Step6: Direct Causal Influence
Step7: Indirect Causal Influence | Python Code:
from pgmpy.models import BayesianModel
student_model = BayesianModel()
Explanation: This is the program for a student Bayesian network
End of explanation
student_model.add_nodes_from(['difficulty', 'intelligence', 'grade', 'sat', 'letter'])
student_model.nodes()
student_model.add_edges_from([('difficulty', 'grade'), ('intelligence', 'grade'), ('intelligence', 'sat'), ('grade', 'letter')])
student_model.edges()
Explanation: Add nodes and edges
End of explanation
from pgmpy.factors import TabularCPD
#TabularCPD?
cpd_difficulty = TabularCPD('difficulty', 2, [[0.6], [0.4]])
cpd_intelligence = TabularCPD('intelligence', 2, [[0.7], [0.3]])
cpd_sat = TabularCPD('sat', 2, [[0.95, 0.2],
[0.05, 0.8]], evidence=['intelligence'], evidence_card=[2])
cpd_grade = TabularCPD('grade', 3, [[0.3, 0.05, 0.9, 0.5],
[0.4, 0.25, 0.08, 0.3],
[0.3, 0.7, 0.02, 0.2]],
evidence=['intelligence', 'difficulty'], evidence_card=[2, 2])
cpd_letter = TabularCPD('letter', 2, [[0.1, 0.4, 0.99], [0.9, 0.6, 0.01]], evidence=['grade'], evidence_card=[3])
student_model.add_cpds(cpd_difficulty, cpd_intelligence, cpd_sat, cpd_grade, cpd_letter)
student_model.get_cpds()
print(cpd_difficulty) # 0:easy, 1:hard
print(cpd_intelligence) # 0:low, 1:high
print(cpd_grade) # 0:A, 1:B, 2:C
print(cpd_sat) # 0:low, 1:high
print(cpd_letter) # 0:weak, 1:strong
Explanation: In a Bayesian network, each node has an associated CPD (conditional probability distribution).
End of explanation
student_model.check_model()
student_model.get_independencies()
Explanation: To check the consistency of the model and associated CPDs
End of explanation
student_model.is_active_trail('difficulty', 'intelligence')
student_model.is_active_trail('difficulty', 'intelligence',
observed='grade')
Explanation: If an influence can flow along a trail in a network, that trail is known as an active trail
End of explanation
from pgmpy.inference import VariableElimination
student_infer = VariableElimination(student_model)
# marginal prob of grade
probs = student_infer.query(['grade', 'letter'])
print(probs['grade'])
print(probs['letter'])
Explanation: You can query the network as follows: query(variables, evidence=None, elimination_order=None)
variables: list :
list of variables for which you want to compute the probability
evidence: dict :
a dict key, value pair as {var: state_of_var_observed} None if no evidence
elimination_order: list :
order of variable eliminations (if nothing is provided) order is computed automatically
End of explanation
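A small illustration of the evidence argument described above, using the same query pattern that appears below:
# marginal of 'letter' after observing high intelligence (intelligence=1)
prob_letter_smart = student_infer.query(['letter'], {'intelligence': 1})
print(prob_letter_smart['letter'])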
# probs of grades given knowing nothing about course difficulty and intelligence
print(probs['grade'])
# probs of grades knowing course is hard
prob_grade_hard = student_infer.query(['grade'], {'difficulty':1})
print(prob_grade_hard['grade'])
# probs of getting an A knowing course is easy, and intelligence is low
prob_grade_easy_smart = student_infer.query(['grade'], {'difficulty':0, 'intelligence':1})
print(prob_grade_easy_smart['grade'])
Explanation: Direct Causal Influence
End of explanation
# probs of letter knowing nothing
print(probs['letter'])
# probs of letter knowing course is difficult
prob_letter_hard = student_infer.query(['letter'], {'difficulty':1})
print(prob_letter_hard['letter'])
Explanation: Indirect Causal Influence
End of explanation |
1,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Off-Specular simulation
Off-specular simulation is a technique developed to study roughness or micromagnetism at micrometric scale [1]. For the moment BornAgain has only the limited support for off-specular simulation. User feedback is required to continue development.
Off-specular Geometry [1]
The term off-specular scattering is typically ised for experiment geometries where $\mathbf{q}$ is not strictly perpendicular to the sample surface. Following features can be encountered in off-specular scattering experiment
Step1: Solution
Run the line below to see the solution. | Python Code:
%matplotlib inline
# %load offspec_ex.py
import numpy as np
import bornagain as ba
from bornagain import deg, angstrom, nm, kvector_t
def get_sample():
# Defining Materials
material_1 = ba.HomogeneousMaterial("Air", 0.0, 0.0)
material_2 = ba.HomogeneousMaterial("Si", 7.6e-06, 1.7e-07)
material_3 = ba.HomogeneousMaterial("Nb", 2.4e-05, 1.5e-06)
# Defining Layers
layer_1 = ba.Layer(material_1)
layer_2 = ba.Layer(material_2, 3)
layer_3 = ba.Layer(material_3, 5.8)
layer_4 = ba.Layer(material_2)
# Defining Roughness Parameters
layerRoughness_1 = ba.LayerRoughness(0.46, 0.5, 10.0*nm)
# Defining Multilayers
multiLayer_1 = ba.MultiLayer()
# uncomment the line below to add vertical cross correlation length
# multiLayer_1.setCrossCorrLength(200)
multiLayer_1.addLayer(layer_1)
#=================================
# put your code here
multiLayer_1.addLayer(layer_2)
multiLayer_1.addLayer(layer_3)
#==================================
multiLayer_1.addLayerWithTopRoughness(layer_4, layerRoughness_1)
return multiLayer_1
def get_simulation():
simulation = ba.OffSpecSimulation()
simulation.setDetectorParameters(10, -1.0*deg, 1.0*deg, 100, 0.0*deg, 5*deg)
simulation.setDetectorResolutionFunction(ba.ResolutionFunction2DGaussian(0.005*deg, 0.005*deg))
alpha_i_axis = ba.FixedBinAxis("alpha_i", 100, 0.0*deg, 5*deg)
simulation.setBeamParameters(0.154*nm, alpha_i_axis, 0.0*deg)
simulation.setBeamIntensity(1.0e+08)
simulation.getOptions().setIncludeSpecular(True)
return simulation
def run_simulation():
sample = get_sample()
simulation = get_simulation()
simulation.setSample(sample)
simulation.runSimulation()
return simulation.result()
if __name__ == '__main__':
result = run_simulation()
ba.plot_simulation_result(result, intensity_max=10.0)
Explanation: Off-Specular simulation
Off-specular simulation is a technique developed to study roughness or micromagnetism at micrometric scale [1]. For the moment, BornAgain has only limited support for off-specular simulation. User feedback is required to continue development.
Off-specular Geometry [1]
The term off-specular scattering is typically used for experiment geometries where $\mathbf{q}$ is not strictly perpendicular to the sample surface. The following features can be encountered in an off-specular scattering experiment: Yoneda peaks, Bragg sheets, diffuse scattering, magnetic spin-flip scattering, and correlated and uncorrelated roughness [2].
Create an off-specular simulation in BornAgain GUI
Start a new project Welcome view->New project
Go to the Instrument view and add an Offspec instrument.
Set the instrument parameters as follows.
Switch to the Sample view. Create a sample as shown below:
Create 4 layers (from bottom to top):
Si substrate, $\delta=7.6\cdot 10^{-6}$, $\beta=1.7\cdot 10^{-7}$. Assign roughness with Sigma 0.46 nm, Hurst parameter 0.5 and CorrelationLength 100 nm.
Nb layer of thickness 5.8 nm, $\delta=2.4\cdot 10^{-5}$, $\beta=1.5\cdot 10^{-6}$. No roughness.
Si layer of thickness 3 nm. No roughness.
Air layer.
Switch to Simulation view. Set option Include specular peak to Yes.
Run simulation. Vary the intensity scale. You should be able to see the specular line and Yonedas. To see the Bragg sheets we need to increase a number of [Si/Nb] double-layers to at least 10. Let's do it in Python.
Off-specular simulation with BornAgain Python API
Go to Simulation View and click button Export to Python script. Save the script somewhere. The script should look as shown below.
Exercise:
Change the script to add layer_2 and layer_3 10 times. Hint: use for loop, take care of indentations.
Exercise (Advanced)
Add exponentially decreasing roughness to all Si layers (except for the substrate). The RMS roughness of layer $n$ should be calculated as
$$\sigma_n = \sigma_0\cdot e^{-0.01n}$$
where $\sigma_0=0.46$nm
Set the roughness of all the layers to be fully correlated.
End of explanation
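One possible sketch of the exercises above, assuming it replaces the marked block inside get_sample() of the script shown earlier (treat it as a starting point, not the reference solution):
# repeat the Si/Nb double-layer 10 times, giving each Si layer an
# exponentially decreasing roughness sigma_n = 0.46*exp(-0.01*n) nm
sigma_0 = 0.46
for n in range(10):
    roughness_n = ba.LayerRoughness(sigma_0*np.exp(-0.01*n), 0.5, 10.0*nm)
    multiLayer_1.addLayerWithTopRoughness(layer_2, roughness_n)  # Si layer with roughness
    multiLayer_1.addLayer(layer_3)                               # Nb layer, no roughness
# a large vertical cross-correlation length makes the roughness fully correlated
multiLayer_1.setCrossCorrLength(200)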
%load offspec.py
Explanation: Solution
Run the line below to see the solution.
End of explanation |
1,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Did the Hall of Fame voter purge make a difference?
In a recent Jayson Stark article and about lessons in hall of fame voting, he mentions the following three assumptions about the Baseball Hall of fame voters after a significant number of non-active voters were eliminated
Step3: As a matter of fact, this year saw one of the largest increases at 8.2%. Taken alone, this may indicate that something has changed with the removal of so many voters, but when viewed with all the other years, it does not look very exceptional as the values range between -6 to +8%. The average change is an increase by 2% per year, but with a standard deviation much larger than it of 4%. The average change in percentage is either highly random or driven by something other than change in the number of votes. In fact, the change in percentages does not show any strong correlation with the number of voters or the change in number of voters.
Step4: Correlations with Hall of Fame classes
At initial glance, there is not much pattern to the data so pure randomness could be an explanation. However, we can define a few other metrics to take a look at the data and it might give us a better idea of what is going on. The first would be the number of Hall of Famers (hofs) elected in the previous class. The second is defined as the strength of the class as the number of first ballot hofs in that class (For the record, I consider Bonds and Clemons as first ballot hall of famers as the would have been if not for their Performance Enhancing Drug (PED) history). The third is the total number of hofs in a class, but that is uncertain for the most recent classes.
A very strong trend does appears between the average change in the percentage and the strength of an incoming class minus the number of hofs elected the year before. Unsurprisingly, when a strong class comes onto the ballot, they tend to take votes away from other players. Likewise, when a large number of players are elected, they free up votes for other players. A linear relationship of $$s = 0.0299*nhof_{previous} -0.0221\times Strength - 0.0034\times(Total-Strength) - 0.00299$$ gives a very good fit to $\Delta p$ and shows a strong linear correlation indicated by an r-pearson statistic of 0.95.
Step5: Change in Voting Habits
If we use this relationship, we can look at what the expected percentage average change in the votes were for 2016. The expected change based on the existing data (1 First ballot hofs, 4 hofs the previous year, 1 total hof for class of 2016) was an increase of +9.0%. The average increase for 2016? That was +8.2%. So, at least overall, the increase in percentages is exactly what was expected based on a moderate incoming class (if you also assume Trevor Hoffman will eventually be elected the expected change for this year is then 8.7%) and four players entering the Hall the previous year. From this perspective, the voting purge made little difference in how the percentage of votes for a player changed.
Step6: Historically, players with higher vote percentage generally have seen their voting percentages increase. In the figure below, we look at the difference between the change in vote percentage for a given player, $\Delta p$, and the expected average change for all players that year as compared to the player's percentage, p, for the previous year. The 2016 year (red squares) does not appear significantly different than any other years (blue circles). It is just more common that players with low vote percentages tend to have their vote percentages suppressed than players with higher vote percentages. Nonetheless, there is large scatter in the distribution, which for any given player in any given year does not make it very predictive.
Step7: Have voters changed in terms of WAR or PEDs?
If we look at the corrected change in voting percentage as a function of WAR, there does appear to be a stronger correlation between WAR and percentage change this year (red and green squares) than seen last year (blue circles), although some correlation does exist. The three points not falling near the correlation are Barry Bonds and Roger Clemons (PED history for otherwise certain hofs) and Lee Smith (reliever). Going back further years shows a large scatter in terms of WAR and corrected percentage change, and it would be interesting to see how this has changed over all the different years and to see if the strength of this correlation has been increasing. Furthermore, it would be interesting to see how this relates to a players other, more traditional metrics like home runs or wins.
The green circles are players that have been a strongly association with PEDs. Barry Bonds and Roger Clemons are exceptions, but the drop in the percentages for the other three players is in line for the drop for players with similar values of WAR. Along with the average change in voting seen for Bonds and Clemons, it does not look like the behavior for players associated with PEDs is very different than other players.
Step8: Conclusions and other thoughts
The overall average change in vote percentage was almost exactly what was predicted based on the strength of the incoming class and the large number of Hall of Famers elected the previous year. Along with the fact that percentages tend to increase relative to the average change for players with higher percentages, it does not look like there were any major changes to the voter patterns between this year and last year due to the purge of voters.
In terms of players that took PEDs, no major differences are detected in the voting patterns as compared to other players or the previous year.
In terms of WAR, the percentage change for a player does seem to correlate with WAR and possible has become a stronger correlation.
However, it should be noted that this is one year, a relatively small sample size, and that something very different could be occurring here.
Relievers still are an exceptional case with Lee Smith having a very low WAR. His vote percentage did decrease relative to the overall class and it will be interesting to see what happens to the three relieviers (Trevor Hoffman and Billy Wagner along with Lee Smith) next year. If Lee Smith is an example of how the new group of voters view relievers, we would expect to see all of their percentages drop relative to the average change, but it will be interesting as Trevor Hoffman is already very close.
The player with the worst performance though was Nomar Garciaparra with a drop in voting percentage of -12% as compared to the average. He was never associated with PEDs, and this was arguably expected due to being the lowest, second year positional player by WAR on the ballot. On the other hand, the player with the largest increase, Mike Mussina, has the largest WAR of any player outside of Bonds or Clemons.
As a final aside, Jeff Bagwell, Curt Schilling, and Mike Mussina are the only players in the last 20 years with no known associated with PEDs and WAR > 75 to not be elected, so far, to the hall of fame. Along with Phil Neikro and Bert Blyleven (and exlcuding Roger Clemons and Barry Bonds), these five players are the only players with WAR > 75 and not be elected on their first ballot in the last twenty years, whereas 13 other players with WAR > 75 were elected on their first ballot. | Python Code:
#read in the data
def read_votes(infile):
"""Read in the number of votes in each file"""
lines = open(infile).readlines()
hof_votes = {}
for l in lines:
player={}
l = l.split(',')
name = l[1].replace('X-', '').replace(' HOF', '').strip()
player['year'] = l[2]
player['votes'] = float(l[3])
player['p'] = float(l[4][:-1])/100.0
player['war'] = float(l[8])
hof_votes[name] = player
return hof_votes
#calculate the total number of votes in each year
hof={}
n_votes = {}
for i in np.arange(1996, 2017):
hof[i] = read_votes('{}_list.csv'.format(i))
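# estimate the total ballots cast: for any player above 50%, total ~ votes / vote fraction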
k=0
keys = hof[i].keys()
while hof[i][keys[k]]['p']<0.5: k+=1
k = keys[k]
n_votes[i] = int ( hof[i][k]['votes'] / hof[i][k]['p'])
n_years = 2017-1996
def match_years(hof, year1, year2):
"Produce a list of players and the number of votes received between two years"
player_dict={}
for name in hof[year1].keys():
if name in hof[year2].keys():
player_dict[name]=np.array([hof[year1][name]['p'], hof[year2][name]['p']])
return player_dict
end_year = 2017
def number_of_first_year(hof, year):
"Calculate the number of first ballot hall of famers in a class"
first_year = 0
for name in hof[year]:
if hof[year][name]['year']=='1st':
if hof[year][name]['p']>0.75: first_year+= 1
if name in ['Barry Bonds', 'Roger Clemens']: first_year+= 1
return first_year
def number_of_HOF(hof, year):
"Calculte the number of HOF for a year"
first_year = 0
for name in hof[year]:
if hof[year][name]['p']>0.75: first_year+= 1
return first_year
def number_of_drop(hof, year):
"Calculate the number of players dropped in a year"
first_year = 0
for name in hof[year]:
if hof[year][name]['p']<0.05: first_year+= 1
return first_year
def total_number_of_hof(hof, year):
"Total number of hall of famers for a class"
first_year = 0
for name in hof[year]:
if hof[year][name]['year']=='1st':
if hof[year][name]['p']>0.75:
first_year+= 1
if name in ['Barry Bonds', 'Roger Clemens']: first_year+= 1
for y in range(year+1, end_year):
if name in hof[y].keys():
#print year, name, hof[y][name]['p']
if hof[y][name]['p']>0.75:
first_year+= 1
return first_year
def average_change_in_votes(hof, year1, year2):
"""Determine the statistics of the change in votes from one class to another"""
player_dict = match_years(hof, year1, year2)
#print player_dict
change = 0
count = 0
for name in player_dict:
change += player_dict[name][1] - player_dict[name][0]
count += 1
#print count, name, player_dict[name][0], player_dict[name][1], player_dict[name][1] - player_dict[name][0], change
change = change / count
return count, change
def number_of_votes(hof, year):
keys = hof[year].keys()
k=0
while hof[year][keys[k]]['p']<0.5: k+=1
k = keys[k]
return int ( hof[year][k]['votes'] / hof[year][k]['p'])
from astropy.table import Table
data_table = Table(names=('Year','Votes', 'Strength', 'HOF', 'Drop', 'Count', 'Change', 'Total'))
for year in np.arange(1997,2017):
strength = number_of_first_year(hof, year)
nhof = number_of_HOF(hof, year)
nvotes = number_of_votes(hof, year)
ndrop = number_of_drop(hof, year)
total = total_number_of_hof(hof, year)
count, change = average_change_in_votes(hof, year-1, year)
data_table.add_row([year, nvotes, strength, nhof, ndrop, count, change, total])
plt.figure()
plt.plot(data_table['Year'], data_table['Change'], ls='', marker='o')
plt.xlabel('Year', fontsize='x-large')
plt.ylabel('$\Delta p \ (\%)$', fontsize='x-large')
plt.show()
'Mean={} Std={}'.format(data_table['Change'].mean(), data_table['Change'].std())
'Max={} Min={}'.format(data_table['Change'].max(), data_table['Change'].min())
Explanation: Did the Hall of Fame voter purge make a difference?
In a recent Jayson Stark article about lessons in Hall of Fame voting, he mentions the following three assumptions about the Baseball Hall of Fame voters after a significant number of non-active voters were eliminated:
An electorate in which 109 fewer writers cast a vote in this election than in 2015.
An electorate that had a much different perspective on players who shined brightest under the light of new-age metrics.
And an electorate that appeared significantly less judgmental of players shadowed by those pesky performance-enhancing drug clouds.
However, are these last two assumptions true? Did the purge of Hall of Fame voters make a difference? Did the set of Hall of Fame voters who were least active have a different set of values than those who are still voting?
Arbitrarily, I decided to test this against the years 1995-2016, which gives a good 20 elections as well as starting at the year Mike Schmidt was elected to the Hall of Fame (which is utterly arbitrary other than Mike Schmidt being my favorite player when I was young). However, to figure this out, the first question that has to be answered is how the average percentage changes from year to year. This ends up being a little surprising when you just look at the numbers:
End of explanation
stats.pearsonr(data_table['Year'], data_table['Change'])
stats.pearsonr(data_table['Votes'], data_table['Change'])
stats.pearsonr(data_table['Votes'][1:]-data_table['Votes'][:-1], data_table['Change'][1:])
data_table['Year', 'Votes', 'Count', 'Change', 'Strength','Total', 'HOF', 'Drop'].show_in_notebook(display_length=21)
#['Year', 'Count', 'Change', 'Strength', 'HOF', 'Drop']
Explanation: As a matter of fact, this year saw one of the largest increases, at 8.2%. Taken alone, this may indicate that something has changed with the removal of so many voters, but when viewed with all the other years it does not look very exceptional, as the values range between -6% and +8%. The average change is an increase of 2% per year, but with a much larger standard deviation of 4%. The average change in percentage is either highly random or driven by something other than the change in the number of votes. In fact, the change in percentages does not show any strong correlation with the number of voters or the change in the number of voters.
End of explanation
nhof_2 = data_table['Total'][1:]- data_table['Strength'][1:] #number of HOFs in a class after year 1
p = data_table['Change'][1:]
dv = data_table['Votes'][1:] - data_table['Votes'][:-1]
from scipy import linalg as la
aa = np.vstack((data_table['Strength'][1:],nhof_2,data_table['HOF'][:-1], np.ones_like(nhof_2))).T
polycofs = la.lstsq(aa[:-1], p[:-1])[0]
print polycofs
s = aa * polycofs
s = s.sum(axis=1)
s
plt.figure()
plt.plot(data_table['HOF'][:-1]-data_table['Strength'][1:], p, ls='', marker='o')
plt.xlabel('$nhof_{previous} - Strength$', fontsize='x-large')
plt.ylabel('$\Delta p \ (\%)$', fontsize='x-large')
plt.show()
from scipy import stats
print stats.pearsonr(s,p)
Table((data_table['Year'][1:],data_table['HOF'][:-1]-data_table['Strength'][1:],p)).show_in_notebook()
coef = np.polyfit(s,p,1)
np.polyval(coef,0.08)
print s[-1]
print coef
Explanation: Correlations with Hall of Fame classes
At first glance, there is not much pattern to the data, so pure randomness could be an explanation. However, we can define a few other metrics to take a look at the data, and it might give us a better idea of what is going on. The first would be the number of Hall of Famers (hofs) elected in the previous class. The second is defined as the strength of the class: the number of first ballot hofs in that class (for the record, I consider Bonds and Clemens as first ballot hall of famers, as they would have been if not for their Performance Enhancing Drug (PED) history). The third is the total number of hofs in a class, but that is uncertain for the most recent classes.
A very strong trend does appear between the average change in the percentage and the strength of an incoming class minus the number of hofs elected the year before. Unsurprisingly, when a strong class comes onto the ballot, they tend to take votes away from other players. Likewise, when a large number of players are elected, they free up votes for other players. A linear relationship of $$s = 0.0299\times nhof_{previous} - 0.0221\times Strength - 0.0034\times(Total-Strength) - 0.00299$$ gives a very good fit to $\Delta p$ and shows a strong linear correlation indicated by an r-pearson statistic of 0.95.
End of explanation
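A quick arithmetic check of the fitted relation, using the rounded coefficients quoted above and the 2016 inputs discussed next (4 hofs elected the previous year, strength 1, total minus strength assumed 0):
expected_change = 0.0299*4 - 0.0221*1 - 0.0034*0 - 0.00299
print(expected_change)  # ~0.094, i.e. roughly a +9% average increase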
name_list = []
p_list = []
dp_list = []
pp_list = []
year1 = 2015
year2 = year1+1
expect_p = s[year2 - 1998]
print year2, expect_p
Explanation: Change in Voting Habits
If we use this relationship, we can look at what the expected average percentage change in the votes was for 2016. The expected change based on the existing data (1 first ballot hof, 4 hofs the previous year, 1 total hof for the class of 2016) was an increase of +9.0%. The average increase for 2016? That was +8.2%. So, at least overall, the increase in percentages is exactly what was expected based on a moderate incoming class (if you also assume Trevor Hoffman will eventually be elected, the expected change for this year is then 8.7%) and four players entering the Hall the previous year. From this perspective, the voting purge made little difference in how the percentage of votes for a player changed.
End of explanation
plt.figure()
name_list=[]
p_list=[]
pp_list=[]
dp_list=[]
war_list=[]
for year1 in range(1997,2015):
year2 = year1+1
expect_p = s[year2 - 1998]
for name in hof[year1]:
if name in hof[year2].keys():
name_list.append(name)
p_list.append(hof[year1][name]['p'])
dp_list.append(hof[year2][name]['p'] - hof[year1][name]['p'])
pp_list.append((hof[year2][name]['p'] - hof[year1][name]['p'])-expect_p)
war_list.append(hof[year2][name]['war'])
plt.plot(p_list, pp_list, 'bo')
name_list=[]
p_2016_list=[]
pp_2016_list=[]
dp_2016_list=[]
war_2016_list = []
year1=2015
year2 = year1+1
expect_p = s[year2 - 1998]
for name in hof[year1]:
if name in hof[year2].keys():
name_list.append(name)
p_2016_list.append(hof[year1][name]['p'])
dp_2016_list.append(hof[year2][name]['p'] - hof[year1][name]['p'])
pp_2016_list.append((hof[year2][name]['p'] - hof[year1][name]['p'])-expect_p)
war_2016_list.append(hof[year2][name]['war'])
plt.plot(p_2016_list, pp_2016_list, 'rs')
plt.xlabel('p (%)', fontsize='x-large')
plt.ylabel('$\Delta p - s $', fontsize='x-large')
plt.show()
Explanation: Historically, players with higher vote percentage generally have seen their voting percentages increase. In the figure below, we look at the difference between the change in vote percentage for a given player, $\Delta p$, and the expected average change for all players that year as compared to the player's percentage, p, for the previous year. The 2016 year (red squares) does not appear significantly different than any other years (blue circles). It is just more common that players with low vote percentages tend to have their vote percentages suppressed than players with higher vote percentages. Nonetheless, there is large scatter in the distribution, which for any given player in any given year does not make it very predictive.
End of explanation
plt.plot(war_list[-17:], pp_list[-17:], 'bo')
mask = np.zeros(len(war_2016_list), dtype=bool)
for i, name in enumerate(name_list):
if name in ['Sammy Sosa', 'Gary Sheffield', 'Mark McGwire', 'Barry Bonds', 'Roger Clemens']:
mask[i]=True
war = np.array(war_2016_list)
pp = np.array(pp_2016_list)
plt.plot(war, pp, 'rs')
plt.plot(war[mask], pp[mask], 'gs')
plt.xlabel('WAR', fontsize='x-large')
plt.ylabel('$\Delta p - s $', fontsize='x-large')
plt.show()
Table((name_list, p_2016_list, dp_2016_list, pp_2016_list, war_2016_list)).show_in_notebook()
Explanation: Have voters changed in terms of WAR or PEDs?
If we look at the corrected change in voting percentage as a function of WAR, there does appear to be a stronger correlation between WAR and percentage change this year (red and green squares) than seen last year (blue circles), although some correlation does exist. The three points not falling near the correlation are Barry Bonds and Roger Clemons (PED history for otherwise certain hofs) and Lee Smith (reliever). Going back further years shows a large scatter in terms of WAR and corrected percentage change, and it would be interesting to see how this has changed over all the different years and to see if the strength of this correlation has been increasing. Furthermore, it would be interesting to see how this relates to a players other, more traditional metrics like home runs or wins.
The green squares are players that have a strong association with PEDs. Barry Bonds and Roger Clemens are exceptions, but the drop in the percentages for the other three players is in line with the drop for players with similar values of WAR. Along with the average change in voting seen for Bonds and Clemens, it does not look like the behavior of players associated with PEDs is very different from that of other players.
End of explanation
plt.figure()
for year in range(1996,2017):
for name in hof[year].keys():
if hof[year][name]['year']=='1st' :
w = hof[year][name]['war']
p = hof[year][name]['p']
plt.plot([w], [p], 'bo')
if p > 0.75 and w > 75: print name, w, p
plt.show()
Explanation: Conclusions and other thoughts
The overall average change in vote percentage was almost exactly what was predicted based on the strength of the incoming class and the large number of Hall of Famers elected the previous year. Along with the fact that percentages tend to increase relative to the average change for players with higher percentages, it does not look like there were any major changes to the voter patterns between this year and last year due to the purge of voters.
In terms of players that took PEDs, no major differences are detected in the voting patterns as compared to other players or the previous year.
In terms of WAR, the percentage change for a player does seem to correlate with WAR, and this correlation has possibly become stronger.
However, it should be noted that this is one year, a relatively small sample size, and that something very different could be occurring here.
Relievers still are an exceptional case with Lee Smith having a very low WAR. His vote percentage did decrease relative to the overall class and it will be interesting to see what happens to the three relieviers (Trevor Hoffman and Billy Wagner along with Lee Smith) next year. If Lee Smith is an example of how the new group of voters view relievers, we would expect to see all of their percentages drop relative to the average change, but it will be interesting as Trevor Hoffman is already very close.
The player with the worst performance though was Nomar Garciaparra with a drop in voting percentage of -12% as compared to the average. He was never associated with PEDs, and this was arguably expected due to being the lowest, second year positional player by WAR on the ballot. On the other hand, the player with the largest increase, Mike Mussina, has the largest WAR of any player outside of Bonds or Clemons.
As a final aside, Jeff Bagwell, Curt Schilling, and Mike Mussina are the only players in the last 20 years with no known association with PEDs and WAR > 75 not to be elected, so far, to the Hall of Fame. Along with Phil Niekro and Bert Blyleven (and excluding Roger Clemens and Barry Bonds), these five players are the only players with WAR > 75 not to be elected on their first ballot in the last twenty years, whereas 13 other players with WAR > 75 were elected on their first ballot.
End of explanation |
1,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Bootcamp Final Project
Step1: II. Contestants' Age
We began by looking at the average age of contestants at different stages of the competition
Step2: The chart above shows the average age of female contestants for each year. While the average age of contestants is slightly older in recent years compared to early seasons of the show, the average age has remained fairly consistent (within 1 year) over the last decade.
Next, we'll explore how age relates to the week in which the contestants left the competition
Step3: Looks like we have more cleaning to do. We'll use the following code to make our x-axis more consistent
Step4: Here, we can see that ages 26 and 27 dominate the elimination age for the first few weeks. This is interesting because it might suggest that the bachelor is interested in a polarized age demographic at first. He seems to be ruling out much of the middle ground. That said, it could also be the case that the bachelor is keeping only the contestants in the polarized age demographic, but that their average ages are 26 and 27.
In the middle weeks, the younger contestants seem to be targeted more for elimination. We think a major takeaway here is that the Bachelor may be becoming more serious about the process as he is moving along, and thus is eliminating more immature contestants (contestants not yet ready for marriage).
We'll now look at the same information using the male contestant dataframe (Sheet 3)
Step5: There's not as much rhyme or reason to this chart; however, we can see that the Bachelorette seems to keep around an older and younger male contestant towards the end of the show. Also, it appears that the winners tend to be on the younger side.
Next, we will compare the data for male and female contestants
Step6: The biggest takeaway here is that female contestants are, in general, far younger than male contestants. Other than the average age being significantly different, the two charts show surprisingly similar patterns. The fluctuations are oddly identical. There is most likely some sort of social psychology trend at play here.
III. Proposals and Marriages
We will now look at the end results of both the Bachelor and the Bachelorette. How many seasons result in a proposal? Are those engagements likely to result in marriage or in a break-up after the show ends? Is there any difference between bachelor and bachelorette seasons?
Step7: The key point gathered here is that the men (or bachelors) are far less likely to propose at the end of a season. We think that much of this has to do with their overall intent when beginning the process of applying for the show. Perhaps the women genuinely want to get engaged and the men seem more keen on becoming a TV personality. There is much discussion during the show about whether or not contestants are there for the "right reasons". Given the chart above, we might assume that more women are there for the right reasons than men, though it of course could be the case that the bachelors did not make a connection with their contestants with greater frequency than the bachelorettes with their contestants.
Step8: This chart is pretty illuminating. It illustrates that the Bachelorette women are far more successful in choosing a partner. Of the 9 proposals on the Bachelorette, 3 resulted in marriage. Of the 10 proposals on the Bachelor, only 2 have resulted in marriage. Again, this could be related to the fact that the women who enter the show are far more dedicated to finding love and a life partner.
IV. Occupations
Finally, we'll take a look at the occupations of female and male contestants
Step9: Next, we'll do the same for the female contestants | Python Code:
#We will begin by importing several packages to use for our analysis:
import sys
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import datetime as dt
import numpy as np
%matplotlib inline
# Check versions
print('Python version: ', sys.version)
print('Pandas version: ', pd.__version__)
print('Matplotlib version: ', mpl.__version__)
print('Today: ', dt.date.today())
# Import Bachelor and Bachelorette datasets
url1 ='https://github.com/NYUDataBootcamp'
url2 = '/Materials/blob/master/Data/Bachelor_Data.xlsx?raw=true'
url = url1+url2
df = pd.read_excel(url, sheetname = 0) # Sheet 1: Bachelors/Bachelorettes
df2 = pd.read_excel(url, sheetname = 1) # Sheet 2: Female contestants
df3 = pd.read_excel(url, sheetname = 2) # Sheet 3: Male contestants
# Check import for each Sheet
df.shape
df2.shape
df3.shape
df.dtypes # 'Wiki Data' refers to whether or not there is information about the season available on Wikipedia.
df2.dtypes
df3.dtypes
df.head(2)
df2.head(2)
df3.head(2)
Explanation: Data Bootcamp Final Project: ABC's The Bachelor
Laura Capucilli, Michael Trentham, and Carter Stone
Since its premiere in 2002, ABC’s The Bachelor franchise has invited audiences across the U.S. to watch as single, young men and women attempt to find true love among a pool of eligible bachelors and bachelorettes, all on national TV and within a timeframe of a few months. The show begins with a single bachelor / bachelorette and typically up to 30 contestants. Each week, the bachelor / bachelorette narrows the pool of potential future partners through an elimination round, the Rose Ceremony. Ultimately, the season is expected to end in a proposal, the recipient of which remains a toss up between two men / women until the finale.
Given the variance in the outcome of relationships that began on the show, our final project analyzes data from nearly 30 seasons of the show with several questions in mind: Are there commonalities or differences among contestants, bachelors, and bachelorettes? Are there patterns that determine the ultimate success or failure of a relationship?
I. Setting Up
End of explanation
# Clean up and shape Sheet 2 (Female Contestants dataframe)
df2_mean = df2[['Year','Age']]
df2_mean = df2_mean.groupby('Year')
df2_mean = df2_mean.mean()
df2_mean.plot(linewidth=3, color='black', title= 'Average Age by Year (Women)')
Explanation: II. Contestants' Age
We began by looking at the average age of contestants at different stages of the competition:
End of explanation
# Shaping the data further
df2_AgeWeekF = df2[['Age', 'Eliminated']]
df2_AgeWeekF = df2_AgeWeekF.groupby('Eliminated')
df2_AgeWeekF = df2_AgeWeekF.mean()
# Plotting our data
df2_AgeWeekF.plot(kind='bar', color = 'deeppink', title="Average Age by Elimination Week (Women)")
Explanation: The chart above shows the average age of female contestants for each year. While the average age of contestants is slightly older in recent years compared to early seasons of the show, the average age has remained fairly consistent (within 1 year) over the last decade.
Next, we'll explore how age relates to the week in which the contestants left the competition:
End of explanation
df2_clean = df2
df2_clean['Eliminated'] = df2_clean['Eliminated'].str.replace('Withdrew in episode', 'Week')
df2_clean['Eliminated'] = df2_clean['Eliminated'].str.replace('Eliminated in episode', 'Week')
df2_clean['Eliminated'] = df2_clean['Eliminated'].str.replace('Quit in episode', 'Week')
df2_clean['Eliminated'] = df2_clean['Eliminated'].str.replace('Left in episode', 'Week')
df2_clean['Eliminated'] = df2_clean['Eliminated'].str.replace('Disqualified in episode', 'Week')
df2_clean['Eliminated'] = df2_clean['Eliminated'].str.replace('Removed in episode', 'Week')
df2_clean['Eliminated'] = df2_clean['Eliminated'].str.replace('Quit in epiosde', 'Week')
df2_clean['Eliminated'] = df2_clean['Eliminated'].str.replace('Runner-up', 'Week Runner-Up')
AgeWeekF = df2_clean[['Age', 'Eliminated']]
AgeWeekF = AgeWeekF.groupby('Eliminated')
AgeWeekF = AgeWeekF.mean()
AgeWeekF.plot(kind='bar', ylim=[24,32], color = 'deeppink', title="Average Age by Elimination Week (Women)")
Explanation: Looks like we have more cleaning to do. We'll use the following code to make our x-axis more consistent:
End of explanation
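As an aside, the same normalization can be done more compactly. The following is only a sketch (assuming the same df2 loaded above): it collapses every "<verb> in episode N" variant with a single regular-expression replacement instead of chaining str.replace calls, and also catches the misspelled "epiosde" present in the raw data.
# Sketch: normalize the 'Eliminated' labels with one regex instead of many replaces.
df2_alt = df2.copy()
df2_alt['Eliminated'] = (
    df2_alt['Eliminated']
    .str.replace(r'^(Eliminated|Withdrew|Quit|Left|Disqualified|Removed) in epi\w+de',
                 'Week', regex=True)
    .str.replace('Runner-up', 'Week Runner-Up')
)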
df3_clean = df3.set_index('Season')
df3_clean['Eliminated']=df3_clean['Eliminated'].str.replace('Episode', 'Week')
df3_clean['Eliminated']=df3_clean['Eliminated'].str.replace('Eliminated in episode', 'Week')
df3_clean['Eliminated']=df3_clean['Eliminated'].str.replace('Removed in episode', 'Week')
df3_clean['Eliminated']=df3_clean['Eliminated'].str.replace('Quit in episode', 'Week')
df3_clean['Eliminated']=df3_clean['Eliminated'].str.replace('Withdrew in episode', 'Week')
df3_clean['Eliminated']=df3_clean['Eliminated'].str.replace('Runner-up', 'Week Runner-up')
AgeWeekM = df3_clean[['Age', 'Eliminated']]
AgeWeekM = AgeWeekM.groupby('Eliminated')
AgeWeekM = AgeWeekM.mean()
AgeWeekM.plot(linewidth=3, color = 'black', title='Average Age by Year (Men)')
AgeWeekM.plot(kind='bar', ylim=[24,32], color = 'dodgerblue', title='Average Age by Week Eliminated (Men)')
Explanation: Here, we can see that ages 26 and 27 dominate the elimination age for the first few weeks. This is interesting because it might suggest that the bachelor is interested in a polarized age demographic at first. He seems to be ruling out much of the middle ground. That said, it could also be the case that the bachelor is keeping only the contestants in the polarized age demographic, but that their average ages are 26 and 27.
In the middle weeks, the younger contestants seem to be targeted more for elimination. We think a major takeaway here is that the Bachelor may be becoming more serious about the process as he is moving along, and thus is eliminating more immature contestants (contestants not yet ready for marriage).
We'll now look at the same information using the male contestant dataframe (Sheet 3):
End of explanation
fig, ax = plt.subplots(nrows = 2, ncols = 1, sharex = True, sharey = True)
AgeWeekM.plot(kind='bar', ax=ax[1], ylim=(24,32), color = 'dodgerblue', title = 'Average Age by Week Eliminated (Men)')
AgeWeekF.plot(kind='bar', ax=ax[0], color ='deeppink', title='Average Age by Week Eliminated (Women)')
fig, ax = plt.subplots()
AgeWeekM.plot(ax=ax, kind='bar', ylim=[24,32], color ='dodgerblue', title='Average Age by Week Eliminated (Men + Women)')
AgeWeekF.plot(ax=ax, kind='bar', ylim=[24,32], color ='deeppink')
Explanation: There's not as much rhyme or reason to this chart; however, we can see that the Bachelorette seems to keep around an older and younger male contestant towards the end of the show. Also, it appears that the winners tend to be on the younger side.
Next, we will compare the data for male and female contestants:
End of explanation
df_proposal = df[['Proposal', 'Show']]
df_proposal.head(2)
df_proposal = df_proposal.groupby(['Show', 'Proposal']).size()
df_proposal = df_proposal.unstack()
df_proposal.plot(kind='barh', color = ['silver', 'limegreen'], title='Did the season result in a proposal?')
Explanation: The biggest takeaway here is that female contestants are, in general, far younger than male contestants. Other than the average age being significantly different, the two charts show surprisingly similar patterns. The fluctuations are oddly identical. There is most likely some sort of social psychology trend at play here.
III. Proposals and Marriages
We will now look at the end results of both the Bachelor and the Bachelorette. How many seasons result in a proposal? Are those engagements likely to result in marriage or in a break-up after the show ends? Is there any difference between bachelor and bachelorette seasons?
End of explanation
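To complement the raw counts, a small optional sketch (assuming the df_proposal table built above) expresses the same information as the share of seasons ending in a proposal for each show:
# Sketch: convert the proposal counts into row-wise percentages per show.
proposal_rate = df_proposal.div(df_proposal.sum(axis=1), axis=0) * 100
print(proposal_rate.round(1))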
df_marriage = df[['Success', 'Show']]
df_marriage = df_marriage.groupby(['Show', 'Success']).size()
df_marriage = df_marriage.unstack()
df_marriage.plot(kind='barh', color=['silver', 'plum'], title='Did the season result in marriage?')
Explanation: The key point gathered here is that the men (or bachelors) are far less likely to propose at the end of a season. We think that much of this has to do with their overall intent when beginning the process of applying for the show. Perhaps the women genuinely want to get engaged and the men seem more keen on becoming a TV personality. There is much discussion during the show about whether or not contestants are there for the "right reasons". Given the chart above, we might assume that more women are there for the right reasons than men, though it of course could be the case that the bachelors did not make a connection with their contestants with greater frequency than the bachelorettes with their contestants.
End of explanation
OccM = df3[['Occupation']]
OccM = OccM.groupby(['Occupation']).size()
OccM = pd.DataFrame(OccM)
OccM.columns = ['Count']
OccM = OccM[OccM.Count !=1]
OccM.plot(kind ='bar', color ='dodgerblue', title='Frequency of Occupations, Male Contestants (>1)')
Explanation: This chart is pretty illuminating. It illustrates that the Bachelorette women are far more successful in choosing a partner. Of the 9 proposals on the Bachelorette, 3 resulted in marriage. Of the 10 proposals on the Bachelor, only 2 have resulted in marriage. Again, this could be related to the fact that the women who enter the show are far more dedicated to finding love and a life partner.
IV. Occupations
Finally, we'll take a look at the occupations of female and male contestants:
End of explanation
OccF = df2[['Occupation']]
OccF = OccF.groupby(['Occupation']).size()
OccF = pd.DataFrame(OccF)
OccF.columns = ['Count']
OccF = OccF[OccF.Count !=1]
OccF.plot(kind ='bar', color = 'deeppink', title='Frequency of Occupations, Female Contestants (>1)')
OccM.plot(figsize=(11,11), kind='pie', subplots=True, legend=False, title='Frequency of Occupations, Male Contestants (>1)')
OccF.plot(figsize=(11,11), kind='pie', subplots=True, legend=False, title='Frequency of Occupations, Female Contestants (>1)')
Explanation: Next, we'll do the same for the female contestants
End of explanation |
1,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LORIS API Tour 2/2
This tutorial contains basic examples in Python to demonstrate how to interact with the API.
To run this tutorial, click on Runtime -> Run all. This tutorial is also available as a Google colab notebook so you can run it directly from your browser. To access it, click on the button below
Step1: About the data in demo.loris.ca
The demo instance of LORIS contains the raisinbread dataset, which is used
by LORIS developers. For more information about raisinbread, click here
Exercise 1. Login
This is a POST request to the /login endpoint that requires 2 parameters
Step2: Store the token in a variable for later
Step8: The requests thoughout this tutorial are very similar, only the URL changes. We will define 5 functions that will make the tutorial easier to read.
For more information on HTTP requests and their differences, you can refer to https
Step9: Exercice 2 - Projects
The endpoints in this section are used to get information on all candidates, to get information on a specific candidate or to add a new candidate.
https
Step10: Exercise 2.2 Get a specific project
The endpoint /projects/{project} can be used to obtain information on a specific project contained in the database.
Step11: Exercise 2.3 Get all the candidates in a specific project
The endpoint /projects/{project}/candidates can be used to obtain information on all the candidates for a specific project.
Step12: Exercise 2.4 Get all the images in a specific project
The endpoint /projects/{project}/images can be used to obtain information on all the images that are used in a project.
Step13: Find recent images
Step14: Exercise 2.5 Get all the instruments in a specific project
The endpoint /projects/{project}/instruments can be used to obtain information on all the instruments that are used in a project.
Step15: Exercise 2.6 Get a specific instruments in a specific project
The endpoint /projects/{project}/instruments/{instrument} can be used to obtain information on a specific instrument that is used in a specific project.
Step16: Exercise 2.7 Get all the electrophysiology recordings in a specific project
The endpoint /projects/{project}/images can be used to obtain information on all the images that are used in a project.
Step17: Exercice 3 - Candidates
The endpoints in this section are used to get information on all candidates, to get information on a specific candidate or to add a new candidate.
https
Step18: Exercise 3.2 Create a candidate
Send a POST request to /candidates with a payload containing an object with a candidate property
json
"Candidate"
Step19: GET request to verify the new candidate is actually created
Step20: Exercise 3.3 Get a specific candidate
This is a GET request to /candidates/{candid}
Step21: Exercice 4 - Visits
The endpoints in this section are used to get information on all candidates, to get information on a specific candidate or to add a new candidate.
https
Step22: Exercise 4.2 Add a timepoint for a given candidate
PUT request to /candidates/candid/{visit}
json
{
"Meta"
Step23: Exercise 4.3 Get QC/imaging of a candidate visit
This is a GET request to /candidates/{candid}/{visit}
Step24: Exercise 4.4 Change QC/imaging of a candidate visit
PUT request to /candidates/{candid}/{visit}/ qc/imaging
json
{
"Meta"
Step25: Exercice 5
Step26: Exercise 5.2 GET all instruments for a candidate
Step27: Exercise 5.3 Input instrument data for a candidate
GET, PUT or PATCH request to /candidates/{candid}/{visit}/instruments/{instrument}
https
Step29: 5.3.2 PATCH request
Step31: 5.3.3 PUT request
Step32: Exercise 5.4 Input instrument flags for a candidate
PUT or PATCH request to /candidates/{candid}/{visit}/instruments/{instrument}
https
Step33: 5.4.2 PUT request containing all the fields
Step34: 5.4.3 PATCH request containing some of the fields
Step35: GET request to verify the Administration and Data_entry fields have been updated successfully
Step36: Exercise 5.5 Input instrument DDE for a candidate
GET, PUT or PATCH request to /candidates/{candid}/{visit}/instruments/{instrument}
https
Step37: 5.5.2 PATCH request containing all the fields
Step38: 5.5.3 PUT request containing a single field to modify
Step39: Exercise 5.6 Input instrument DDE flags for a candidate
PUT or PATCH request to /candidates/{candid}/{visit}/instruments/{instrument}
https
Step40: 5.6.2 PATCH request
Step41: 5.6.3 PUT request
Step42: Exercice 6
Step43: Exercise 6.1 Find all images of a candidate for a visit
Step44: Exercise 6.2 Download all minc files of a candidate
Step45: Exercise 6.3 GET QC data for a MINC image file
Step46: Exercise 6.4 PUT QC data for a MINC image file
PUT request to /candidates/$CandID/$VisitLabel/images/$imagename/qc
https
Step47: Exercise 6.5 GET Image formats
Exercise 6.5.1 GET Image Brainbrowser format
Step48: Exercise 6.5.2 GET Image Raw format
Step49: Exercise 6.5.3 GET Image Thumbnail format
Step50: Exercise 6.6 GET Image headers
Exercise 6.6.1 GET Image headers
Step51: Exercise 6.6.2 GET Image headers full
Step52: Exercise 6.6.3 GET Image headers headername
Step53: Exercice 7
Step54: Exercise 7.1 Find all recordings of a candidate for a visit
Step55: Exercise 7.2 Download electrophysiology recording (edf) files
Step56: Exercise 7.3 Get electrophysiology recording channels data
Step57: Exercise 7.4 Get electrophysiology recording channels meta data
Step58: Exercise 7.5 Get electrophysiology recording electrodes data
Step59: Exercise 7.6 Get electrophysiology recording electrodes meta data
Step60: Exercise 7.7 Get electrophysiology recording events data
Step61: Exercise 7.8 Get electrophysiology recording events meta data
Step62: Exercise 8 - Dicoms
Step63: Exercise 8.1 GET a list of Dicom files for a candidate
Step64: Exercise 8.2 Download a Dicom tar file | Python Code:
import getpass # For input prompt hide what is entered
import json # Provides convenient functions to handle json objects
import re # For regular expression
import requests # To handle http requests
import warnings # To ignore warnings
# Because the ssl certificates are unverified, warnings are thrown at every
# HTTPS request. The following command will prevent warning messages to be
# printed at every HTTPS request.
warnings.simplefilter('ignore')
baseurl = 'https://demo.loris.ca/api/v0.0.3'
def prettyPrint(string):
print(json.dumps(string, indent=2, sort_keys=True))
Explanation: LORIS API Tour 2/2
This tutorial contains basic examples in Python to demonstrate how to interact with the API.
To run this tutorial, click on Runtime -> Run all. This tutorial is also available as a Google colab notebook so you can run it directly from your browser. To access it, click on the button below: <a href="https://colab.research.google.com/github/aces/Loris/blob/main/docs/notebooks/LORIS_API_Part2_Python-script.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Setup
End of explanation
payload = {
'username': input('username: '),
'password': getpass.getpass('password: ')
}
response = requests.post(
url = '/'.join([baseurl, 'login']),
json = payload,
verify = False
)
text = response.content.decode('ascii')
data = json.loads(text)
prettyPrint(data)
Explanation: About the data in demo.loris.ca
The demo instance of LORIS contains the raisinbread dataset, which is used
by LORIS developers. For more information about raisinbread, click here
Exercise 1. Login
This is a POST request to the /login endpoint that requires 2 parameters: username and password
The expected response is a json string that contains a token property.
https://github.com/aces/Loris/blob/main/modules/api/docs/LorisRESTAPI.md
End of explanation
token = data['token']
Explanation: Store the token in a variable for later
End of explanation
def GET(url):
    """
    Function to send an HTTP GET request
    Parameters:
      url : url where the GET request is sent
    Returns: the decoded JSON response
    """
    response = json.loads(requests.get(
        url = url,
        verify = False,
        headers = {'Authorization': 'Bearer %s' % token}
    ).content.decode('ascii'))
    return response
def GETFile(url):
    """
    Function to send an HTTP GET request to download a file
    Parameters:
      url : url where the GET request is sent
    Returns: the raw HTTP response (the downloaded file is written to disk)
    """
    # Derive the local filename from the last segment of the URL instead of
    # relying on a global `file` variable defined by an enclosing loop.
    filename = url.split('/')[-1]
    response = requests.get(
        url = url,
        verify = False,
        headers = {'Authorization': 'Bearer %s' % token}
    )
    with open(filename, "w+b") as f:
        f.write(bytes(response.content))
    return response
def PUT(url, json_input):
    """
    Function to send an HTTP PUT request
    Parameters:
      url : url where the PUT request is sent
      json_input : json object sent as the body of the PUT request
    Returns: the raw HTTP response
    """
    response = requests.put(
        url = url,
        json = json_input,
        verify = False,
        headers = {'Authorization': 'Bearer %s' % token}
    )
    return response
def PATCH(url, json_input):
    """
    Function to send an HTTP PATCH request
    Parameters:
      url : url where the PATCH request is sent
      json_input : json object sent as the body of the PATCH request
    Returns: the raw HTTP response
    """
    response = requests.patch(
        url = url,
        json = json_input,
        verify = False,
        headers = {'Authorization': 'Bearer %s' % token}
    )
    return response
def POST(url, json_input):
    """
    Function to send an HTTP POST request
    Parameters:
      url : url where the POST request is sent
      json_input : json object sent as the body of the POST request
    Returns: the raw HTTP response
    """
    response = requests.post(
        url = url,
        json = json_input,
        verify = False,
        headers = {'Authorization': 'Bearer %s' % token}
    )
    return response
Explanation: The requests throughout this tutorial are very similar, only the URL changes. We will define 5 functions that will make the tutorial easier to read.
For more information on HTTP requests and their differences, you can refer to https://github.com/aces/Loris/blob/main/modules/api/docs/LorisRESTAPI.md#10-overview
End of explanation
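Note that none of these helpers inspect the HTTP status code, so a failed request silently returns whatever error payload the server sent. A minimal, optional sketch of a stricter variant (assuming the same token variable and the requests library already imported) is:
def GET_checked(url):
    # Like GET(), but raise an exception if the server returns an HTTP error status.
    response = requests.get(
        url=url,
        verify=False,
        headers={'Authorization': 'Bearer %s' % token}
    )
    response.raise_for_status()
    return response.json()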
# Save the project names to test other endpoints later
url = '/'.join([baseurl, 'projects'])
projects = GET(url)
projectNames = list(projects['Projects'].keys())
prettyPrint(projects)
Explanation: Exercise 2 - Projects
The endpoints in this section are used to get information on the projects defined in the database and on the candidates, images, instruments, and recordings attached to each project.
https://github.com/aces/Loris/blob/master/modules/api/docs/LorisRESTAPI.md#20-project-api
Exercise 2.1 Get all projects
The endpoint /projects can be used to obtain information on all the projects contained in the database.
End of explanation
url = '/'.join([baseurl, 'projects', projectNames[0]])
project = GET(url)
prettyPrint(project)
Explanation: Exercise 2.2 Get a specific project
The endpoint /projects/{project} can be used to obtain information on a specific project contained in the database.
End of explanation
url = '/'.join([baseurl, 'projects', projectNames[0], 'candidates'])
projectCandidates = GET(url)
prettyPrint(projectCandidates)
Explanation: Exercise 2.3 Get all the candidates in a specific project
The endpoint /projects/{project}/candidates can be used to obtain information on all the candidates for a specific project.
End of explanation
url = '/'.join([baseurl, 'projects', projectNames[0], 'images'])
projectImages = GET(url)
prettyPrint(projectImages)
Explanation: Exercise 2.4 Get all the images in a specific project
The endpoint /projects/{project}/images can be used to obtain information on all the images that are used in a project.
End of explanation
url = '/'.join([baseurl, 'projects', projectNames[0],
'images?since=2018-12-13T10:20:18-05:00'])
projectRecentImages = GET(url)
prettyPrint(projectRecentImages)
Explanation: Find recent images
End of explanation
url = '/'.join([baseurl, 'projects', projectNames[0], 'instruments'])
projectInstruments = GET(url)
prettyPrint(projectInstruments)
Explanation: Exercise 2.5 Get all the instruments in a specific project
The endpoint /projects/{project}/instruments can be used to obtain information on all the instruments that are used in a project.
End of explanation
singleInstrument = list(projectInstruments['Instruments'].keys())[1]
url = '/'.join([baseurl, 'projects', projectNames[0],
'instruments', singleInstrument])
projectSingleInstrument = GET(url)
prettyPrint(projectSingleInstrument)
Explanation: Exercise 2.6 Get a specific instrument in a specific project
The endpoint /projects/{project}/instruments/{instrument} can be used to obtain information on a specific instrument that is used in a specific project.
End of explanation
url = '/'.join([baseurl, 'projects', projectNames[0], 'recordings'])
projectRecordings = GET(url)
prettyPrint(projectRecordings)
Explanation: Exercise 2.7 Get all the electrophysiology recordings in a specific project
The endpoint /projects/{project}/recordings can be used to obtain information on all the electrophysiology recordings collected in a project.
End of explanation
url = '/'.join([baseurl, 'candidates'])
allCandidates = GET(url)
prettyPrint(allCandidates)
Explanation: Exercise 3 - Candidates
The endpoints in this section are used to get information on all candidates, to get information on a specific candidate or to add a new candidate.
https://github.com/aces/Loris/blob/master/modules/api/docs/LorisRESTAPI.md#30-candidate-api
Exercise 3.1 Get all candidates
The endpoint /candidates can be used to obtain information on all the candidates contained in the database.
End of explanation
# Keep some variables for the next examples
projectname = list(projects['Projects'].keys())[0]
sitename = allCandidates['Candidates'][0]['Site']
json_data = {
'Candidate' : {
'Project' : projectname,
'DoB' : "2015-09-10",
'EDC' : "2015-09-10", #Optional
'Sex' : "Female",
'Site' : sitename,
}
}
url = '/'.join([baseurl, 'candidates'])
response = POST(url, json_data)
print('POST response status code:', response.status_code)
Explanation: Exercise 3.2 Create a candidate
Send a POST request to /candidates with a payload containing an object with a candidate property
json
"Candidate" : {
"Project" : ProjectName,
"PSCID" : PSCID, # only if config is set to prompt
"EDC" : "YYYY-MM-DD", # optional
"DoB" : "YYYY-MM-DD",
"Sex" : "Male|Female",
"Site" : SiteName,
}
End of explanation
newCandidate = GET('/'.join([baseurl, 'candidates', json.loads(response.content)['CandID']]))
prettyPrint(newCandidate)
Explanation: GET request to verify the new candidate is actually created
End of explanation
candid = allCandidates['Candidates'][0]['CandID']
singleCandidate = GET('/'.join([baseurl, 'candidates', candid]))
prettyPrint(singleCandidate)
# keep a visit for the next examples
candidateVisit = singleCandidate['Visits'][0]
Explanation: Exercise 3.3 Get a specific candidate
This is a GET request to /candidates/{candid}
End of explanation
url = '/'.join([baseurl, 'candidates', candid, candidateVisit])
candidateVisits = GET(url)
prettyPrint(candidateVisits)
Explanation: Exercise 4 - Visits
The endpoints in this section are used to get information on a candidate's visits and their imaging QC data, or to add a new visit (timepoint) for a candidate.
https://github.com/aces/Loris/blob/master/modules/api/docs/LorisRESTAPI.md#40-imaging-data
Exercise 4.1 Get all visits for a specific candidate
This is a GET request to /candidates/{candid}/{visit}
End of explanation
json_data = {
"Battery" : 'Stale',
"Site" : 'Data Coordinating Center',
"CandID" : '400266',
"Visit" : 'V3',
"Project": 'Pumpernickel'
}
url = '/'.join([baseurl, 'candidates', '400266', 'V3'])
response = PUT(url, json_data)
print('PUT response status code:', response.status_code)
editedVisit = GET(url)
prettyPrint(editedVisit)
Explanation: Exercise 4.2 Add a timepoint for a given candidate
PUT request to /candidates/candid/{visit}
json
{
"Meta" : {
"Battery" : "Fresh",
"CandID" : candid,
"Project" : "Pumpernickel",
"Visit" : visit,
"Battery": "NameOfSubproject"
"Stages": {
"Visit": {
"Date": "2017-03-26",
}
}
}
All VisitLabels for a given candidates can be found using /candidates/\$candid/
Every possible visit_labels for a project can be found using /projects/\$projectname/visits. If a {visit} is in /projects/{project}/visits but not in /candidates/{candid}/ , it can be added with a PUT request to /candidates/\$candid/\$visit_label
Battery (NameOfSubproject) must be guessed...
End of explanation
url = '/'.join([baseurl, 'candidates', candid, candidateVisit, 'qc', 'imaging'])
singleCandidateQcImaging = GET(url)
prettyPrint(singleCandidateQcImaging)
Explanation: Exercise 4.3 Get QC/imaging of a candidate visit
This is a GET request to /candidates/{candid}/{visit}
End of explanation
# In the previous example, "SessionQC" was set to true.
# We'll change it with a PUT request
json_data = {
'Meta' : {
'CandID' : candid,
'Visit' : candidateVisit,
},
'SessionQC': "",
'Pending': False
}
url = '/'.join([baseurl, 'candidates', candid, candidateVisit, 'qc', 'imaging'])
response = PUT(url, json_data)
print('PUT response status code:', response.status_code)
# Pending should now be false
visitqcImaging = GET(url)
prettyPrint(visitqcImaging)
Explanation: Exercise 4.4 Change QC/imaging of a candidate visit
PUT request to /candidates/{candid}/{visit}/ qc/imaging
json
{
"Meta" : {
"CandID" : candid,
"Visit" : visit,
"Battery": "NameOfSubproject"
}
All VisitLabels for a given candidates can be found using /candidates/{candid}/
Every possible {visit} labels for a project can be found using /projects/{project}/visits. If a {visit} label is in /projects/{project}/visits but not in /candidates/{candid}/ , it can be added using a PUT request to /candidates/{candid}/{visit}
Battery (NameOfSubproject) must be guessed...
End of explanation
request_count = 0
for candidate in allCandidates['Candidates'][:4]:
candid = candidate['CandID']
request_count += 1
visit_labels = GET('/'.join([baseurl, 'candidates', candid]))['Visits']
for visit_label in visit_labels:
request_count += 1
instruments = GET('/'.join([baseurl,
'candidates',
candid,
visit_label,
'instruments']))['Instruments']
for instrument in instruments:
request_count += 1
instr = GET('/'.join([baseurl,
'candidates',
candid,
visit_label,
'instruments',
instrument]))
print(json.dumps(instr, indent=2, sort_keys=True))
print(request_count)
# Get an example of candid and visit_label from the last
# sample visited in the loop
candid_instruments = instr['Meta']['Candidate']
visit_label_instruments = instr['Meta']['Visit']
candid_instrument = instr['Meta']['Instrument']
Explanation: Exercise 5: Instrument data for a candidate
Exercise 5.1 Find all candidates and sessions with a given instrument
This is a series of GET requests
https://github.com/aces/Loris/blob/minor/docs/API/LorisRESTAPI.md#31-specific-candidate
https://github.com/aces/Loris/blob/minor/docs/API/LorisRESTAPI.md#33-candidate-instruments
https://github.com/aces/Loris/blob/minor/docs/API/LorisRESTAPI.md#33-the-candidate-instrument-data
End of explanation
url = '/'.join([baseurl, 'candidates', candid_instruments,
visit_label_instruments, 'instruments'])
candidateinstruments = GET(url)
prettyPrint(candidateinstruments)
Explanation: Exercise 5.2 GET all instruments for a candidate
End of explanation
url = '/'.join([baseurl, 'candidates', candid_instruments,
visit_label_instruments, 'instruments', candid_instrument])
candidateSelectedInstrument = GET(url)
prettyPrint(candidateSelectedInstrument)
Explanation: Exercise 5.3 Input instrument data for a candidate
GET, PUT or PATCH request to /candidates/{candid}/{visit}/instruments/{instrument}
https://github.com/aces/Loris/blob/main/modules/api/docs/LorisRESTAPI.md#33-the-candidate-instrument-data
data format:
json
{
"Meta": {
"Candidate": string,
"Visit": string
"DDE": true|false,
"Instrument": string,
},
<instrument_name>: {
<field1_name>: <value1>,
<field2_name>: <value2>,
...
}
}
5.3.1 GET request containing all the fields
End of explanation
# Get all the fields an meta data
url = '/'.join([baseurl, 'candidates', candid_instruments,
visit_label_instruments, 'instruments', candid_instrument])
json_input = GET(url)
# Update one field
json_input[candid_instrument]['Candidate_Age'] = 3
json_input[candid_instrument]['UserID'] = 'something'
response = PATCH(url, json_input)
print('PATCH response status code:', response.status_code)
# GET request to verify the Candidate_Age and UserID fields have been updated
candidateSelectedInstrument = GET(url)
prettyPrint(candidateSelectedInstrument)
# Update one field
json_input[candid_instrument]['Candidate_Age'] = 42
response = PATCH(url, json_input)
print('PATCH response status code:', response.status_code)
# GET request to verify that Candidate_Age is changed to 42 and UserID is still
# "something". PATCH requests only change the requested fields, so UserID
# should still be "something".
candidateSelectedInstrument = GET(url)
prettyPrint(candidateSelectedInstrument)
Explanation: 5.3.2 PATCH request
End of explanation
json_input = {
'Meta': {
'Candidate': candid_instruments,
'DDE': False,
'Instrument': 'mri_parameter_form',
'Visit': visit_label_instruments},
candid_instrument:
{
'UserID': 'something_else'
}
}
url = '/'.join([baseurl, 'candidates', candid_instruments,
visit_label_instruments, 'instruments', candid_instrument])
response = PUT(url, json_input)
print('PUT response status code:', response.status_code)
# UserID should now be "something_else". Because it is a PUT request, Candidate_Age
# should be changed back to null. This is a good example of the main
# difference between PUT and PATCH: PUT restores every value to its default,
# except for the values explicitly modified.
candidateSelectedInstrument = GET(url)
prettyPrint(candidateSelectedInstrument)
Explanation: 5.3.3 PUT request
End of explanation
url = '/'.join([baseurl, 'candidates', candid_instruments,
visit_label_instruments, 'instruments', candid_instrument, 'flags'])
candidateSelectedInstrumentFlags = GET(url)
prettyPrint(candidateSelectedInstrumentFlags)
Explanation: Exercise 5.4 Input instrument flags for a candidate
PUT or PATCH request to /candidates/{candid}/{visit}/instruments/{instrument}
https://github.com/aces/Loris/blob/minor/docs/API/LorisRESTAPI.md#33-the-candidate-instrument-data
data format:
json
{
"Meta": {
"Candidate": string,
"Visit": string
"DDE": false,
"Instrument": string,
},
"Flags": {
"Data_entry": string,
"Administration": string
"Validity": true|false
}
}
5.4.1 GET request containing all the fields for an instrument flags
End of explanation
# Update all fields
json_input = {
'Flags': {
"Data_entry": "In Progress",
"Administration": "All",
"Validity": "Valid"
}
}
response = PUT(url, json_input)
print('PUT response status code:', response.status_code)
url
# Administration should now be 'All', Data_entry 'In Progress' and Validity 'Valid'
candidateSelectedInstrument = GET(url)
prettyPrint(candidateSelectedInstrument)
Explanation: 5.4.2 PUT request containing all the fields
End of explanation
# Update one field
json_input = {
'Flags': {
"Administration": "None",
}
}
response = PATCH(url, json_input)
print('PATCH response status code:', response.status_code)
Explanation: 5.4.3 PATCH request containing some of the fields
End of explanation
# Administration should now be 'None'; Data_entry should still be 'In Progress'
candidateSelectedInstrument = GET(url)
prettyPrint(candidateSelectedInstrument)
Explanation: GET request to verify the Administration and Data_entry fields have been updated successfully
End of explanation
url = '/'.join([baseurl, 'candidates', candid_instruments, visit_label_instruments,
'instruments', candid_instrument, 'dde'])
candidateSelectedInstrumentDde = GET(url)
prettyPrint(candidateSelectedInstrumentDde)
Explanation: Exercise 5.5 Input instrument DDE for a candidate
GET, PUT or PATCH request to /candidates/{candid}/{visit}/instruments/{instrument}
https://github.com/aces/Loris/blob/minor/docs/API/LorisRESTAPI.md#33-the-candidate-instrument-data
data format:
json
{
"Meta": {
"Candidate": string,
"DDE": true,
"Instrument": string,
"Visit": string
},
<instrument_name>: {
<field1_name>: <value1>,
<field2_name>: <value2>,
...
}
}
5.5.1 GET request containing all the fields
End of explanation
# Get all the fields and meta data
json_input = GET(url)
# Update one field
json_input[candid_instrument]['Candidate_Age'] = 4
json_input[candid_instrument]['UserID'] = '29'
response = PATCH(url, json_input)
print('PATCH response status code:', response.status_code)
# Candidate_Age should be 4 and UserID should be 29
candidateSelectedInstrumentDde = GET(url)
prettyPrint(candidateSelectedInstrumentDde)
Explanation: 5.5.2 PATCH request containing all the fields
End of explanation
## Update one field
json_input = {
'Meta': {
'Candidate': candid_instruments,
'DDE': True,
'Instrument': candid_instrument,
'Visit': candidateVisit},
'mri_parameter_form': {'UserID': 1}}
response = PUT(url, json_input)
print('PUT response status code:', response.status_code)
# UserID should be changed to "1".
# Because it is a PUT request, Candidate_Age should be back to the default: null
candidateSelectedInstrumentDde = GET(url)
prettyPrint(candidateSelectedInstrumentDde)
Explanation: 5.5.3 PUT request containing a single field to modify
End of explanation
url = '/'.join([baseurl, 'candidates', candid, visit_label_instruments,
'instruments', instrument, 'dde', 'flags'])
candidateSelectedInstrumentDdeFlags = GET(url)
prettyPrint(candidateSelectedInstrumentDdeFlags)
Explanation: Exercise 5.6 Input instrument DDE flags for a candidate
PUT or PATCH request to /candidates/{candid}/{visit}/instruments/{instrument}
https://github.com/aces/Loris/blob/minor/docs/API/LorisRESTAPI.md#33-the-candidate-instrument-data
data format:
json
{
"Meta": {
"Candidate": string,
"Visit": string
"DDE": true,
"Instrument": string,
},
"Flags": {
"Data_entry": string,
"Administration": string
"Validity": true|false
}
}
5.6.1 GET request to query all the fields
End of explanation
json_input = {
'Flags': {
"Administration": "Partial",
}
}
response = PATCH(url, json_input)
print('PATCH response status code:', response.status_code)
# Data_entry should be 'Complete'
candidateSelectedInstrumentDdeFlags = GET(url)
prettyPrint(candidateSelectedInstrumentDdeFlags)
Explanation: 5.6.2 PATCH request
End of explanation
json_input = {
'Flags': {
"Data_entry": "In Progress",
"Administration": "Partial",
"Validity": "Valid"
}
}
response = PUT(url, json_input)
print('PUT response status code:', response.status_code)
# Administration should now be 'Partial'.
candidateSelectedInstrumentDdeFlags = GET(url)
prettyPrint(candidateSelectedInstrumentDdeFlags)
Explanation: 5.6.3 PUT request
End of explanation
candid_img = projectImages['Images'][0]['Candidate']
visit_img = projectImages['Images'][0]['Visit']
imagename = projectImages['Images'][0]['Link'].split('/')[-1]
Explanation: Exercise 6: Image data for a candidate
End of explanation
url = '/'.join([baseurl, 'candidates', candid_img, visit_img, 'images'])
candidateImages = GET(url)
prettyPrint(candidateImages)
Explanation: Exercise 6.1 Find all images of a candidate for a visit
End of explanation
for file in candidateImages['Files']:
filename = file['Filename']
url = '/'.join([baseurl, 'candidates', candid_img,
visit_img, 'images', filename])
response = GETFile(url)
print('GET response status code for ' + filename + ':', response.status_code)
Explanation: Exercise 6.2 Download all minc files of a candidate
End of explanation
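The loop above writes each file into the current working directory. A hedged variation (the download_dir name is hypothetical) keeps the downloads in a dedicated folder instead:
import os
download_dir = 'loris_downloads'  # hypothetical folder name
os.makedirs(download_dir, exist_ok=True)
for file in candidateImages['Files']:
    filename = file['Filename']
    url = '/'.join([baseurl, 'candidates', candid_img, visit_img, 'images', filename])
    data = requests.get(url, verify=False,
                        headers={'Authorization': 'Bearer %s' % token})
    with open(os.path.join(download_dir, filename), 'wb') as f:
        f.write(data.content)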
url = '/'.join([baseurl, 'candidates', candid_img, visit_img, 'images',
imagename, 'qc'])
candidateImagesQc = GET(url)
prettyPrint(candidateImagesQc)
Explanation: Exercise 6.3 GET QC data for a MINC image file
End of explanation
json_input = GET(url)
json_input['Selected'] = False
response = PUT(url, json_input)
print('PUT response status code:', response.status_code)
candidateSelectedInstrumentDdeFlags = GET(url)
prettyPrint(candidateSelectedInstrumentDdeFlags)
Explanation: Exercise 6.4 PUT QC data for a MINC image file
PUT request to /candidates/$CandID/$VisitLabel/images/$imagename/qc
https://github.com/aces/Loris/blob/minor/docs/API/LorisRESTAPI.md#
data format:
json
{
'Meta' : {
'CandID' : string,
'Visit' : string,
'File' : string
},
"QC" : string|null,
"Selected" : boolean,
'Caveats' : {
'0' : {
'Severity' : string|null,
'Header' : string|null,
'Value' : string|null,
'ValidRange' : string|null,
'ValidRegex' : string|null
}
}
}
End of explanation
candidateImageFormatBrainbrowser = GET('/'.join([baseurl, 'candidates',
candid_img, visit_img,
'images', imagename,
'format', 'brainbrowser']))
prettyPrint(candidateImageFormatBrainbrowser)
Explanation: Exercise 6.5 GET Image formats
Exercise 6.5.1 GET Image Brainbrowser format
End of explanation
url = '/'.join([baseurl, 'candidates', candid_img, visit_img,
'images', imagename, 'format', 'raw'])
response = GETFile(url)
print('GET response status code', response.status_code)
Explanation: Exercise 6.5.2 GET Image Raw format
End of explanation
url = '/'.join([baseurl, 'candidates', candid_img, visit_img,
'images', imagename, 'format', 'thumbnail'])
response = GETFile(url)
print('GET response status code', response.status_code)
Explanation: Exercise 6.5.3 GET Image Thumbnail format
End of explanation
url = '/'.join([baseurl, 'candidates', candid_img,
visit_img, 'images', imagename, 'headers'])
candidateImageHeaders = GET(url)
prettyPrint(candidateImageHeaders)
Explanation: Exercise 6.6 GET Image headers
Exercise 6.6.1 GET Image headers
End of explanation
url = '/'.join([baseurl, 'candidates', candid_img,
visit_img, 'images', imagename, 'headers', 'full'])
candidateImageHeadersFull = GET(url)
prettyPrint(candidateImageHeadersFull)
Explanation: Exercise 6.6.2 GET Image headers full
End of explanation
url = '/'.join([baseurl, 'candidates', candid_img, visit_img,
'images', imagename, 'headers', 'specific'])
candidateImageHeadersHeadername = GET(url)
prettyPrint(candidateImageHeadersHeadername)
Explanation: Exercise 6.6.3 GET Image headers headername
End of explanation
# First, find a candidate visit with an electrophysiology recording
candid_recs = projectRecordings['Recordings'][0]['Candidate']
visit_recs = projectRecordings['Recordings'][0]['Visit']
recording_filename = projectRecordings['Recordings'][0]['Link'].split('/')[-1]
Explanation: Exercise 7: Electrophysiology recordings data for a candidate
End of explanation
url = '/'.join([baseurl, 'candidates', candid_recs, visit_recs, 'recordings'])
candidateRecordings = GET(url)
prettyPrint(candidateRecordings)
Explanation: Exercise 7.1 Find all recordings of a candidate for a visit
End of explanation
url = '/'.join([baseurl, 'candidates', candid_recs, visit_recs,
'recordings', recording_filename])
response = GETFile(url)
print('GET response status code', response.status_code)
Explanation: Exercise 7.2 Download electrophysiology recording (edf) files
End of explanation
url = '/'.join([baseurl, 'candidates', candid_recs, visit_recs,
'recordings', recording_filename, 'channels'])
candidateRecordingsChannels = GET(url)
prettyPrint(candidateRecordingsChannels)
Explanation: Exercise 7.3 Get electrophysiology recording channels data
End of explanation
url = '/'.join([baseurl, 'candidates', candid_recs, visit_recs, 'recordings',
recording_filename, 'channels', 'meta'])
candidateRecordingsChannelsMeta = GET(url)
prettyPrint(candidateRecordingsChannelsMeta)
Explanation: Exercise 7.4 Get electrophysiology recording channels meta data
End of explanation
url = '/'.join([baseurl, 'candidates', candid_recs, visit_recs, 'recordings',
recording_filename, 'electrodes'])
candidateRecordingsElectrodes = GET(url)
prettyPrint(candidateRecordingsElectrodes)
Explanation: Exercise 7.5 Get electrophysiology recording electrodes data
End of explanation
url = '/'.join([baseurl, 'candidates', candid_recs, visit_recs, 'recordings',
recording_filename, 'electrodes', 'meta'])
candidateRecordingsElectrodesMeta = GET(url)
prettyPrint(candidateRecordingsElectrodesMeta)
Explanation: Exercise 7.6 Get electrophysiology recording electrodes meta data
End of explanation
url = '/'.join([baseurl, 'candidates', candid_recs, visit_recs,
'recordings', recording_filename, 'events'])
candidateRecordingsEvents = GET(url)
prettyPrint(candidateRecordingsEvents)
Explanation: Exercise 7.7 Get electrophysiology recording events data
End of explanation
url = '/'.join([baseurl, 'candidates', candid_recs, visit_recs, 'recordings',
recording_filename, 'events', 'meta'])
candidateRecordingsEventsMeta = GET(url)
prettyPrint(candidateRecordingsEventsMeta)
Explanation: Exercise 7.8 Get electrophysiology recording events meta data
End of explanation
# The flag is used to break the loops once an example that contains a
# Dicom file is found
flag = 0
for candidate in allCandidates['Candidates']:
candid = candidate['CandID']
visit_labels = GET('/'.join([baseurl, 'candidates', candid]))['Visits']
if flag == 1:
break
for visit_label in visit_labels:
dicoms = GET('/'.join([baseurl, 'candidates', candid,
visit_label, 'dicoms']))
# We only want to get a single valid example of a candidate and visit that
# has Dicom files that can be used to test the Dicoms endpoints
if len(dicoms['DicomTars']) > 0:
flag = 1
break
Explanation: Exercise 8 - Dicoms
End of explanation
# Get an example of candid, visit_label and dicom tarname from the last
# sample visited in the loop
candid_dicom = dicoms['Meta']['CandID']
dicom_visit = dicoms['Meta']['Visit']
dicom_name = dicoms['DicomTars'][0]['Tarname']
url = '/'.join([baseurl, 'candidates', candid_dicom, dicom_visit, 'dicoms'])
candidateDicoms = GET(url)
prettyPrint(candidateDicoms)
Explanation: Exercise 8.1 GET a list of Dicom files for a candidate
End of explanation
print('Downloading file', dicom_name)
response = GETFile('/'.join([baseurl, 'candidates', candid_dicom, dicom_visit, 'dicoms', dicom_name]))
print('GET response status code:', response.status_code)
Explanation: Exercise 8.2 Download a Dicom tar file
End of explanation |
1,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installing Julia/IJulia
1 - Downloading and Installing the right Julia binary in the right place
Step1: overwrite links, since v1.5.3 installation does not work properly due to
https
Step2: 2 - Initialize Julia , IJulia, and make them link to winpython
Step3: Print julia's versioninfo()
The environment should point to the usb drive and not to C
Step4: Install julia Packages
Step6: Fix the kernel.json to allow arbitrary drive letters and modify the env.bat
the path to kernel.jl is hardcoded in the kernel.json file
this will cause trouble, if the drive letter of the usb drive changes
use relative paths instead
rewrite kernel.json and delete the one created from IJulia.jl Package | Python Code:
import os
import sys
import io
import re
import urllib.request as request # Python 3
# get latest stable release info, download link and hashes
g = request.urlopen("https://julialang.org/downloads/")
s = g.read().decode()
g.close;
r = r'<a href=".current_stable_release">([^<]+)</a></h2> ' + \
r'<p>Checksums for this release are available in both <a href="([^"]*)">MD5</a> and <a href="([^"]*)">SHA256</a> formats.</p>' + \
r'[^W]*Windows <a href="/downloads/platform/.windows">.help.</a> <td colspan=3 > <a href="[^"]*">64-bit .installer.</a>, <a href="([^"]*)">64-bit .portable.</a>' + \
r' <td colspan=3 > <a href="[^"]*">32-bit .installer.</a>, <a href="([^"]*)">32-bit .portable.</a>'
release_str, md5link, sha256link, ziplink64bit, ziplink32bit = re.findall(r,s)[0]
julia_version=re.findall(r"v([^\s]+)",release_str)[0]
print(release_str)
print(julia_version)
print(ziplink64bit)
print(ziplink32bit)
print(md5link)
print(sha256link)
Explanation: Installing Julia/IJulia
1 - Downloading and Installing the right Julia binary in the right place
End of explanation
if julia_version=='1.5.3':
julia_version='1.6.0-rc1'
ziplink64bit='https://julialang-s3.julialang.org/bin/winnt/x64/1.6/julia-1.6.0-rc1-win64.zip'
md5link='https://julialang-s3.julialang.org/bin/checksums/julia-1.6.0-rc1.md5'
sha256link='https://julialang-s3.julialang.org/bin/checksums/julia-1.6.0-rc1.sha256'
print(julia_version)
# download checksums
g = request.urlopen(md5link)
md5hashes = g.read().decode()
g.close;
g = request.urlopen(sha256link)
sha256hashes = g.read().decode()
g.close;
# downloading julia (may take 1 minute or 2)
if 'amd64' in sys.version.lower():
julia_zip=ziplink64bit.split("/")[-1]
julia_url=ziplink64bit
else:
julia_zip=ziplink32bit.split("/")[-1]
julia_url=ziplink32bit
hashes=(re.findall(r"([0-9a-f]{32})\s"+julia_zip, md5hashes)[0] , re.findall(r"([0-9a-f]{64})\s+"+julia_zip, sha256hashes)[0])
julia_zip_fullpath = os.path.join(os.environ["WINPYDIRBASE"], "t", julia_zip)
g = request.urlopen(julia_url)
with io.open(julia_zip_fullpath, 'wb') as f:
f.write(g.read())
g.close
g = None
#checking it's there
assert os.path.isfile(julia_zip_fullpath)
# checking the hashes
import hashlib
def give_hash(of_file, with_this):
with io.open(julia_zip_fullpath, 'rb') as f:
return with_this(f.read()).hexdigest()
print (" "*12+"MD5"+" "*(32-12-3)+" "+" "*15+"SHA-256"+" "*(40-15-5)+"\n"+"-"*32+" "+"-"*64)
print ("%s %s %s" % (give_hash(julia_zip_fullpath, hashlib.md5) , give_hash(julia_zip_fullpath, hashlib.sha256),julia_zip))
assert give_hash(julia_zip_fullpath, hashlib.md5) == hashes[0].lower()
assert give_hash(julia_zip_fullpath, hashlib.sha256) == hashes[1].lower()
# will be in env next time
os.environ["JUPYTER"] = os.path.join(os.environ["WINPYDIR"],"Scripts","jupyter.exe")
os.environ["JULIA_HOME"] = os.path.join(os.environ["WINPYDIRBASE"], "t", "julia-"+julia_version)
os.environ["JULIA_EXE_PATH"] = os.path.join(os.environ["JULIA_HOME"], "bin")
os.environ["JULIA_EXE"] = "julia.exe"
os.environ["JULIA"] = os.path.join(os.environ["JULIA_EXE_PATH"],os.environ["JULIA_EXE"])
os.environ["JULIA_PKGDIR"] = os.path.join(os.environ["WINPYDIRBASE"],"settings",".julia")
os.environ["JULIA_DEPOT_PATH"] = os.environ["JULIA_PKGDIR"]
os.environ["JULIA_HISTORY"] = os.path.join(os.environ["JULIA_PKGDIR"],"logs","repl_history.jl")
os.environ["CONDA_JL_HOME"] = os.path.join(os.environ["JULIA_HOME"], "conda", "3")
# move JULIA_EXE_PATH to the beginning of PATH, since a julia installation may be present on the machine
os.environ["PATH"] = os.environ["JULIA_EXE_PATH"] + ";" + os.environ["PATH"]
if not os.path.isdir(os.environ["JULIA_PKGDIR"]):
os.mkdir(os.environ["JULIA_PKGDIR"])
if not os.path.isdir(os.path.join(os.environ["JULIA_PKGDIR"],"logs")):
os.mkdir(os.path.join(os.environ["JULIA_PKGDIR"],"logs"))
if not os.path.isfile(os.environ["JULIA_HISTORY"]):
open(os.environ["JULIA_HISTORY"], 'a').close() # create empty file
# extract the zip archive
import zipfile
try:
with zipfile.ZipFile(julia_zip_fullpath) as z:
z.extractall(os.path.join(os.environ["WINPYDIRBASE"], "t"))
print("Extracted all files")
except:
print("Invalid file")
# delete zip file
os.remove(julia_zip_fullpath)
Explanation: overwrite links, since v1.5.3 installation does not work properly due to
https://github.com/JuliaLang/julia/issues/38411
End of explanation
# connecting Julia to WinPython (only once, or everytime you move things)
# see the Windows terminal window for the detailed status. This may take
# a minute or two.
import julia
julia.install()
%load_ext julia.magic
info = julia.juliainfo.JuliaInfo.load()
print(info.julia)
print(info.sysimage)
print(info.version_raw)
from julia.api import Julia
jl = Julia(compiled_modules=False)
# sanity check
assert jl.eval("1+2") == 3
Explanation: 2 - Initialize Julia , IJulia, and make them link to winpython
End of explanation
jl.eval("using InteractiveUtils")
jl.eval('file = open("julia_versioninfo.txt","w")')
jl.eval("versioninfo(file,verbose=false)")
jl.eval("close(file)")
with open('julia_versioninfo.txt', 'r') as f:
print(f.read())
os.remove('julia_versioninfo.txt')
Explanation: Print julia's versioninfo()
The environment should point to the usb drive and not to C:\ (your local installation of julia maybe...)
End of explanation
%%julia
using Pkg
Pkg.instantiate()
Pkg.update()
%%julia
# add useful packages. Again, this may take a while...
Pkg.add("IJulia")
Pkg.add("Plots")
Pkg.add("Interact")
Pkg.add("Compose")
Pkg.add("SymPy")
using Compose
using SymPy
using IJulia
using Plots
Explanation: Install julia Packages
End of explanation
kernel_path = os.path.join(os.environ["WINPYDIRBASE"], "settings", "kernels", "julia-"+julia_version[0:3])
assert os.path.isdir(kernel_path)
with open(os.path.join(kernel_path,"kernel.json"), 'r') as f:
kernel_str = f.read()
new_kernel_str = kernel_str.replace(os.environ["WINPYDIRBASE"].replace("\\","\\\\"),"{prefix}\\\\..")
print(new_kernel_str)
with open(os.path.join(kernel_path,"kernel.json"), 'w') as f:
f.write(new_kernel_str)
# add JULIA env variables to env.bat
inp_str = r"""
rem ******************
rem handle Julia {0} if included
rem ******************
if not exist "%WINPYDIRBASE%\t\julia-{0}\bin" goto julia_bad_{0}
set JULIA_PKGDIR=%WINPYDIRBASE%\settings\.julia
set JULIA_DEPOT_PATH=%JULIA_PKGDIR%
set JULIA_EXE=julia.exe
set JULIA_HOME=%WINPYDIRBASE%\t\julia-{0}
set JULIA_HISTORY=%JULIA_PKGDIR%\logs\repl_history.jl
:julia_bad_{0}
""".format(julia_version)
# append to env.bat
with open(os.path.join(os.environ["WINPYDIRBASE"],"scripts","env.bat"), 'a') as file :
file.write(inp_str)
Explanation: Fix the kernel.json to allow arbitrary drive letters and modify the env.bat
the path to kernel.jl is hardcoded in the kernel.json file
this will cause trouble, if the drive letter of the usb drive changes
use relative paths instead
rewrite kernel.json and delete the one created from IJulia.jl Package
End of explanation |
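As a quick sanity check (optional sketch, using the kernel_path and environment variables defined above), the rewritten kernel.json can be re-read to confirm that no absolute WinPython path is left in it:
# Sketch: verify the kernel spec no longer hard-codes the USB drive letter.
with open(os.path.join(kernel_path, "kernel.json"), 'r') as f:
    checked = f.read()
assert os.environ["WINPYDIRBASE"].replace("\\", "\\\\") not in checked
print("kernel.json now uses the relative {prefix} placeholder")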
1,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions and exceptions
Functions
Write a function that converts from Celsius to Kelvin.
To convert from Celsius to Kelvin you add 273.15 from the value.
Try your solution for a few values.
Step1: Now write another function to convert from Fahrenheit to Celsius.
The formula for doing so is
C = 5/9*(F-32)
Again, verify that your function does what is expected.
Step2: Now make a function to convert from Fahrenheit to Kelvin.
Before you start coding, stop to think for a second. You can actually re-use the two other functions you have made. Fahrenheit to Kelvin can be represented as Fahrenheit to Celsius followed by Celsius to Kelvin.
Step3: Finally, implement a more general conversion function that takes as arguments also the input and output scales, e.g. from_scale and to_scale. Provide default values for from_scale and to_scale, and call the function with different number of arguments. Try to call the function using both positional and keyword arguments. Which approach is more readable for you?
Exceptions
Ok, here's some code that fails. Find out at least 2 errors it raises by giving different inputs.
Then construct a try-except clause around the lines of code.
Step4: The open function is used to open files for reading or writing. We'll get to that but first let's try to open a file that doesn't exist.
Filesystem related errors are very common. A file might not exist or for some reason the user might not have the rights to open the file. Go ahead and make a try-except clause to catch this error.
Step5: Compound
Implement the three remaining functions so you can convert freely between Fahrenheit and Kelvin.
Now look at the temperature_converter function. Try to figure out what errors malformed user input can cause. You can either wrap the function call in a try-except or you can wrap parts of the function.
If you have time you can increase the complexity of the function to cover centigrade conversions as well but this is not required. Hint | Python Code:
def celsius_to_kelvin(c):
# implementation here
pass
celsius_to_kelvin(0)
Explanation: Functions and exceptions
Functions
Write a function that converts from Celsius to Kelvin.
To convert from Celsius to Kelvin you add 273.15 to the value.
Try your solution for a few values.
End of explanation
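One possible solution, shown here only as a reference sketch:
def celsius_to_kelvin(c):
    # Kelvin is the Celsius value shifted by 273.15
    return c + 273.15

print(celsius_to_kelvin(0))    # 273.15
print(celsius_to_kelvin(100))  # 373.15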
def fahrenheit_to_celsius(f):
pass
fahrenheit_to_celsius(0)
Explanation: Now write another function to convert from Fahrenheit to Celsius.
The formula for doing so is
C = 5/9*(F-32)
Again, verify that your function does what is expected.
End of explanation
def fahrenheit_to_kelvin(f):
pass
fahrenheit_to_kelvin(0)
Explanation: Now make a function to convert from Fahrenheit to Kelvin.
Before you start coding, stop to think for a second. You can actually re-use the two other functions you have made. Fahrenheit to Kelvin can be represented as Fahrenheit to Celsius followed by Celsius to Kelvin.
End of explanation
var = float(input("give a number: "))
divided = 1/var
Explanation: Finally, implement a more general conversion function that takes as arguments also the input and output scales, e.g. from_scale and to_scale. Provide default values for from_scale and to_scale, and call the function with different number of arguments. Try to call the function using both positional and keyword arguments. Which approach is more readable for you?
Exceptions
Ok, here's some code that fails. Find out at least 2 errors it raises by giving different inputs.
Then construct a try-except clause around the lines of code.
End of explanation
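One possible wrapping (a sketch; the exact handling is up to you) that catches the two most likely errors, a ValueError from float() and a ZeroDivisionError from the division:

try:
    var = float(input("give a number: "))
    divided = 1 / var
except ValueError:
    print("That was not a number.")
except ZeroDivisionError:
    print("Cannot divide by zero.")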
file_handle = open("i_dont_exist", "r")
Explanation: The open function is used to open files for reading or writing. We'll get to that but first let's try to open a file that doesn't exist.
Filesystem related errors are very common. A file might not exist or for some reason the user might not have the rights to open the file. Go ahead and make a try-except clause to catch this error.
End of explanation
def celsius_to_fahrenheit(c):
pass
def kelvin_to_celsius(k):
pass
def kelvin_to_fahrenheit(k):
pass
def temperature_converter():
from_scale = input("Give scale to convert from: ")
to_scale = input("Give scale to convert to: ")
value = float(input("Give temperature: "))
if from_scale == "K" and to_scale == "F":
return kelvin_to_fahrenheit(value)
elif from_scale == "F" and to_scale == "K":
return fahrenheit_to_kelvin(value)
elif from_scale == "C" or to_scale == "C":
raise NotImplementedError("Conversion to Celsius not implemented!")
return
temperature_converter()
Explanation: Compound
Implement the three remaining functions so you can convert freely between Fahrenheit and Kelvin.
Now look at the temperature_converter function. Try to figure out what errors malformed user input can cause. You can either wrap the function call in a try-except or you can wrap parts of the function.
If you have time you can increase the complexity of the function to cover centigrade conversions as well, but this is not required. Hint: if you always convert the input value to centigrade first (when it is not already centigrade), and then convert from centigrade to the desired output scale, you can simplify the code.
End of explanation |
1,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
# Interpret results through aggregation
Since I'm working with long documents, I'm not really concerned with BERT's raw predictions about individual text chunks. Instead I need to know how good the predictions are when aggregated at the volume level.
This notebook answers that question, pairing BERT's predictions with a metadata file that got spun off when data was originally created. For a given TASK, this file will be named, for instance, bertmeta/dev_rows_{TASK_NAME}.tsv. This metadata file lists the index of each text chunk but also the docid (usually, volume-level ID) associated with a larger document.
We can then group predictions by docid and evaluate accuracy at the volume level. I have tried doing this by averaging logits, as well as binary voting.
My tentative conclusion is that in most cases binary voting is preferable; I'm not sure whether the logits are scaled in a way that produces a reliable mean.
Step1: Aggregate results; use binary voting
The general strategy here is to create a dataframe called pred that holds the predictions, and another one called meta that holds indexes paired with volume IDs (or review IDs when we're doing this for the sentiment dataset).
Then we align the dataframes.
Step2: Measure accuracy at the chunk level
Step3: And now at the document level
Step4: adding logits
The same process as above, except we load predictions from a file called logits.tsv.
Step5: Random curiosity
I was interested to know how closely BERT predictions correlate with bag-of-words modeling, and whether it's less closely than BoW models with each other. The answer is, yes, the correlation is less strong, and there's potential here for an ensemble model. | Python Code:
# modules needed
import pandas as pd
from scipy.stats import pearsonr
import numpy as np
Explanation: # Interpret results through aggregation
Since I'm working with long documents, I'm not really concerned with BERT's raw predictions about individual text chunks. Instead I need to know how good the predictions are when aggregated at the volume level.
This notebook answers that question, pairing BERT's predictions with a metadata file that got spun off when data was originally created. For a given TASK, this file will be named, for instance, bertmeta/dev_rows_{TASK_NAME}.tsv. This metadata file lists the index of each text chunk but also the docid (usually, volume-level ID) associated with a larger document.
We can then group predictions by docid and evaluate accuracy at the volume level. I have tried doing this by averaging logits, as well as binary voting.
My tentative conclusion is that in most cases binary voting is preferable; I'm not sure whether the logits are scaled in a way that produces a reliable mean.
End of explanation
pred = pd.read_csv('reports/sf512max/predictions.tsv', sep = '\t', header = None, names = ['real', 'pred'])
pred.head()
meta = pd.read_csv('bertmeta/dev_rows_SF512max.tsv', sep = '\t')
meta.head()
pred.shape
meta.shape
# Here we're aligning the dataframes by setting the index of "pred"
# to match the idx column of "meta."
pred = pred.assign(idx = meta['idx'])
pred = pred.set_index('idx')
pred.head()
Explanation: Aggregate results; use binary voting
The general strategy here is to create a dataframe called pred that holds the predictions, and another one called meta that holds indexes paired with volume IDs (or review IDs when we're doing this for the sentiment dataset).
Then we align the dataframes.
End of explanation
correct = []
right = 0
for idx, row in pred.iterrows():
if row['pred'] == row['real']:
correct.append(True)
right += 1
else:
correct.append(False)
print(right / len(pred))
Explanation: Measure accuracy at the chunk level
End of explanation
byvol = meta.groupby('docid')
rightvols = 0
allvols = 0
bertprobs = dict()
for vol, df in byvol:
total = 0
right = 0
positive = 0
df.set_index('idx', inplace = True)
for idx, row in df.iterrows():
total += 1
true_class = row['class']
predicted_class = pred.loc[idx, 'pred']
assert true_class == pred.loc[idx, 'real']
if true_class == predicted_class:
right += 1
if predicted_class:
positive += 1
bertprobs[vol] = positive/total
if right/ total >= 0.5:
rightvols += 1
allvols += 1
print()
print('Overall accuracy:', rightvols / allvols)
Explanation: And now at the document level
End of explanation
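The same document-level vote can also be computed without the explicit loops, e.g. with a pandas join and groupby. A sketch: it assumes meta still has the idx, docid, and class columns used above, and counts ties at exactly 0.5 as positive, matching the >= 0.5 rule in the loop.

merged = meta.join(pred, on='idx')
byvol_mean = merged.groupby('docid').agg({'pred': 'mean', 'class': 'first'})
majority = (byvol_mean['pred'] >= 0.5).astype(int)
print('Overall accuracy:', (majority == byvol_mean['class']).mean())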
pred = pd.read_csv('reports/sf512max/logits.tsv', sep = '\t', header = None, names = ['real', 'pred'])
pred.head()
right = 0
for idx, row in pred.iterrows():
if row['pred'] >= 0:
predclass = 1
else:
predclass = 0
if predclass == row['real']:
correct.append(True)
right += 1
else:
correct.append(False)
print(right / len(pred))
# Here we're aligning the dataframes by setting the index of "pred"
# to match the idx column of "meta."
pred = pred.assign(idx = meta['idx'])
pred = pred.set_index('idx')
pred.head()
byvol = meta.groupby('docid')
rightvols = 0
allvols = 0
bertprobs = dict()
for vol, df in byvol:
total = 0
right = 0
positive = 0
df.set_index('idx', inplace = True)
predictions = []
for idx, row in df.iterrows():
predict = pred.loc[idx, 'pred']
predictions.append(predict)
true_class = row['class']
volmean = sum(predictions) / len(predictions)
if volmean >= 0:
predicted_class = 1
else:
predicted_class = 0
if true_class == predicted_class:
rightvols += 1
allvols += 1
print()
print('Overall accuracy:', rightvols / allvols)
Explanation: adding logits
The same process as above, except we load predictions from a file called logits.tsv.
End of explanation
def corrdist(filename, bertprobs):
'''
Checks for correlation.
'''
# If I were coding elegantly, I would not repeat
# the same code twice, but this is just a sanity check, so
# the structure here is that we do exactly the same thing
# for models 0-4 and for models 5-9.
root = '../temp/' + filename
logisticprob = dict()
for i in range(0, 10):
# note the range endpoints
tt_df = pd.read_csv(root + str(i) + '.csv', index_col = 'docid')
for key, value in bertprobs.items():
if key in tt_df.index:
l_prob = tt_df.loc[key, 'probability']
if key not in logisticprob:
logisticprob[key] = []
logisticprob[key].append(l_prob)
a = []
b = []
for key, value in logisticprob.items():
aval = sum(value) / len(value)
bval = bertprobs[key]
a.append(aval)
b.append(bval)
print(pearsonr(a, b))
print(len(a), len(b))
corrdist('BoWSF', bertprobs)
thisprobs = dict()
lastprobs = dict()
root = '../temp/BoWSF'
for i in range(0, 10):
df = pd.read_csv(root + str(i) + '.csv', index_col = 'docid')
a = []
b = []
for idx, row in df.iterrows():
thisprobs[idx] = row.probability
if idx in lastprobs:
a.append(lastprobs[idx])
b.append(thisprobs[idx])
if len(a) > 0:
print(pearsonr(a, b))
lastprobs = thisprobs
thisprobs = dict()
met = pd.read_csv('bertmeta/dev_rows_SF0_500.tsv', sep = '\t')
met.head()
# regression
byvol = meta.groupby('docid')
volpred = []
volreal = []
for vol, df in byvol:
total = 0
right = 0
positive = 0
df.set_index('idx', inplace = True)
predictions = []
for idx, row in df.iterrows():
predict = pred.loc[idx, 'pred']
predictions.append(predict)
true_class = float(row['class'])
volmean = sum(predictions) / len(predictions)
volpred.append(volmean)
volreal.append(true_class)
print()
print('Overall accuracy:', pearsonr(volpred, volreal))
Explanation: Random curiosity
I was interested to know how closely BERT predictions correlate with bag-of-words modeling, and whether it's less closely than BoW models with each other. The answer is, yes, the correlation is less strong, and there's potential here for an ensemble model.
End of explanation |
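As a follow-up to the ensemble idea, here is a minimal sketch of averaging the two probability estimates. It assumes bertprobs maps docid to BERT's positive-vote fraction (as filled in the binary-voting cell above) and that the ../temp/BoWSF*.csv files have a docid index and a probability column, as in corrdist; the equal weighting is arbitrary.

root = '../temp/BoWSF'
bow = pd.concat([pd.read_csv(root + str(i) + '.csv', index_col='docid') for i in range(0, 10)])
bowprobs = bow.groupby(level=0)['probability'].mean()
ensemble = {key: (p + bowprobs.loc[key]) / 2
            for key, p in bertprobs.items() if key in bowprobs.index}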
1,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the .json files
Step1: Look at where exactly in the file the data we need is located
Step2: Read the data we need as dataframes
Step3: Create separate columns in the dataframes holding the data in formats that are convenient for us.
Step4: Create columns in the "Goal1Completions" dataframe where we will store the number of sessions and the conversion rate
Step5: Copy the number of sessions from the sessions table and compute the conversion rate for every page that appears in "Goal1Completions"
Step6: Set the conversion rate to zero for pages that had no sessions. In this case that is the "(entrance)" page
Step7: Plot the chart
Step8: Print the result | Python Code:
# imports used below
import json
import pandas as pd
import matplotlib.pyplot as plt

path = 'task_data/Sessions_Page.json'
path2 = 'task_data/Goal1CompletionLocation_Goal1Completions.json'
with open(path, 'r') as f:
sessions_page = json.loads(f.read())
with open(path2, 'r') as f:
goals_page = json.loads(f.read())
Explanation: Load the .json files
End of explanation
type (sessions_page)
sessions_page.keys()
sessions_page['reports'][0].keys()
sessions_page['reports'][0]['data']['rows']
Explanation: Look at where exactly in the file the data we need is located
End of explanation
sessions_df = pd.DataFrame(sessions_page['reports'][0]['data']['rows'])
goals_df = pd.DataFrame(goals_page['reports'][0]['data']['rows'])
#sessions_df
goals_df
Explanation: Read the data we need as dataframes
End of explanation
x=[]
for i in sessions_df.dimensions:
x.append(str(i[0]))
sessions_df.insert(2, 'name', x)
x=[]
for i in goals_df.dimensions:
x.append(str(i[0]))
goals_df.insert(2, 'name', x)
x=[]
for i in sessions_df.metrics:
x.append(float(i[0]['values'][0]))
sessions_df.insert(3, 'sessions', x)
x=[]
for i in goals_df.metrics:
x.append(float(i[0]['values'][0]))
goals_df.insert(3, 'goals', x)
Explanation: Create separate columns in the dataframes holding the data in formats that are convenient for us.
End of explanation
goals_df.insert(4, 'sessions', 0)
goals_df.insert(5, 'convers_rate', 0)
Explanation: Create columns in the "Goal1Completions" dataframe where we will store the number of sessions and the conversion rate
End of explanation
for i in range(7):
goals_df.sessions[i] = sum(sessions_df.sessions[sessions_df.name==goals_df.name[i]])
goals_df.convers_rate = goals_df.goals/goals_df.sessions*100
Explanation: Copy the number of sessions from the sessions table and compute the conversion rate for every page that appears in "Goal1Completions"
End of explanation
goals_df.convers_rate[goals_df.sessions==0] = 0
goals_df.ix[range(1,7),[2,5]]
Explanation: Set the conversion rate to zero for pages that had no sessions. In this case that is the "(entrance)" page
End of explanation
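The same two steps can also be done without an explicit loop, for example by aggregating sessions per page name and mapping them onto the goals dataframe (a sketch; column names as created above):

sessions_per_page = sessions_df.groupby('name')['sessions'].sum()
goals_df['sessions'] = goals_df['name'].map(sessions_per_page).fillna(0)
rate = goals_df['goals'] / goals_df['sessions'] * 100
goals_df['convers_rate'] = rate.replace([float('inf')], 0).fillna(0)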
goals_df.ix[range(1,7),[2,5]].plot(kind="bar", legend=False)
plt.xticks([0, 1, 2, 3, 4, 5], goals_df.name, rotation="vertical")
plt.show()
Explanation: Plot the chart
End of explanation
name = goals_df.ix[goals_df.convers_rate==max(goals_df.convers_rate),2]
print ('The best converting page on your site is "',str(name)[5:len(name)-28], '" with conversion rate', max(goals_df.convers_rate),'%')
Explanation: Print the result
End of explanation |
1,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aim
The aim of this notebook is to give a brief overview of how to use the evolutionary-sampling-powered ensemble models that are part of the EvoML research project.
I will make the notebook more verbose if time permits. The priority is to showcase the flexible API of the new estimators, which encourages research and tinkering.
Contents
Subsampling
Subspacing
1. Subsampling - Sampling in the example space - rows will be mutated and evolved.
Step1: 2. Subspacing - sampling in the domain of features - evolving and mutating columns | Python Code:
import pandas as pd

from evoml.subsampling import BasicSegmenter_FEMPO, BasicSegmenter_FEGT, BasicSegmenter_FEMPT
df = pd.read_csv('datasets/ozone.csv')
df.head(2)
X, y = df.iloc[:,:-1], df['output']
print(BasicSegmenter_FEGT.__doc__)
from sklearn.tree import DecisionTreeRegressor
clf_dt = DecisionTreeRegressor(max_depth=3)
clf = BasicSegmenter_FEGT(base_estimator=clf_dt, statistics=True)
clf.fit(X, y)
clf.score(X, y)
EGs = clf.segments_
len(EGs)
sampled_datasets = [eg.get_data() for eg in EGs]
[sd.shape for sd in sampled_datasets]
Explanation: Aim
The aim of this notebook is to give a brief overview of how to use the evolutionary-sampling-powered ensemble models that are part of the EvoML research project.
I will make the notebook more verbose if time permits. The priority is to showcase the flexible API of the new estimators, which encourages research and tinkering.
Contents
Subsampling
Subspacing
1. Subsampling - Sampling in the example space - rows will be mutated and evolved.
End of explanation
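For context, a quick baseline with the plain base estimator on the same data (this is ordinary scikit-learn, not part of the EvoML API):

baseline = DecisionTreeRegressor(max_depth=3)
baseline.fit(X, y)
baseline.score(X, y)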
from evoml.subspacing import FeatureStackerFEGT, FeatureStackerFEMPO
print(FeatureStackerFEGT.__doc__)
clf = FeatureStackerFEGT(ngen=30)
clf.fit(X, y)
clf.score(X, y)
## Get the Hall of Fame individual
hof = clf.segment[0]
sampled_datasets = [eg.get_data() for eg in hof]
[data.columns.tolist() for data in sampled_datasets]
## Original X columns
X.columns
Explanation: 2. Subspacing - sampling in the domain of features - evolving and mutating columns
End of explanation |
1,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
unique_word = set(text)
vocab_to_int = {word:i for i,word in enumerate(unique_word)}
int_to_vocab = {vocab_to_int[word]:word for word in vocab_to_int}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
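An equivalent sketch using collections.Counter, if you prefer ids ordered by word frequency (the ordering is not required by the unit test):

from collections import Counter

def create_lookup_tables_by_freq(text):
    counts = Counter(text)
    vocab = sorted(counts, key=counts.get, reverse=True)
    vocab_to_int = {word: i for i, word in enumerate(vocab)}
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab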
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
token_dict = {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation||',
'!': '||Exclamation||',
'?': '||Question||',
'(': '||Left_par||',
')': '||Right_par||',
'--': '||Dash||',
'\n': '||Return||',
';': '||Semicolon||'
}
return token_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='labels')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return (inputs, targets, learning_rate)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm(rnn_size) for _ in range(2)])
# Getting an initial state of all zeros
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)
final_state = tf.identity(final_state, name="final_state")
return (outputs, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
Logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn = None)
return Logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
n_batches = int(len(int_text) / (batch_size * seq_length))
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.zeros_like(xdata)
ydata[: n_batches * batch_size * seq_length-1] = np.array(int_text[1: (n_batches * batch_size * seq_length)])
ydata[-1] = int_text[0]
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.asarray(list(zip(x_batches, y_batches)))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
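A quick sanity check against the worked example above:

demo = get_batches(list(range(1, 21)), 3, 2)
print(demo.shape)   # expected (3, 2, 3, 2)
print(demo[0])      # first batch: inputs and targets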
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 32
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 50
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input0 = loaded_graph.get_tensor_by_name('input:0')
init_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return input0, init_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
# next_word = np.random.choice(list(int_to_vocab.values()), p=probabilities)
next_word_index = np.random.choice(len(int_to_vocab), p=probabilities)
return int_to_vocab[next_word_index]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
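A tiny demonstration with a made-up three-word vocabulary (the names here are arbitrary):

demo_probs = np.array([0.1, 0.7, 0.2])
demo_vocab = {0: 'moe', 1: 'homer', 2: 'barney'}
print(pick_word(demo_probs, demo_vocab))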
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
1,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: Create a Dakota instance to perform a centered parameter study with HydroTrend.
Step2: Define the HydroTrend input variables to be used in the parameter study, as well as the initial point in parameter space, the step size, and the range of the variables.
Step3: Define the HydroTrend outputs to be used in the parameter study, as well as the statistics to be calculated from them.
Step4: HydroTrend requires a set of files to run.
They're included in the data directory of this example.
They can also be obtained directly from the HydroTrend GitHub repository.
Set paths to these files with the following statements.
Step5: The template file provides the configuration file for HydroTrend, but with all parameter values replaced by variables in the form {parameter_name}. The parameters file provides descriptions, ranges, and default values for all of the parameters represented in the template file. The hypsometry file describes the change in elevation along the river's course from source to sea.
From the template and parameters files,
we can create an input file that HydroTrend can run.
Included in the CSDMS Dakota package is a routine that replaces the variables in the template file with default values from the parameters file.
Import this routine and use it to create a HydroTrend input file.
Step6: Next, we must replace the default values for the variables for starting_mean_annual_temperature and total_annual_precipitation with variable names for Dakota to substitute into. The CSDMS Dakota package also includes a routine to do this. Import this routine and use it to create a Dakota template file.
Step7: Associate the Dakota template file and the hypsometry file with the Dakota instance.
Step8: Call the setup method to create files needed by Dakota, then run the experiment.
Step9: Check the output; in particular, the dakota.dat file. | Python Code:
from dakotathon import Dakota
Explanation: <img src="http://csdms.colorado.edu/mediawiki/images/CSDMS_high_res_weblogo.jpg">
Centered Parameter Study with HydroTrend
HydroTrend is a numerical model that creates synthetic river discharge and sediment load time series as a function of climate trends and basin morphology.
In this example, we'll perform a centered parameter study,
evaluating how changing two HydroTrend input parameters:
starting_mean_annual_temperature and
total_annual_precipitation
affects two output parameters
median long-term suspended sediment load at the river mouth and
mean discharge at the river mouth
over a one-year run.
Before we start, make sure that you've installed Dakota, HydroTrend, and this package on your computer, using the instructions in the README file.
Start by importing the Dakota class.
End of explanation
d = Dakota(method='centered_parameter_study', plugin='hydrotrend')
Explanation: Create a Dakota instance to perform a centered parameter study with HydroTrend.
End of explanation
d.variables.descriptors = ['starting_mean_annual_temperature', 'total_annual_precipitation']
d.variables.initial_point = [15.0, 2.0]
d.method.steps_per_variable = [2, 5]
d.method.step_vector = [2.5, 0.2]
Explanation: Define the HydroTrend input variables to be used in the parameter study, as well as the initial point in parameter space, the step size, and the range of the variables.
End of explanation
d.responses.response_descriptors = ['Qs_median', 'Q_mean']
d.responses.response_files = ['HYDROASCII.QS', 'HYDROASCII.Q']
d.responses.response_statistics = ['median', 'mean']
Explanation: Define the HydroTrend outputs to be used in the parameter study, as well as the statistics to be calculated from them.
End of explanation
import os
data_dir = os.path.join(os.getcwd(), 'data')
template_file = os.path.join(data_dir, 'hydrotrend.in.tmpl')
parameters_file = os.path.join(data_dir, 'parameters.yaml')
hypsometry_file = os.path.join(data_dir, 'HYDRO0.HYPS')
Explanation: HydroTrend requires a set of files to run.
They're included in the data directory of this example.
They can also be obtained directly from the HydroTrend GitHub repository.
Set paths to these files with the following statements.
End of explanation
from dakotathon.plugins.base import write_dflt_file
default_input_file = write_dflt_file(template_file, parameters_file, run_duration=365)
print default_input_file
Explanation: The template file provides the configuration file for HydroTrend, but with all parameter values replaced by variables in the form {parameter_name}. The parameters file provides descriptions, ranges, and default values for all of the parameters represented in the template file. The hypsometry file describes the change in elevation along the river's course from source to sea.
From the template and parameters files,
we can create an input file that HydroTrend can run.
Included in the CSDMS Dakota package is a routine that replaces the variables in the template file with default values from the parameters file.
Import this routine and use it to create a HydroTrend input file.
End of explanation
from dakotathon.plugins.base import write_dtmpl_file
dakota_template_file = write_dtmpl_file(template_file, default_input_file, d.variables.descriptors)
print dakota_template_file
Explanation: Next, we must replace the default values for the variables for starting_mean_annual_temperature and total_annual_precipitation with variable names for Dakota to substitute into. The CSDMS Dakota package also includes a routine to do this. Import this routine and use it to create a Dakota template file.
End of explanation
d.template_file = dakota_template_file
d.auxiliary_files = hypsometry_file
Explanation: Associate the Dakota template file and the hypsometry file with the Dakota instance.
End of explanation
d.setup()
d.run()
Explanation: Call the setup method to create files needed by Dakota, then run the experiment.
End of explanation
%cat dakota.dat
Explanation: Check the output; in particular, the dakota.dat file.
End of explanation |
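To inspect the results programmatically, the tabular file can usually be loaded with pandas; this is a sketch that assumes dakota.dat is whitespace-delimited with a single header row (column names depend on the study):

import pandas as pd
results = pd.read_csv('dakota.dat', delim_whitespace=True)
results.head()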
1,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sine Wave Generator
Step1: To implement our sine wave generator, we'll use a counter to index into a ROM that is programmed to output the value of discrete points in the sine wave.
Step2: Compile and test.
Step3: We can wire up the GPIO pins to a logic analyzer to verify that our circuit produces the correct sine waveform.
We can also use Saleae's export data feature to output a csv file. We'll load this data into Python and plot the results.
Step4: TODO | Python Code:
import math
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def sine(x):
return np.sin(2 * math.pi * x)
x = np.linspace(0., 1., num=256, endpoint=False)
plt.plot(x, sine(x))
import magma as m
m.set_mantle_target("ice40")
import mantle
from loam.boards.icestick import IceStick
N = 8
icestick = IceStick()
icestick.Clock.on()
for i in range(N):
icestick.J3[i].output().on()
Explanation: Sine Wave Generator
End of explanation
main = icestick.main()
counter = mantle.Counter(32)
sawtooth = counter.O[8:8+8]
wavetable = 128 + 127 * sine(x)
wavetable = [int(x) for x in wavetable]
rom = mantle.Memory(height=256, width=16, rom=list(wavetable), readonly=True)
m.wire( rom(sawtooth)[0:8], main.J3 )
m.wire( 1, rom.RE )
m.EndCircuit()
Explanation: To implement our sine wave generator, we'll use a counter to index into a ROM that is programmed to output the value of discrete points in the sine wave.
End of explanation
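As a quick check of the ROM contents, we can also plot the quantized wavetable itself, reusing x and plt from above:

plt.plot(x, wavetable)
plt.title("8-bit quantized sine wavetable (0-255)")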
m.compile('build/sin', main)
%%bash
cd build
cat sin.pcf
yosys -q -p 'synth_ice40 -top main -blif sin.blif' sin.v
arachne-pnr -q -d 1k -o sin.txt -p sin.pcf sin.blif
icepack sin.txt sin.bin
iceprog sin.bin
Explanation: Compile and test.
End of explanation
import csv
import magma as m
with open("data/sine-capture.csv") as sine_capture_csv:
csv_reader = csv.reader(sine_capture_csv)
next(csv_reader, None) # skip the headers
rows = [row for row in csv_reader]
timestamps = [float(row[0]) for row in rows]
values = [m.bitutils.seq2int(tuple(int(x) for x in row[1:])) for row in rows]
Explanation: We can wire up the GPIO pins to a logic analyzer to verify that our circuit produces the correct sine waveform.
We can also use Saleae's export data feature to output a csv file. We'll load this data into Python and plot the results.
End of explanation
plt.plot(timestamps[:250], values[:250], "b.")
Explanation: TODO: Why do we have this jitter? Logic analyzer is running at 25 MS/s, 3.3+ Volts for 1s
End of explanation |
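One way to sanity-check the capture (and to start investigating the jitter) is to look at the spectrum of the sampled values; a rough sketch, assuming the 25 MS/s sample rate mentioned above:

samples = np.array(values) - np.mean(values)
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1 / 25e6)
print("strongest component near", freqs[np.argmax(spectrum)], "Hz")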
1,565 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GroupBy examples
Allen Downey
MIT License
Step1: Let's load the GSS dataset.
Step2: The GSS interviews a few thousand respondents each year.
Step3: One of the questions they ask is "Do you think the use of marijuana should be made legal or not?"
The answer codes are
Step4: I'll replace "Don't know", "No answer", and "Not applicable" with NaN.
Step5: And replace 2, which represents "No", with 0. That way we can use mean to compute the fraction in favor.
Step6: Here are the value counts after replacement.
Step7: And here's the mean.
Step8: So 30% of respondents thought marijuana should be legal, at the time they were interviewed.
Now we can see how that fraction depends on age, cohort (year of birth), and period (year of interview).
Group by year
First we'll group respondents by year.
Step9: The result is a DataFrameGroupBy object we can iterate through
Step10: And we can compute summary statistics for each group.
Step11: Using a for loop can be useful for debugging, but it is more concise, more idiomatic, and faster to apply operations directly to the DataFrameGroupBy object.
For example, if you select a column from a DataFrameGroupBy, the result is a SeriesGroupBy that represents one Series for each group.
Step12: You can loop through the SeriesGroupBy, but you normally don't.
Step13: Instead, you can apply a function to the SeriesGroupBy; the result is a new Series that maps from group names to the results from the function; in this case, it's the fraction of support for each interview year.
Step14: Overall support for legalization has been increasing since 1990.
Step15: Group by cohort
The variable cohort contains respondents' year of birth.
Step16: Pulling together the code from the previous section, we can plot support for legalization by year of birth.
Step17: Later generations are more likely to support legalization than earlier generations.
Group by age
Finally, let's see how support varies with age at time of interview. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='white')
from thinkstats2 import Pmf, Cdf
import thinkstats2
import thinkplot
decorate = thinkplot.config
Explanation: GroupBy examples
Allen Downey
MIT License
End of explanation
%time gss = pd.read_hdf('../homeworks/gss.hdf5', 'gss')
gss.head()
def counts(series):
return series.value_counts(sort=False).sort_index()
Explanation: Let's load the GSS dataset.
End of explanation
counts(gss['year'])
Explanation: The GSS interviews a few thousand respondents each year.
End of explanation
counts(gss['grass'])
Explanation: One of the questions they ask is "Do you think the use of marijuana should be made legal or not?"
The answer codes are:
1 Legal
2 Not legal
8 Don't know
9 No answer
0 Not applicable
Here is the distribution of responses for all years.
End of explanation
gss['grass'].replace([0,8,9], np.nan, inplace=True)
Explanation: I'll replace "Don't know", "No answer", and "Not applicable" with NaN.
End of explanation
gss['grass'].replace(2, 0, inplace=True)
Explanation: And replace 2, which represents "No", with 0. That way we can use mean to compute the fraction in favor.
End of explanation
counts(gss['grass'])
Explanation: Here are the value counts after replacement.
End of explanation
gss['grass'].mean()
Explanation: And here's the mean.
End of explanation
grouped = gss.groupby('year')
grouped
Explanation: So 30% of respondents thought marijuana should be legal, at the time they were interviewed.
Now we can see how that fraction depends on age, cohort (year of birth), and period (year of interview).
Group by year
First we'll group respondents by year.
End of explanation
for name, group in grouped:
print(name, len(group))
Explanation: The result is a DataFrameGroupBy object we can iterate through:
End of explanation
for name, group in grouped:
print(name, group['grass'].mean())
Explanation: And we can compute summary statistics for each group.
End of explanation
grouped['grass']
Explanation: Using a for loop can be useful for debugging, but it is more concise, more idiomatic, and faster to apply operations directly to the DataFrameGroupBy object.
For example, if you select a column from a DataFrameGroupBy, the result is a SeriesGroupBy that represents one Series for each group.
End of explanation
for name, series in grouped['grass']:
print(name, series.mean())
Explanation: You can loop through the SeriesGroupBy, but you normally don't.
End of explanation
series = grouped['grass'].mean()
series
Explanation: Instead, you can apply a function to the SeriesGroupBy; the result is a new Series that maps from group names to the results from the function; in this case, it's the fraction of support for each interview year.
End of explanation
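You can also compute several statistics at once with agg; for example, the fraction in favor and the number of respondents per year:

grouped['grass'].agg(['mean', 'count'])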
series.plot(color='C0')
decorate(xlabel='Year of interview',
ylabel='% in favor',
title='Should marijuana be made legal?')
Explanation: Overall support for legalization has been increasing since 1990.
End of explanation
counts(gss['cohort'])
Explanation: Group by cohort
The variable cohort contains respondents' year of birth.
End of explanation
grouped = gss.groupby('cohort')
series = grouped['grass'].mean()
series.plot(color='C1')
decorate(xlabel='Year of birth',
ylabel='% in favor',
title='Should marijuana be made legal?')
Explanation: Pulling together the code from the previous section, we can plot support for legalization by year of birth.
End of explanation
grouped = gss.groupby('age')
series = grouped['grass'].mean()
series.plot(color='C2')
decorate(xlabel='Age at interview',
ylabel='% in favor',
title='Should marijuana be made legal?')
Explanation: Later generations are more likely to support legalization than earlier generations.
Group by age
Finally, let's see how support varies with age at time of interview.
End of explanation |
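The age series is noisier than the others; a centered rolling mean makes the trend easier to see (the window size here is arbitrary):

smoothed = series.rolling(7, center=True).mean()
smoothed.plot(color='C2')
decorate(xlabel='Age at interview',
         ylabel='% in favor (smoothed)',
         title='Should marijuana be made legal?')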
1,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="top"></a>
Db2 Compatibility Features
Moving from one database vendor to another can sometimes be difficult due to syntax differences between data types, functions, and language elements. Db2 already has a high degree of compatibility with Oracle PLSQL along with some of the Oracle data types.
Db2 11 introduces some additional data type and function compatibility that will reduce some of the migration effort required when porting from other systems. There are some specific features within Db2 that are targeted at Netezza SQL and that is discussed in a separate section.
Step1: We populate the database with the EMPLOYEE and DEPARTMENT tables so that we can run the various examples.
Step2: Table of Contents
Outer Join Operator
CHAR datatype size increase
Binary Data Type
Boolean Data Type
Synonyms for Data Types
Function Synonymns
Netezza Compatibility
Select Enhancements
Hexadecimal Functions
Table Creation with Data
<a id='outer'></a>
Outer Join Operator
Db2 allows the use of the Oracle outer-join operator when Oracle compatibility is turned on within a database. In Db2 11, the outer join operator is available by default and does not require the DBA to turn on Oracle compatibility.
Db2 supports standard join syntax for LEFT and RIGHT OUTER JOINS.
However, there is proprietary syntax used by Oracle employing a keyword
Step3: This example works in the same manner as the last one, but uses
the "+" sign syntax. The format is a lot simpler to remember than OUTER JOIN
syntax, but it is not part of the SQL standard.
Step4: Back to Top
<a id='char'></a>
CHAR Datatype Size Increase
The CHAR datatype was limited to 254 characters in prior releases of Db2. In Db2 11, the limit has been increased
to 255 characters to bring it in line with other SQL implementations.
First we drop the table if it already exists.
Step5: Back to Top
<a id='binary'></a>
Binary Data Types
Db2 11 introduces two new binary data types
Step6: Inserting data into a binary column can be done through the use of BINARY functions, or the use of X'xxxx' modifiers when using the VALUE clause. For fixed strings you use the X'00' format to specify a binary value and BX'00' for variable length binary strings. For instance, the following SQL will insert data into the previous table that was created.
Step7: Handling binary data with a FOR BIT DATA column was sometimes tedious, so the BINARY columns will make coding a little simpler. You can compare and assign values between any of these types of columns. The next SQL statement will update the AUDIO_CHAR column with the contents of the AUDIO_SHORT column. Then the SQL will test to make sure they are the same value.
Step8: We should have one record that is equal.
Step9: Back to Top
<a id='boolean'></a>
Boolean Data Type
The boolean data type (true/false) has been available in SQLPL and PL/SQL scripts for some time. However,
the boolean data type could not be used in a table definition. Db2 11 FP1 now allows you to use this
data type in a table definition and use TRUE/FALSE clauses to compare values.
This simple table will be used to demonstrate how BOOLEAN types can be used.
Step10: The keywords for a true value are TRUE, 'true', 't', 'yes', 'y', 'on', and '1'. For false the values are
FALSE, 'false', 'f', 'no', 'n', and '0'.
Step11: Now we can check to see what has been inserted into the table.
Step12: Retrieving the data in a SELECT statement will return an integer value for display purposes.
1 is true and 0 is false (binary 1 and 0).
Comparison operators with BOOLEAN data types will use TRUE, FALSE, 1 or 0 or any of the supported binary values. You have the choice of using the equal (=) operator or the IS or IS NOT syntax as shown in the following SQL.
Step13: Back to Top
<a id='synonyms'></a>
Synonym Data types
Db2 has the standard data types that most developers are familiar with, like CHAR, INTEGER, and DECIMAL. There are other SQL implementations that use different names for these data types, so Db2 11 now allows these data types as syonomys for the base types.
These data types are
Step14: When you create a table with these other data types, Db2 does not use these "types" in the catalog. What Db2 will do is use the Db2 type instead of these synonym types. What this means is that if you describe the contents of a table,
you will see the Db2 types displayed, not these synonym types.
Step15: Back to Top
<a id='function'></a>
Function Name Compatibility
Db2 has a wealth of built-in functions that are equivalent to competitive functions, but with a different name. In
Db2 11, these alternate function names are mapped to the Db2 function so that there is no re-write of the function
name required. This first SQL statement generates some data required for the statistical functions.
Generate Linear Data
This command generates X,Y coordinate pairs in the xycoord table that are based on the
function y = 2x + 5. Note that the table creation uses Common Table Expressions
and recursion to generate the data!
Step16: COVAR_POP is an alias for COVARIANCE
Step17: STDDEV_POP is an alias for STDDEV
Step18: VAR_SAMP is an alias for VARIANCE_SAMP
Step19: ISNULL, NOTNULL is an alias for IS NULL, IS NOT NULL
Step20: LOG is an alias for LN
Step21: RANDOM is an alias for RAND
Notice that the random number that is generated for the two calls results in a different value! This behavior is
not the same with timestamps, where the value is calculated once during the execution of the SQL.
Step22: STRPOS is an alias for POSSTR
Step23: STRLEFT is an alias for LEFT
Step24: STRRIGHT is an alias for RIGHT
Step25: Additional Synonyms
There are a couple of additional keywords that are synonyms for existing Db2 functions. The list below includes only
those features that were introduced in Db2 11.
|Keyword | Db2 Equivalent
|
Step26: If we turn on NPS compatibility, you see a couple of special characters change behavior. Specifically the
^ operator becomes a "power" operator, and the # becomes an XOR operator.
Step27: GROUP BY Ordinal Location
The GROUP BY command behavior also changes in NPS mode. The following SQL statement groups results
using the default Db2 syntax
Step28: If you try using the ordinal location (similar to an ORDER BY clause), you will
get an error message.
Step29: If NPS compatibility is turned on, then you can use the GROUP BY clause with an ordinal location.
Step30: TRANSLATE Function
The translate function syntax in Db2 is
Step31: In this example, the letter 'o' will be replaced with a '1'.
Step32: Note that you could replace more than one character by expanding both the "to" and "from" strings. This
example will replace the letter "e" with a "2" as well as "o" with "1".
Step33: Translate will also remove a character if it is not in the "to" list.
Step34: Reset the behavior back to Db2 mode.
Step35: Back to Top
<a id='select'></a>
SELECT Enhancements
Db2 has the ability to limit the amount of data retrieved on a SELECT statement
through the use of the FETCH FIRST n ROWS ONLY clause. In Db2 11, the ability to offset
the rows before fetching was added to the FETCH FIRST clause.
Simple SQL with Fetch First Clause
The FETCH FIRST clause can be used in a variety of locations in a SELECT statement. This
first example fetches only 5 rows from the EMPLOYEE table.
Step36: You can also add ORDER BY and GROUP BY clauses in the SELECT statement. Note that
Db2 still needs to process all of the records and do the ORDER/GROUP BY work
before limiting the answer set. So you are not getting the first 5 rows "sorted". You
are actually getting the entire answer set sorted before retrieving just 5 rows.
Step37: Here is an example with the GROUP BY statement. This first SQL statement gives us the total
answer set - the count of employees by WORKDEPT.
Step38: Adding the FETCH FIRST clause only reduces the rows returned, not the rows that
are used to compute the GROUPing result.
Step39: OFFSET Extension
The FETCH FIRST n ROWS ONLY clause can also include an OFFSET keyword. The OFFSET keyword
allows you to retrieve the answer set after skipping "n" number of rows. The syntax of the OFFSET
keyword is
Step40: You can specify a zero offset to begin from the beginning.
Step41: Now we can move the answer set ahead by 5 rows and get the remaining
5 rows in the answer set.
Step42: FETCH FIRST and OFFSET in SUBSELECTs
The FETCH FIRST/OFFSET clause is not limited to regular SELECT statements. You can also
limit the number of rows that are used in a subselect. In this case you are limiting the amount of
data that Db2 will scan when determining the answer set.
For instance, say you wanted to find the names of the employees who make more than the
average salary of the 3rd highest paid department. (By the way, there are multiple ways to
do this, but this is one approach).
The first step is to determine what the average salary is of all departments.
Step43: We only want one record from this list (the third one), so we can use the FETCH FIRST clause with
an OFFSET to get the value we want (Note
Step44: And here is the list of employees that make more than the average salary of the 3rd highest department in the
company.
Step45: Alternate Syntax for FETCH FIRST
The FETCH FIRST n ROWS ONLY and OFFSET clause can also be specified using a simpler LIMIT/OFFSET syntax.
The LIMIT clause and the equivalent FETCH FIRST syntax are shown below.
|Syntax |Equivalent
|
Step46: Here is the list of employees that make more than the average salary of the 3rd highest department in the
company. Note that the LIMIT clause specifies only the limit (LIMIT x), or the offset and limit (LIMIT y,x), when you do not use the OFFSET keyword. One would think that LIMIT x OFFSET y would translate into LIMIT x,y but that is not the case. Don't try to figure out the SQL standards reasoning behind the syntax!
Step47: Back to Top
<a id='hexadecimal'></a>
Hexadecimal Functions
A number of new HEX manipulation functions have been added to Db2 11. There are a class of functions
that manipulate different size integers (SMALL, INTEGER, BIGINT) using NOT, OR, AND, and XOR. In addition to
these functions, there are a number of functions that display and convert values into hexadecimal values.
INTN Functions
The INTN functions are bitwise functions that operate on the "two's complement" representation of
the integer value of the input arguments and return the result as a corresponding base 10 integer value.
The function names all include the size of the integers that are being manipulated
Step48: This example will show the four functions used against SMALLINT (INT2) data types.
Step49: This example will use the 4 byte (INT4) data type.
Step50: Finally, the INT8 data type is used in the SQL. Note that you can mix and match the INT2, INT4, and INT8 values
in these functions but you may get truncation if the value is too big.
Step51: TO_HEX Function
The TO_HEX function converts a numeric expression into a character hexadecimal representation. For example, the
numeric value 255 represents x'FF'. The value returned from this function is a VARCHAR value and its
length depends on the size of the number you supply.
Step52: RAWTOHEX Function
The RAWTOHEX function returns a hexadecimal representation of a value as a character string. The
result is a character string itself.
Step53: The string "00" converts to a hex representation of x'3030' which is 12336 in Decimal.
So the TO_HEX function would convert this back to the HEX representation.
Step54: The string that is returned by the RAWTOHEX function should be the same.
Step55: Back to Top
<a id="create"><a/>
Table Creation Extensions
The CREATE TABLE statement can now use a SELECT clause to generate the definition and LOAD the data
at the same time.
Create Table Syntax
The syntax of the CREATE table statement has been extended with the AS (SELECT ...) WITH DATA clause
Step56: You can name a column in the SELECT list or place it in the table definition.
Step57: You can check the SYSTEM catalog to see the table definition.
Step58: The DEFINITION ONLY clause will create the table but not load any data into it. Adding the WITH DATA
clause will do an INSERT of rows into the newly created table. If you have a large amount of data
to load into the table you may be better off creating the table with DEFINITION ONLY and then using LOAD
or other methods to load the data into the table.
Step59: The SELECT statement can be very sophisticated. It can do any type of calculation or limit the
data to a subset of information.
Step60: You can also use the OFFSET clause as part of the FETCH FIRST ONLY to get chunks of data from the
original table. | Python Code:
%run db2.ipynb
Explanation: <a id="top"></a>
Db2 Compatibility Features
Moving from one database vendor to another can sometimes be difficult due to syntax differences between data types, functions, and language elements. Db2 already has a high degree of compatibility with Oracle PLSQL along with some of the Oracle data types.
Db2 11 introduces some additional data type and function compatibility that will reduce some of the migration effort required when porting from other systems. There are some specific features within Db2 that are targeted at Netezza SQL, and these are discussed in a separate section.
End of explanation
%sql -sampledata
Explanation: We populate the database with the EMPLOYEE and DEPARTMENT tables so that we can run the various examples.
End of explanation
%%sql -a
SELECT DEPTNAME, LASTNAME FROM
DEPARTMENT D LEFT OUTER JOIN EMPLOYEE E
ON D.DEPTNO = E.WORKDEPT
Explanation: Table of Contents
Outer Join Operator
CHAR datatype size increase
Binary Data Type
Boolean Data Type
Synonyms for Data Types
Function Synonyms
Netezza Compatibility
Select Enhancements
Hexadecimal Functions
Table Creation with Data
<a id='outer'></a>
Outer Join Operator
Db2 allows the use of the Oracle outer-join operator when Oracle compatibility is turned on within a database. In Db2 11, the outer join operator is available by default and does not require the DBA to turn on Oracle compatibility.
Db2 supports standard join syntax for LEFT and RIGHT OUTER JOINS.
However, there is proprietary syntax used by Oracle employing a keyword: "(+)"
to mark the "null-producing" column reference that precedes it in an
implicit join notation. That is (+) appears in the WHERE clause and
refers to a column of the inner table in a left outer join.
For instance:
Python
SELECT * FROM T1, T2
WHERE T1.C1 = T2.C2 (+)
Is the same as:
Python
SELECT * FROM T1 LEFT OUTER JOIN T2 ON T1.C1 = T2.C2
In this example, we get list of departments and their employees, as
well as the names of departments who have no employees.
This example uses the standard Db2 syntax.
End of explanation
%%sql
SELECT DEPTNAME, LASTNAME FROM
DEPARTMENT D, EMPLOYEE E
WHERE D.DEPTNO = E.WORKDEPT (+)
Explanation: This example works in the same manner as the last one, but uses
the "+" sign syntax. The format is a lot simpler to remember than OUTER JOIN
syntax, but it is not part of the SQL standard.
End of explanation
%%sql -q
DROP TABLE LONGER_CHAR;
CREATE TABLE LONGER_CHAR
(
NAME CHAR(255)
);
Explanation: Back to Top
<a id='char'></a>
CHAR Datatype Size Increase
The CHAR datatype was limited to 254 characters in prior releases of Db2. In Db2 11, the limit has been increased
to 255 characters to bring it in line with other SQL implementations.
First we drop the table if it already exists.
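As a quick check of the new limit (a sketch only; REPEAT and LENGTH are existing Db2 functions, and the value inserted here is just sample data), you could then insert a 255-character string and confirm its length:
Python
INSERT INTO LONGER_CHAR VALUES (REPEAT('A',255));
SELECT LENGTH(NAME) FROM LONGER_CHAR;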
End of explanation
%%sql -q
DROP TABLE HEXEY;
CREATE TABLE HEXEY
(
AUDIO_SHORT BINARY(255),
AUDIO_LONG VARBINARY(1024),
AUDIO_CHAR VARCHAR(255) FOR BIT DATA
);
Explanation: Back to Top
<a id='binary'></a>
Binary Data Types
Db2 11 introduces two new binary data types: BINARY and VARBINARY. These two data types can contain any combination
of characters or binary values and are not affected by the codepage of the server that the values are stored on.
A BINARY data type is fixed and can have a maximum length of 255 bytes, while a VARBINARY column can contain up to
32672 bytes. Each of these data types is compatible with columns created with the FOR BIT DATA keyword.
The BINARY data type will reduce the amount of conversion required from other data bases. Although binary data was supported with the FOR BIT DATA clause on a character column, it required manual DDL changes when migrating a table definition.
This example shows the creation of the three types of binary data types.
End of explanation
%%sql
INSERT INTO HEXEY VALUES
(BINARY('Hello there'),
BX'2433A5D5C1',
VARCHAR_BIT_FORMAT(HEX('Hello there')));
SELECT * FROM HEXEY;
Explanation: Inserting data into a binary column can be done through the use of BINARY functions, or the use of X'xxxx' modifiers when using the VALUES clause. For fixed strings you use the X'00' format to specify a binary value and BX'00' for variable length binary strings. For instance, the following SQL will insert data into the previous table that was created.
End of explanation
%%sql
UPDATE HEXEY
SET AUDIO_CHAR = AUDIO_SHORT
Explanation: Handling binary data with a FOR BIT DATA column was sometimes tedious, so the BINARY columns will make coding a little simpler. You can compare and assign values between any of these types of columns. The next SQL statement will update the AUDIO_CHAR column with the contents of the AUDIO_SHORT column. Then the SQL will test to make sure they are the same value.
End of explanation
%%sql
SELECT COUNT(*) FROM HEXEY WHERE
AUDIO_SHORT = AUDIO_CHAR
Explanation: We should have one record that is equal.
End of explanation
%%sql -q
DROP TABLE TRUEFALSE;
CREATE TABLE TRUEFALSE (
EXAMPLE INT,
STATE BOOLEAN
);
Explanation: Back to Top
<a id='boolean'></a>
Boolean Data Type
The boolean data type (true/false) has been available in SQLPL and PL/SQL scripts for some time. However,
the boolean data type could not be used in a table definition. Db2 11 FP1 now allows you to use this
data type in a table definition and use TRUE/FALSE clauses to compare values.
This simple table will be used to demonstrate how BOOLEAN types can be used.
End of explanation
%%sql
INSERT INTO TRUEFALSE VALUES
(1, TRUE),
(2, FALSE),
(3, 0),
(4, 't'),
(5, 'no')
Explanation: The keywords for a true value are TRUE, 'true', 't', 'yes', 'y', 'on', and '1'. For false the values are
FALSE, 'false', 'f', 'no', 'n', and '0'.
End of explanation
%sql SELECT * FROM TRUEFALSE
Explanation: Now we can check to see what has been inserted into the table.
End of explanation
%%sql
SELECT * FROM TRUEFALSE
WHERE STATE = TRUE OR STATE = 1 OR STATE = 'on' OR STATE IS TRUE
Explanation: Retrieving the data in a SELECT statement will return an integer value for display purposes.
1 is true and 0 is false (binary 1 and 0).
Comparison operators with BOOLEAN data types will use TRUE, FALSE, 1 or 0 or any of the supported binary values. You have the choice of using the equal (=) operator or the IS or IS NOT syntax as shown in the following SQL.
End of explanation
%%sql -q
DROP TABLE SYNONYM_EMPLOYEE;
CREATE TABLE SYNONYM_EMPLOYEE
(
NAME VARCHAR(20),
SALARY INT4,
BONUS INT2,
COMMISSION INT8,
COMMISSION_RATE FLOAT4,
BONUS_RATE FLOAT8
);
Explanation: Back to Top
<a id='synonyms'></a>
Synonym Data Types
Db2 has the standard data types that most developers are familiar with, like CHAR, INTEGER, and DECIMAL. There are other SQL implementations that use different names for these data types, so Db2 11 now allows these data types as synonyms for the base types.
These data types are:
|Type |Db2 Equivalent
|:----- |:-------------
|INT2 |SMALLINT
|INT4 |INTEGER
|INT8 |BIGINT
|FLOAT4 |REAL
|FLOAT8 |FLOAT
The following SQL will create a table with all of these data types.
End of explanation
%%sql
SELECT DISTINCT(NAME), COLTYPE, LENGTH FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME='SYNONYM_EMPLOYEE' AND TBCREATOR=CURRENT USER
Explanation: When you create a table with these other data types, Db2 does not use these "types" in the catalog. What Db2 will do is use the Db2 type instead of these synonym types. What this means is that if you describe the contents of a table,
you will see the Db2 types displayed, not these synonym types.
End of explanation
%%sql -q
DROP TABLE XYCOORDS;
CREATE TABLE XYCOORDS
(
X INT,
Y INT
);
INSERT INTO XYCOORDS
WITH TEMP1(X) AS
(
VALUES (0)
UNION ALL
SELECT X+1 FROM TEMP1 WHERE X < 10
)
SELECT X, 2*X + 5
FROM TEMP1;
Explanation: Back to Top
<a id='function'></a>
Function Name Compatibility
Db2 has a wealth of built-in functions that are equivalent to competitive functions, but with a different name. In
Db2 11, these alternate function names are mapped to the Db2 function so that there is no re-write of the function
name required. This first SQL statement generates some data required for the statistical functions.
Generate Linear Data
This command generates X,Y coordinate pairs in the xycoord table that are based on the
function y = 2x + 5. Note that the table creation uses Common Table Expressions
and recursion to generate the data!
End of explanation
%%sql
SELECT 'COVAR_POP', COVAR_POP(X,Y) FROM XYCOORDS
UNION ALL
SELECT 'COVARIANCE', COVARIANCE(X,Y) FROM XYCOORDS
Explanation: COVAR_POP is an alias for COVARIANCE
End of explanation
%%sql
SELECT 'STDDEV_POP', STDDEV_POP(X) FROM XYCOORDS
UNION ALL
SELECT 'STDDEV', STDDEV(X) FROM XYCOORDS
Explanation: STDDEV_POP is an alias for STDDEV
End of explanation
%%sql
SELECT 'VAR_SAMP', VAR_SAMP(X) FROM XYCOORDS
UNION ALL
SELECT 'VARIANCE_SAMP', VARIANCE_SAMP(X) FROM XYCOORDS
Explanation: VAR_SAMP is an alias for VARIANCE_SAMP
End of explanation
%%sql
WITH EMP(LASTNAME, WORKDEPT) AS
(
VALUES ('George','A01'),
('Fred',NULL),
('Katrina','B01'),
('Bob',NULL)
)
SELECT * FROM EMP WHERE
WORKDEPT ISNULL
Explanation: ISNULL, NOTNULL is an alias for IS NULL, IS NOT NULL
End of explanation
%%sql
VALUES ('LOG',LOG(10))
UNION ALL
VALUES ('LN', LN(10))
Explanation: LOG is an alias for LN
End of explanation
%%sql
VALUES ('RANDOM', RANDOM())
UNION ALL
VALUES ('RAND', RAND())
Explanation: RANDOM is an alias for RAND
Notice that the random number that is generated for the two calls results in a different value! This behavior is
not the same with timestamps, where the value is calculated once during the execution of the SQL.
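As a quick illustration of that difference (a sketch only; any special register would behave the same way), selecting CURRENT TIMESTAMP twice in one statement should return the same value for both rows, because the register is evaluated once per statement:
Python
VALUES ('TS1', CURRENT TIMESTAMP)
UNION ALL
VALUES ('TS2', CURRENT TIMESTAMP)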
End of explanation
%%sql
VALUES ('POSSTR',POSSTR('Hello There','There'))
UNION ALL
VALUES ('STRPOS',STRPOS('Hello There','There'))
Explanation: STRPOS is an alias for POSSTR
End of explanation
%%sql
VALUES ('LEFT',LEFT('Hello There',5))
UNION ALL
VALUES ('STRLEFT',STRLEFT('Hello There',5))
Explanation: STRLEFT is an alias for LEFT
End of explanation
%%sql
VALUES ('RIGHT',RIGHT('Hello There',5))
UNION ALL
VALUES ('STRRIGHT',STRRIGHT('Hello There',5))
Explanation: STRRIGHT is an alias for RIGHT
End of explanation
%%sql
WITH SPECIAL(OP, DESCRIPTION, EXAMPLE, RESULT) AS
(
VALUES
(' | ','OR ', '2 | 3 ', 2 | 3),
(' & ','AND ', '2 & 3 ', 2 & 3),
(' ^ ','XOR ', '2 ^ 3 ', 2 ^ 3),
(' ~ ','COMPLEMENT', '~2 ', ~2),
(' # ','NONE ', ' ',0)
)
SELECT * FROM SPECIAL
Explanation: Additional Synonyms
There are a couple of additional keywords that are synonyms for existing Db2 functions. The list below includes only
those features that were introduced in Db2 11.
|Keyword | Db2 Equivalent
|:------------| :-----------------------------
|BPCHAR | VARCHAR (for casting function)
|DISTRIBUTE ON| DISTRIBUTE BY
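As a small sketch of the BPCHAR synonym (assuming it takes the same arguments as the VARCHAR casting function, as the table above implies), the following should return the character form of the number:
Python
VALUES BPCHAR(12345)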
Back to Top
<a id='netezza'></a>
Netezza Compatibility
Db2 provides features that enable applications that were written for a Netezza Performance Server (NPS)
database to use a Db2 database without having to be rewritten.
The SQL_COMPAT global variable is used to activate the following optional NPS compatibility features:
Double-dot notation - When operating in NPS compatibility mode, you can use double-dot notation to specify a database object (see the sketch after this list).
TRANSLATE parameter syntax - The syntax of the TRANSLATE parameter depends on whether NPS compatibility mode is being used.
Operators - Which symbols are used to represent operators in expressions depends on whether NPS compatibility mode is being used.
Grouping by SELECT clause columns - When operating in NPS compatibility mode, you can specify the ordinal position or exposed name of a SELECT clause column when grouping the results of a query.
Routines written in NZPLSQL - When operating in NPS compatibility mode, the NZPLSQL language can be used in addition to the SQL PL language.
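As a sketch of the double-dot notation (the database name below is a placeholder, not an object created in this notebook), an NPS-style reference such as the following should be accepted once SQL_COMPAT is set to 'NPS', with the schema resolved implicitly:
Python
SELECT * FROM MYDB..EMPLOYEE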
Special Characters
A quick review of Db2 special characters. Before we change the behavior of Db2, we need to understand
what some of the special characters do. The following SQL shows how some of the special characters
work. Note that the HASH/POUND sign (#) has no meaning in Db2.
End of explanation
%%sql
SET SQL_COMPAT = 'NPS';
WITH SPECIAL(OP, DESCRIPTION, EXAMPLE, RESULT) AS
(
VALUES
(' | ','OR ', '2 | 3 ', 2 | 3),
(' & ','AND ', '2 & 3 ', 2 & 3),
(' ^ ','POWER ', '2 ^ 3 ', 2 ^ 3),
(' ~ ','COMPLEMENT', '~2 ', ~2),
(' # ','XOR ', '2 # 3 ', 2 # 3)
)
SELECT * FROM SPECIAL;
Explanation: If we turn on NPS compatibility, you see a couple of special characters change behavior. Specifically the
^ operator becomes a "power" operator, and the # becomes an XOR operator.
End of explanation
%%sql
SET SQL_COMPAT='DB2';
SELECT WORKDEPT,INT(AVG(SALARY))
FROM EMPLOYEE
GROUP BY WORKDEPT;
Explanation: GROUP BY Ordinal Location
The GROUP BY command behavior also changes in NPS mode. The following SQL statement groups results
using the default Db2 syntax:
End of explanation
%%sql
SELECT WORKDEPT, INT(AVG(SALARY))
FROM EMPLOYEE
GROUP BY 1;
Explanation: If you try using the ordinal location (similar to an ORDER BY clause), you will
get an error message.
End of explanation
%%sql
SET SQL_COMPAT='NPS';
SELECT WORKDEPT, INT(AVG(SALARY))
FROM EMPLOYEE
GROUP BY 1;
Explanation: If NPS compatibility is turned on, then you can use the GROUP BY clause with an ordinal location.
End of explanation
%%sql
SET SQL_COMPAT = 'NPS';
VALUES TRANSLATE('Hello');
Explanation: TRANSLATE Function
The translate function syntax in Db2 is:
Python
TRANSLATE(expression, to_string, from_string, padding)
The TRANSLATE function returns a value in which one or more characters in a string expression might
have been converted to other characters. The function converts all the characters in char-string-exp
that also occur in from-string-exp to the corresponding characters in to-string-exp or, if no corresponding characters exist,
to the pad character specified by padding.
If no parameters are given to the function, the original string is converted to uppercase.
In NPS mode, the translate syntax is:
Python
TRANSLATE(expression, from_string, to_string)
If a character is found in the from string, and there is no corresponding character in the to string, it is removed. If it was using Db2 syntax, the padding character would be used instead.
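As a sketch of the Db2-style signature with a pad character (this assumes SQL_COMPAT is set to 'DB2'; the letters chosen are arbitrary), 'o' maps to '1' while 'e' has no partner in the to-string, so it should be replaced by the pad character, giving something like 'H*ll1':
Python
VALUES TRANSLATE('Hello','1','oe','*')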
Note: If ORACLE compatibility is ON then the behavior of TRANSLATE is identical to NPS mode.
This first example will uppercase the string.
End of explanation
%sql VALUES TRANSLATE('Hello','o','1')
Explanation: In this example, the letter 'o' will be replaced with a '1'.
End of explanation
%sql VALUES TRANSLATE('Hello','oe','12')
Explanation: Note that you could replace more than one character by expanding both the "to" and "from" strings. This
example will replace the letter "e" with a "2" as well as "o" with "1".
End of explanation
%sql VALUES TRANSLATE('Hello','oel','12')
Explanation: Translate will also remove a character if it is not in the "to" list.
End of explanation
%sql SET SQL_COMPAT='DB2'
Explanation: Reset the behavior back to Db2 mode.
End of explanation
%%sql
SELECT LASTNAME FROM EMPLOYEE
FETCH FIRST 5 ROWS ONLY
Explanation: Back to Top
<a id='select'></a>
SELECT Enhancements
Db2 has the ability to limit the amount of data retrieved on a SELECT statement
through the use of the FETCH FIRST n ROWS ONLY clause. In Db2 11, the ability to offset
the rows before fetching was added to the FETCH FIRST clause.
Simple SQL with Fetch First Clause
The FETCH FIRST clause can be used in a variety of locations in a SELECT statement. This
first example fetches only 5 rows from the EMPLOYEE table.
End of explanation
%%sql
SELECT LASTNAME FROM EMPLOYEE
ORDER BY LASTNAME
FETCH FIRST 5 ROWS ONLY
Explanation: You can also add ORDER BY and GROUP BY clauses in the SELECT statement. Note that
Db2 still needs to process all of the records and do the ORDER/GROUP BY work
before limiting the answer set. So you are not getting the first 5 rows "sorted". You
are actually getting the entire answer set sorted before retrieving just 5 rows.
End of explanation
%%sql
SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY WORKDEPT
Explanation: Here is an example with the GROUP BY statement. This first SQL statement gives us the total
answer set - the count of employees by WORKDEPT.
End of explanation
%%sql
SELECT WORKDEPT, COUNT(*) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY WORKDEPT
FETCH FIRST 5 ROWS ONLY
Explanation: Adding the FETCH FIRST clause only reduces the rows returned, not the rows that
are used to compute the GROUPing result.
End of explanation
%%sql
SELECT LASTNAME FROM EMPLOYEE
FETCH FIRST 10 ROWS ONLY
Explanation: OFFSET Extension
The FETCH FIRST n ROWS ONLY clause can also include an OFFSET keyword. The OFFSET keyword
allows you to retrieve the answer set after skipping "n" number of rows. The syntax of the OFFSET
keyword is:
Python
OFFSET n ROWS FETCH FIRST x ROWS ONLY
The OFFSET n ROWS must precede the FETCH FIRST x ROWS ONLY clause. The OFFSET clause can be used to
scroll down an answer set without having to hold a cursor. For instance, you could have the
first SELECT call request 10 rows by just using the FETCH FIRST clause. After that you could
request the first 10 rows be skipped before retrieving the next 10 rows.
The one thing you must be aware of is that the answer set could change between calls if you use
this technique of a "moving" window. If rows are updated or added after your initial query you may
get different results. This is due to the way that Db2 adds rows to a table. If there is a DELETE and then
an INSERT, the INSERTed row may end up in the empty slot. There is no guarantee of the order of retrieval. For
this reason you are better off using an ORDER BY to force the ordering, although this too won't always prevent
rows changing positions.
Here are the first 10 rows of the employee table (not ordered).
End of explanation
%%sql
SELECT LASTNAME FROM EMPLOYEE
OFFSET 0 ROWS
FETCH FIRST 10 ROWS ONLY
Explanation: You can specify a zero offset to begin from the beginning.
End of explanation
%%sql
SELECT LASTNAME FROM EMPLOYEE
OFFSET 5 ROWS
FETCH FIRST 5 ROWS ONLY
Explanation: Now we can move the answer set ahead by 5 rows and get the remaining
5 rows in the answer set.
End of explanation
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC;
Explanation: FETCH FIRST and OFFSET in SUBSELECTs
The FETCH FIRST/OFFSET clause is not limited to regular SELECT statements. You can also
limit the number of rows that are used in a subselect. In this case you are limiting the amount of
data that Db2 will scan when determining the answer set.
For instance, say you wanted to find the names of the employees who make more than the
average salary of the 3rd highest paid department. (By the way, there are multiple ways to
do this, but this is one approach).
The first step is to determine what the average salary is of all departments.
End of explanation
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
OFFSET 2 ROWS FETCH FIRST 1 ROWS ONLY
Explanation: We only want one record from this list (the third one), so we can use the FETCH FIRST clause with
an OFFSET to get the value we want (Note: we need to skip 2 rows to get to the 3rd one).
End of explanation
%%sql
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE
SALARY > (
SELECT AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
OFFSET 2 ROWS FETCH FIRST 1 ROW ONLY
)
ORDER BY SALARY
Explanation: And here is the list of employees that make more than the average salary of the 3rd highest department in the
company.
End of explanation
%%sql
SELECT WORKDEPT, AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
LIMIT 1 OFFSET 2
Explanation: Alternate Syntax for FETCH FIRST
The FETCH FIRST n ROWS ONLY and OFFSET clause can also be specified using a simpler LIMIT/OFFSET syntax.
The LIMIT clause and the equivalent FETCH FIRST syntax are shown below.
|Syntax |Equivalent
|:-----------------|:-----------------------------
|LIMIT x |FETCH FIRST x ROWS ONLY
|LIMIT x OFFSET y |OFFSET y ROWS FETCH FIRST x ROWS ONLY
|LIMIT y,x |OFFSET y ROWS FETCH FIRST x ROWS ONLY
The previous examples are rewritten using the LIMIT clause.
We can use the LIMIT clause with an OFFSET to get the value we want from the table.
End of explanation
%%sql
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE
SALARY > (
SELECT AVG(SALARY) FROM EMPLOYEE
GROUP BY WORKDEPT
ORDER BY AVG(SALARY) DESC
LIMIT 2,1
)
ORDER BY SALARY
Explanation: Here is the list of employees that make more than the average salary of the 3rd highest department in the
company. Note that the LIMIT clause specifies only the limit (LIMIT x), or the offset and limit (LIMIT y,x), when you do not use the OFFSET keyword. One would think that LIMIT x OFFSET y would translate into LIMIT x,y but that is not the case. Don't try to figure out the SQL standards reasoning behind the syntax!
End of explanation
%%sql -q
DROP VARIABLE XINT2;
DROP VARIABLE YINT2;
DROP VARIABLE XINT4;
DROP VARIABLE YINT4;
DROP VARIABLE XINT8;
DROP VARIABLE YINT8;
CREATE VARIABLE XINT2 INT2 DEFAULT(1);
CREATE VARIABLE YINT2 INT2 DEFAULT(3);
CREATE VARIABLE XINT4 INT4 DEFAULT(1);
CREATE VARIABLE YINT4 INT4 DEFAULT(3);
CREATE VARIABLE XINT8 INT8 DEFAULT(1);
CREATE VARIABLE YINT8 INT8 DEFAULT(3);
Explanation: Back to Top
<a id='hexadecimal'></a>
Hexadecimal Functions
A number of new HEX manipulation functions have been added to Db2 11. There are a class of functions
that manipulate different size integers (SMALL, INTEGER, BIGINT) using NOT, OR, AND, and XOR. In addition to
these functions, there are a number of functions that display and convert values into hexadecimal values.
INTN Functions
The INTN functions are bitwise functions that operate on the "two's complement" representation of
the integer value of the input arguments and return the result as a corresponding base 10 integer value.
The function names all include the size of the integers that are being manipulated:
N = 2 (Smallint), 4 (Integer), 8 (Bigint)
There are four functions:
INTNAND - Performs a bitwise AND operation, 1 only if the corresponding bits in both arguments are 1
INTNOR - Performs a bitwise OR operation, 1 unless the corresponding bits in both arguments are zero
INTNXOR - Performs a bitwise exclusive OR operation, 1 unless the corresponding bits in both arguments are the same
INTNNOT - Performs a bitwise NOT operation, opposite of the corresponding bit in the argument
Six variables will be created to use in the examples. The X/Y values will be set to X=1 (01) and Y=3 (11)
and different sizes to show how the functions work.
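As a worked example with those defaults: X=1 is binary 01 and Y=3 is binary 11, so AND gives 01 (1), OR gives 11 (3), and XOR gives 10 (2); NOT flips every bit of the two's complement representation, so NOT applied to 1 returns -2. The SQL below should reproduce these values.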
End of explanation
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
(
VALUES
('INT2AND(X,Y)',XINT2,YINT2,INT2AND(XINT2,YINT2)),
('INT2OR(X,Y) ',XINT2,YINT2,INT2OR(XINT2,YINT2)),
('INT2XOR(X,Y)',XINT2,YINT2,INT2XOR(XINT2,YINT2)),
('INT2NOT(X) ',XINT2,YINT2,INT2NOT(XINT2))
)
SELECT * FROM LOGIC
Explanation: This example will show the four functions used against SMALLINT (INT2) data types.
End of explanation
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
(
VALUES
('INT4AND(X,Y)',XINT4,YINT4,INT4AND(XINT4,YINT4)),
('INT4OR(X,Y) ',XINT4,YINT4,INT4OR(XINT4,YINT4)),
('INT4XOR(X,Y)',XINT4,YINT4,INT4XOR(XINT4,YINT4)),
('INT4NOT(X) ',XINT4,YINT4,INT4NOT(XINT4))
)
SELECT * FROM LOGIC
Explanation: This example will use the 4 byte (INT4) data type.
End of explanation
%%sql
WITH LOGIC(EXAMPLE, X, Y, RESULT) AS
(
VALUES
('INT8AND(X,Y)',XINT8,YINT8,INT8AND(XINT8,YINT8)),
('INT8OR(X,Y) ',XINT8,YINT8,INT8OR(XINT8,YINT8)),
('INT8XOR(X,Y)',XINT8,YINT8,INT8XOR(XINT8,YINT8)),
('INT8NOT(X) ',XINT8,YINT8,INT8NOT(XINT8))
)
SELECT * FROM LOGIC
Explanation: Finally, the INT8 data type is used in the SQL. Note that you can mix and match the INT2, INT4, and INT8 values
in these functions but you may get truncation if the value is too big.
End of explanation
%sql VALUES TO_HEX(255)
Explanation: TO_HEX Function
The TO_HEX function converts a numeric expression into a character hexadecimal representation. For example, the
numeric value 255 represents x'FF'. The value returned from this function is a VARCHAR value and its
length depends on the size of the number you supply.
End of explanation
%sql VALUES RAWTOHEX('Hello')
Explanation: RAWTOHEX Function
The RAWTOHEX function returns a hexadecimal representation of a value as a character string. The
result is a character string itself.
End of explanation
%sql VALUES TO_HEX(12336)
Explanation: The string "00" converts to a hex representation of x'3030' which is 12336 in Decimal.
So the TO_HEX function would convert this back to the HEX representation.
End of explanation
%sql VALUES RAWTOHEX('00');
Explanation: The string that is returned by the RAWTOHEX function should be the same.
End of explanation
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS FROM EMPLOYEE) DEFINITION ONLY;
Explanation: Back to Top
<a id="create"><a/>
Table Creation Extensions
The CREATE TABLE statement can now use a SELECT clause to generate the definition and LOAD the data
at the same time.
Create Table Syntax
The syntax of the CREATE table statement has been extended with the AS (SELECT ...) WITH DATA clause:
Python
CREATE TABLE <name> AS (SELECT ...) [ WITH DATA | DEFINITION ONLY ]
The table definition will be generated based on the SQL statement that you specify. The column names
are derived from the columns that are in the SELECT list and can only be changed by specifying the column names
as part of the table name: EMP(X,Y,Z,...) AS (...).
For example, the following SQL will fail because a column list was not provided:
End of explanation
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) DEFINITION ONLY;
Explanation: You can name a column in the SELECT list or place it in the table definition.
End of explanation
%%sql
SELECT DISTINCT(NAME), COLTYPE, LENGTH FROM SYSIBM.SYSCOLUMNS
WHERE TBNAME='AS_EMP' AND TBCREATOR=CURRENT USER
Explanation: You can check the SYSTEM catalog to see the table definition.
End of explanation
%sql -q DROP TABLE AS_EMP
%sql CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) WITH DATA;
Explanation: The DEFINITION ONLY clause will create the table but not load any data into it. Adding the WITH DATA
clause will do an INSERT of rows into the newly created table. If you have a large amount of data
to load into the table you may be better off creating the table with DEFINITION ONLY and then using LOAD
or other methods to load the data into the table.
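A minimal sketch of that two-step approach (a plain INSERT is used here for illustration; a LOAD of the data would replace the INSERT for large volumes):
Python
CREATE TABLE AS_EMP AS (SELECT EMPNO, SALARY+BONUS AS PAY FROM EMPLOYEE) DEFINITION ONLY;
INSERT INTO AS_EMP SELECT EMPNO, SALARY+BONUS FROM EMPLOYEE;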
End of explanation
%%sql -q
DROP TABLE AS_EMP;
CREATE TABLE AS_EMP(LAST,PAY) AS
(
SELECT LASTNAME, SALARY FROM EMPLOYEE
WHERE WORKDEPT='D11'
FETCH FIRST 3 ROWS ONLY
) WITH DATA;
Explanation: The SELECT statement can be very sophisticated. It can do any type of calculation or limit the
data to a subset of information.
End of explanation
%%sql -q
DROP TABLE AS_EMP;
CREATE TABLE AS_EMP(DEPARTMENT, LASTNAME) AS
(SELECT WORKDEPT, LASTNAME FROM EMPLOYEE
OFFSET 5 ROWS
FETCH FIRST 10 ROWS ONLY
) WITH DATA;
SELECT * FROM AS_EMP;
Explanation: You can also use the OFFSET clause as part of the FETCH FIRST ONLY to get chunks of data from the
original table.
End of explanation |
1,567 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hospital readmissions data analysis and recommendations for reduction
Background
In October 2012, the US government's Center for Medicare and Medicaid Services (CMS) began reducing Medicare payments for Inpatient Prospective Payment System hospitals with excess readmissions. Excess readmissions are measured by a ratio: a hospital’s number of “predicted” 30-day readmissions for heart attack, heart failure, and pneumonia divided by the number that would be “expected,” based on an average hospital with similar patients. A ratio greater than 1 indicates excess readmissions.
Exercise overview
In this exercise, you will
Step1: Preliminary analysis | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import bokeh.plotting as bkp
from mpl_toolkits.axes_grid1 import make_axes_locatable
%matplotlib inline
# read in readmissions data provided
hospital_read_df = pd.read_csv('data/cms_hospital_readmissions.csv')
Explanation: Hospital readmissions data analysis and recommendations for reduction
Background
In October 2012, the US government's Center for Medicare and Medicaid Services (CMS) began reducing Medicare payments for Inpatient Prospective Payment System hospitals with excess readmissions. Excess readmissions are measured by a ratio: a hospital’s number of “predicted” 30-day readmissions for heart attack, heart failure, and pneumonia divided by the number that would be “expected,” based on an average hospital with similar patients. A ratio greater than 1 indicates excess readmissions.
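A tiny numeric sketch of that ratio (the counts below are made up for illustration, not CMS data):
Python
predicted_readmissions = 120   # hypothetical predicted 30-day readmissions for one hospital
expected_readmissions = 100    # hypothetical expected count for an average hospital with similar patients
excess_readmission_ratio = predicted_readmissions / expected_readmissions   # 1.2 > 1 indicates excess readmissions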
Exercise overview
In this exercise, you will:
+ critique a preliminary analysis of readmissions data and recommendations (provided below) for reducing the readmissions rate
+ construct a statistically sound analysis and make recommendations of your own
More instructions provided below. Include your work in this notebook and submit to your Github account.
Resources
Data source: https://data.medicare.gov/Hospital-Compare/Hospital-Readmission-Reduction/9n3s-kdb3
More information: http://www.cms.gov/Medicare/medicare-fee-for-service-payment/acuteinpatientPPS/readmissions-reduction-program.html
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
End of explanation
# deal with missing and inconvenient portions of data
clean_hospital_read_df = hospital_read_df[(hospital_read_df['Number of Discharges'] != 'Not Available')]
clean_hospital_read_df.loc[:, 'Number of Discharges'] = clean_hospital_read_df['Number of Discharges'].astype(int)
clean_hospital_read_df = clean_hospital_read_df.sort_values('Number of Discharges')  # sort_values replaces the removed DataFrame.sort method
# generate a scatterplot for number of discharges vs. excess rate of readmissions
# lists work better with matplotlib scatterplot function
x = [a for a in clean_hospital_read_df['Number of Discharges'][81:-3]]
y = list(clean_hospital_read_df['Excess Readmission Ratio'][81:-3])
fig, ax = plt.subplots(figsize=(8,5))
ax.scatter(x, y,alpha=0.2)
ax.fill_between([0,350], 1.15, 2, facecolor='red', alpha = .15, interpolate=True)
ax.fill_between([800,2500], .5, .95, facecolor='green', alpha = .15, interpolate=True)
ax.set_xlim([0, max(x)])
ax.set_xlabel('Number of discharges', fontsize=12)
ax.set_ylabel('Excess rate of readmissions', fontsize=12)
ax.set_title('Scatterplot of number of discharges vs. excess rate of readmissions', fontsize=14)
ax.grid(True)
fig.tight_layout()
Explanation: Preliminary analysis
End of explanation |
1,568 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Is it possible to delete or insert a step in a sklearn.pipeline.Pipeline object? | Problem:
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures
estimators = [('reduce_poly', PolynomialFeatures()), ('dim_svm', PCA()), ('sVm_233', SVC())]
clf = Pipeline(estimators)
clf.steps.pop(-1) |
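For the insertion half of the question, the same list interface applies; a short sketch (the scaler step and position below are arbitrary examples, not part of the original pipeline):
from sklearn.preprocessing import StandardScaler
clf.steps.insert(1, ('scaler', StandardScaler()))  # steps is a plain list; the pipeline validates it at fit time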
1,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Project
Step1: Read in an Image
Step9: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are
Step10: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
Step11: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
Step12: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos
Step13: Let's try the one with the solid white lane on the right first ...
Step15: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
Step17: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
Step19: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! | Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimesions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
Explanation: Read in an Image
End of explanation
import math
def grayscale(img):
Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
Applies the Canny transform
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
Applies a Gaussian Noise kernel
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, draw_lane_ceil=.4, color=[255, 0, 0], thickness=4):
# aim: getting extrapolated average lane lines from lane fragments we got through hough transform
# -> get intercepts of the extrapolated hough lines
lane_y_region=1-draw_lane_ceil # using the complement as parameter as it is more understandable (40% from the bottom up are "drawable")
left_bottom_intercepts = []
right_bottom_intercepts = []
left_top_intercepts = []
right_top_intercepts = []
imshape = img.shape
for line in lines:
for x1,y1,x2,y2 in line:
# calc slope for lines for further calculations + decide whether line should be right or left
slope = (y2-y1)/(x2-x1) # y=m*x+b => b=y-m*x -> x=(y-b)/m y=
intercept = y2-slope*x2
intercept_bottom = (imshape[0] - intercept)/slope
intercept_top = (imshape[0]*.6 - intercept)/slope
if abs(slope) > 0.2 and abs(slope) < 10: # rule out too horizontal/vertical lines
if slope < 0:
left_bottom_intercepts.append(intercept_bottom)
left_top_intercepts.append(intercept_top)
else:
right_bottom_intercepts.append(intercept_bottom)
right_top_intercepts.append(intercept_top)
# average intercepts to eliminate double hough lines on around thick lines and merge multiple lane fragments
avg_left_bottom_intercept = round(np.average(left_bottom_intercepts))
avg_right_bottom_intercept = round(np.average(right_bottom_intercepts))
avg_left_top_intercept = round(np.average(left_top_intercepts))
avg_right_top_intercept = round(np.average(right_top_intercepts))
# draw lines in between the intercepts
if not np.isnan(avg_left_bottom_intercept) and not np.isnan(avg_left_top_intercept):
cv2.line(img,
(int(avg_left_bottom_intercept), int(imshape[0])),
(int(avg_left_top_intercept), int(imshape[0]*lane_y_region)), color, thickness)
if not np.isnan(avg_right_bottom_intercept) and not np.isnan(avg_right_top_intercept):
cv2.line(img,
(int(avg_right_bottom_intercept), int(imshape[0])),
(int(avg_right_top_intercept), int(imshape[0]*lane_y_region)), color, thickness)
def draw_lines_e(img, lines, color=[255, 0, 0], thickness=4):
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
left_lines = []
right_lines = []
for line in lines:
for x1,y1,x2,y2 in line:
slope = (y2-y1)/(x2-x1) # y=m*x+b => b=y-m*x ->
intercept = y2-slope*x2
line=np.append(line, slope)
line = np.append(line, intercept)
if abs(slope) > 0.2 and abs(slope) < 10: # rule out too horizontal lines
if slope > 0 and x2 > 550: #positive slope should be marked green (I expected this to mark the left lanes but it marked the lanes on the right -> Maybe x2 and x1 and y2 and y1 are in the wrong order))
right_lines.append(line)
elif slope < 0 and x1 < 400:
left_lines.append(line)
# cut-off the line below a y value of:
y_cutoff = 320
left_avg_slope, left_avg_intercept = find_avg_slope_and_intercept(left_lines)
    if not math.isnan(left_avg_slope) and not math.isnan(left_avg_intercept):
left_bottom_intercept = int(np.rint((539-left_avg_intercept)/left_avg_slope)) # y=mx+b -> y=539 539=mx+b -> x=(539-b)/m
#calculate x values of the lines at the y cutoff
# y=mx+b -> x=(y-b)/m
left_x_cutoff=int(np.rint((y_cutoff-left_avg_intercept)/left_avg_slope))
cv2.line(img, (left_bottom_intercept, 539), (left_x_cutoff, y_cutoff), [0, 255, 0], thickness)
right_avg_slope, right_avg_intercept = find_avg_slope_and_intercept(right_lines)
    if not math.isnan(right_avg_slope) and not math.isnan(right_avg_intercept):
right_bottom_intercept = int(np.rint((539-right_avg_intercept)/right_avg_slope))
right_x_cutoff=int(np.rint((y_cutoff-right_avg_intercept)/right_avg_slope))
cv2.line(img, (right_bottom_intercept, 539), (right_x_cutoff, y_cutoff), [0, 0, 255], thickness)
def find_avg_slope_and_intercept(lines):
slopes=[]
intercepts=[]
for line in lines:
x1= line[0]
y1= line[1]
x2= line[2]
y2= line[3]
slope = line[4]
intercept = line[5]
#for x1,y1,x2,y2, slope, intercept in line:
slopes.append(slope)
intercepts.append(intercept)
avg_slope = []
avg_intercept = []
if slopes and intercepts:
avg_slope = np.average(slopes)
avg_intercept = np.average(intercepts)
if math.isnan(avg_slope) or math.isnan(avg_intercept):
#print("Returning NAN!!!")
return float('NaN'), float('NaN')
return avg_slope, avg_intercept
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
if lines is not None:
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + λ
NOTE: initial_img and img must be the same shape!
return cv2.addWeighted(initial_img, α, img, β, λ)
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
import os
test_img_dir = ("test_images/")
test_images = os.listdir(test_img_dir)
print(test_images)
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images directory.
#test_img_dir = test_img_dir + "generated/"
print("test image dir: " + test_img_dir)
def process_image(test_img_dir, img):
# read in original image
ori_image = mpimg.imread(os.path.join(test_img_dir, img))
#convert to HSV colorspace for color filtering
image = cv2.cvtColor(ori_image, cv2.COLOR_RGB2HSV)
#select white and yellow only get the lane lines
sensitivity = 50
lower_white = np.array([0,0,255-sensitivity])
upper_white = np.array([255,sensitivity,255])
lower_yellow = np.array([20, 120, 100])
upper_yellow = np.array([50, 170, 255])
white = cv2.inRange(image, lower_white, upper_white)
yellow = cv2.inRange(image, lower_yellow, upper_yellow)
white_yellow = white + yellow
# apply the mask to the image
image = cv2.bitwise_and(image,image, mask= white_yellow)
image=cv2.cvtColor(image, cv2.COLOR_HSV2RGB)
# output directory for generated images
output_dir = "generated/" + iteration + "/"
if not os.path.exists(test_img_dir + output_dir):
os.makedirs(test_img_dir + output_dir)
# convert to grayscale
gray = grayscale(image)
plt.imsave(test_img_dir + output_dir + img + "_grayscale_" + iteration + ".png", gray, vmin=None, vmax=None, cmap='gray', format='png', origin=None, dpi=100)
# apply gaussian blurring
blurred_img = gaussian_blur(gray, 7)
plt.imsave(test_img_dir + output_dir + img + "_blurred_" + iteration + ".png", blurred_img, vmin=None, vmax=None, cmap='gray', format='png', origin=None, dpi=100)
# canny edge detection
canny_img = canny(blurred_img, 50, 150)
plt.imsave(test_img_dir + output_dir + img + "_canny_" + iteration + ".png", canny_img, vmin=None, vmax=None, cmap='gray', format='png', origin=None, dpi=100)
# region of interest
imshape = canny_img.shape
vertices = np.array([[(.51 *imshape[1], .6*imshape[0]),(.49*imshape[1], .6*imshape[0]), (0, imshape[0]), (imshape[1],imshape[0])]], dtype=np.int32)
masked_img = region_of_interest(canny_img, vertices)
plt.imsave(test_img_dir + output_dir + img + "_masked_" + iteration + ".png", masked_img, vmin=None, vmax=None, cmap='gray', format='png', origin=None, dpi=100)
# hugh lines
rho = 1
theta = 1 * np.pi/180
threshold = 40 #40
min_line_len = 150 #150
max_line_gap = 130 #130
hough_img = hough_lines(masked_img, rho, theta, threshold, min_line_len, max_line_gap)
plt.imsave(test_img_dir + output_dir + img + "_hough_" + iteration + ".png", masked_img, vmin=None, vmax=None, cmap='gray', format='png', origin=None, dpi=100)
#def weighted_img(img, initial_img, α=0.8, β=1., λ=0.)
result = weighted_img(hough_img, ori_image, α=0.8, β=1., λ=0.)
plt.imsave(test_img_dir + output_dir + img + "_final_" + iteration + ".png", result, vmin=None, vmax=None, cmap=None, format='png', origin=None, dpi=100)
plt.imshow(result)
iteration = "10"
print(test_images)
gen_ex = (img for img in test_images if os.path.isfile(os.path.join(test_img_dir, img)))
for img in gen_ex:
process_image(test_img_dir, img)
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
End of explanation
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(ori_image):
image = cv2.cvtColor(ori_image, cv2.COLOR_RGB2HSV)
image = gaussian_blur(image, 3)
plt.imshow(image, cmap='gray')
#select white and yellow only get the lane lines
sensitivity = 50
lower_white = np.array([0,0,255-sensitivity])
upper_white = np.array([255,sensitivity,255])
lower_yellow = np.array([20, 120, 100])
upper_yellow = np.array([50, 170, 255])
white = cv2.inRange(image, lower_white, upper_white)
yellow = cv2.inRange(image, lower_yellow, upper_yellow)
white_yellow = white + yellow
image = cv2.bitwise_and(image,image, mask= white_yellow)
image=cv2.cvtColor(image, cv2.COLOR_HSV2RGB)
# convert to grayscale
gray = grayscale(image)
# apply gaussian blurring
blurred_img = gaussian_blur(gray, 7)
# canny edge detection
canny_img = canny(blurred_img, 50, 150)
# region of interest
imshape = canny_img.shape
#vertices = np.array([[(0,imshape[0]),(450, 320), (490, 320), (imshape[1],imshape[0])]], dtype=np.int32)
vertices = np.array([[(.51 *imshape[1], .6*imshape[0]),(.49*imshape[1], .6*imshape[0]), (0, imshape[0]), (imshape[1],imshape[0])]], dtype=np.int32)
masked_img = region_of_interest(canny_img, vertices)
    # Hough lines
#hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap)
rho = 1
theta = 1 * np.pi/180
threshold = 30 #40
min_line_len = 5 #150
max_line_gap = 3 #130
hough_img = hough_lines(masked_img, rho, theta, threshold, min_line_len, max_line_gap)
#def weighted_img(img, initial_img, α=0.8, β=1., λ=0.)
result = weighted_img(hough_img, ori_image, α=0.8, β=1., λ=0.)
return result
def process_image2(image):
# convert to grayscale
gray = grayscale(image)
#plt.imsave(test_img_dir + output_dir + img + "_grayscale_" + iteration + ".png", gray, vmin=None, vmax=None, cmap='gray', format='png', origin=None, dpi=100)
# apply gaussian blurring
blurred_img = gaussian_blur(gray, 7)
#plt.imsave(test_img_dir + output_dir + img + "_blurred_" + iteration + ".png", blurred_img, vmin=None, vmax=None, cmap='gray', format='png', origin=None, dpi=100)
# canny edge detection
canny_img = canny(blurred_img, 50, 150)
#plt.imsave(test_img_dir + output_dir + img + "_canny_" + iteration + ".png", canny_img, vmin=None, vmax=None, cmap='gray', format='png', origin=None, dpi=100)
    # region of interest
    imshape = canny_img.shape
vertices = np.array([[(.51 *imshape[1], .58*imshape[0]),(.49*imshape[1], .58*imshape[0]), (0, imshape[0]), (imshape[1],imshape[0])]], dtype=np.int32)
masked_img = region_of_interest(canny_img, vertices)
#plt.imsave(test_img_dir + output_dir + img + "_masked_" + iteration + ".png", masked_img, vmin=None, vmax=None, cmap='gray', format='png', origin=None, dpi=100)
    # Hough lines
#hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap)
rho = 1
theta = 1 * np.pi/180
threshold = 40 #40
min_line_len = 20 #150
max_line_gap = 10 #130
hough_img = hough_lines(masked_img, rho, theta, threshold, min_line_len, max_line_gap)
#plt.imsave(test_img_dir + output_dir + img + "_hough_" + iteration + ".png", masked_img, vmin=None, vmax=None, cmap='gray', format='png', origin=None, dpi=100)
#def weighted_img(img, initial_img, α=0.8, β=1., λ=0.)
result = weighted_img(hough_img, image, α=0.8, β=1., λ=0.)
return result
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
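One common way to do this (an illustrative sketch, not the project's required solution; it assumes numpy and cv2 are already imported by the helper cell, and the 0.6*height cutoff mirrors the region-of-interest vertices used above): group the Hough segments by slope sign, average each group, and extrapolate the averaged line from the bottom of the image up to the top of the region of interest.
def draw_lines_extrapolated(img, lines, color=[255, 0, 0], thickness=10):
    # sort segments into left (negative slope) and right (positive slope) groups
    left, right = [], []
    if lines is None:
        return
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue  # skip vertical segments to avoid division by zero
            slope = (y2 - y1) / (x2 - x1)
            intercept = y1 - slope * x1
            if slope < -0.3:
                left.append((slope, intercept))
            elif slope > 0.3:
                right.append((slope, intercept))
    y_bottom = img.shape[0]
    y_top = int(0.6 * img.shape[0])
    for group in (left, right):
        if not group:
            continue
        slope = np.mean([s for s, _ in group])
        intercept = np.mean([b for _, b in group])
        # x = (y - b) / m, evaluated at the two chosen y values
        cv2.line(img,
                 (int((y_bottom - intercept) / slope), y_bottom),
                 (int((y_top - intercept) / slope), y_top),
                 color, thickness)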
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation |
1,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Generate Features And Target Data
Step2: Create Logistic Regression
Step3: Cross-Validate Model Using Accuracy | Python Code:
# Load libraries
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
Explanation: Title: Accuracy
Slug: accuracy
Summary: How to evaluate a Python machine learning using accuracy.
Date: 2017-09-15 12:00
Category: Machine Learning
Tags: Model Evaluation
Authors: Chris Albon
<a alt="Accuracy" href="https://machinelearningflashcards.com">
<img src="accuracy/Accuracy_print.png" class="flashcard center-block">
</a>
Preliminaries
End of explanation
# Generate features matrix and target vector
X, y = make_classification(n_samples = 10000,
n_features = 3,
n_informative = 3,
n_redundant = 0,
n_classes = 2,
random_state = 1)
Explanation: Generate Features And Target Data
End of explanation
# Create logistic regression
logit = LogisticRegression()
Explanation: Create Logistic Regression
End of explanation
# Cross-validate model using accuracy
cross_val_score(logit, X, y, scoring="accuracy")
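# Averaging the per-fold scores gives a single summary accuracy (illustrative addition)
cross_val_score(logit, X, y, scoring="accuracy").mean()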
Explanation: Cross-Validate Model Using Accuracy
End of explanation |
1,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Literate programming using IPython notebooks
Literate programming is a concept promoted by Donald Knuth, the famous computer scientist (and the author of the Art of Computer Programming.) According to this concept, computer programs should be written in a combination of the programming language (the usual source code) and the natural language, which explains the logic of the program.
When it comes to scientific programming, using comments for natural-language explanations is not always convenient. Moreover, it is limited, because such explanations may require figures, equations, and other common elements of scientific texts. IPython/Jupyter notebooks provide a convenient tool for combining different text elements with code. In this notebook, I show how to use them effective with SConstruct data-analysis workflows in Madagascar.
Madagascar interface
The only element that we will need from the Python interface to Madagascar is the view function.
Step1: File magic
Instead of writing SConstruct as one file, we are going to break this file into different parts and include them in the IPython notebook using %%file magic. Each part will be a file with .scons suffix.
In the first part, we are downloading input data (a short gather) and converting it to RSF format.
Step2: The shot gather comes from the collection of shot gathers by Yilmaz and Cumro.
The pipeline in the second Flow converts the data to the native format, windows the offsets to the range from -2 to +2 km, and adds appropriate labels and units.
Step3: The time-power correction is simply a multiplication of the data $d(t,x)$ by time $t$ to some power $\alpha$
Step4: How does it work?
The function view() is specified in m8r.py as follows | Python Code:
from m8r import view
Explanation: Literate programming using IPython notebooks
Literate programming is a concept promoted by Donald Knuth, the famous computer scientist (and the author of the Art of Computer Programming.) According to this concept, computer programs should be written in a combination of the programming language (the usual source code) and the natural language, which explains the logic of the program.
When it comes to scientific programming, using comments for natural-language explanations is not always convenient. Moreover, it is limited, because such explanations may require figures, equations, and other common elements of scientific texts. IPython/Jupyter notebooks provide a convenient tool for combining different text elements with code. In this notebook, I show how to use them effective with SConstruct data-analysis workflows in Madagascar.
Madagascar interface
The only element that we will need from the Python interface to Madagascar is the view function.
End of explanation
%%file data.scons
# Download data
#Fetch('wz.25.H','wz')
# Convert and window
Flow('data','/Users/sergey/geo/fomels/cise/tpow/wz.25.H',
'''
dd form=native | window min2=-2 max2=2 |
put label1=Time label2=Offset unit1=s unit2=km
''')
Explanation: File magic
Instead of writing SConstruct as one file, we are going to break this file into different parts and include them in the IPython notebook using %%file magic. Each part will be a file with .scons suffix.
In the first part, we are downloading input data (a short gather) and converting it to RSF format.
End of explanation
%%file display.scons
# two plots displayed side by side
Plot('data','grey title="(a) Original Data"')
Plot('tpow2','data',
'pow pow1=3 | grey title="(b) Time Power Correction" ')
Result('tpow','data tpow2','SideBySideAniso')
Explanation: The shot gather comes from the collection of shot gathers by Yilmaz and Cumro.
The pipeline in the second Flow converts the data to the native format, windows the offsets to the range from -2 to +2 km, and adds apppropriate labels and units. Next, let us display the data with and without the time-power correction.
End of explanation
view('tpow')
Explanation: The time-power correction is simply a multiplication of the data $d(t,x)$ by time $t$ to some power $\alpha$:
$$d_{\alpha}(t,x) = d(t,x)\,t^{\alpha}.$$
In the example above, $\alpha=2$. Try changing the value of the time power and observe the results.
Now that we defined our Result plot, we can display it using view().
End of explanation
!scons -cQ
!rm *.png
Explanation: How does it work?
The function view() is specified in m8r.py as follows:
python
def view(name):
try:
from IPython.display import Image
png = name+'.png'
makefile = os.path.join(rsf.prog.RSFROOT,'include','Makefile')
os.system('make -f %s %s' % (makefile,png))
return Image(filename=png)
except:
print 'No IPython Image support'
return None
It runs the make command to generate the image and imports it into IPython. The corresponding Makefile is
```
SConstruct: *.scons
echo "from rsf.proj import *" > $@
cat $^ >> $@
echo "\nEnd()" >> $@
%.png: SConstruct
scons Fig/$*.vpl
vpconvert pen=gd fat=3 serifs=n bgcolor=w Fig/$*.vpl $@
```
It collects all *.scons files into one SConstruct file, uses scons to create the result figure, and converts the figure to the PNG format using vpconvert.
Cleaning up
End of explanation |
1,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The MIT License (MIT)<br>
Copyright (c) 2016, 2017, 2018 Massachusetts Institute of Technology<br>
Authors
Step1: Get scale factor
Step2: Plot EWD $\times$ scale factor | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi']=150
# Gravity Recovery and Climate Experiment (GRACE) Data
# Source: http://grace.jpl.nasa.gov/
# Current surface mass change data, measuring equivalent water thickness in cm, versus time
# This data fetcher uses results from the Mascon solutions
from skdaccess.geo.grace.mascon.cache import DataFetcher as GR_DF
from skdaccess.framework.param_class import *
geo_point = AutoList([(38, -117)]) # location in Nevada
grace_fetcher = GR_DF([geo_point],start_date='2010-01-01',end_date='2014-01-01')
grace_data_wrapper = grace_fetcher.output() # Get a data wrapper
grace_label, grace_data = next(grace_data_wrapper.getIterator())# Get GRACE data
grace_data.head()
Explanation: The MIT License (MIT)<br>
Copyright (c) 2016, 2017, 2018 Massachusetts Institute of Technology<br>
Authors: Justin Li, Cody Rude<br>
This software has been created in projects supported by the US National<br>
Science Foundation and NASA (PI: Pankratius)<br>
Permission is hereby granted, free of charge, to any person obtaining a copy<br>
of this software and associated documentation files (the "Software"), to deal<br>
in the Software without restriction, including without limitation the rights<br>
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell<br>
copies of the Software, and to permit persons to whom the Software is<br>
furnished to do so, subject to the following conditions:<br>
The above copyright notice and this permission notice shall be included in<br>
all copies or substantial portions of the Software.<br>
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR<br>
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,<br>
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE<br>
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER<br>
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,<br>
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN<br>
THE SOFTWARE.<br>
End of explanation
scale_factor = grace_data_wrapper.info(grace_label)['scale_factor']
Explanation: Get scale factor
End of explanation
plt.plot(grace_data['EWD']*scale_factor);
plt.xticks(rotation=35);
Explanation: Plot EWD $\times$ scale factor
End of explanation |
1,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iris introduction course
2. Loading and Saving
Learning Outcome
Step1: 2.1 Iris Load Functions<a id='iris_load_functions'></a>
There are three main load functions in Iris
Step2: If we give this filepath to load, we see that Iris returns a cubelist.
Step3: A CubeList is a specialised version of a Python list object
Step4: If we compare the first cube in the cubelist returned by calling load and the cube returned by calling load_cube we see that they are equal.
Step5: <div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise
Step6: 2.2 Saving Cubes<a id='saving'></a>
The iris.save function provides a convenient interface to save Cube and CubeList instances.
Below we load the uk_hires.pp file from Iris' provided sample data which returns a cubelist of the cubes that were produced from that file. We then save this cubelist out to netcdf.
Step7: We can check the ncdump to see what Iris saved
Step8: Extra keywords can be passed to specific fileformat savers.
<div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise
Step9: 2. Go to the Iris reference documentation for iris.save. What keywords are accepted to iris.save when saving a PP file? | Python Code:
import iris
Explanation: Iris introduction course
2. Loading and Saving
Learning Outcome: by the end of this section, you will be able to use Iris to load datasets from disk as Iris cubes and save Iris cubes back to disk.
Duration: 30 minutes.
Overview:<br>
2.1 Iris Load Functions<br>
2.2 Saving Cubes<br>
2.3 Exercise<br>
2.4 Summary of the Section
End of explanation
fname = iris.sample_data_path('air_temp.pp')
Explanation: 2.1 Iris Load Functions<a id='iris_load_functions'></a>
There are three main load functions in Iris: load, load_cube and load_cubes.
load is a general purpose loading function. Typically this is where all data analysis will start, before more loading is refined with the more controlled loading from the other two functions.
load_cube returns a single cube from the given source(s) and constraint. There will be exactly one cube, or an exception will be raised.
load_cubes returns a cubelist of cubes from the given source(s) and constraint(s). There will be exactly one cube per constraint, or an exception will be raised.
Note: load_cube is a special case of load.
Let's compare the result of calling load and load_cube. We start by selecting the air_temp.pp file from Iris' sample data.
End of explanation
cubes = iris.load(fname)
print(type(cubes))
print(cubes)
Explanation: If we give this filepath to load, we see that Iris returns a cubelist.
End of explanation
cube = iris.load_cube(fname)
print(type(cube))
print(cube)
Explanation: A CubeList is a specialised version of a Python list object :
If you look at the CubeList reference documentation, you can see that it has the behaviours of an ordinary list, like len(cubes), cubes.remove(x), cubes[index]. However, it prints out in a special way and has additional cube-specific methods, some of which we discuss in later sections (extract, merge and concatenate).
If we pass this filepath instead to iris.load_cube, we see that Iris then returns a cube.
End of explanation
cubes[0] == cube
Explanation: If we compare the first cube in the cubelist returned by calling load and the cube returned by calling load_cube we see that they are equal.
End of explanation
#
# edit space for user code ...
#
Explanation: <div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise: </font></b>
<p>Try loading in the `uk_hires.pp` from Iris' sample data, first with iris.load then iris.load_cube. Why does iris.load_cube fail if you only supply the filepath?</p>
</div>
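One hedged hint for the exercise: uk_hires.pp contains more than one cube, so iris.load_cube needs a constraint that pins down exactly one, for instance a cube name. A minimal sketch (the name used here is an assumption about the file's contents):
cube = iris.load_cube(iris.sample_data_path('uk_hires.pp'), 'air_potential_temperature')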
End of explanation
fname = iris.sample_data_path('uk_hires.pp')
cubes = iris.load(fname)
iris.save(cubes, 'saved_cubes.nc')
Explanation: 2.2 Saving Cubes<a id='saving'></a>
The iris.save function provides a convenient interface to save Cube and CubeList instances.
Below we load the uk_hires.pp file from Iris' provided sample data which returns a cubelist of the cubes that were produced from that file. We then save this cubelist out to netcdf.
End of explanation
!ncdump -h saved_cubes.nc | head -n 20
!rm saved_cubes.nc
Explanation: We can check the ncdump to see what Iris saved:
End of explanation
# space for user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_2.3a
Explanation: Extra keywords can be passed to specific fileformat savers.
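For instance, a hedged sketch of passing saver-specific keywords (assuming the netCDF saver's compression options; check the reference documentation for the exact names each saver accepts):
iris.save(cubes, 'saved_cubes_compressed.nc', zlib=True, complevel=4)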
<div class="alert alert-block alert-warning">
<b><font color='brown'>Exercise: </font></b>
<p>Take a look at the above link to the `iris.save` documentation, to see which fileformats iris can save to.</p>
</div>
2.3 Section Review Exercise<a id='exercise_2'></a>
1. Load the file in iris.sample_data_path('atlantic_profiles.nc'), using iris.load_cube to load in the sea_water_potential_temperature cube only.
End of explanation
# space for user code ...
# SAMPLE SOLUTION
# %load solutions/iris_exercise_2.3b
Explanation: 2. Go to the Iris reference documentation for iris.save. What keywords are accepted to iris.save when saving a PP file?
End of explanation |
1,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project
Step3: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
Step5: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
Step7: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https
Step10: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step13: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
Step16: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
Step19: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented
Step22: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
Step25: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
Step27: Train
Implement train to build and train the GANs. Use the following functions you implemented
Step29: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
Step31: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces. | Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Explanation: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Explanation: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
Explanation: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).
Build the Neural Network
You'll build the components necessary to build a GANs by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
learning_rate = tf.placeholder(tf.float32, (None))
return inputs_real, inputs_z, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
- Z input placeholder with rank 2 using z_dim.
- Learning rate placeholder with rank 0.
Return the placeholders in the following the tuple (tensor of real input images, tensor of z data)
End of explanation
def discriminator(images, reuse=False):
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
alpha=0.2
x = images
with tf.variable_scope('discriminator', reuse=reuse):
x = tf.layers.conv2d(x, 64, 4, strides=2, padding="same")
x = tf.layers.batch_normalization(x, training=True)
x = tf.maximum(alpha * x, x)
#x = tf.layers.dropout(x, 0.5)
x = tf.layers.conv2d(x, 128, 4, strides=2, padding="same")
x = tf.layers.batch_normalization(x, training=True)
x = tf.maximum(alpha * x, x)
#x = tf.layers.dropout(x, 0.5)
x = tf.layers.conv2d(x, 256, 4, strides=2, padding="same")
x = tf.layers.batch_normalization(x, training=True)
x = tf.maximum(alpha * x, x)
#x = tf.layers.dropout(x, 0.5)
x = tf.reshape(x, (-1, 4 * 4 * 256))
logits = tf.layers.dense(x, 1)
out = tf.sigmoid(logits)
return out, logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
Explanation: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
End of explanation
def generator(z, out_channel_dim, is_train=True):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
reuse = not is_train
alpha= 0.2
with tf.variable_scope('generator', reuse=reuse):
x = tf.layers.dense(z, 4 * 4 * 512)
x = tf.reshape(x, (-1, 4, 4, 512))
x = tf.layers.batch_normalization(x, training=is_train)
#x = tf.layers.dropout(x, 0.5)
x = tf.maximum(alpha * x, x)
#print(x.shape)
x = tf.layers.conv2d_transpose(x, 256, 4, strides=1, padding="valid")
x = tf.layers.batch_normalization(x,training=is_train)
x = tf.maximum(alpha * x, x)
#print(x.shape)
x = tf.layers.conv2d_transpose(x, 128, 4, strides=2, padding="same")
x = tf.layers.batch_normalization(x,training=is_train)
x = tf.maximum(alpha * x, x)
#print(x.shape)
x = tf.layers.conv2d_transpose(x, out_channel_dim, 4, strides=2, padding="same")
#x = tf.maximum(alpha * x, x)
logits = x
out = tf.tanh(logits)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
Explanation: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
End of explanation
def model_loss(input_real, input_z, out_channel_dim):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
smooth = 0.1
_, d_logits_real = discriminator(input_real, reuse=False)
fake = generator(input_z, out_channel_dim, is_train=True)
    _, d_logits_fake = discriminator(fake, reuse=True)
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
return d_loss, g_loss
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
Explanation: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
all_update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
g_update_ops = [var for var in all_update_ops if var.name.startswith('generator')]
d_update_ops = [var for var in all_update_ops if var.name.startswith('discriminator')]
with tf.control_dependencies(d_update_ops):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
with tf.control_dependencies(g_update_ops):
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
Explanation: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Explanation: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
End of explanation
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
inputs_real, inputs_z, lr = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)
d_loss, g_loss = model_loss(inputs_real, inputs_z, data_shape[-1])
d_train_opt, g_train_opt = model_opt(d_loss, g_loss, learning_rate, beta1)
batch_num = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
batch_num = batch_num+1
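                # rescale images from [-0.5, 0.5] to [-1, 1] to match the generator's tanh output range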
batch_images = batch_images * 2
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
_ = sess.run(d_train_opt, feed_dict={inputs_real: batch_images, inputs_z: batch_z, lr:learning_rate})
_ = sess.run(g_train_opt, feed_dict={inputs_z: batch_z, lr:learning_rate})
if batch_num % 100 == 0:
train_loss_d = d_loss.eval({inputs_z:batch_z, inputs_real: batch_images})
train_loss_g = g_loss.eval({inputs_z:batch_z})
print("Epoch {}/{} batch {}...".format(epoch_i+1, epoch_count, batch_num),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
Explanation: Train
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.
End of explanation
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.6
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
Explanation: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
End of explanation
batch_size = 64
z_dim = 100
learning_rate = 0.001
beta1 = 0.6
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
Explanation: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
End of explanation |
1,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: a) We see from the above histogram that the distribution is approximately normal when we draw a sample of 1000 values from a standard normal distribution.
b) We also see from the Q-Q plot that the distribution is approximately normal when we draw a sample of 1000 values from a standard normal distribution.
In this question, you test whether the central limit theorem works. You generate 1000 variables with two normal distributions. You can determine the mean and standard deviation of these variables yourself. All you have to do is generate the first variable 50 times and averaged it each time. Generate the second variable 1000 times and averages this variable each time. Then plot the histogram of the averages of the two variables. Which of the variables has a mean distribution closer to the normal distribution? Do you think the Central Limit Theorem seems to have worked? | Python Code:
#codes here for a)
import math
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
def demo1():
mu, sigma = 0, 0.1
sampleNo = 1000
s = np.random.normal(mu, sigma, sampleNo)
plt.hist(s, bins=100, density=True)
plt.show()
demo1()
Explanation: <a href="https://colab.research.google.com/github/gaargly/gaargly.github.io/blob/master/Lira_Assignment_distribution.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
By using distribution=np.random.name_distribution([],[]), write the name of distribution of your choice in place of name_distribution and fill out the bracket with your choice again. Then please, a) Draw the histogram and interpret b) Draw Q-Q plot and interpret
End of explanation
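For part (b), a Q-Q plot can be drawn against the normal distribution. A minimal sketch using scipy (an illustrative addition to the answer above; points falling near the reference line indicate approximate normality):
from scipy import stats
s = np.random.normal(0, 0.1, 1000)
stats.probplot(s, dist="norm", plot=plt)
plt.show()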
#codes here
def demo2():
mu, sigma = 9, 10
sampleNo = 50
s = np.random.normal(mu, sigma, sampleNo)
plt.hist(s, bins=100, density=True)
plt.show()
demo2()
def demo3():
mu, sigma = 9, 10
sampleNo = 1000
s = np.random.normal(mu, sigma, sampleNo)
plt.hist(s, bins=100, density=True)
plt.show()
demo3()
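# A sketch of one reading of the exercise (assumed interpretation, added for illustration):
# average repeated draws and compare how normal the two collections of averages look.
means_50 = [np.random.normal(9, 10, 1000).mean() for _ in range(50)]
means_1000 = [np.random.normal(9, 10, 1000).mean() for _ in range(1000)]
plt.hist(means_50, bins=20, density=True, alpha=0.5, label='50 averages')
plt.hist(means_1000, bins=20, density=True, alpha=0.5, label='1000 averages')
plt.legend()
plt.show()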
Explanation: a) We see from the above histogram that the distribution is approximately normal when we draw a sample of 1000 values from a standard normal distribution.
b) We also see from the Q-Q plot that the distribution is approximately normal when we draw a sample of 1000 values from a standard normal distribution.
In this question, you test whether the central limit theorem works. You generate 1000 variables with two normal distributions. You can determine the mean and standard deviation of these variables yourself. All you have to do is generate the first variable 50 times and averaged it each time. Generate the second variable 1000 times and averages this variable each time. Then plot the histogram of the averages of the two variables. Which of the variables has a mean distribution closer to the normal distribution? Do you think the Central Limit Theorem seems to have worked?
End of explanation |
1,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS229 Machine Learning Exercise Homework 1 Problem 5b
Step1: Part i
The linear regression function below implements linear regression using the normal equations. We could also use some form of gradient descent to do this.
Step2: Here we just load some data and get it into a form we can use.
Step3: Performing linear regression on the first training example
Step4: yields the following parameters
Step5: Now we wish to display the results for part i. Evaluate the model
Step6: at a set of design points. The data set and the results of linear regression come in the following figure.
Step7: The following plot displays the results.
Step8: Part ii
For the next part, we perform locally weighted linear regression on the data set with a Gaussian weighting function. We use the parameters that follow.
Step9: Training the model yields the following results. Here we place the results into the same plot at the data in part i. The figure shows that the weighted linear regression algorithm best fits the data, especially in the region around wavelength ~1225 Angstroms.
Step10: Part III
Here we perform the same regression for more values of tau and plot the results. | Python Code:
import numpy as np
import numpy.linalg as linalg
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as clrs
Explanation: CS229 Machine Learning Exercise Homework 1 Problem 5b
End of explanation
def linear_regression(X, y):
return linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
Explanation: Part i
The linear regression function below implements linear regression using the normal equations. We could also use some form of gradient descent to do this.
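For reference, the closed-form estimate this function computes is the normal-equation solution $\theta = (X^T X)^{-1} X^T y$.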
End of explanation
# Load the data
data = np.loadtxt('quasar_train.csv', delimiter=',')
wavelengths = data[0]
fluxes = data[1]
ones = np.ones(fluxes.size)
df_ones = pd.DataFrame(ones, columns=['xint'])
df_wavelengths = pd.DataFrame(wavelengths, columns=['wavelength'])
df_fluxes = pd.DataFrame(fluxes, columns=['flux'])
df = pd.concat([df_ones, df_wavelengths, df_fluxes], axis=1)
X = pd.concat([df['xint'], df['wavelength']], axis=1)
y = df['flux']
x = X['wavelength']
Explanation: Here we just load some data and get it into a form we can use.
End of explanation
theta = linear_regression(X, y)
Explanation: Performing linear regression on the first training example
End of explanation
print('theta = {}'.format(theta))
Explanation: yields the following parameters:
End of explanation
p = np.poly1d([theta[1], theta[0]])
z = np.linspace(x[0], x[x.shape[0]-1])
Explanation: Now we wish to display the results for part i. Evaluate the model
End of explanation
fig = plt.figure(1, figsize=(12,10))
plt.xlabel('Wavelength (Angstroms)')
plt.ylabel('Flux (Watts/m^2)')
plt.xticks(np.linspace(x[0], x[x.shape[0]-1], 10))
plt.yticks(np.linspace(-1, 9, 11))
scatter = plt.scatter(x, y, marker='+', color='purple', label='quasar data')
reg = plt.plot(z, p(z), color='blue', label='regression line')
plt.legend()
Explanation: at a set of design points. The data set and the results of linear regression come in the following figure.
End of explanation
plt.show()
Explanation: The following plot displays the results.
End of explanation
import homework1_5b as hm1b
import importlib as im
Xtrain = X.as_matrix()
ytrain = y.as_matrix()
tau = 5
Explanation: Part ii
For the next part, we perform locally weighted linear regression on the data set with a Gaussian weighting function. We use the parameters that follow.
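Presumably (the homework1_5b module used below is not shown here), the Gaussian weighting assigns each training point the weight $w^{(i)} = \exp\!\left(-\frac{(x - x^{(i)})^2}{2\tau^2}\right)$, and the locally weighted fit at a query point solves $\theta = (X^T W X)^{-1} X^T W y$ with $W = \mathrm{diag}(w^{(i)})$.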
End of explanation
W = hm1b.weightM(tau)(Xtrain)
m = hm1b.LWLRModel(W, Xtrain, ytrain)
z = np.linspace(x[0], x[x.shape[0]-1], 200)
fig = plt.figure(1, figsize=(12,10))
plt.xlabel('Wavelength (Angstroms)')
plt.ylabel('Flux (Watts/m^2')
plt.xticks(np.arange(x[0], x[x.shape[0]-1]+50, step=50))
plt.yticks(np.arange(-1, 9, step=0.5))
plot1 = plt.scatter(x, y, marker='+', color='black', label='quasar data')
plot2 = plt.plot(z, p(z), color='blue', label='regression line')
plot3 = plt.plot(z, m(z), color='red', label='tau = 5')
plt.legend()
plt.show()
Explanation: Training the model yields the following results. Here we place the results into the same plot at the data in part i. The figure shows that the weighted linear regression algorithm best fits the data, especially in the region around wavelength ~1225 Angstroms.
End of explanation
taus = [1,5,10,100,1000]
models = {}
for tau in taus:
W = hm1b.weightM(tau)(Xtrain)
models[tau] = hm1b.LWLRModel(W, Xtrain, ytrain)
z = np.linspace(x[0], x[x.shape[0]-1], 200)
fig = plt.figure(1, figsize=(12,10))
plt.xlabel('Wavelength (Angstroms)')
plt.ylabel('Flux (Watts/m^2')
plt.xticks(np.arange(x[0], x[x.shape[0]-1]+50, step=50))
plt.yticks(np.arange(-2, 9, step=0.5))
plot1 = plt.scatter(x, y, marker='+', color='k', label='quasar data')
plot4 = plt.plot(z, models[1](z), color='red', label='tau = 1')
plot4 = plt.plot(z, models[5](z), color='blue', label='tau = 5')
plot5 = plt.plot(z, models[10](z), color='green', label='tau = 10')
plot6 = plt.plot(z, models[100](z), color='magenta', label='tau = 100')
plot7 = plt.plot(z, models[1000](z), color='cyan', label='tau = 1000')
plt.legend()
plt.show()
Explanation: Part III
Here we perform the same regression for more values of tau and plot the results.
End of explanation |
1,577 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Control Structures
Control Structures form a fundamental part of a language, along with its syntax, semantics and core libraries. It is the Control Structures which make a program more lively. Since they control the flow of execution of a program, they are named Control Structures
if statement
Usage
Step1: <div class="alert alert-info">
**Note** Typecasting
`int(response)` converted the string `response` to integer. If user enters anything other than integer, `ValueError` is raised
</div>
if-else statement
Usage
Step2: Single Line if-else
This serves as a replacement for the ternary operator available in C
Usage
Step3: if-else ladder
Usage
Step4: <div class="alert alert-info">
**Note**
Step5: <div class="alert alert-info">
**Note**
- Multiple assignments in single statement can be done
-`Python` doesn't support `++` and `--` operators as in `C`
- There is no `do-while` loop in Python
</div>
for loop
Usage | Python Code:
response = input("Enter an integer : ")
num = int(response)
if num % 2 == 0:
print("{} is an even number".format(num))
Explanation: Control Structures
Control Structures form a fundamental part of a language, along with its syntax, semantics and core libraries. It is the Control Structures which make a program more lively. Since they control the flow of execution of a program, they are named Control Structures
if statement
Usage:
python
if condition:
statement_1
statement_2
...
statement_n
<div class="alert alert-info">
**Note**
In `Python`, a block of code means the lines with the same indentation (i.e., the same number of tabs or spaces before them). Here `statement_1` up to `statement_n` are in the `if` block. This enhances code readability
</div>
Example:
End of explanation
response = input("Enter an integer : ")
num = int(response)
if num % 2 == 0:
print("{} is an even number".format(num))
else:
print("{} is an odd number".format(num))
Explanation: <div class="alert alert-info">
**Note** Typecasting
`int(response)` converted the string `response` to integer. If user enters anything other than integer, `ValueError` is raised
</div>
if-else statement
Usage:
python
if condition:
statement_1
statement_2
...
statement_n
else:
statement_1
statement_2
...
statement_n
Example:
End of explanation
response = input("Enter an integer : ")
num = int(response)
result = "even" if num % 2 == 0 else "odd"
print("{} is {} number".format(num,result))
Explanation: Single Line if-else
This serves as a replacement for the ternary operator available in C
Usage:
C ternery
c
result = (condition) ? value_true : value_false
Python Single Line if else
python
result = value_true if condition else value_false
Example:
End of explanation
response = input("Enter an integer (+ve or -ve) : ")
num = int(response)
if num > 0:
print("{} is +ve".format(num))
elif num == 0:
print("Zero")
else:
print("{} is -ve".format(num))
Explanation: if-else ladder
Usage:
python
if condition_1:
statements_1
elif condition_2:
statements_2
elif condition_3:
statements_3
...
...
...
elif condition_n:
statements_n
else:
statements_last
<div class="alert alert-info">
**Note**
`Python` uses `elif` instead of `else if` like in `C`,`Java` or `C#`
</div>
Example:
End of explanation
response = input("Enter an integer : ")
num = int(response)
prev,current = 0,1
i = 0
while i < num:
prev,current = current,prev + current
print('Fib[{}] = {}'.format(i,current),end=',')
i += 1
Explanation: <div class="alert alert-info">
**Note**: No `switch-case`
There is no `switch-case` structure in Python. It can be realized using `if-else ladder` or any other ways
</div>
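For instance (an illustrative sketch, not part of the original notes), a dictionary of callables can play the role of a switch:
def on_add(): return 'adding'
def on_quit(): return 'quitting'
dispatch = {'add': on_add, 'quit': on_quit}
print(dispatch.get('add', lambda: 'unknown command')())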
while loop
Usage:
python
while condition:
statement_1
statement_2
...
statement_n
Example:
End of explanation
for i in range(10):
print(i, end=',')
for i in range(2,10,3):
print(i, end=',')
response = input("Enter an integer : ")
num = int(response)
prev,current = 0,1
for i in range(num):
prev,current = current,prev + current
print('Fib[{}] = {}'.format(i,current),end=',')
Explanation: <div class="alert alert-info">
**Note**
- Multiple assignments in single statement can be done
-`Python` doesn't support `++` and `--` operators as in `C`
- There is no `do-while` loop in Python
</div>
for loop
Usage:
python
for object in collection:
do_something_with_object
<div class="alert alert-info">
**Notes**
- `C` like `for(init;test;modify)` is not supported in Python
- Python provides `range` object for iterating over numbers
Usage of `range` object:
```python
x = range(start = 0,stop,step = 1)
```
now `x` can be iterated, and it generates numbers including `start` excluding `stop` differing in the steps of `step`
</div>
Example:
End of explanation |
1,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Density of States Analysis Example
Given sample and empty-can data, compute phonon DOS
To use this notebook, first click jupyter menu File->Make a copy
Click the title of the copied jupyter notebook and change it to a new title
Start executing cells
Preparation
Step1: Run GetDOS
Step2: Check output
Step3: Refine GetDOS | Python Code:
# where am I now?
!pwd
# create a new working directory and change into it
workdir = '~/reduction/ARCS/getdos-multiple-Ei-demo'
!mkdir -p {workdir}
%cd {workdir}
# Data to reduce. Change the IPTS number and run numbers to suit your need
samplenxs = "/SNS/ARCS/IPTS-15398/shared/mantid_reduce/non-radC/non-radC_130p00.nxspe"
mtnxs = "/SNS/ARCS/IPTS-15398/shared/mantid_reduce/MT/MT_130p00.nxspe"
initdos = '/SNS/ARCS/IPTS-15398/shared/getdos/graphite-Ei_300-dos.h5'
Explanation: Density of States Analysis Example
Given sample and empty-can data, compute phonon DOS
To use this notebook, first click jupyter menu File->Make a copy
Click the title of the copied jupyter notebook and change it to a new title
Start executing cells
Preparation
End of explanation
# import tools
import os, numpy as np
from multiphonon.getdos import notebookUI
import histogram.hdf as hh, histogram as H
%matplotlib notebook
from matplotlib import pyplot as plt
# create the UI for the first time
notebookUI(samplenxs, mtnxs, initdos=initdos, load_options_path='/SNS/ARCS/IPTS-15398/shared/getdos/130meV-getdos-opts.yaml')
Explanation: Run GetDOS
End of explanation
ls work/
dos0= hh.load(initdos)
plt.plot(dos0.E, dos0.I, '+', label='DOS from Ei=300meV data')
dos = hh.load('work/final-dos.h5')
plt.plot(dos.E, dos.I, label='new DOS')
plt.xlim(0, 230)
plt.legend(loc='upper left')
Explanation: Check output
End of explanation
# if you need to run getdos again with slightly modified options
# you can start from the previous settings
notebookUI(samplenxs, mtnxs, load_options_path="./work/getdos-opts.yaml")
Explanation: Refine GetDOS
End of explanation |
1,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Causal Effect for Logistic Regression
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
Step1: Utility function
We define a utility function to draw the directed acyclic graph.
Step2: Test data
We use 'Wine Quality Data Set' (https
Step3: Causal Discovery
To run causal discovery, we create a DirectLiNGAM object and call the fit method.
Step4: Prediction Model
We create the logistic regression model because the target is a discrete variable.
Step5: Identification of Feature with Greatest Causal Influence on Prediction
To identify of the feature having the greatest intervention effect on the prediction, we create a CausalEffect object and call the estimate_effects_on_prediction method. | Python Code:
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import make_prior_knowledge
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
Explanation: Causal Effect for Logistic Regression
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
End of explanation
def make_graph(adjacency_matrix, labels=None):
idx = np.abs(adjacency_matrix) > 0.01
dirs = np.where(idx)
d = graphviz.Digraph(engine='dot')
names = labels if labels else [f'x{i}' for i in range(len(adjacency_matrix))]
for to, from_, coef in zip(dirs[0], dirs[1], adjacency_matrix[idx]):
d.edge(names[from_], names[to], label=f'{coef:.2f}')
return d
Explanation: Utility function
We define a utility function to draw the directed acyclic graph.
End of explanation
X = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv', sep=';')
X['quality'] = np.where(X['quality']>5, 1, 0)
print(X.shape)
X.head()
Explanation: Test data
We use 'Wine Quality Data Set' (https://archive.ics.uci.edu/ml/datasets/Wine+Quality)
End of explanation
pk = make_prior_knowledge(
n_variables=len(X.columns),
sink_variables=[11])
model = lingam.DirectLiNGAM(prior_knowledge=pk)
model.fit(X)
labels = [f'{i}. {col}' for i, col in enumerate(X.columns)]
make_graph(model.adjacency_matrix_, labels)
Explanation: Causal Discovery
To run causal discovery, we create a DirectLiNGAM object and call the fit method.
End of explanation
from sklearn.linear_model import LogisticRegression
target = 11 # quality
features = [i for i in range(X.shape[1]) if i != target]
reg = LogisticRegression(solver='liblinear')
reg.fit(X.iloc[:, features], X.iloc[:, target])
Explanation: Prediction Model
We create the logistic regression model because the target is a discrete variable.
End of explanation
ce = lingam.CausalEffect(model)
effects = ce.estimate_effects_on_prediction(X, target, reg)
df_effects = pd.DataFrame()
df_effects['feature'] = X.columns
df_effects['effect_plus'] = effects[:, 0]
df_effects['effect_minus'] = effects[:, 1]
df_effects
max_index = np.unravel_index(np.argmax(effects), effects.shape)
print(X.columns[max_index[0]])
Explanation: Identification of Feature with Greatest Causal Influence on Prediction
To identify the feature with the greatest intervention effect on the prediction, we create a CausalEffect object and call the estimate_effects_on_prediction method.
End of explanation |
1,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Wrangling with Pandas
There are two datasets in CSV format; both are from weather station 'USC00116760' in Petersburg, IL
Data ranges from 2015-01-01 to 2015-06-29
'Temp_116760.csv' stores temperature data, the index is day-of-year.
'Prcp_116760.csv' stores precipitation data, the index is date-time.
Now how can we read the data such that they appear like the following?
Tip: Pandas will always try to align the index
Step1: Try pandas.concat
Step2: Try pandas.merge
Why might merge be the better approach?
Step3: Using pivot_table to summarize data
How many snow days and non-snow days are there for each month?
Can you generate the following result, say, with the merged data?
Does the result make sense to you?
If not, why doesn't it, and how can we fix it?
Step4: Generate the CORRECT summary table for snowy days
It can be done with just 3 method calls
TIP | Python Code:
# How to read the 'Temp_116760.csv' file?
df_temp.tail()
# How to read the 'Prcp_116760.csv' file and make its index datetime dtype?
df_prcp.head()
# and I want the index to be of date-time, rather than just strings
df_prcp.index.dtype
Explanation: Data Wrangling with Pandas
There are two datasets in CSV format; both are from weather station 'USC00116760' in Petersburg, IL
Data ranges from 2015-01-01 to 2015-06-29
'Temp_116760.csv' stores temperature data, the index is day-of-year.
'Prcp_116760.csv' stores precipitation data, the index is date-time.
Now how can we read the data such that they appear like the following?
Tip: Pandas will always try to align the index
Tip: try to bring up the docstring of Pandas.read_csv
Tip: use Pandas.concat to join DataFrame together
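One plausible way to start (a sketch only; confirm the exact options against the read_csv docstring):
python
import pandas as pd

# day-of-year index, kept as plain integers
df_temp = pd.read_csv('Temp_116760.csv', index_col=0)
# datetime index, parsed from the date strings
df_prcp = pd.read_csv('Prcp_116760.csv', index_col=0, parse_dates=True)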
End of explanation
# How to use concat to make a combined dataframe?
Explanation: Try pandas.concat
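A minimal sketch of the concat approach (it only lines up cleanly if both frames share a comparable index):
python
import pandas as pd
combined = pd.concat([df_temp, df_prcp], axis=1)   # aligns rows on the shared index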
End of explanation
# How to use merge to make a combined dataframe?
Explanation: Try pandas.merge
Why might merge be the better approach?
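A hedged sketch of the merge approach (assuming df_prcp carries a datetime index and df_temp a day-of-year index):
python
import pandas as pd
prcp = df_prcp.assign(doy=df_prcp.index.dayofyear)
combined = pd.merge(prcp, df_temp,
                    left_on='doy', right_index=True, how='outer')
One reason merge can be the safer choice here is that the join key and join type are stated explicitly instead of relying on silent index alignment.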
End of explanation
# Assuming you now have the combined dataframe from above, named df
# How to make the following summary table using pivot_table?
Explanation: Using pivot_table to summarize data
How many snow days and non-snow days are there for each month?
Can you generate the following result, say, with the merged data?
Does the result make sense to you?
If not, why doesn't it, and how can we fix it?
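One way this could be written, shown only as a sketch -- the 'SNOW'-style column name is an assumption, not the real one:
python
df['month'] = df.index.month                 # only meaningful if the index is datetime
df['snowed'] = df['SNOW'].fillna(0) > 0      # hypothetical precipitation-type column
df.pivot_table(index='month', columns='snowed', aggfunc='size')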
End of explanation
# What should the correct summary table look like?
# How to make it with just 3 method calls?
Explanation: Generate the CORRECT summary table for snowy days
It can be done with just 3 method calls
TIP: lookup the pandas.DataFrame.update() method.
TIP: lookup the pandas.date_range() method.
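One possible reading of these hints (a sketch; the exact three calls are left as the exercise):
python
import pandas as pd
# a complete daily calendar makes the days with no observations explicit
full_days = pd.date_range('2015-01-01', '2015-06-29')
calendar = pd.DataFrame(index=full_days, columns=df.columns)
calendar.update(df)    # fill in only the days that were actually observed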
End of explanation |
1,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables whose names start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Results with 128 hidden units
Epoch 72/100... Discriminator Loss
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
print(tf.__version__)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape = (None, real_dim), name="inputs_real")
inputs_z = tf.placeholder(tf.float32, shape = (None, z_dim), name ="inputs_z")
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('Generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation = None)
# Leaky ReLU
h1 = tf.maximum( (alpha * h1), h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation = None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this, you can take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
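One way to express this as a small helper (a sketch added for clarity, not part of the original exercise code):
python
def leaky_relu(x, alpha=0.01):
    # alpha * x for negative inputs, x for positive inputs
    return tf.maximum(alpha * x, x)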
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('Discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation = None)
# Leaky ReLU
h1 = tf.maximum ( (alpha * h1), h1)
logits = tf.layers.dense(h1, 1, activation = None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 784
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 256
d_hidden_size = 256
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Generator network here
g_model = generator(input_z, input_size)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
# One's like for real labels for Discriminator
real_labels = tf.ones_like(d_logits_real) * (1 - smooth)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits = d_logits_real, labels=real_labels))
# Zeros-like labels for the fake images shown to the discriminator
fake_labels = tf.zeros_like(d_logits_fake)
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits = d_logits_fake, labels= fake_labels))
d_loss = d_loss_real + d_loss_fake
# One's like for fake labels for generator
generated_labels = tf.ones_like(d_logits_fake)
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits = d_logits_fake,
labels = generated_labels))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('Generator')]
d_vars = [var for var in t_vars if var.name.startswith('Discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list = d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list = g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep the variables whose names start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
End of explanation
batch_size = 100
epochs = 80
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g),
"Difference Loss: {:.4f}...".format(train_loss_d-train_loss_g),
)
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
# With 128 hidden
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
# With 256 hidden
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Results with 128 hidden units
Epoch 72/100... Discriminator Loss: 1.2292... Generator Loss: 1.0937 Difference Loss: 0.1355...
Epoch 73/100... Discriminator Loss: 1.1977... Generator Loss: 1.0838 Difference Loss: 0.1139...
Epoch 74/100... Discriminator Loss: 1.0160... Generator Loss: 1.4791 Difference Loss: -0.4632...
Epoch 75/100... Discriminator Loss: 1.1122... Generator Loss: 1.0486 Difference Loss: 0.0637...
Epoch 76/100... Discriminator Loss: 1.0662... Generator Loss: 1.5303 Difference Loss: -0.4641...
Epoch 77/100... Discriminator Loss: 1.1943... Generator Loss: 1.1728 Difference Loss: 0.0215...
Epoch 78/100... Discriminator Loss: 1.1579... Generator Loss: 1.3853 Difference Loss: -0.2274...
Epoch 79/100... Discriminator Loss: 1.1481... Generator Loss: 1.1773 Difference Loss: -0.0292...
Epoch 80/100... Discriminator Loss: 1.1529... Generator Loss: 1.6801 Difference Loss: -0.5272...
Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
plt.imshow(mnist.train.images[3].reshape(28,28), cmap='Greys_r')
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
# with 128
_ = view_samples(-1, samples)
# with 256
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
# with 256
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
# with 128
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, reuse=True),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
1,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the sampler
spvcm is a generic gibbs sampling framework for spatially-correlated variance components models. The current supported models are
Step1: Depending on the structure of the model, you need at least
Step2: Reading in the data, we'll extract these values we need from the dataframe.
Step3: Then, we'll construct some queen contiguity weights from the files to show how to run a model.
Step4: With the data, upper-level weights, and lower-level weights, we can construct a membership vector or a dummy data matrix. For now, I'll create the membership vector.
Step5: But, we could also build the dummy variable matrix using pandas, if we have a suitable categorical variable
Step6: Every call to the sampler is of the following form
Step7: This models, spvcm.upper.SMA, is a variance components/varying intercept model with a state-level SMA-correlated error.
Thus, there are only five parameters in this model, since $\rho$, the lower-level autoregressive parameter, is constrained to zero
Step8: The results and state of the sampler are stored within the vcsma object. I'll step through the most important parts of this object.
trace
The quickest way to get information out of the model is via the trace object. This is where the results of the tracked parameters are stored each iteration. Any variable in the sampler state can be added to the tracked params. Trace objects are essentially dictionaries with the keys being the name of the tracked parameter and the values being a list of each iteration's sampler output.
Step9: In this case, Lambda is the upper-level moving average parameter, Alphas is the vector of correlated group-level random effects, Tau2 is the upper-level variance, Betas are the marginal effects, and Sigma2 is the lower-level error variance.
I've written two helper functions for working with traces. First is to just dump all the output into a pandas dataframe, which makes it super easy to do work on the samples, or write them out to csv and assess convergence in R's coda package.
Step10: the dataframe will have columns containing the elements of the parameters and each row is a single iteration of the sampler
Step11: You can write this out to a csv or analyze it in memory like a typical pandas dataframe
Step12: The second is a method to plot the traces
Step13: The trace object can be sliced by (chain, parameter, index) tuples, or any subset thereof.
Step14: We only ran a single chain, so the first index is assumed to be zero. You can run more than one chain in parallel, using the builtin python multiprocessing library
Step15: and the chain plotting works also for the multi-chain traces. In addition, there are quite a few traceplot options, and all the plots are returned by the methods as matplotlib objects, so they can also be saved using plt.savefig().
Step16: To get stuff like posterior quantiles, you can use the attendant pandas dataframe functionality, like describe.
Step17: There is also a trace.summarize function that will compute various things contained in spvcm.diagnostics on the chain. It takes a while for large chains, because the statsmodels.tsa.AR estimator is much slower than the ar estimator in R. If you have rpy2 installed and CODA installed in your R environment, I attempt to use R directly.
Step18: So, 5000 iterations, but many parameters have an effective sample size that's much less than this. There's debate about whether it's necessary to thin these samples in accordance with the effective size, and I think you should thin your sample to the effective size and see if it affects your HPD/Standard Errors.
The existing python packages for MCMC diagnostics were incorrect. So, I've implemented many of the diagnostics from CODA, and have verified that the diagnostics comport with CODA diagnostics. One can also use numpy & statsmodels functions. I'll show some types of analysis.
Step19: For example, a plot of the partial autocorrelation in $\lambda$, the upper-level spatial moving average parameter, over the last half of the chain is
Step20: So, the chain is close-to-first order
Step21: We could do this for many parameters, too. An Autocorrelation/Partial Autocorrelation plot can be made of the marginal effects by
Step22: As far as the builtin diagnostics for convergence and simulation quality, the diagnostics module exposes a few things
Step23: Typically, this means the chain is converged at the given "bin" count if the line stays within $\pm2$. The geweke statistic is a test of differences in means between the given chunk of the chain and the remaining chain. If it's outside of +/- 2 in the early part of the chain, you should discard observations early in the chain. If you get extreme values of these statistics throughout, you need to keep running the chain.
Step24: We can also compute Monte Carlo Standard Errors like in the mcse R package, which represent the intrinsic error contained in the estimate
Step25: Another handy statistic is the Potential Scale Reduction Factor, which measures how likely a set of chains run in parallel is to have converged to the same stationary distribution. It compares the variance between chains with the variance within chains.
If these are significantly larger than one (say, 1.5), the chain probably has not converged. Being marginally below $1$ is fine, too.
Step26: Highest posterior density intervals provide a kind of interval estimate for parameters in Bayesian models
Step27: Sometimes, you want to apply arbitrary functions to each parameter trace. To do this, I've written a map function that works like the python builtin map. For example, if you wanted to get arbitrary percentiles from the chain
Step28: In addition, you can pop the trace results pretty simply to a .csv file and analyze it elsewhere, like if you want to use the coda Bayesian Diagnostics package in R.
To write out a model to a csv, you can use
Step29: And, you can even load traces from csvs
Step30: Working with models
Step31: And sample steps forward an arbitrary number of times
Step32: At this point, we did 5000 initial samples and 11 extra samples. Thus
Step33: Parallel models can suspend/resume sampling too
Step34: Under the hood, it's the draw method that actually ends up calling one run of model._iteration, which is where the actual statistical code lives. Then, it updates all model.traced_params by adding their current value in model.state to model.trace. In addition, model._finalize is called the first time sampling is run, which computes some of the constants & derived quantities that save computing time.
Working with models
Step35: If you want to track how something (maybe a hyperparameter) changes over sampling, you can pass extra_traced_params to the model declaration
Step36: configs
this is where configuration options for the various MCMC steps are stored. For multilevel variance components models, these are called $\rho$ for the lower-level error parameter and $\lambda$ for the upper-level parameter. Two exact sampling methods are implemented, Metropolis sampling & Slice sampling.
Each MCMC step has its own config
Step37: Since vcsma is an upper-level-only model, the Rho config is skipped. But, we can look at the Lambda config. The number of accepted lambda draws is contained in
Step38: so, the acceptance rate is
Step39: Also, if you want to get verbose output from the metropolis sampler, there is a "debug" flag
Step40: Which stores the information about each iteration in a list, accessible from model.configs.<parameter>._cache
Step41: Configuration of the MCMC steps is done using the config options dictionary, like done in spBayes in R. The actual configuration classes exist in spvcm.steps
Step42: Most of the common options are
Step43: Working with models
Step44: This sets up a two-level spatial error model with the default uninformative configuration. This means the prior precisions are all I * .001, prior means are all 0, spatial parameters are set to -1/(n-1), and prior scale factors are set arbitrarily.
Configs
Options are set by assigning to the relevant property in model.configs.
The model configuration object is another dictionary with a few special methods.
Configuration options are stored for each parameter separately
Step45: So, for example, if we wanted to turn off adaptation in the upper-level parameter, and fix the Metropolis jump variance to .25
Step46: Priors
Another thing that might be interesting (though not "bayesian") would be to fix the prior mean of $\beta$ to the OLS estimates. One way this could be done would be to pull the Delta matrix out from the state, and estimate
Step47: Starting Values
If you wanted to start the sampler at a given starting value, you can do so by assigning that value to the Lambda value in state.
Step48: Sometimes, it's suggested that you start the beta vector randomly, rather than at zero. For the parallel sampling, the model starting values are adjusted to induce overdispersion in the start values.
You could do this manually, too
Step49: Spatial Priors
Changing the spatial parameter priors is also done by changing their prior in state. This prior must be a function that takes a value of the parameter and return the log of the prior probability for that value.
For example, we could assign $P(\lambda) = Beta(2,1)$ and zero if outside $(0,1)$, and assign $\rho$ a truncated $\mathcal{N}(0,.5)$ prior by first defining their functional form
Step50: And then assigning to their symbols, LogLambda0 and LogRho0 in the state
Step51: Performance
The efficiency of the sampler is contingent on the lower-level size. If we were to estimate the draw in a dual-level SAR-Error Variance Components iteration
Step52: To make it easy to work with the model, you can interrupt and resume sampling using keyboard interrupts (ctrl-c or the stop button in the notebook).
Step53: Under the Hood
Package Structure
Most of the tools in the package are stored in relevant python files in the top level or a dedicated subfolder. Explaining a few | Python Code:
import spvcm.api as spvcm #package API
spvcm.both.Generic # abstract customizable class, ignores rho/lambda, equivalent to MVCM
spvcm.both.MVCM # no spatial effect
spvcm.both.SESE # both spatial error (SE)
spvcm.both.SESMA # response-level SE, region-level spatial moving average
spvcm.both.SMASE # response-level SMA, region-level SE
spvcm.both.SMASMA # both levels SMA
spvcm.upper.SE # response-level uncorrelated, region-level SE
spvcm.upper.SMA # response-level uncorrelated, region-level SMA
spvcm.lower.SE # response-level SE, region-level uncorrelated
spvcm.lower.SMA # response-level SMA, region-level uncorrelated
Explanation: Using the sampler
spvcm is a generic gibbs sampling framework for spatially-correlated variance components models. The current supported models are:
spvcm.both contains specifications with correlated errors in both levels, with the first statement se/sma describing the lower level and the second statement se/sma describing the upper level. In addition, MVCM, the multilevel variance components model with no spatial correlation, is in the both namespace.
spvcm.lower contains two specifications, se/sma, that can be used for a variance components model with correlated lower-level errors.
spvcm.upper contains two specifications, se/sma that can be used for a variance components model with correlated upper-level errors.
Specification
These derive from a variance components specification:
$$ Y \sim \mathcal{N}(X\beta, \Psi_1(\rho, \sigma^2) + \Delta\Psi_2(\lambda, \tau^2)\Delta') $$
Where:
1. $\beta$, called Betas in code, is the marginal effect parameter. In this implementation, any region-level covariates $Z$ get appended to the end of $X$. So, if $X$ is $n \times p$ ($n$ observations of $p$ covariates) and $Z$ is $J \times p'$ ($p'$ covariates observed for $J$ regions), then the model's $X$ matrix is $n \times (p + p')$ and $\beta$ is $p + p' \times 1$.
2. $\Psi_1$ is the covariance function for the response-level model. In the software, a separable covariance is assumed, so that $\Psi_1(\rho, \sigma^2) = \Psi_1(\rho) \cdot I\sigma^2$, where $I$ is the $n \times n$ identity matrix. Thus, $\rho$ is the spatial autoregressive parameter and $\sigma^2$ is the variance parameter. In the software, $\Psi_1$ takes any of the following forms:
- Spatial Error (SE): $\Psi_1(\rho) = [(I - \rho \mathbf{W})'(I - \rho \mathbf{W})]^{-1} \sigma^2$
- Spatial Moving Average (SMA): $\Psi_1(\rho) = (I + \rho \mathbf{W})(I + \rho \mathbf{W})'$
- Identity: $\Psi_1(\rho) = I$
3. $\Psi_2$ is the region-level covariance function, with region-level autoregressive parameter $\lambda$ and region-level variance $\tau^2$. It has the same potential forms as $\Psi_1$.
4. $\alpha$, called Alphas in code, is the region-level random effect. In a variance components model, this is interpreted as a random effect for the upper-level. For a Varying-intercept format, this random component should be added to a region-level fixed effect to provide the varying intercept. This may also make it more difficult to identify the spatial parameter.
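To make the pieces concrete, the implied covariance for an upper-level SMA specification could be assembled by hand roughly like this (a sketch for intuition only, built from the formulas above; M is the upper-level weights matrix as a dense array):
python
import numpy as np

def implied_covariance(Delta, M, lam, sigma2, tau2):
    n, J = Delta.shape
    Psi_1 = np.eye(n) * sigma2          # identity lower-level covariance, scaled by sigma2
    B = np.eye(J) + lam * M             # SMA: (I + lambda*M)(I + lambda*M)'
    Psi_2 = B @ B.T * tau2
    return Psi_1 + Delta @ Psi_2 @ Delta.T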
Software implementation
All of the possible combinations of Spatial Moving Average and Spatial Error processes are contained in the following classes. I will walk through estimating one below, and talk about the various features of the package.
First, the API of the package is defined by the spvcm.api submodule. To load it, use import spvcm.api as spvcm:
End of explanation
#seaborn is required for the traceplots
import pysal as ps
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import geopandas as gpd
%matplotlib inline
Explanation: Depending on the structure of the model, you need at least:
- X, data at the response (lower) level
- Y, system response in the lower level
- membership or Delta, the membership vector relating each observation to its group or the "dummy variable" matrix encoding the same information.
Then, if spatial correlation is desired, M is the "upper-level" weights matrix and W the lower-level weights matrix.
Any upper-level data should be passed in $Z$, and have $J$ rows. To fit a varying-intercept model, include an identity matrix in $Z$. You can include state-level and response-level intercept terms simultaneously.
Finally, there are many configuration and tuning options that can be passed in at the start, or assigned after the model is initialized.
First, though, let's set up some data for a model on southern counties predicting HR90, the Homicide Rate in the US South in 1990, using the percent of the labor force that is unemployed (UE90), a principal component expressing the population structure (PS90), and a principal component expressing resource deprivation.
We will also use the state-level average percentage of families below the poverty line and the average Gini coefficient at the state level for a $Z$ variable.
End of explanation
data = ps.pdio.read_files(ps.examples.get_path('south.shp'))
gdf = gpd.read_file(ps.examples.get_path('south.shp'))
data = data[data.STATE_NAME != 'District of Columbia']
X = data[['UE90', 'PS90', 'RD90']].values
N = X.shape[0]
Z = data.groupby('STATE_NAME')[['FP89', 'GI89']].mean().values
J = Z.shape[0]
Y = data.HR90.values.reshape(-1,1)
Explanation: Reading in the data, we'll extract these values we need from the dataframe.
End of explanation
W2 = ps.queen_from_shapefile(ps.examples.get_path('us48.shp'),
idVariable='STATE_NAME')
W2 = ps.w_subset(W2, ids=data.STATE_NAME.unique().tolist()) #only keep what's in the data
W1 = ps.queen_from_shapefile(ps.examples.get_path('south.shp'),
idVariable='FIPS')
W1 = ps.w_subset(W1, ids=data.FIPS.tolist()) #again, only keep what's in the data
W1.transform = 'r'
W2.transform = 'r'
Explanation: Then, we'll construct some queen contiguity weights from the files to show how to run a model.
End of explanation
membership = data.STATE_NAME.apply(lambda x: W2.id_order.index(x)).values
Explanation: With the data, upper-level weights, and lower-level weights, we can construct a membership vector or a dummy data matrix. For now, I'll create the membership vector.
End of explanation
Delta_frame = pd.get_dummies(data.STATE_NAME)
Delta = Delta_frame.values
Explanation: But, we could also build the dummy variable matrix using pandas, if we have a suitable categorical variable:
End of explanation
vcsma = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=5000,
configs=dict(tuning=1000, adapt_step=1.01))
Explanation: Every call to the sampler is of the following form:
sampler(Y, X, W, M, Z, membership, Delta, n_samples, **configuration)
Where W, M are passed if appropriate, Z is passed if used, and only one of membership or Delta is required. In the end, Z is appended to X, so the effects pertaining to the upper level will be at the tail end of the $\beta$ effects vector. If both Delta and membership are supplied, they're verified against each other to ensure that they agree before they are used in the model.
For all models, the membership vector or an equivalent dummy variable matrix is required. For models with correlation in the upper level, only the upper-level weights matrix $\mathbf{M}$ is needed. For lower level models, the lower-level weights matrix $\mathbf{W}$ is required. For models with correlation in both levels, both $\mathbf{W}$ and $\mathbf{M}$ are required.
Every sampler uses, either in whole or in part, spvcm.both.generic, which implements the full generic sampler discussed in the working paper. For efficiency, the upper-level samplers modify this runtime to avoid processing the full lower-level covariance matrix.
Like many of the R packages dedicated to bayesian models, configuration occurs by passing the correct dictionary to the model call. In addition, you can "setup" the model, configure it, and then run samples in separate steps.
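That separate-steps pattern looks roughly like this (a sketch that mirrors the n_samples=0 idiom used further below):
python
model = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership, n_samples=0)
# ... adjust model.configs or starting values in model.state here ...
model.sample(5000)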
The most common way to call the sampler is something like:
End of explanation
vcsma.trace.varnames
Explanation: This models, spvcm.upper.SMA, is a variance components/varying intercept model with a state-level SMA-correlated error.
Thus, there are only five parameters in this model, since $\rho$, the lower-level autoregressive parameter, is constrained to zero:
End of explanation
vcsma.trace.varnames
Explanation: The results and state of the sampler are stored within the vcsma object. I'll step through the most important parts of this object.
trace
The quickest way to get information out of the model is via the trace object. This is where the results of the tracked parameters are stored each iteration. Any variable in the sampler state can be added to the tracked params. Trace objects are essentially dictionaries with the keys being the name of the tracked parameter and the values being a list of each iteration's sampler output.
End of explanation
trace_dataframe = vcsma.trace.to_df()
Explanation: In this case, Lambda is the upper-level moving average parameter, Alphas is the vector of correlated group-level random effects, Tau2 is the upper-level variance, Betas are the marginal effects, and Sigma2 is the lower-level error variance.
I've written two helper functions for working with traces. First is to just dump all the output into a pandas dataframe, which makes it super easy to do work on the samples, or write them out to csv and assess convergence in R's coda package.
End of explanation
trace_dataframe.head()
Explanation: the dataframe will have columns containing the elements of the parameters and each row is a single iteration of the sampler:
End of explanation
trace_dataframe.mean()
Explanation: You can write this out to a csv or analyze it in memory like a typical pandas dataframe:
End of explanation
fig, ax = vcsma.trace.plot()
plt.show()
Explanation: The second is a method to plot the traces:
End of explanation
vcsma.trace['Lambda',-4:] #last 4 draws of lambda
vcsma.trace[['Tau2', 'Sigma2'], 0:2] #the first 2 variance parameters
Explanation: The trace object can be sliced by (chain, parameter, index) tuples, or any subset thereof.
End of explanation
vcsma_p = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=5000, n_jobs=3, #run 3 chains
configs=dict(tuning=500, adapt_step=1.01))
vcsma_p.trace[0, 'Betas', -1] #the last draw of Beta on the first chain.
vcsma_p.trace[1, 'Betas', -1] #the last draw of Beta on the second chain
Explanation: We only ran a single chain, so the first index is assumed to be zero. You can run more than one chain in parallel, using the builtin python multiprocessing library:
End of explanation
vcsma_p.trace.plot(burn=1000, thin=10)
plt.suptitle('SMA of Homicide Rate in Southern US Counties', y=0, fontsize=20)
#plt.savefig('trace.png') #saves to a file called "trace.png"
plt.show()
vcsma_p.trace.plot(burn=-100, varnames='Lambda') #A negative burn-in works like negative indexing in Python & R
plt.suptitle('First 100 iterations of $\lambda$', fontsize=20, y=.02)
plt.show() #so this plots Lambda in the first 100 iterations.
Explanation: and the chain plotting works also for the multi-chain traces. In addition, there are quite a few traceplot options, and all the plots are returned by the methods as matplotlib objects, so they can also be saved using plt.savefig().
End of explanation
df = vcsma.trace.to_df()
df.describe()
Explanation: To get stuff like posterior quantiles, you can use the attendant pandas dataframe functionality, like describe.
End of explanation
vcsma.trace.summarize()
Explanation: There is also a trace.summarize function that will compute various things contained in spvcm.diagnostics on the chain. It takes a while for large chains, because the statsmodels.tsa.AR estimator is much slower than the ar estimator in R. If you have rpy2 installed and CODA installed in your R environment, I attempt to use R directly.
End of explanation
from statsmodels.api import tsa
#if you don't have it, try removing the comment and:
#! pip install statsmodels
Explanation: So, 5000 iterations, but many parameters have an effective sample size that's much less than this. There's debate about whether it's necessary to thin these samples in accordance with the effective size, and I think you should thin your sample to the effective size and see if it affects your HPD/Standard Errors.
The existing python packages for MCMC diagnostics were incorrect. So, I've implemented many of the diagnostics from CODA, and have verified that the diagnostics comport with CODA diagnostics. One can also use numpy & statsmodels functions. I'll show some types of analysis.
End of explanation
plt.plot(tsa.pacf(vcsma.trace['Lambda', -2500:]))
Explanation: For example, a plot of the partial autocorrelation in $\lambda$, the upper-level spatial moving average parameter, over the last half of the chain is:
End of explanation
tsa.pacf(df.Lambda)[0:3]
Explanation: So, the chain is close-to-first order:
End of explanation
betas = [c for c in df.columns if c.startswith('Beta')]
f,ax = plt.subplots(len(betas), 2, figsize=(10,8))
for i, col in enumerate(betas):
ax[i,0].plot(tsa.acf(df[col].values))
ax[i,1].plot(tsa.pacf(df[col].values)) #the pacf plots take a while
ax[i,0].set_title(col +' (ACF)')
ax[i,1].set_title('(PACF)')
f.tight_layout()
plt.show()
Explanation: We could do this for many parameters, too. An Autocorrelation/Partial Autocorrelation plot can be made of the marginal effects by:
End of explanation
gstats = spvcm.diagnostics.geweke(vcsma, varnames='Tau2') #takes a while
print(gstats)
Explanation: As far as the builtin diagnostics for convergence and simulation quality, the diagnostics module exposes a few things:
Geweke statistics for differences in means between chain components:
End of explanation
plt.plot(gstats[0]['Tau2'][:-1])
Explanation: Typically, this means the chain is converged at the given "bin" count if the line stays within $\pm2$. The geweke statistic is a test of differences in means between the given chunk of the chain and the remaining chain. If it's outside of +/- 2 in the early part of the chain, you should discard observations early in the chain. If you get extreme values of these statistics throughout, you need to keep running the chain.
End of explanation
spvcm.diagnostics.mcse(vcsma, varnames=['Tau2', 'Sigma2'])
Explanation: We can also compute Monte Carlo Standard Errors like in the mcse R package, which represent the intrinsic error contained in the estimate:
End of explanation
spvcm.diagnostics.psrf(vcsma_p, varnames=['Tau2', 'Sigma2'])
Explanation: Another handy statistic is the Potential Scale Reduction Factor, which measures how likely a set of chains run in parallel is to have converged to the same stationary distribution. It compares the variance between chains with the variance within chains.
If these are significantly larger than one (say, 1.5), the chain probably has not converged. Being marginally below $1$ is fine, too.
End of explanation
spvcm.diagnostics.hpd_interval(vcsma, varnames=['Betas', 'Lambda', 'Sigma2'])
Explanation: Highest posterior density intervals provide a kind of interval estimate for parameters in Bayesian models:
End of explanation
vcsma.trace.map(np.percentile,
varnames=['Lambda', 'Tau2', 'Sigma2'],
#arguments to pass to the function go last
q=[25, 50, 75])
Explanation: Sometimes, you want to apply arbitrary functions to each parameter trace. To do this, I've written a map function that works like the python builtin map. For example, if you wanted to get arbitrary percentiles from the chain:
End of explanation
vcsma.trace.to_csv('./model_run.csv')
Explanation: In addition, you can pop the trace results pretty simply to a .csv file and analyze it elsewhere, like if you want to use the coda Bayesian Diagnostics package in R.
To write out a model to a csv, you can use:
End of explanation
tr = spvcm.Trace.from_csv('./model_run.csv')
print(tr.varnames)
tr.plot(varnames=['Tau2'])
Explanation: And, you can even load traces from csvs:
End of explanation
vcsma.draw()
Explanation: Working with models: draw and sample
These two functions are used to call the underlying Gibbs sampler. They take no arguments, and operate on the sampler in place. draw provides a single new sample:
End of explanation
vcsma.sample(10)
Explanation: And sample steps forward an arbitrary number of times:
End of explanation
vcsma.cycles
Explanation: At this point, we did 5000 initial samples and 11 extra samples. Thus:
End of explanation
vcsma_p.sample(10)
vcsma_p.cycles
Explanation: Parallel models can suspend/resume sampling too:
End of explanation
print(vcsma.state.keys())
Explanation: Under the hood, it's the draw method that actually ends up calling one run of model._iteration, which is where the actual statistical code lives. Then, it updates all model.traced_params by adding their current value in model.state to model.trace. In addition, model._finalize is called the first time sampling is run, which computes some of the constants & derived quantities that save computing time.
Working with models: state
This is the collection of current values in the sampler. To be efficient, Gibbs sampling must keep around some of the computations used in the simulation, since sometimes the same terms show up in different conditional posteriors. So, the current values of the sampler are stored in state.
All of the following are tracked in the state:
End of explanation
example = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=250,
extra_traced_params = ['DeltaAlphas'],
configs=dict(tuning=500, adapt_step=1.01))
example.trace.varnames
Explanation: If you want to track how something (maybe a hyperparameter) changes over sampling, you can pass extra_traced_params to the model declaration:
End of explanation
vcsma.configs
Explanation: configs
This is where configuration options for the various MCMC steps are stored. For multilevel variance components models, these are called $\rho$ for the lower-level error parameter and $\lambda$ for the upper-level parameter. Two exact sampling methods are implemented: Metropolis sampling & Slice sampling.
Each MCMC step has its own config:
End of explanation
vcsma.configs.Lambda.accepted
Explanation: Since vcsma is an upper-level-only model, the Rho config is skipped. But we can look at the Lambda config. The number of accepted lambda draws is contained in:
End of explanation
vcsma.configs.Lambda.accepted / float(vcsma.cycles)
Explanation: So, the acceptance rate is:
End of explanation
example = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=500,
configs=dict(tuning=250, adapt_step=1.01,
debug=True))
Explanation: Also, if you want to get verbose output from the metropolis sampler, there is a "debug" flag:
End of explanation
example.configs.Lambda._cache[-1] #let's only look at the last one
Explanation: Which stores the information about each iteration in a list, accessible from model.configs.<parameter>._cache:
End of explanation
from spvcm.steps import Metropolis, Slice
Explanation: Configuration of the MCMC steps is done using the config options dictionary, like done in spBayes in R. The actual configuration classes exist in spvcm.steps:
End of explanation
example = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=500,
configs=dict(tuning=250, adapt_step=1.01,
debug=True, ar_low=.1, ar_hi=.4))
example.configs.Lambda.ar_hi, example.configs.Lambda.ar_low
example_slicer = spvcm.upper.SMA(Y, X, M=W2, Z=Z, membership=membership,
n_samples=500,
configs=dict(Lambda_method='slice'))
example_slicer.trace.plot(varnames='Lambda')
plt.show()
example_slicer.configs.Lambda.adapt, example_slicer.configs.Lambda.width
Explanation: Most of the common options are:
Metropolis
jump: the starting standard deviation of the proposal distribution
tuning: the number of iterations to tune the scale of the proposal
ar_low: the lower bound of the target acceptance rate range
ar_hi: the upper bound of the target acceptance rate range
adapt_step: a number (bigger than 1) that will be used to modify the jump in order to keep the acceptance rate between ar_low and ar_hi. Values much larger than 1 result in much more dramatic tuning.
Slice
width: starting width of the level set
adapt: number of previous slices use in the weighted average for the next slice. If 0, the width is not dynamically tuned.
End of explanation
vcsese = spvcm.both.SESE(Y, X, W=W1, M=W2, Z=Z, membership=membership,
n_samples=0)
Explanation: Working with models: customization
If you're doing heavy customization, it makes the most sense to first initialize the class without sampling. We did this before when showing how the "extra_traced_params" option worked.
To show, let's initialize a double-level SAR-Error variance components model, but not actually draw anything.
To do this, you pass the option n_samples=0.
End of explanation
vcsese.configs
Explanation: This sets up a two-level spatial error model with the default uninformative configuration. This means the prior precisions are all I * .001, prior means are all 0, spatial parameters are set to -1/(n-1), and prior scale factors are set arbitrarily.
Configs
Options are set by assigning to the relevant property in model.configs.
The model configuration object is another dictionary with a few special methods.
Configuration options are stored for each parameter separately:
End of explanation
vcsese.configs.Lambda.max_tuning = 0
vcsese.configs.Lambda.jump = .25
Explanation: So, for example, if we wanted to turn off adaptation in the upper-level parameter, and fix the Metropolis jump variance to .25:
End of explanation
Delta = vcsese.state.Delta
DeltaZ = Delta.dot(Z)
vcsese.state.Betas_mean0 = ps.spreg.OLS(Y, np.hstack((X, DeltaZ))).betas
Explanation: Priors
Another thing that might be interesting (though not "bayesian") would be to fix the prior mean of $\beta$ to the OLS estimates. One way this could be done would be to pull the Delta matrix out from the state, and estimate:
$$ Y = X\beta + \Delta Z + \epsilon $$
using PySAL:
End of explanation
vcsese.state.Lambda = -.25
Explanation: Starting Values
If you wanted to start the sampler at a given starting value, you can do so by assigning that value to the Lambda value in state.
End of explanation
vcsese.state.Betas += np.random.uniform(-10, 10, size=(vcsese.state.p,1))
Explanation: Sometimes, it's suggested that you start the beta vector randomly, rather than at zero. For parallel sampling, the model starting values are adjusted to induce overdispersion across the chains.
You could do this manually, too:
End of explanation
from scipy import stats
def Lambda_prior(val):
if (val < 0) or (val > 1):
return -np.inf
return np.log(stats.beta.pdf(val, 2,1))
def Rho_prior(val):
if (val > .5) or (val < -.5):
return -np.inf
return np.log(stats.truncnorm.pdf(val, -.5, .5, loc=0, scale=.5))
Explanation: Spatial Priors
Changing the spatial parameter priors is also done by changing their prior in state. This prior must be a function that takes a value of the parameter and returns the log of the prior probability for that value.
For example, we could assign $\lambda$ a $\text{Beta}(2,1)$ prior (zero if outside $(0,1)$), and assign $\rho$ a truncated $\mathcal{N}(0,.5)$ prior, by first defining their functional form:
End of explanation
vcsese.state.LogLambda0 = Lambda_prior
vcsese.state.LogRho0 = Rho_prior
Explanation: And then assigning to their symbols, LogLambda0 and LogRho0 in the state:
End of explanation
%timeit vcsese.draw()
Explanation: Performance
The efficiency of the sampler is contingent on the lower-level size. If we time a single draw of a dual-level SAR-Error variance components model:
End of explanation
%time vcsese.sample(100)
vcsese.sample(10)
Explanation: To make it easy to work with the model, you can interrupt and resume sampling using keyboard interrupts (ctrl-c or the stop button in the notebook).
End of explanation
vcsese.state.Psi_1 #lower-level covariance
vcsese.state.Psi_2 #upper-level covariance
vcsma.state.Psi_2 #upper-level covariance
vcsma.state.Psi_2i
vcsma.state.Psi_1
Explanation: Under the Hood
Package Structure
Most of the tools in the package are stored in relevant python files in the top level or a dedicated subfolder. Explaining a few:
abstracts.py - the abstract class machinery to iterate over a sampling loop. This is where the classes are defined, like Trace, Sampler_Mixin, or Hashmap.
plotting.py - tools for plotting output
steps.py - the step method definitions
verify.py - like user checks in pysal.spreg, this contains a few sanity checks.
utils.py - contains statistical or numerical utilities to make the computation easier, like Cholesky multivariate normal sampling, more sparse utility functions, etc.
diagnostics.py - all the diagnostics
priors.py - definitions of alternative prior forms. Right now, this is pretty simple.
sqlite.py - functions to use a sqlite database instead of an in-memory chain are defined here.
The implementation of a Model
The package is implemented so that every "model type" first sends off to the spvcm.both.Base_Generic, which sets up the state, trace, and priors.
Models are added by writing a model.py file and possibly a sample.py file. The model.py file defines a Base/User class pair (like spreg) that sets up the state and trace. It must define hyperparameters, and can precompute objects used in the sampling loop. The base class should inherit from Sampler_Mixin, which defines all of the machinery of sampling.
The loop through the conditional posteriors should be defined in model.py, in the model._iteration function. This should update the model state in place.
The model may also define a _finalize function which is run once before sampling.
So, if I write a new model, like a varying-intercept model with endogenously-lagged intercepts, I would write a model.py containing something like:
```python
class Base_VISAR(spvcm.generic.Base_Generic):
def init(self, Y, X, M, membership=None, Delta=None,
extra_traced_params=None, #record extra things in state
n_samples=1000, n_jobs=1, #sampling config
priors = None, # dict with prior values for params
configs=None, # dict with configs for MCMC steps
starting_values=None, # dict with starting values
truncation=None, # options to truncate MCMC step priors
center=False, # Whether to center the X,Z matrices
scale=False # Whether re-scale the X,Z matrices
):
super(Base_VISAR, self).init(self, Y, X, M, W=None,
membership=membership,
Delta=Delta,
n_samples=0, n_jobs=n_jobs,
priors=priors, configs=configs,
starting_values=starting_values,
truncation=truncation,
center=center,
scale=scale
)
self.sample(n_samples, n_jobs=n_jobs)
def _finalize(self):
# the degrees of freedom of the variance parameter is constant
self.state.Sigma2_an = self.state.N/2 + self.state.Sigma2_a0
...
def _iteration(self):
# computing the values needed to sample from the conditional posteriors
mean = spdot(X.T, spdot(self.PsiRhoi, X)) / Sigma2 + self.state.bmean0
...
...
```
I've organized the directories in this project into `both_levels`, `upper_level`, `lower_level`, and `hierarchical`, which contains some of the spatially-varying coefficient models & other models I'm working on that are unrelated to the multilevel variance components stuff.
Since most of the _iteration loop is the same between models, most of the models share the same sampling code, but customize the structure of the covariance in each level. These covariance variables are stored in the state.Psi_1, for the lower-level covariance, and state.Psi_2 for the upper-level covariance. Likewise, the precision functions are state.Psi_1i and state.Psi_2i.
For example:
End of explanation |
1,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
xarray with MetPy Tutorial
xarray <http://xarray.pydata.org/>_ is a powerful Python package that provides N-dimensional labeled arrays and datasets following the Common Data Model.
Step1: Getting Data
While xarray can handle a wide variety of n-dimensional data (essentially anything that can
be stored in a netCDF file), a common use case is working with model output. Such model
data can be obtained from a THREDDS Data Server using the siphon package, but for this
tutorial, we will use an example subset of GFS data from Hurricane Irma (September 5th,
2017).
Step2: Preparing Data
To make use of the data within MetPy, we need to parse the dataset for projection and
coordinate information following the CF conventions. For this, we use the
data.metpy.parse_cf() method, which will return a new, parsed DataArray or
Dataset.
Additionally, we rename our data variables for easier reference.
Step3: Units
MetPy's DataArray accessor has a unit_array property to obtain a pint.Quantity array
of just the data from the DataArray (metadata is removed) and a convert_units method to
convert the data from one unit to another (keeping it as a DataArray). For now, we'll
just use convert_units to convert our pressure coordinates to hPa.
Step4: Coordinates
You may have noticed how we directly accessed the vertical coordinates above using their
names. However, in general, if we are working with a particular DataArray, we don't have to
worry about that since MetPy is able to parse the coordinates and so obtain a particular
coordinate type directly. There are two ways to do this
Step5: Projections
Getting the cartopy coordinate reference system (CRS) of the projection of a DataArray is as
straightforward as using the data_var.metpy.cartopy_crs property
Step6: The cartopy Globe can similarly be accessed via the data_var.metpy.cartopy_globe
property
Step7: Calculations
Most of the calculations in metpy.calc will accept DataArrays by converting them
into their corresponding unit arrays. While this may often work without any issues, we must
keep in mind that because the calculations are working with unit arrays and not DataArrays
Step8: Also, a limited number of calculations directly support xarray DataArrays or Datasets (they
can accept and return xarray objects). Right now, this includes
Derivative functions
first_derivative
second_derivative
gradient
laplacian
Cross-section functions
cross_section_components
normal_component
tangential_component
absolute_momentum
More details can be found by looking at the documentation for the specific function of
interest.
There is also the special case of the helper function, grid_deltas_from_dataarray, which
takes a DataArray input, but returns unit arrays for use in other calculations. We could
rewrite the above geostrophic wind example using this helper function as follows
Step9: Plotting
Like most meteorological data, we want to be able to plot these data. DataArrays can be used
like normal numpy arrays in plotting code, which is the recommended process at the current
point in time, or we can use some of xarray's plotting functionality for quick inspection of
the data.
(More detail beyond the following can be found at xarray's plotting reference
<http | Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import xarray as xr
# Any import of metpy will activate the accessors
import metpy.calc as mpcalc
from metpy.testing import get_test_data
Explanation: xarray with MetPy Tutorial
xarray <http://xarray.pydata.org/>_ is a powerful Python package that provides N-dimensional
labeled arrays and datasets following the Common Data Model. While the process of integrating
xarray features into MetPy is ongoing, this tutorial demonstrates how xarray can be used
within the current version of MetPy. MetPy's integration primarily works through accessors
which allow simplified projection handling and coordinate identification. Unit and calculation
support is currently available in a limited fashion, but should be improved in future
versions.
End of explanation
# Open the netCDF file as a xarray Dataset
data = xr.open_dataset(get_test_data('irma_gfs_example.nc', False))
# View a summary of the Dataset
print(data)
Explanation: Getting Data
While xarray can handle a wide variety of n-dimensional data (essentially anything that can
be stored in a netCDF file), a common use case is working with model output. Such model
data can be obtained from a THREDDS Data Server using the siphon package, but for this
tutorial, we will use an example subset of GFS data from Hurricane Irma (September 5th,
2017).
End of explanation
# To parse the full dataset, we can call parse_cf without an argument, and assign the returned
# Dataset.
data = data.metpy.parse_cf()
# If we instead want just a single variable, we can pass that variable name to parse_cf and
# it will return just that data variable as a DataArray.
data_var = data.metpy.parse_cf('Temperature_isobaric')
# To rename variables, supply a dictionary between old and new names to the rename method
data.rename({
'Vertical_velocity_pressure_isobaric': 'omega',
'Relative_humidity_isobaric': 'relative_humidity',
'Temperature_isobaric': 'temperature',
'u-component_of_wind_isobaric': 'u',
'v-component_of_wind_isobaric': 'v',
'Geopotential_height_isobaric': 'height'
}, inplace=True)
Explanation: Preparing Data
To make use of the data within MetPy, we need to parse the dataset for projection and
coordinate information following the CF conventions. For this, we use the
data.metpy.parse_cf() method, which will return a new, parsed DataArray or
Dataset.
Additionally, we rename our data variables for easier reference.
End of explanation
data['isobaric1'].metpy.convert_units('hPa')
data['isobaric3'].metpy.convert_units('hPa')
Explanation: Units
MetPy's DataArray accessor has a unit_array property to obtain a pint.Quantity array
of just the data from the DataArray (metadata is removed) and a convert_units method to
convert the data from one unit to another (keeping it as a DataArray). For now, we'll
just use convert_units to convert our pressure coordinates to hPa.
End of explanation
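For illustration (this cell is not part of the original walkthrough), the unit_array property mentioned above can be inspected directly:
```python
# Pull the raw pint.Quantity out of a DataArray; coordinate metadata is dropped.
temperature_q = data['temperature'].metpy.unit_array
print(type(temperature_q), temperature_q.units)
```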
# Get multiple coordinates (for example, in just the x and y direction)
x, y = data['temperature'].metpy.coordinates('x', 'y')
# If we want to get just a single coordinate from the coordinates method, we have to use
# tuple unpacking because the coordinates method returns a generator
vertical, = data['temperature'].metpy.coordinates('vertical')
# Or, we can just get a coordinate from the property
time = data['temperature'].metpy.time
# To verify, we can inspect all their names
print([coord.name for coord in (x, y, vertical, time)])
Explanation: Coordinates
You may have noticed how we directly accessed the vertical coordinates above using their
names. However, in general, if we are working with a particular DataArray, we don't have to
worry about that since MetPy is able to parse the coordinates and so obtain a particular
coordinate type directly. There are two ways to do this:
Use the data_var.metpy.coordinates method
Use the data_var.metpy.x, data_var.metpy.y, data_var.metpy.vertical,
data_var.metpy.time properties
The valid coordinate types are:
x
y
vertical
time
(Both approaches and all four types are shown below)
End of explanation
data_crs = data['temperature'].metpy.cartopy_crs
print(data_crs)
Explanation: Projections
Getting the cartopy coordinate reference system (CRS) of the projection of a DataArray is as
straightforward as using the data_var.metpy.cartopy_crs property:
End of explanation
data_globe = data['temperature'].metpy.cartopy_globe
print(data_globe)
Explanation: The cartopy Globe can similarly be accessed via the data_var.metpy.cartopy_globe
property:
End of explanation
lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat, initstring=data_crs.proj4_init)
heights = data['height'].loc[time[0]].loc[{vertical.name: 500.}]
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)
Explanation: Calculations
Most of the calculations in metpy.calc will accept DataArrays by converting them
into their corresponding unit arrays. While this may often work without any issues, we must
keep in mind that because the calculations are working with unit arrays and not DataArrays:
The calculations will return unit arrays rather than DataArrays
Broadcasting must be taken care of outside of the calculation, as it would only recognize
dimensions by order, not name
As an example, we calculate geostrophic wind at 500 hPa below:
End of explanation
heights = data['height'].loc[time[0]].loc[{vertical.name: 500.}]
lat, lon = xr.broadcast(y, x)
f = mpcalc.coriolis_parameter(lat)
dx, dy = mpcalc.grid_deltas_from_dataarray(heights)
u_geo, v_geo = mpcalc.geostrophic_wind(heights, f, dx, dy)
print(u_geo)
print(v_geo)
Explanation: Also, a limited number of calculations directly support xarray DataArrays or Datasets (they
can accept and return xarray objects). Right now, this includes
Derivative functions
first_derivative
second_derivative
gradient
laplacian
Cross-section functions
cross_section_components
normal_component
tangential_component
absolute_momentum
More details can be found by looking at the documentation for the specific function of
interest.
There is also the special case of the helper function, grid_deltas_from_dataarray, which
takes a DataArray input, but returns unit arrays for use in other calculations. We could
rewrite the above geostrophic wind example using this helper function as follows:
End of explanation
# A very simple example example of a plot of 500 hPa heights
data['height'].loc[time[0]].loc[{vertical.name: 500.}].plot()
plt.show()
# Let's add a projection and coastlines to it
ax = plt.axes(projection=ccrs.LambertConformal())
ax._hold = True # Work-around for CartoPy 0.16/Matplotlib 3.0.0 incompatibility
data['height'].loc[time[0]].loc[{vertical.name: 500.}].plot(ax=ax, transform=data_crs)
ax.coastlines()
plt.show()
# Or, let's make a full 500 hPa map with heights, temperature, winds, and humidity
# Select the data for this time and level
data_level = data.loc[{vertical.name: 500., time.name: time[0]}]
# Create the matplotlib figure and axis
fig, ax = plt.subplots(1, 1, figsize=(12, 8), subplot_kw={'projection': data_crs})
# Plot RH as filled contours
rh = ax.contourf(x, y, data_level['relative_humidity'], levels=[70, 80, 90, 100],
colors=['#99ff00', '#00ff00', '#00cc00'])
# Plot wind barbs, but not all of them
wind_slice = slice(5, -5, 5)
ax.barbs(x[wind_slice], y[wind_slice],
data_level['u'].metpy.unit_array[wind_slice, wind_slice].to('knots'),
data_level['v'].metpy.unit_array[wind_slice, wind_slice].to('knots'),
length=6)
# Plot heights and temperature as contours
h_contour = ax.contour(x, y, data_level['height'], colors='k', levels=range(5400, 6000, 60))
h_contour.clabel(fontsize=8, colors='k', inline=1, inline_spacing=8,
fmt='%i', rightside_up=True, use_clabeltext=True)
t_contour = ax.contour(x, y, data_level['temperature'], colors='xkcd:deep blue',
levels=range(248, 276, 2), alpha=0.8, linestyles='--')
t_contour.clabel(fontsize=8, colors='xkcd:deep blue', inline=1, inline_spacing=8,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Add geographic features
ax.add_feature(cfeature.LAND.with_scale('50m'), facecolor=cfeature.COLORS['land'])
ax.add_feature(cfeature.OCEAN.with_scale('50m'), facecolor=cfeature.COLORS['water'])
ax.add_feature(cfeature.STATES.with_scale('50m'), edgecolor='#c7c783', zorder=0)
ax.add_feature(cfeature.LAKES.with_scale('50m'), facecolor=cfeature.COLORS['water'],
edgecolor='#c7c783', zorder=0)
# Set a title and show the plot
ax.set_title(('500 hPa Heights (m), Temperature (K), Humidity (%) at ' +
time[0].dt.strftime('%Y-%m-%d %H:%MZ')))
plt.show()
Explanation: Plotting
Like most meteorological data, we want to be able to plot these data. DataArrays can be used
like normal numpy arrays in plotting code, which is the recommended process at the current
point in time, or we can use some of xarray's plotting functionality for quick inspection of
the data.
(More detail beyond the following can be found at xarray's plotting reference
<http://xarray.pydata.org/en/stable/plotting.html>_.)
End of explanation |
1,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Renaming files based on EXIF info
Digital cameras generally name their files DSC00001.JPG or something silly like that. I prefer to have them named according to the shooting date first, and possibly some additional information thereafter so that they sort by time taken. Fortunately the EXIF information contains the shooting date, so it is just a matter of extracting it from the file and renaming it. There is a slight complication: some cameras can take more than one shot per second, but they typically only provide the EXIF information with second-granularity, so we have to deal with this.
Step1: Reading out the EXIF info and dealing with datetime reformatting
Step2: Renaming files | Python Code:
#!wget https://www.dropbox.com/s/-/DSC00005.JPG
#!cp DSC00005.JPG IMAGE.jpg
!cp IMAGE.jpg DSC00005.JPG
!ls -l *.JPG
Explanation: Renaming files based on EXIF info
Digital cameras generally name their files DSC00001.JPG or something silly like that. I prefer to have them named according to the shooting date first, and possibly some additional information thereafter so that they sort by time taken. Fortunately the EXIF information contains the shooting date, so it is just a matter of extracting it from the file and renaming it. There is a slight complication: some cameras can take more than one shot per second, but they typically only provide the EXIF information with second-granularity, so we have to deal with this.
End of explanation
from PIL import Image,ExifTags
from datetime import datetime as dt
import pytz
class exif:
def __init__(self, fn, tz=None, fmt_str=None):
self.fn = fn
if tz == None: tz = 'Europe/London'
self.timezone = pytz.timezone(tz)
if fmt_str == None: fmt_str = '%Y:%m:%d %H:%M:%S'
self.fmt_str = fmt_str
img = Image.open(fn)
self.data = {
ExifTags.TAGS[k]: v
for k, v in img._getexif().items()
if k in ExifTags.TAGS
}
dto = dt.strptime(self.data['DateTime'], fmt_str)
self.data['DateTime_o'] = self.timezone.localize(dto)
dto = dt.strptime(self.data['DateTimeOriginal'], fmt_str)
self.data['DateTimeOrignal_o'] = self.timezone.localize(dto)
dto = dt.strptime(self.data['DateTimeDigitized'], fmt_str)
self.data['DateTimeDigitized_o'] = self.timezone.localize(dto)
def time (self, fmt=None, which_time=None, tz=None):
if fmt == None: fmt = "%Y%m%d_%H%M%S"
if which_time == None: which_time = 'DateTime'
dto = self.data[which_time+"_o"];
if tz:
timezone = pytz.timezone(tz)
dto = timezone.normalize(dto.astimezone(timezone))
if fmt == False: return dto
return dto.strftime(fmt)
e = exif('DSC00005.JPG')
e.data['MakerNote'] = ""
e.data['UserComment'] = ""
e.data
type(e.time())
e.time()
Explanation: Reading out the EXIF info and dealing with datetime reformatting
End of explanation
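A quick illustration that is not in the original notebook: the tz argument of time() renders the same timestamp in another zone ('US/Pacific' is just an example zone).
```python
e.time(tz='US/Pacific')             # formatted string in another timezone
e.time(fmt=False, tz='US/Pacific')  # the underlying timezone-aware datetime
```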
import glob
def exifRename (fmt1=None, pattern=None, fmt=None, dryrun=False):
if fmt==None: fmt='%Y%m%d_%H%M%S_'
if pattern==None: pattern='DSC*.JPG'
files = glob.glob(pattern)
flist = []
for fn in files:
fn2 = 'test'
e = exif(fn)
fn2a = e.time(fmt)
if fmt1 == None: fn2 = fn2a + fn
else: fn2 = fn2a + fmt1
flist.append((fn, fn2))
if dryrun==True: return flist
for oldfn, newfn in flist:
!mv $oldfn $newfn
return flist
exifRename()
!ls -l *.JPG
Explanation: Renaming files
End of explanation |
1,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run the following
Step1: From official website ( http | Python Code:
#Run this from command line
!python -c "import this"
#Inside a python console
import this
Explanation: Run the following:
End of explanation
# execution semantics
for i in range(10):
print i**2
print "Outside for loop"
# dynamic binding
my_str = "hola"
type(my_str)
# dynamic binding
my_str = 90
type(my_str)
# fully dynamic typing
4 +5 +6
# fully dynamic typing
4 + "hola"
Explanation: From official website ( http://www.python.org/ ):
Python is a programming language that lets you work more quickly and integrate
your systems more effectively. You can learn to use Python and see almost
immediate gains in productivity and lower maintenance costs.
Executive summary from official website ( http://www.python.org/doc/essays/blurb.html )
Python is an interpreted, object-oriented, high-level programming language
with dynamic semantics. Its high-level built in data structures, combined
with dynamic typing and dynamic binding, make it very attractive for Rapid
Application Development, as well as for use as a scripting or glue language
to connect existing components together. Python's simple, easy to learn syntax
emphasizes readability and therefore reduces the cost of program maintenance.
Python supports modules and packages, which encourages program modularity and
code reuse. The Python interpreter and the extensive standard library are
available in source or binary form without charge for all major platforms,
and can be freely distributed.
TO SUM UP
quick development
simple, readable, easy to learn syntax
general purpose
interpreted (not compiled)
object-oriented
high-level
dynamic semantics (aka execution semantics)
fully dynamic typing
dynamic binding
low program maintenance cost
modularity and code reuse
no licensing costs
extensive standard library, "batteries included"
imperative and functional programming
automatic memory management
End of explanation |
1,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The following code is the same as before, but you can send the commands all in one go.
However, implicit waits are set on the driver so that it can finish AJAX requests and render the page elements;
also, you can use the find_element_by_xpath method.
Step1: let play with one of the row | Python Code:
# browser = webdriver.Firefox() #I only tested in firefox
# browser.get('http://costcotravel.com/Rental-Cars')
# browser.implicitly_wait(5)#wait for webpage download
# browser.find_element_by_id('pickupLocationTextWidget').send_keys("PHX");
# browser.implicitly_wait(5) #wait for the airport suggestion box to show
# browser.find_element_by_xpath('//li[@class="sayt-result"]').click()
# #click the airport suggestion box
# browser.find_element_by_xpath('//input[@id="pickupDateWidget"]').send_keys('08/27/2016')
# browser.find_element_by_xpath('//input[@id="dropoffDateWidget"]').send_keys('08/30/2016',Keys.RETURN)
# browser.find_element_by_xpath('//select[@id="pickupTimeWidget"]/option[@value="09:00 AM"]').click()
# browser.find_element_by_xpath('//select[@id="dropoffTimeWidget"]/option[@value="05:00 PM"]').click()
# browser.implicitly_wait(5) #wait for the clicks to be completed
# browser.find_element_by_link_text('SEARCH').click()
# #click the search box
# time.sleep(8) #wait for firefox to download and render the page
# n = browser.page_source #grab the html source code
type(n) #the site uses unicode
soup = BeautifulSoup(n,'lxml') #use BeautifulSoup to parse the source
print "--------------first 1000 characters:--------------\n"
print soup.prettify()[:1000]
print "\n--------------last 1000 characters:--------------"
print soup.prettify()[-1000:]
table = soup.find('div',{'class':'rentalCarTableDetails'}) #find the table
print "--------------first 1000 characters:--------------\n"
print table.prettify()[:1000]
print "\n--------------last 1000 characters:--------------"
print table.prettify()[-1000:]
tr = table.select('tr') #grab all the table rows
type(tr)
#let's look at the first three rows
for i in tr[0:3]:
print i.prettify()
print "-----------------------------------"
Explanation: The following code is the same as before, but you can send the commands all in one go.
However, implicit waits are set on the driver so that it can finish AJAX requests and render the page elements;
also, you can use the find_element_by_xpath method.
End of explanation
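As an aside (not in the original notebook), the fixed time.sleep(8) above can be swapped for an explicit wait that blocks only until the results table actually appears; this assumes the same browser object:
```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# wait up to 20 seconds for the results table to render, then grab the source
WebDriverWait(browser, 20).until(
    EC.presence_of_element_located((By.CLASS_NAME, 'rentalCarTableDetails')))
n = browser.page_source
```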
row = tr[3]
row.find('th',{'class':'tar'}).text.encode('utf-8')
row
row.contents[4].text #1. this is unicode, 2. the dollar sign is in the way
'Car' in 'Econ Car' #use this string logic to filter out unwanted data
rows = [i for i in tr if (('Price' not in i.contents[0].text and 'Fees' not in i.contents[0].text and 'Location' not in i.contents[0].text and i.contents[0].text !='') and len(i.contents[0].text)<30)]
# use this crazy list comprehension to get the data we want
#1. don't want the text 'Price' in the first column
#2. don't want the text 'Fee' in the first column
#3. don't want the text 'Location' in the first column
#4. the text length of first column must be less than 30 characters long
rows[0].contents[0].text #just exploring here...
rows[0].contents[4].text #need to get rid of the $....
rows[3].contents[0].text #need to make it utf-8
#process the data
prices = {}
for i in rows:
#print the 1st column text
print i.contents[0].text.encode('utf-8')
prices[i.contents[0].text.encode('utf-8')] = [i.contents[1].text.encode('utf-8'),i.contents[2].text.encode('utf-8'), i.contents[3].text.encode('utf-8'),i.contents[4].text.encode('utf-8')]
prices
iteritems = prices.iteritems()
#calling .iteritems() on a dictionary gives you an iterator that you can step through
iteritems.next() #run me five times
for name, priceList in prices.iteritems():
newPriceList = []
for i in priceList:
newPriceList.append(i.replace('$',''))
prices[name] = newPriceList
prices
data = pd.DataFrame.from_dict(prices, orient='index') #get a pandas DataFrame from the prices dictionary
data
data = data.replace('Not Available', numpy.nan) #replace the 'Not Available' data point to numpy.nan
data = data.apply(pd.to_numeric, errors='coerce') #cast each column to numeric data
data
data.columns= ['Alamo','Avis','Budget','Enterprise'] #set column names
data
data.notnull() #check for missing data
data.min(axis=1, skipna=True) #look at the cheapest car in each class
Explanation: Let's play with one of the rows
End of explanation |
1,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Routing Protocol Sessions and Policies
This category of questions reveals information regarding which routing
protocol sessions are compatibly configured and which ones are
established. It also allows you to analyze BGP routing policies.
BGP Session Compatibility
BGP Session Status
BGP Edges
OSPF Session Compatibility
OSPF Edges
Test Route Policies
Search Route Policies
Step1: BGP Session Compatibility
Returns the compatibility of configured BGP sessions.
Checks the settings of each configured BGP peering and reports any issue with those settings locally or incompatibility with its remote counterparts. Each row represents one configured BGP peering on a node and contains information about the session it is meant to establish. For dynamic peers, there is one row per compatible remote peer. Statuses that indicate independently misconfigured peerings include NO_LOCAL_AS, NO_REMOTE_AS, NO_LOCAL_IP (for eBGP single-hop peerings), LOCAL_IP_UNKNOWN_STATICALLY (for iBGP or eBGP multi-hop peerings), NO_REMOTE_IP (for point-to-point peerings), and NO_REMOTE_PREFIX (for dynamic peerings). INVALID_LOCAL_IP indicates that the peering's configured local IP does not belong to any active interface on the node; UNKNOWN_REMOTE indicates that the configured remote IP is not present in the network. A locally valid point-to-point peering is deemed HALF_OPEN if it has no compatible remote peers, UNIQUE_MATCH if it has exactly one compatible remote peer, or MULTIPLE_REMOTES if it has multiple compatible remote peers. A locally valid dynamic peering is deemed NO_MATCH_FOUND if it has no compatible remote peers, or DYNAMIC_MATCH if it has at least one compatible remote peer.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include sessions whose first node matches this specifier. | NodeSpec | True |
remoteNodes | Include sessions whose second node matches this specifier. | NodeSpec | True |
status | Only include sessions for which compatibility status matches this specifier. | BgpSessionCompatStatusSpec | True |
type | Only include sessions that match this specifier. | BgpSessionTypeSpec | True |
Invocation
Step2: Return Value
Name | Description | Type
--- | --- | ---
Node | The node where this session is configured | str
VRF | The VRF in which this session is configured | str
Local_AS | The local AS of the session | int
Local_Interface | Local interface of the session | Interface
Local_IP | The local IP of the session | str
Remote_AS | The remote AS or list of ASes of the session | str
Remote_Node | Remote node for this session | str
Remote_Interface | Remote interface for this session | Interface
Remote_IP | Remote IP or prefix for this session | str
Address_Families | Address Families participating in this session | Set of str
Session_Type | The type of this session | str
Configured_Status | Configured status | str
Print the first 5 rows of the returned Dataframe
Step3: Print the first row of the returned Dataframe
Step4: BGP Session Status
Returns the dynamic status of configured BGP sessions.
Checks whether configured BGP peerings can be established. Each row represents one configured BGP peering and contains information about the session it is configured to establish. For dynamic peerings, one row is shown per compatible remote peer. Possible statuses for each session are NOT_COMPATIBLE, ESTABLISHED, and NOT_ESTABLISHED. NOT_COMPATIBLE sessions are those where one or both peers are misconfigured; the BgpSessionCompatibility question provides further insight into the nature of the configuration error. NOT_ESTABLISHED sessions are those that are configured compatibly but will not come up because peers cannot reach each other (e.g., due to being blocked by an ACL). ESTABLISHED sessions are those that are compatible and are expected to come up.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include sessions whose first node matches this specifier. | NodeSpec | True |
remoteNodes | Include sessions whose second node matches this specifier. | NodeSpec | True |
status | Only include sessions for which status matches this specifier. | BgpSessionStatusSpec | True |
type | Only include sessions that match this specifier. | BgpSessionTypeSpec | True |
Invocation
Step5: Return Value
Name | Description | Type
--- | --- | ---
Node | The node where this session is configured | str
VRF | The VRF in which this session is configured | str
Local_AS | The local AS of the session | int
Local_Interface | Local interface of the session | Interface
Local_IP | The local IP of the session | str
Remote_AS | The remote AS or list of ASes of the session | str
Remote_Node | Remote node for this session | str
Remote_Interface | Remote interface for this session | Interface
Remote_IP | Remote IP or prefix for this session | str
Address_Families | Address Families participating in this session | Set of str
Session_Type | The type of this session | str
Established_Status | Established status | str
Print the first 5 rows of the returned Dataframe
Step6: Print the first row of the returned Dataframe
Step7: BGP Edges
Returns BGP adjacencies.
Lists all BGP adjacencies in the network.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include adjacencies whose first node matches this name or regex. | NodeSpec | True | .
remoteNodes | Include adjacencies whose second node matches this name or regex. | NodeSpec | True | .
Invocation
Step8: Return Value
Name | Description | Type
--- | --- | ---
Node | Node from which the edge originates | str
IP | IP at the side of originator | str
Interface | Interface at which the edge originates | str
AS_Number | AS Number at the side of originator | str
Remote_Node | Node at which the edge terminates | str
Remote_IP | IP at the side of the responder | str
Remote_Interface | Interface at which the edge terminates | str
Remote_AS_Number | AS Number at the side of responder | str
Print the first 5 rows of the returned Dataframe
Step9: Print the first row of the returned Dataframe
Step10: OSPF Session Compatibility
Returns compatible OSPF sessions.
Returns compatible OSPF sessions in the network. A session is compatible if the interfaces involved are not shutdown and do run OSPF, are not OSPF passive and are associated with the same OSPF area.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include nodes matching this name or regex. | NodeSpec | True |
remoteNodes | Include remote nodes matching this name or regex. | NodeSpec | True |
statuses | Only include sessions matching this status specifier. | OspfSessionStatusSpec | True |
Invocation
Step11: Return Value
Name | Description | Type
--- | --- | ---
Interface | Interface | Interface
VRF | VRF | str
IP | Ip | str
Area | Area | int
Remote_Interface | Remote Interface | Interface
Remote_VRF | Remote VRF | str
Remote_IP | Remote IP | str
Remote_Area | Remote Area | int
Session_Status | Status of the OSPF session | str
Print the first 5 rows of the returned Dataframe
Step12: Print the first row of the returned Dataframe
Step13: OSPF Edges
Returns OSPF adjacencies.
Lists all OSPF adjacencies in the network.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include adjacencies whose first node matches this name or regex. | NodeSpec | True | .
remoteNodes | Include edges whose second node matches this name or regex. | NodeSpec | True | .
Invocation
Step14: Return Value
Name | Description | Type
--- | --- | ---
Interface | Interface from which the edge originates | Interface
Remote_Interface | Interface at which the edge terminates | Interface
Print the first 5 rows of the returned Dataframe
Step15: Print the first row of the returned Dataframe
Step16: Test Route Policies
Evaluates the processing of a route by a given policy.
Find how the specified route is processed through the specified routing policies.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Only examine filters on nodes matching this specifier. | NodeSpec | True |
policies | Only consider policies that match this specifier. | RoutingPolicySpec | True |
inputRoutes | The BGP route announcements to test the policy on. | List of BgpRoute | False |
direction | The direction of the route, with respect to the device (IN/OUT). | str | False |
Invocation
Step17: Return Value
Name | Description | Type
--- | --- | ---
Node | The node that has the policy | str
Policy_Name | The name of this policy | str
Input_Route | The input route | BgpRoute
Action | The action of the policy on the input route | str
Output_Route | The output route, if any | BgpRoute
Difference | The difference between the input and output routes, if any | BgpRouteDiffs
Trace | Route policy trace that shows which clauses/terms matched the input route. If the trace is empty, either nothing matched or tracing is not yet been implemented for this policy type. This is an experimental feature whose content and format is subject to change. | List of TraceTree
Print the first 5 rows of the returned Dataframe
Step18: Print the first row of the returned Dataframe
Step19: Search Route Policies
Finds route announcements for which a route policy has a particular behavior.
This question finds route announcements for which a route policy has a particular behavior. The behaviors can be: that the policy permits the route (permit) or that it denies the route (deny).
Step20: Return Value
Name | Description | Type
--- | --- | ---
Node | The node that has the policy | str
Policy_Name | The name of this policy | str
Input_Route | The input route | BgpRoute
Action | The action of the policy on the input route | str
Output_Route | The output route, if any | BgpRoute
Difference | The difference between the input and output routes, if any | BgpRouteDiffs
Trace | Route policy trace that shows which clauses/terms matched the input route. If the trace is empty, either nothing matched or tracing is not yet been implemented for this policy type. This is an experimental feature whose content and format is subject to change. | List of TraceTree
Print the first 5 rows of the returned Dataframe
Step21: Print the first row of the returned Dataframe | Python Code:
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Explanation: Routing Protocol Sessions and Policies
This category of questions reveals information regarding which routing
protocol sessions are compatibly configured and which ones are
established. It also allows you to analyze BGP routing policies.
BGP Session Compatibility
BGP Session Status
BGP Edges
OSPF Session Compatibility
OSPF Edges
Test Route Policies
Search Route Policies
End of explanation
result = bf.q.bgpSessionCompatibility().answer().frame()
Explanation: BGP Session Compatibility
Returns the compatibility of configured BGP sessions.
Checks the settings of each configured BGP peering and reports any issue with those settings locally or incompatibility with its remote counterparts. Each row represents one configured BGP peering on a node and contains information about the session it is meant to establish. For dynamic peers, there is one row per compatible remote peer. Statuses that indicate independently misconfigured peerings include NO_LOCAL_AS, NO_REMOTE_AS, NO_LOCAL_IP (for eBGP single-hop peerings), LOCAL_IP_UNKNOWN_STATICALLY (for iBGP or eBGP multi-hop peerings), NO_REMOTE_IP (for point-to-point peerings), and NO_REMOTE_PREFIX (for dynamic peerings). INVALID_LOCAL_IP indicates that the peering's configured local IP does not belong to any active interface on the node; UNKNOWN_REMOTE indicates that the configured remote IP is not present in the network. A locally valid point-to-point peering is deemed HALF_OPEN if it has no compatible remote peers, UNIQUE_MATCH if it has exactly one compatible remote peer, or MULTIPLE_REMOTES if it has multiple compatible remote peers. A locally valid dynamic peering is deemed NO_MATCH_FOUND if it has no compatible remote peers, or DYNAMIC_MATCH if it has at least one compatible remote peer.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include sessions whose first node matches this specifier. | NodeSpec | True |
remoteNodes | Include sessions whose second node matches this specifier. | NodeSpec | True |
status | Only include sessions for which compatibility status matches this specifier. | BgpSessionCompatStatusSpec | True |
type | Only include sessions that match this specifier. | BgpSessionTypeSpec | True |
Invocation
End of explanation
result.head(5)
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Node | The node where this session is configured | str
VRF | The VRF in which this session is configured | str
Local_AS | The local AS of the session | int
Local_Interface | Local interface of the session | Interface
Local_IP | The local IP of the session | str
Remote_AS | The remote AS or list of ASes of the session | str
Remote_Node | Remote node for this session | str
Remote_Interface | Remote interface for this session | Interface
Remote_IP | Remote IP or prefix for this session | str
Address_Families | Address Families participating in this session | Set of str
Session_Type | The type of this session | str
Configured_Status | Configured status | str
Print the first 5 rows of the returned Dataframe
End of explanation
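As an illustrative follow-up (not part of the generated docs), the Configured_Status column makes it easy to keep only the peerings that need attention:
```python
# drop peerings whose configured status is a healthy match
healthy = ['UNIQUE_MATCH', 'DYNAMIC_MATCH']
problem_peers = result[~result['Configured_Status'].isin(healthy)]
problem_peers[['Node', 'VRF', 'Remote_Node', 'Configured_Status']]
```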
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Explanation: Print the first row of the returned Dataframe
End of explanation
result = bf.q.bgpSessionStatus().answer().frame()
Explanation: BGP Session Status
Returns the dynamic status of configured BGP sessions.
Checks whether configured BGP peerings can be established. Each row represents one configured BGP peering and contains information about the session it is configured to establish. For dynamic peerings, one row is shown per compatible remote peer. Possible statuses for each session are NOT_COMPATIBLE, ESTABLISHED, and NOT_ESTABLISHED. NOT_COMPATIBLE sessions are those where one or both peers are misconfigured; the BgpSessionCompatibility question provides further insight into the nature of the configuration error. NOT_ESTABLISHED sessions are those that are configured compatibly but will not come up because peers cannot reach each other (e.g., due to being blocked by an ACL). ESTABLISHED sessions are those that are compatible and are expected to come up.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include sessions whose first node matches this specifier. | NodeSpec | True |
remoteNodes | Include sessions whose second node matches this specifier. | NodeSpec | True |
status | Only include sessions for which status matches this specifier. | BgpSessionStatusSpec | True |
type | Only include sessions that match this specifier. | BgpSessionTypeSpec | True |
Invocation
End of explanation
result.head(5)
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Node | The node where this session is configured | str
VRF | The VRF in which this session is configured | str
Local_AS | The local AS of the session | int
Local_Interface | Local interface of the session | Interface
Local_IP | The local IP of the session | str
Remote_AS | The remote AS or list of ASes of the session | str
Remote_Node | Remote node for this session | str
Remote_Interface | Remote interface for this session | Interface
Remote_IP | Remote IP or prefix for this session | str
Address_Families | Address Families participating in this session | Set of str
Session_Type | The type of this session | str
Established_Status | Established status | str
Print the first 5 rows of the returned Dataframe
End of explanation
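Again purely as an illustration: compatible sessions that failed to come up can be isolated with a simple filter on Established_Status.
```python
not_established = result[result['Established_Status'] == 'NOT_ESTABLISHED']
not_established[['Node', 'VRF', 'Remote_Node', 'Remote_IP']]
```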
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Explanation: Print the first row of the returned Dataframe
End of explanation
result = bf.q.bgpEdges().answer().frame()
Explanation: BGP Edges
Returns BGP adjacencies.
Lists all BGP adjacencies in the network.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include adjacencies whose first node matches this name or regex. | NodeSpec | True | .
remoteNodes | Include adjacencies whose second node matches this name or regex. | NodeSpec | True | .
Invocation
End of explanation
result.head(5)
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Node | Node from which the edge originates | str
IP | IP at the side of originator | str
Interface | Interface at which the edge originates | str
AS_Number | AS Number at the side of originator | str
Remote_Node | Node at which the edge terminates | str
Remote_IP | IP at the side of the responder | str
Remote_Interface | Interface at which the edge terminates | str
Remote_AS_Number | AS Number at the side of responder | str
Print the first 5 rows of the returned Dataframe
End of explanation
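As an aside, the edge rows drop straight into a graph object; networkx is an assumption here, not something these docs require.
```python
import networkx as nx

bgp_graph = nx.from_pandas_edgelist(result, source='Node', target='Remote_Node')
print(bgp_graph.number_of_nodes(), bgp_graph.number_of_edges())
```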
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Explanation: Print the first row of the returned Dataframe
End of explanation
result = bf.q.ospfSessionCompatibility().answer().frame()
Explanation: OSPF Session Compatibility
Returns compatible OSPF sessions.
Returns compatible OSPF sessions in the network. A session is compatible if the interfaces involved are not shutdown and do run OSPF, are not OSPF passive and are associated with the same OSPF area.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include nodes matching this name or regex. | NodeSpec | True |
remoteNodes | Include remote nodes matching this name or regex. | NodeSpec | True |
statuses | Only include sessions matching this status specifier. | OspfSessionStatusSpec | True |
Invocation
End of explanation
result.head(5)
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Interface | Interface | Interface
VRF | VRF | str
IP | Ip | str
Area | Area | int
Remote_Interface | Remote Interface | Interface
Remote_VRF | Remote VRF | str
Remote_IP | Remote IP | str
Remote_Area | Remote Area | int
Session_Status | Status of the OSPF session | str
Print the first 5 rows of the returned Dataframe
End of explanation
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Explanation: Print the first row of the returned Dataframe
End of explanation
result = bf.q.ospfEdges().answer().frame()
Explanation: OSPF Edges
Returns OSPF adjacencies.
Lists all OSPF adjacencies in the network.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include adjacencies whose first node matches this name or regex. | NodeSpec | True | .
remoteNodes | Include edges whose second node matches this name or regex. | NodeSpec | True | .
Invocation
End of explanation
result.head(5)
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Interface | Interface from which the edge originates | Interface
Remote_Interface | Interface at which the edge terminates | Interface
Print the first 5 rows of the returned Dataframe
End of explanation
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Explanation: Print the first row of the returned Dataframe
End of explanation
result = bf.q.testRoutePolicies(policies='/as1_to_/', direction='in', inputRoutes=list([BgpRoute(network='10.0.0.0/24', originatorIp='4.4.4.4', originType='egp', protocol='bgp', asPath=[[64512, 64513], [64514]], communities=['64512:42', '64513:21'])])).answer().frame()
Explanation: Test Route Policies
Evaluates the processing of a route by a given policy.
Find how the specified route is processed through the specified routing policies.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Only examine filters on nodes matching this specifier. | NodeSpec | True |
policies | Only consider policies that match this specifier. | RoutingPolicySpec | True |
inputRoutes | The BGP route announcements to test the policy on. | List of BgpRoute | False |
direction | The direction of the route, with respect to the device (IN/OUT). | str | False |
Invocation
End of explanation
result.head(5)
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Node | The node that has the policy | str
Policy_Name | The name of this policy | str
Input_Route | The input route | BgpRoute
Action | The action of the policy on the input route | str
Output_Route | The output route, if any | BgpRoute
Difference | The difference between the input and output routes, if any | BgpRouteDiffs
Trace | Route policy trace that shows which clauses/terms matched the input route. If the trace is empty, either nothing matched or tracing is not yet been implemented for this policy type. This is an experimental feature whose content and format is subject to change. | List of TraceTree
Print the first 5 rows of the returned Dataframe
End of explanation
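Illustrative only: selecting the columns described above shows, per policy, whether the test announcement was permitted and how it was rewritten.
```python
# per-policy verdict for the tested route, plus what the policy changed
result[['Node', 'Policy_Name', 'Action', 'Difference']]
```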
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Explanation: Print the first row of the returned Dataframe
End of explanation
result = bf.q.searchRoutePolicies(nodes='/^as1/', policies='/as1_to_/', inputConstraints=BgpRouteConstraints(prefix=["10.0.0.0/8:8-32", "172.16.0.0/28:28-32", "192.168.0.0/16:16-32"]), action='permit').answer().frame()
Explanation: Search Route Policies
Finds route announcements for which a route policy has a particular behavior.
This question finds route announcements for which a route policy has a particular behavior. The behaviors can be: that the policy permits the route (permit) or that it denies the route (deny). Constraints can be imposed on the input route announcements of interest and, in the case of a permit action, also on the output route announcements of interest. Route policies are selected using node and policy specifiers, which might match multiple policies. In this case, a (possibly different) answer will be found for each policy. Note: This question currently does not support all of the route policy features that Batfish supports. The question only supports common forms of matching on prefixes, communities, and AS-paths, as well as common forms of setting communities, the local preference, and the metric. The question logs all unsupported features that it encounters as warnings. Due to unsupported features, it is possible for the question to return no answers even for route policies that can in fact exhibit the specified behavior.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Only examine policies on nodes matching this specifier. | NodeSpec | True |
policies | Only consider policies that match this specifier. | RoutingPolicySpec | True |
inputConstraints | Constraints on the set of input BGP route announcements to consider. | BgpRouteConstraints | True |
action | The behavior to be evaluated. Specify exactly one of permit or deny. | str | True |
outputConstraints | Constraints on the set of output BGP route announcements to consider. | BgpRouteConstraints | True |
Invocation
End of explanation
result.head(5)
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Node | The node that has the policy | str
Policy_Name | The name of this policy | str
Input_Route | The input route | BgpRoute
Action | The action of the policy on the input route | str
Output_Route | The output route, if any | BgpRoute
Difference | The difference between the input and output routes, if any | BgpRouteDiffs
Trace | Route policy trace that shows which clauses/terms matched the input route. If the trace is empty, either nothing matched or tracing is not yet been implemented for this policy type. This is an experimental feature whose content and format is subject to change. | List of TraceTree
Print the first 5 rows of the returned Dataframe
End of explanation
result.iloc[0]
Explanation: Print the first row of the returned Dataframe
End of explanation |
1,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to NumPy
This notebook demonstrates the limitations of Python's built-in data types in executing some scientific analyses.
Source
Step1: If we assume body mass index (BMI) = weight / height ** 2, what would it take to compute BMI for our data?
Step2: The above attempt raises an error because we can't do this with lists.<br>
The only way around this is to iterate through each item in the lists...
Step3: However with NumPy, we have access to more data types, specifically arrays, that can speed through this process.
Step4: NumPy arrays allow us to do computations on entire collections... | Python Code:
#Create a list of heights and weights
height = [1.73, 1.68, 1.17, 1.89, 1.79]
weight = [65.4, 59.2, 63.6, 88.4, 68.7]
print height
print weight
Explanation: Intro to NumPy
This notebook demonstrates the limitations of Python's built-in data types in executing some scientific analyses.
Source: https://campus.datacamp.com/courses/intro-to-python-for-data-science
First, let's create a dummy dataset of heights and weights for 5 imaginary people.
End of explanation
#[Attempt to] compute BMI from lists
bmi = weight/height ** 2
Explanation: If we assume body mass index (BMI) = weight / height ** 2, what would it take to compute BMI for our data?
End of explanation
#Compute BMI from lists
bmi = []
for idx in range(len(height)):
bmi.append(weight[idx] / height[idx] ** 2)
print bmi
Explanation: The above attempt raises an error because we can't do this with lists.<br>
The only way around this is to iterate through each item in the lists...
End of explanation
#Import numpy, often done using the alias 'np'
import numpy as np
#Convert the height and weight lists to arrays
arrHeight = np.array(height)
arrWeight = np.array(weight)
print arrHeight
print arrWeight
Explanation: However with NumPy, we have access to more data types, specifically arrays, that can speed through this process.
End of explanation
arrBMI = arrWeight / arrHeight ** 2
print arrBMI
Explanation: NumPy arrays allow us to do computations on entire collections...
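For contrast, a quick sketch of how the same "+" operation behaves on a list versus an array (the names below are illustrative, not part of the original notebook):
```
import numpy as np

a_list = [1.73, 1.68]
an_array = np.array(a_list)

print(a_list + a_list)      # "+" concatenates lists: [1.73, 1.68, 1.73, 1.68]
print(an_array + an_array)  # "+" adds arrays element-wise: [3.46 3.36]
```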
End of explanation |
1,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Process Regression (GPR)
Step1: Single point prediction with GPR
We won't use optimized kernel hyperparameters here; we will only guess some values and predict the target for a single point.
Step2: Next we will predict 100 points
Step3: GPR parameter optimization
Our last model doesn't fit the data that well, and that is because the kernel parameters aren't adjusted to the data.
To address this problem we will determine the best hyperparameters of our GP. We will use a MAP estimate
Step4: With current optimized parameters we will check our GP again | Python Code:
# numpy is needed in this cell; matplotlib and scipy's fmin are used by the later cells of this notebook
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fmin

def get_kernel(X1,X2,sigmaf,l,sigman):
k = lambda x1,x2,sigmaf,l,sigman:(sigmaf**2)*np.exp(-(1/float(2*(l**2)))*np.dot((x1-x2),(x1-x2).T)) + (sigman**2);
K = np.zeros((X1.shape[0],X2.shape[0]))
for i in range(0,X1.shape[0]):
for j in range(0,X2.shape[0]):
if i==j:
K[i,j] = k(X1[i,:],X2[j,:],sigmaf,l,sigman);
else:
K[i,j] = k(X1[i,:],X2[j,:],sigmaf,l,0);
return K
Explanation: Gaussian Process Regression (GPR):
We have noisy sensor readings (indicated by error bars) and we want to predict readings at desired new points. In a GP, every new point adds a dimension to an infinite-dimensional multivariate Gaussian distribution (usually with zero mean), in which the covariance between points is defined by the kernel function.
In other words, in a GP we can have an infinite number of random variables such that any finite subset of them is jointly Gaussian. A GP can therefore be viewed as a distribution over functions, where the value of each function at a given point is a Gaussian RV.
By fitting a GP to our n training points (observations) we can get them nearly back with a single sample from an n-dimensional GP.
Train set points:
$$\mathbf{x} = \begin{bmatrix}
x_1 & x_2 & \cdots & x_n
\end{bmatrix}$$
Test set points:
$$\mathbf{x}_* = \begin{bmatrix}
x_{*1} & x_{*2} & \cdots & x_{*m}
\end{bmatrix}$$
Kernel function with integrated reading noise:
$$ k(x,x') = \sigma_f^2 e^{\frac{-(x-x')^2}{2l^2}} + \sigma_n^2\delta(x,x')$$
and then our GP kernel will read:
\begin{equation}
\begin{bmatrix}
\mathbf{y}\\
\mathbf{y}_*
\end{bmatrix}
\sim N\Bigl(
0,\begin{bmatrix}
\mathbf{K} & \mathbf{K}_*^T \\
\mathbf{K}_* & \mathbf{K}_{**}
\end{bmatrix}
\Bigr)
\end{equation}
where
$$\mathbf{K} = \begin{bmatrix}
k(x_1,x_1) & k(x_1,x_2) & \cdots & k(x_1,x_n) \\
k(x_2,x_1) & k(x_2,x_2) & \cdots & k(x_2,x_n) \\
\vdots & \vdots & \ddots & \vdots \\
k(x_n,x_1) & k(x_n,x_2) & \cdots & k(x_n,x_n)
\end{bmatrix}$$
$$\mathbf{K}_* = \begin{bmatrix}
k(x_{*1},x_1) & k(x_{*1},x_2) & \cdots & k(x_{*1},x_n) \\
k(x_{*2},x_1) & k(x_{*2},x_2) & \cdots & k(x_{*2},x_n) \\
\vdots & \vdots & \ddots & \vdots \\
k(x_{*m},x_1) & k(x_{*m},x_2) & \cdots & k(x_{*m},x_n)
\end{bmatrix}$$
$$\mathbf{K}_{**} = \begin{bmatrix}
k(x_{*1},x_{*1}) & k(x_{*1},x_{*2}) & \cdots & k(x_{*1},x_{*m}) \\
k(x_{*2},x_{*1}) & k(x_{*2},x_{*2}) & \cdots & k(x_{*2},x_{*m}) \\
\vdots & \vdots & \ddots & \vdots \\
k(x_{*m},x_{*1}) & k(x_{*m},x_{*2}) & \cdots & k(x_{*m},x_{*m})
\end{bmatrix}$$
Next, for prediction we are interested in the conditional probability of $y_*$ given the data, which will also follow a Gaussian distribution according to the equation below:
$$y_* \vert y \sim N\Bigl(K_*K^{-1}y,\;K_{**}-K_*K^{-1}K_*^T\Bigr)$$
the mean will be our best estimate and the variance will indicate our uncertainty.
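A minimal sketch of how the two expressions above map onto code (assuming the get_kernel function and the training arrays x, y with hyperparameters sigmaf, l, sigman from the code cells of this notebook; x_star stands for the test inputs, which the later cells call x_predict):
```
K = get_kernel(x, x, sigmaf, l, sigman)                # K: train vs. train
K_s = get_kernel(x_star, x, sigmaf, l, 0)              # K_*: test vs. train
K_ss = get_kernel(x_star, x_star, sigmaf, l, sigman)   # K_**: test vs. test

K_inv = np.linalg.inv(K)
posterior_mean = K_s.dot(K_inv).dot(y)                 # best estimate
posterior_cov = K_ss - K_s.dot(K_inv).dot(K_s.T)       # uncertainty
```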
We will define kernel function here:
End of explanation
n_pts = 1
x = np.array([-1.2, -1., -0.8, -0.6, -.4, -0.2, 0.0, 0.2, 0.4, 0.6],ndmin=2).T
y = np.array([-2, -1, -0.5, -0.25, 0.5, 0.4, 0.0, 1.2, 1.7, 1.4],ndmin=2).T
x_predict = np.array([0.8,]).reshape(n_pts,1)
sigman = 0.1; # noise of the reading
sigmaf = 1.1; # parameters of the GP - next to be computed by optimization
l = 0.2; #lenght-scale of our GP with squared exponential kernel
K = get_kernel(x, x, sigmaf, l, sigman) #+ np.finfo(float).eps*np.identity(x.size) # numerically stable
K_s = get_kernel(x_predict, x, sigmaf, l, 0)
K_ss = get_kernel(x_predict, x_predict, sigmaf, l, sigman)
y_predict_mean = np.dot(np.dot(K_s,np.linalg.inv(K)),y).reshape(n_pts,1)
y_predict_var = np.diag(K_ss - np.dot(K_s,(np.dot(np.linalg.inv(K),K_s.T)))).reshape(n_pts,1)
plt.errorbar(x[:,0], y[:,0], sigman*np.ones_like(y),linestyle='None',marker = '.')
plt.errorbar(x_predict[:,0], y_predict_mean[:,0], y_predict_var[:,0], linestyle='None',marker = '.')
plt.xlabel('x');plt.ylabel('y');plt.title('single point prediction')
plt.show()
y_predict_var.shape
Explanation: Single point prediction with GPR
We won't use optimized kernel hyperparameters here; we will only guess some values and predict the target for a single point.
End of explanation
n_pts = 100
sigmaf=.1; l=0.5;
x_predict = np.linspace(-1.7,1,n_pts).reshape(n_pts,1)
K = get_kernel(x, x, sigmaf, l, sigman) #+ np.finfo(float).eps*np.identity(x.size) # numerically stable
K_s = get_kernel(x_predict, x, sigmaf, l, 0)
K_ss = get_kernel(x_predict, x_predict, sigmaf, l, sigman)
y_predict_mean = np.dot(np.dot(K_s,np.linalg.inv(K)),y).reshape(n_pts,1)
y_predict_var = np.diag(K_ss - np.dot(K_s,(np.dot(np.linalg.inv(K),K_s.T)))).reshape(n_pts,1)
plt.errorbar(x[:,0], y[:,0], sigman*np.ones_like(y),linestyle='None',marker = '.')
plt.errorbar(x_predict[:,0], y_predict_mean[:,0], y_predict_var[:,0], linestyle='None',marker = '.')
plt.xlabel('x');plt.ylabel('y');plt.title('multiple prediction with non-optimized hyperparamters');
plt.show()
Explanation: Next we will predict 100 points:
End of explanation
p = [1, 0.1]  # initial guess for [sigmaf, l]
# negative log marginal likelihood of the GP as a function of the kernel hyperparameters (minimized below)
fun = lambda p: 0.5*(np.dot(y.T,np.dot(np.linalg.inv(get_kernel(x,x,p[0],p[1],sigman)),y)) + np.log(np.linalg.det(get_kernel(x,x,p[0],p[1],sigman))) + x.shape[0]*np.log(2*np.pi));
p = fmin(func=fun, x0=p)
sigmaf, l = p;
print sigmaf,l
Explanation: GPR parameter optimization
Our last model doesn't fit the data that well, and that is because the kernel parameters aren't adjusted to the data.
To address this problem we will determine the best hyperparameters of our GP by maximizing the (log) marginal likelihood, i.e., using a MAP-style point estimate:
$$\log p(y \vert x,\theta) = -\frac{1}{2}\left(y^T K^{-1} y + \log\det(K) + n\log 2\pi\right)$$
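For readability, the dense lambda in the cell above can also be written out as an explicit function; this is only an equivalent sketch (the name neg_log_marginal_likelihood is not used elsewhere in the notebook), assuming x, y, sigman and get_kernel from the earlier cells:
```
def neg_log_marginal_likelihood(params):
    sigmaf, l = params
    K = get_kernel(x, x, sigmaf, l, sigman)
    n = x.shape[0]
    # fmin minimizes, so we return the *negative* of the log marginal likelihood above
    return 0.5 * (y.T.dot(np.linalg.inv(K)).dot(y)
                  + np.log(np.linalg.det(K))
                  + n * np.log(2 * np.pi))
```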
End of explanation
K = get_kernel(x, x, sigmaf, l, sigman) #+ np.finfo(float).eps*np.identity(x.size) # numerically stable
K_s = get_kernel(x_predict, x, sigmaf, l, 0)
K_ss = get_kernel(x_predict, x_predict, sigmaf, l, sigman)
y_predict_mean = np.dot(np.dot(K_s,np.linalg.inv(K)),y).reshape(n_pts,1)
y_predict_var = np.diag(K_ss - np.dot(K_s,(np.dot(np.linalg.inv(K),K_s.T)))).reshape(n_pts,1)
plt.errorbar(x[:,0], y[:,0], sigman*np.ones_like(y),linestyle='None',marker = '.')
plt.errorbar(x_predict[:,0], y_predict_mean[:,0], y_predict_var[:,0], linestyle='None',marker = '.')
plt.xlabel('x');plt.ylabel('y');plt.title('multiple prediction with MAP hyperparamters')
plt.show()
Explanation: With the current optimized parameters we will check our GP again:
End of explanation |
1,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A New Hope (for integrating functions)
Names of group members
// put your names here!
Goals of this assignment
The main goal of this assignment is to use https
Step1: Part 2
A torus that is radially symmetric about the z-axis (think of a donut pierced by the x-y plane) can be described by the equation
Step3: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! | Python Code:
# Put your code here!
Explanation: A New Hope (for integrating functions)
Names of group members
// put your names here!
Goals of this assignment
The main goal of this assignment is to use https://en.wikipedia.org/wiki/Monte_Carlo_integration - a technique for numerical integration that uses random numbers to compute the value of a definite integral. Monte Carlo integration works well for one-dimensional functions, but is especially helpful for higher-dimensional integrals or complicated functions.
Part 1
Write a function that uses Monte Carlo integration to $f(x) = 2 x^2 + 3$ from $x_{beg}= -2$ to $x_{end} = +4$. The analytic answer is:
$\int_{-2}^{4} (2x^2 + 3) dx = \left. \frac{2}{3}x^3 + 3x \right|_{-2}^4 = 66$
As you increase the number of samples ($N_{sample}$) from 10 to $10^6$, how does your calculated solution approach the true answer? In other words, calculate the fractional error defined as $\epsilon = |\frac{I - T}{T}|$, where I is the integrated answer, T is the true (i.e., analytic) answer, and the vertical bars denote that you take the absolute value. This gives you the fractional difference between your integrated answer and the true answer.
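One possible way to set this up is sketched below; the function and variable names are illustrative only and nothing here is prescribed by the assignment:
```
import numpy as np

def mc_integrate(f, a, b, n_samples):
    # Monte Carlo estimate: average of f at uniform random points, times the interval length
    x = np.random.uniform(a, b, n_samples)
    return (b - a) * np.mean(f(x))

true_value = 66.0
for n in [10, 100, 1000, 10**4, 10**5, 10**6]:
    estimate = mc_integrate(lambda x: 2.0*x**2 + 3.0, -2.0, 4.0, n)
    print(n, estimate, abs((estimate - true_value) / true_value))
```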
End of explanation
# Put your code here!
Explanation: Part 2
A torus that is radially symmetric about the z-axis (think of a donut pierced by the x-y plane) can be described by the equation:
$\large( R - \sqrt{x^2 + y^2} \large)^2 + z^2 = r^2$
where R is the distance from the center of the tube to the center of the torus, and r is the radius of the tube (with the 'tube' meaning the tasty baked part of the donut). Assuming that $R = 12$ cm, $r = 8$ cm, and $\rho_{donut} = 0.8$ g cm$^{-3}$, use Monte Carlo integration to calculate the mass of this excessively large donut. Note that for the situation described here, a point (x,y,z) is inside of the tasty cake part of the donut when:
$\large( R - \sqrt{x^2 + y^2} \large)^2 + z^2 < r^2$
(Try testing this relation in the x-y plane to see that it is true.) Assume that the donut is of uniform density and that the mass of the icing can be neglected. You can use the formulae shown in the Wikipedia page linked above to get the analytic answer. Run the test several times, both repeatedly with the same number of samples and with different numbers of samples. How many points do you have to use to get an answer that converges to within 1%? What about 0.1%?
Hint: does the box that encompasses the donut have to be a cube? I.e., when calculating this problem, what is the minimum practical bounding box that can be described simply and which fully encompasses the donut?
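One possible approach, sketched only as an illustration (the bounding box and names below are a choice, not a requirement): sample points uniformly in a box with $|x|, |y| \le R + r$ and $|z| \le r$, count the fraction that satisfies the inequality above, and multiply by the box volume and the density.
```
import numpy as np

R, r, rho = 12.0, 8.0, 0.8   # cm, cm, g/cm^3

def torus_mass(n_samples):
    x = np.random.uniform(-(R + r), R + r, n_samples)
    y = np.random.uniform(-(R + r), R + r, n_samples)
    z = np.random.uniform(-r, r, n_samples)
    inside = (R - np.sqrt(x**2 + y**2))**2 + z**2 < r**2
    box_volume = (2.0 * (R + r))**2 * (2.0 * r)
    return rho * box_volume * inside.mean()

print(torus_mass(10**6))
```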
End of explanation
from IPython.display import HTML
HTML(
<iframe
src="https://goo.gl/forms/NOKKHPQ0oKn1B7e23?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
)
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation |
1,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Most people find target leakage very tricky until they've thought about it for a long time.
So, before trying to think about leakage in the housing price example, we'll go through a few examples in other applications. Things will feel more familiar once you come back to a question about house prices.
Setup
The questions below will give you feedback on your answers. Run the following cell to set up the feedback system.
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5 | Python Code:
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex7 import *
print("Setup Complete")
Explanation: Most people find target leakage very tricky until they've thought about it for a long time.
So, before trying to think about leakage in the housing price example, we'll go through a few examples in other applications. Things will feel more familiar once you come back to a question about house prices.
Setup
The questions below will give you feedback on your answers. Run the following cell to set up the feedback system.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_1.check()
Explanation: Step 1: The Data Science of Shoelaces
Nike has hired you as a data science consultant to help them save money on shoe materials. Your first assignment is to review a model one of their employees built to predict how many shoelaces they'll need each month. The features going into the machine learning model include:
- The current month (January, February, etc)
- Advertising expenditures in the previous month
- Various macroeconomic features (like the unemployment rate) as of the beginning of the current month
- The amount of leather they ended up using in the current month
The results show the model is almost perfectly accurate if you include the feature about how much leather they used. But it is only moderately accurate if you leave that feature out. You realize this is because the amount of leather they use is a perfect indicator of how many shoes they produce, which in turn tells you how many shoelaces they need.
Do you think the leather used feature constitutes a source of data leakage? If your answer is "it depends," what does it depend on?
After you have thought about your answer, check it against the solution below.
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_2.check()
Explanation: Step 2: Return of the Shoelaces
You have a new idea. You could use the amount of leather Nike ordered (rather than the amount they actually used) leading up to a given month as a predictor in your shoelace model.
Does this change your answer about whether there is a leakage problem? If you answer "it depends," what does it depend on?
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_3.check()
Explanation: Step 3: Getting Rich With Cryptocurrencies?
You saved Nike so much money that they gave you a bonus. Congratulations.
Your friend, who is also a data scientist, says he has built a model that will let you turn your bonus into millions of dollars. Specifically, his model predicts the price of a new cryptocurrency (like Bitcoin, but a newer one) one day ahead of the moment of prediction. His plan is to purchase the cryptocurrency whenever the model says the price of the currency (in dollars) is about to go up.
The most important features in his model are:
- Current price of the currency
- Amount of the currency sold in the last 24 hours
- Change in the currency price in the last 24 hours
- Change in the currency price in the last 1 hour
- Number of new tweets in the last 24 hours that mention the currency
The value of the cryptocurrency in dollars has fluctuated up and down by over $\$$100 in the last year, and yet his model's average error is less than $\$$1. He says this is proof his model is accurate, and you should invest with him, buying the currency whenever the model says it is about to go up.
Is he right? If there is a problem with his model, what is it?
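Before settling on an answer, it can help to see how little a small average error means for a slowly drifting price series. The toy numbers below are purely illustrative and not part of the exercise:
```
import numpy as np

np.random.seed(0)
price = 100 + np.cumsum(np.random.normal(0, 0.5, 365))  # synthetic daily prices
persistence = price[:-1]                                 # predict "tomorrow = today"
print(np.abs(persistence - price[1:]).mean())            # average error far below $1
```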
End of explanation
# Check your answer (Run this code cell to receive credit!)
q_4.check()
Explanation: Step 4: Preventing Infections
An agency that provides healthcare wants to predict which patients from a rare surgery are at risk of infection, so it can alert the nurses to be especially careful when following up with those patients.
You want to build a model. Each row in the modeling dataset will be a single patient who received the surgery, and the prediction target will be whether they got an infection.
Some surgeons may do the procedure in a manner that raises or lowers the risk of infection. But how can you best incorporate the surgeon information into the model?
You have a clever idea.
1. Take all surgeries by each surgeon and calculate the infection rate among those surgeons.
2. For each patient in the data, find out who the surgeon was and plug in that surgeon's average infection rate as a feature.
Does this pose any target leakage issues?
Does it pose any train-test contamination issues?
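To make the mechanics of the proposed feature concrete, here is a tiny pandas illustration (hypothetical column names, not part of the exercise code):
```
import pandas as pd

df = pd.DataFrame({
    "surgeon":   ["A", "A", "A", "B", "B"],
    "infection": [ 1,   0,   0,   1,   1 ],
})

# The idea as stated: every row receives its surgeon's average infection rate,
# computed over all rows for that surgeon -- including the row itself.
df["surgeon_rate"] = df.groupby("surgeon")["infection"].transform("mean")
print(df)
```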
End of explanation
# Fill in the line below with one of 1, 2, 3 or 4.
potential_leakage_feature = ____
# Check your answer
q_5.check()
#%%RM_IF(PROD)%%
potential_leakage_feature = 1
q_5.assert_check_failed()
#%%RM_IF(PROD)%%
potential_leakage_feature = 2
q_5.assert_check_passed()
#_COMMENT_IF(PROD)_
q_5.hint()
#_COMMENT_IF(PROD)_
q_5.solution()
Explanation: Step 5: Housing Prices
You will build a model to predict housing prices. The model will be deployed on an ongoing basis, to predict the price of a new house when a description is added to a website. Here are four features that could be used as predictors.
1. Size of the house (in square meters)
2. Average sales price of homes in the same neighborhood
3. Latitude and longitude of the house
4. Whether the house has a basement
You have historic data to train and validate the model.
Which of the features is most likely to be a source of leakage?
End of explanation |
1,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation - Using Tensorboard
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation - Using Tensorboard
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
counts = Counter(text)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {w:i for i, w in enumerate(vocab)}
int_to_vocab = {i:w for i, w in enumerate(vocab)}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
known_tokens = {".":"||period||",
",":"||comma||",
"\"":"||quotation_mrk||",
";":"||semicolon||",
"!":"||exclamation_mrk||",
"?":"||question_mrk||",
"(":"||l-parentesis||",
")":"||r-parentesis||",
"--":"||dash||",
"\n":"||nl||"
}
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return known_tokens
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates the symbols into their own words, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input_ = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name="output")
learning_rate = tf.placeholder(tf.float32, name="lr")
return input_, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
lstm_layers = 2
keep_prob = 0.7
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
global lstm_layers, keep_prob
#with tf.name_scope("init_cell"):
base_cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
base_dropout = tf.contrib.rnn.DropoutWrapper(base_cell, output_keep_prob=keep_prob)
layers = tf.contrib.rnn.MultiRNNCell([base_dropout] * lstm_layers)
initial_state = layers.zero_state(batch_size, tf.float32)
return layers, tf.identity(initial_state, "initial_state")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
with tf.name_scope("embeddings"):
embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1), name="embeddigs")
emb = tf.nn.embedding_lookup(embeddings, input_data)
tf.summary.histogram("embeddings", embeddings)
return emb
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=None, dtype=tf.float32)
return outputs, tf.identity(final_state, "final_state")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
emb = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, emb)
tf.summary.histogram("RNN_OUTPUTS", outputs)
with tf.name_scope("nn"):
logits = tf.contrib.layers.fully_connected(outputs,
vocab_size,
activation_fn=None,
weights_initializer = tf.truncated_normal_initializer(mean=0, stddev=0.1),
biases_initializer=tf.zeros_initializer())
return (logits, final_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
num_of_batches = int(len(int_text) / (batch_size * seq_length))
valid_data_len = num_of_batches * batch_size * seq_length
inputs = int_text[:valid_data_len]
targets = int_text[1:valid_data_len]
targets.append(int_text[0])
x_reshaped = np.reshape(inputs, (batch_size, num_of_batches, seq_length))
y_reshaped = np.reshape(targets, (batch_size, num_of_batches, seq_length))
result = []
for b in range(num_of_batches):
x_samples = x_reshaped[:,b]
y_samples = y_reshaped[:,b]
batch_row = [x_samples, y_samples]
result.append(batch_row)
result = np.array(result)
return result
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
#get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2)
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
End of explanation
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 512
# Sequence Length
seq_length = 32
# Learning Rate
learning_rate = 0.004
# Show stats for every n number of batches
show_every_n_batches = 25
## Override the following hyperparameters just to concentrate all of them in a single place
lstm_layers = 2
keep_prob = 0.7
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
def prepare_graph():
global train_graph, initial_state, input_text, targets, lr, cost, final_state, train_op
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
tf.summary.histogram("logits", logits)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
tf.summary.histogram("predictions", probs)
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
tf.summary.scalar("cost", cost)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
import random
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
def train_model():
prepare_graph()
summary_file = "./tensorboard-data/TST5_EP-{}_BS-{}_RNNS-{}_EMB_DIM-{}_SqLEN-{}_LR-{}_LSTMLayCnt-{}_Keep-{}" \
.format(num_epochs, batch_size, rnn_size, embed_dim, seq_length, learning_rate, lstm_layers,keep_prob)
print("Saving execution report to", summary_file)
step = 0
with tf.Session(graph=train_graph) as sess:
writer = tf.summary.FileWriter(summary_file, sess.graph)
merged_summary = tf.summary.merge_all()
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
step += 1
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
s = sess.run(merged_summary, feed_dict=feed)
writer.add_summary(s, epoch_i)
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
# use_final_parameters is used to run the single configuration defined.
use_final_parameters = True
if(use_final_parameters):
train_model()
else:
print("Running multiple combinations")
show_every_n_batches = 50
batch_sizes = [128]
rnn_sizes = [256]
embed_dims = [512]
seq_lens = [32]
learning_rates = [0.005, 0.007]
combinations = []
for batch_size in batch_sizes:
for rnn_size in rnn_sizes:
for embed_dim in embed_dims:
for seq_length in seq_lens:
for learning_rate in learning_rates:
combinations.append([batch_size, rnn_size, embed_dim, seq_length, learning_rate])
random.shuffle(combinations)
execs_done = 0
for comb in combinations:
batch_size, rnn_size, embed_dim, seq_length, learning_rate = comb
print("Running {}/{}".format(execs_done+1, len(combinations)))
train_model()
execs_done += 1
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input_ts = loaded_graph.get_tensor_by_name("input:0")
init_st_ts = loaded_graph.get_tensor_by_name("initial_state:0")
fin_st_ts = loaded_graph.get_tensor_by_name("final_state:0")
prob_ts = loaded_graph.get_tensor_by_name("probs:0")
return (input_ts, init_st_ts, fin_st_ts, prob_ts)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
max_idx = np.argmax(probabilities)
result = int_to_vocab[max_idx]
return result
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
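The implementation above takes the single most likely word (greedy argmax). A sampling-based variant, shown below only as an alternative sketch with a different name, draws the next word according to the predicted distribution, which tends to produce more varied scripts:
```
def pick_word_sampled(probabilities, int_to_vocab):
    # Sample a word id from the predicted distribution instead of taking the argmax
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]
```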
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
1,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sveučilište u Zagrebu
Fakultet elektrotehnike i računarstva
Strojno učenje 2018/2019
http
Step1: 1. Probabilistički grafički modeli -- Bayesove mreže
Ovaj zadatak bavit će se Bayesovim mrežama, jednim od poznatijih probabilističkih grafičkih modela (probabilistic graphical models; PGM). Za lakše eksperimentiranje koristit ćemo programski paket pgmpy. Molimo Vas da provjerite imate li ovaj paket te da ga instalirate ako ga nemate.
(a)
Prvo ćemo pogledati udžbenički primjer s prskalicom. U ovom primjeru razmatramo Bayesovu mrežu koja modelira zavisnosti između oblačnosti (slučajna varijabla $C$), kiše ($R$), prskalice ($S$) i mokre trave ($W$). U ovom primjeru također pretpostavljamo da već imamo parametre vjerojatnosnih distribucija svih čvorova. Ova mreža prikazana je na sljedećoj slici
Step2: Q
Step3: Q
Step4: Q
Step5: (a)
Prije nego što krenemo u vrednovanje modela za klasifikaciju spama, upoznat ćete se s jednostavnijom apstrakcijom cjelokupnog procesa učenja modela u biblioteci scikit-learn. Ovo je korisno zato što se učenje modela često sastoji od mnoštva koraka prije sâmog pozivanja magične funkcije fit
Step6: Prvo, prilažemo kôd koji to radi "standardnim pristupom"
Step7: Vaš zadatak izvesti je dani kôd korištenjem cjevovoda. Proučite razred pipeline.Pipeline.
NB Ne treba vam više od svega nekoliko naredbi.
Step8: (b)
U prošlom smo podzadatku ispisali točnost našeg modela. Ako želimo vidjeti koliko je naš model dobar po ostalim metrikama, možemo iskoristiti bilo koju funkciju iz paketa metrics. Poslužite se funkcijom metrics.classification_report, koja ispisuje vrijednosti najčešćih metrika. (Obavezno koristite naredbu print kako ne biste izgubili format izlaza funkcije.) Ispišite ponovno točnost za usporedbu.
Step9: Potreba za drugim metrikama osim točnosti može se vidjeti pri korištenju nekih osnovnih modela (engl. baselines). Možda najjednostavniji model takvog tipa je model koji svrstava sve primjere u većinsku klasu (engl. most frequent class; MFC) ili označuje testne primjere nasumično (engl. random). Proučite razred dummy.DummyClassifier i pomoću njega stvorite spomenute osnovne klasifikatore. Opet ćete trebati iskoristiti cjevovod kako biste došli do vektorskog oblika ulaznih primjera, makar ovi osnovni klasifikatori koriste samo oznake pri predikciji.
Step10: Q
Step11: Q
Step12: Q
Step13: Q
Step14: Iskoristite ugrađenu funkciju scipy.stats.ttest_rel za provedbu uparenog t-testa i provjerite koji od ova modela je bolji kada se koristi 5, 10 i 50 preklopa.
Step15: Q
Step16: Iskoristite skup podataka Xp dan gore. Isprobajte vrijednosti hiperparametra $K$ iz $[0,1,\ldots,15]$. Ne trebate dirati nikakve hiperparametre modela osim $K$. Iscrtajte krivulju od $J$ u ovisnosti o broju grupa $K$. Metodom lakta/koljena odredite vrijednost hiperparametra $K$.
Step17: Q
Step18: Q
Step19: Naučite model k-sredina (idealno pretpostavljajući $K=2$) na gornjim podatcima i prikažite dobiveno grupiranje (proučite funkciju scatter, posebice argument c).
Step20: Q
Step21: Ponovno, naučite model k-sredina (idealno pretpostavljajući $K=2$) na gornjim podatcima i prikažite dobiveno grupiranje (proučite funkciju scatter, posebice argument c).
Step22: Q
Step23: Ponovno, naučite model k-sredina (ovaj put idealno pretpostavljajući $K=3$) na gornjim podatcima i prikažite dobiveno grupiranje (proučite funkciju scatter, posebice argument c).
Step24: Q
Step25: (g)
Kako vrednovati točnost modela grupiranja ako imamo stvarne oznake svih primjera (a u našem slučaju imamo, jer smo mi ti koji smo generirali podatke)? Često korištena mjera jest Randov indeks koji je zapravo pandan točnosti u zadatcima klasifikacije. Implementirajte funkciju rand_index_score(y_gold, y_predict) koja ga računa. Funkcija prima dva argumenta
Step26: Q | Python Code:
# Load the basic libraries...
import sklearn
import codecs
import mlutils
import matplotlib.pyplot as plt
import pgmpy as pgm
%pylab inline
Explanation: University of Zagreb
Faculty of Electrical Engineering and Computing
Machine Learning 2018/2019
http://www.fer.unizg.hr/predmet/su
Lab assignment 5: Probabilistic graphical models, naive Bayes, clustering, and classifier evaluation
Version: 1.4
Last updated: 11 January 2019
(c) 2015-2019 Jan Šnajder, Domagoj Alagić
Published: 11 January 2019
Submission deadline: 21 January 2019 at 07:00
Instructions
The fifth lab assignment consists of three tasks. Follow the instructions given in the text cells below. Solving the assignment comes down to completing this notebook: inserting one or more cells below the text of each task, writing the appropriate code, and evaluating the cells.
Make sure you fully understand the code you have written. When handing in the assignment, you must be able, at the assistant's (or demonstrator's) request, to modify and re-evaluate your code. Furthermore, you must understand the theoretical foundations of what you are doing, within the scope of what we covered in the lectures. Below some of the tasks you will also find questions that serve as guidelines for a better understanding of the material (do not write the answers to the questions into the notebook). So do not limit yourself to merely solving the task -- feel free to experiment. That is precisely the purpose of these exercises.
You must work on the assignments on your own. You may consult others about the general approach to solving them, but ultimately you have to do the assignment yourself. Otherwise the assignment is pointless.
End of explanation
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete.CPD import TabularCPD
from pgmpy.inference import VariableElimination
model = BayesianModel([('C', 'S'), ('C', 'R'), ('S', 'W'), ('R', 'W')])
cpd_c = TabularCPD(variable='C', variable_card=2, values=[[0.5, 0.5]])
cpd_s = TabularCPD(variable='S', evidence=['C'], evidence_card=[2],
variable_card=2,
values=[[0.9, 0.5],
[0.1, 0.5]])
cpd_r = TabularCPD(variable='R', evidence=['C'], evidence_card=[2],
variable_card=2,
values=[[0.2, 0.8],
[0.8, 0.2]])
cpd_w = TabularCPD(variable='W', evidence=['S', 'R'], evidence_card=[2,2],
variable_card=2,
values=[[1, 0.1, 0.1, 0.01],
[0, 0.9, 0.9, 0.99]])
model.add_cpds(cpd_c, cpd_r, cpd_s, cpd_w)
model.check_model()
infer = VariableElimination(model)
print(infer.query(['W'])['W'])
print(infer.query(['S'], evidence={'W': 1})['S'])
print(infer.query(['R'], evidence={'W': 1})['R'])
print(infer.query(['C'], evidence={'S': 1, 'R': 1})['C'])
print(infer.query(['C'])['C'])
Explanation: 1. Probabilistic graphical models -- Bayesian networks
This task deals with Bayesian networks, one of the better-known probabilistic graphical models (PGM). For easier experimentation we will use the pgmpy software package. Please check whether you have this package installed, and install it if you do not.
(a)
First we will look at the textbook sprinkler example. In this example we consider a Bayesian network that models the dependencies between cloudiness (random variable $C$), rain ($R$), the sprinkler ($S$) and wet grass ($W$). In this example we also assume that we already have the parameters of the probability distributions of all the nodes. The network is shown in the following figure:
Using the pgmpy package, construct the Bayesian network from the example above. Then, using exact inference, pose the following posterior queries: $P(w=1)$, $P(s=1|w=1)$, $P(r=1|w=1)$, $P(c=1|s=1, r=1)$ and $P(c=1)$. Carry out the inference on paper and convince yourself that you have constructed the network correctly. The official documentation and the usage examples (e.g., this one) will be of help.
End of explanation
print(infer.query(['S'], evidence={'W': 1, 'R': 1})['S'])
print(infer.query(['S'], evidence={'W': 1, 'R': 0})['S'])
print(infer.query(['R'], evidence={'W': 1, 'S': 1})['R'])
print(infer.query(['R'], evidence={'W': 1, 'S': 0})['R'])
Explanation: Q: Which joint probability distribution does this network model? How can that information be read off the network?
Q: In this task we use exact inference. How does it work?
Q: What is the difference between a posterior query and a MAP query?
Q: Why is the probability $P(c=1)$ different from $P(c=1|s=1,r=1)$ if we know that the nodes $S$ and $R$ are not parents of the node $C$?
(b)
Explaining away is an interesting phenomenon in which two variables "compete" to explain a third. This phenomenon can be observed in the network above. In this case, the sprinkler ($S$) and rain ($R$) variables "compete" to explain the wet grass ($W$). Your task is to show that the phenomenon really does occur.
End of explanation
model.is_active_trail('C','W')
Explanation: Q: How would you describe this phenomenon in your own words, using this example?
(c)
Using BayesianModel.is_active_trail, check whether the cloudiness ($C$) and wet grass ($W$) variables are conditionally independent. What must hold for these two variables to be conditionally independent? Check it using the same function.
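As a pointer on the mechanics only (assuming the pgmpy version used here accepts the observed argument of is_active_trail), conditioning on evidence is expressed like this:
```
# e.g., check whether a trail between C and W is active when S is observed
model.is_active_trail('C', 'W', observed=['S'])
```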
End of explanation
from sklearn.model_selection import train_test_split
spam_X, spam_y = mlutils.load_SMS_dataset('./spam.csv')
spam_X_train, spam_X_test, spam_y_train, spam_y_test = \
train_test_split(spam_X, spam_y, train_size=0.7, test_size=0.3, random_state=69)
Explanation: Q: How can we tell from the graph which two variables are, given some observations, conditionally independent?
Q: Why would we even want to know which variables in the network are conditionally independent?
2. Model (classifier) evaluation
To convince ourselves of how well our trained model really works, it is necessary to evaluate the model. This step is of crucial importance in all applications of machine learning, so it is important to know how to carry out the evaluation correctly.
We will evaluate models on the real-world SMS Spam Collection dataset [1], which consists of 5,574 SMS messages classified into two classes: spam (label: spam) and non-spam (label: ham). If you have not done so already, download the dataset from the link or from the course page and put it into the working directory (unpack the archive and rename the file to spam.csv if necessary). The following piece of code loads the dataset and splits it into training and test subsets.
[1] Almeida, T.A., Gómez Hidalgo, J.M., Yamakami, A. Contributions to the Study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11), Mountain View, CA, USA, 2011.
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
Explanation: (a)
Before we move on to evaluating models for spam classification, you will get acquainted with a simpler abstraction of the entire model-training process in the scikit-learn library. This is useful because training a model often consists of a multitude of steps before the magic fit function is even called: data extraction, feature extraction, standardization, scaling, imputation of missing values, and so on.
In the "standard approach", this boils down to a sizeable number of lines of code in which we constantly pass data from one step to the next, forming an execution pipeline along the way. Besides being hard to read, such an approach is also error-prone, since it is quite easy to pass in the wrong dataset and not get any error when the code runs. This is why scikit-learn introduced the pipeline.Pipeline class. Through this class, all the required training steps can be abstracted behind a single pipeline, which is again really just a model with fit and predict functions.
In this task you will build only a simple text-classification pipeline, which consists of converting the text into a bag-of-words vector representation with TF-IDF weights, dimensionality reduction using truncated singular value decomposition, normalization, and finally logistic regression.
NB: It is not strictly necessary to know how the classes we use to obtain the final features work, but we recommend studying them if you are interested (especially if natural language processing interests you).
End of explanation
# TF-IDF
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=500)
spam_X_feat_train = vectorizer.fit_transform(spam_X_train)
# Dimensionality reduction
reducer = TruncatedSVD(n_components=300, random_state=69)
spam_X_feat_train = reducer.fit_transform(spam_X_feat_train)
# Normalization
normalizer = Normalizer()
spam_X_feat_train = normalizer.fit_transform(spam_X_feat_train)
# NB
clf = LogisticRegression(solver='lbfgs')
clf.fit(spam_X_feat_train, spam_y_train)
# And now all of this again for the test data.
spam_X_feat_test = vectorizer.transform(spam_X_test)
spam_X_feat_test = reducer.transform(spam_X_feat_test)
spam_X_feat_test = normalizer.transform(spam_X_feat_test)
print(accuracy_score(spam_y_test, clf.predict(spam_X_feat_test)))
x_test123 = ["You were selected for a green card, apply here for only 50 USD!!!",
"Hey, what are you doing later? Want to grab a cup of coffee?"]
x_test = vectorizer.transform(x_test123)
x_test = reducer.transform(x_test)
x_test = normalizer.transform(x_test)
print(clf.predict(x_test))
Explanation: First, we provide the code that does this using the "standard approach":
End of explanation
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=500)
reducer = TruncatedSVD(n_components=300, random_state=69)
normalizer = Normalizer()
clf = LogisticRegression(solver='lbfgs')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', clf)])
pipeline.fit(spam_X_train, spam_y_train)
print(accuracy_score(spam_y_test, pipeline.predict(spam_X_test)))
print(pipeline.predict(x_test123))
Explanation: Your task is to reimplement the given code using a pipeline. Study the pipeline.Pipeline class.
NB: You should not need more than a few statements.
End of explanation
from sklearn.metrics import classification_report, accuracy_score
print(classification_report(y_pred=pipeline.predict(spam_X_test), y_true=spam_y_test))
Explanation: (b)
In the previous subtask we printed the accuracy of our model. If we want to see how good our model is according to other metrics, we can use any function from the metrics package. Use the metrics.classification_report function, which prints the values of the most common metrics. (Be sure to use the print statement so you don't lose the function's output formatting.) Print the accuracy again for comparison.
End of explanation
from sklearn.dummy import DummyClassifier
rando = DummyClassifier(strategy='uniform')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', rando)])
pipeline.fit(spam_X_train, spam_y_train)
print(classification_report(spam_y_test, pipeline.predict(spam_X_test)))
mfc = DummyClassifier(strategy='most_frequent')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', mfc)])
pipeline.fit(spam_X_train, spam_y_train)
print(classification_report(spam_y_test, pipeline.predict(spam_X_test)))
Explanation: The need for metrics other than accuracy can be seen when using some baseline models. Perhaps the simplest model of this kind is one that assigns all examples to the majority class (most frequent class; MFC) or labels the test examples at random. Study the dummy.DummyClassifier class and use it to create the baseline classifiers mentioned above. You will again need to use the pipeline to obtain the vector form of the input examples, even though these baseline classifiers use only the labels when predicting.
End of explanation
from sklearn.model_selection import cross_val_score, KFold
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2), max_features=500)
reducer = TruncatedSVD(n_components=300, random_state=69)
normalizer = Normalizer()
clf = LogisticRegression(solver='lbfgs')
pipeline = Pipeline([('vectorizer', vectorizer), ('reducer', reducer), ('normalizer', normalizer), ('clf', clf)])
# 5-fold cross-validation over the entire dataset; the whole pipeline is refit inside every fold
scores = cross_val_score(estimator=pipeline, X=spam_X, y=spam_y, cv=KFold(n_splits=5), scoring='accuracy')
print(scores)
print(scores.mean())
Explanation: Q: Based on this example, explain why accuracy is not always an appropriate metric.
Q: Why do we use the F1 measure?
(c)
However, the kind of check we resorted to in the previous subtask is not robust. In machine learning, k-fold cross-validation is therefore typically used. Study the model_selection.KFold class and the model_selection.cross_val_score function and compute an error estimate on the entire dataset using 5-fold cross-validation.
NB: Your model is now a pipeline that contains all of the preprocessing. Also, from here on we will restrict ourselves to accuracy, but these procedures apply to any metric.
End of explanation
from sklearn.model_selection import GridSearchCV
# Your code here...
Explanation: Q: Why is "plain" cross-validation not robust enough?
Q: What is stratified k-fold cross-validation? Why do we use it so often?
(d)
The above error estimate is fine if we already have a model (with no hyperparameters, or with fixed ones). However, we want to use a model with optimal hyperparameter values, so those values need to be optimized using grid search. As expected, the scikit-learn library already provides this functionality in the model_selection.GridSearchCV class. The only difference between your implementation from earlier exercises (e.g. for the SVM) and this one is that this one uses k-fold cross-validation.
Before optimizing the hyperparameter values, we obviously also have to define the grid of hyperparameter values itself. Study how it is defined via a dictionary in the example.
Study the aforementioned class and use it to find and print the best hyperparameter values of the pipeline from subtask (a): max_features $\in \{500, 1000\}$ and n_components $\in \{ 100, 200, 300 \}$, using grid search on the training set ($k=3$, so that it runs a bit faster).
End of explanation
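A minimal sketch of what this grid search might look like, assuming the step names used in the pipeline above ('vectorizer', 'reducer') and scikit-learn's step__parameter naming convention for the grid keys; treat it as one possible solution rather than the reference one:
from sklearn.model_selection import GridSearchCV
grid_pipeline = Pipeline([('vectorizer', TfidfVectorizer(stop_words="english", ngram_range=(1, 2))),
                          ('reducer', TruncatedSVD(random_state=69)),
                          ('normalizer', Normalizer()),
                          ('clf', LogisticRegression(solver='lbfgs'))])
param_grid = {'vectorizer__max_features': [500, 1000],
              'reducer__n_components': [100, 200, 300]}
grid_search = GridSearchCV(grid_pipeline, param_grid=param_grid, cv=3, scoring='accuracy')
grid_search.fit(spam_X_train, spam_y_train)
print(grid_search.best_params_)
print(grid_search.best_score_)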
from sklearn.model_selection import GridSearchCV, KFold
def nested_kfold_cv(clf, param_grid, X, y, k1=10, k2=3):
    # Your code here...
pass
Explanation: Q: Which metric is being optimized during this search?
Q: How would you choose the number of folds $k$?
(e)
If we want to estimate the error while also performing model selection, we turn to nested k-fold cross-validation. In this task you will implement it yourself.
Implement the function nested_kfold_cv(clf, param_grid, X, y, k1, k2) that performs nested k-fold cross-validation. The argument clf is your classifier, param_grid the dictionary of hyperparameter values (the same as in subtask (d)), X and y the labelled dataset, and k1 and k2 the number of folds in the outer and inner loop, respectively. Use the model_selection.GridSearchCV and model_selection.KFold classes.
The function returns the list of errors across the folds of the outer loop.
End of explanation
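One possible shape of the nested procedure (a sketch that assumes X and y are NumPy arrays; it is named differently from the exercise stub so it does not pretend to be the official solution):
def nested_kfold_cv_sketch(clf, param_grid, X, y, k1=10, k2=3):
    outer = KFold(n_splits=k1)
    errors = []
    for train_idx, test_idx in outer.split(X):
        # Inner loop: hyperparameter selection on the outer training part only
        grid = GridSearchCV(clf, param_grid=param_grid, cv=k2)
        grid.fit(X[train_idx], y[train_idx])
        # Outer loop: error of the selected model on the held-out fold
        errors.append(1 - grid.score(X[test_idx], y[test_idx]))
    return errors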
np.random.seed(1337)
C1_scores_5folds = np.random.normal(78, 4, 5)
C2_scores_5folds = np.random.normal(81, 2, 5)
C1_scores_10folds = np.random.normal(78, 4, 10)
C2_scores_10folds = np.random.normal(81, 2, 10)
C1_scores_50folds = np.random.normal(78, 4, 50)
C2_scores_50folds = np.random.normal(81, 2, 50)
Explanation: Q: How would you decide which hyperparameters are best overall, rather than just within each individual inner loop?
Q: What does the estimate of the generalization error ultimately correspond to?
(f)
The scenario we are most interested in is the comparison of two classifiers, that is, whether one of them is really better than the other. The only way we can truly confirm this is with a statistical test, in our case the paired t-test. That is the subject of this task.
For faster execution, we will artificially generate data corresponding to the errors across the outer folds of two classifiers (what the nested_kfold_cv function would return):
End of explanation
from scipy.stats import ttest_rel
# Your code here...
Explanation: Use the built-in function scipy.stats.ttest_rel to carry out the paired t-test and check which of the two models is better when 5, 10 and 50 folds are used.
End of explanation
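A compact sketch of how the three comparisons might be run (the score arrays are the ones generated above, and ttest_rel was imported in the previous cell):
for k, c1, c2 in [(5, C1_scores_5folds, C2_scores_5folds),
                  (10, C1_scores_10folds, C2_scores_10folds),
                  (50, C1_scores_50folds, C2_scores_50folds)]:
    t_stat, p_val = ttest_rel(c1, c2)
    print('%d folds: t = %.3f, p = %.4f' % (k, t_stat, p_val))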
from sklearn.datasets import make_blobs
Xp, yp = make_blobs(n_samples=300, n_features=2, centers=[[0, 0], [3, 2.5], [0, 4]],
cluster_std=[0.45, 0.3, 0.45], random_state=96)
plt.scatter(Xp[:,0], Xp[:,1], c=yp, cmap=plt.get_cmap("cool"), s=20)
Explanation: Q: Which null hypothesis $H_0$ and alternative hypothesis $H_1$ are we testing with this test?
Q: What assumption about the probability distribution of the examples is made in the test above? Is it justified?
Q: Which model is ultimately better, and is that advantage significant at $\alpha = 0.05$?
3. Clustering
In this task you will get to know the k-means algorithm, its main shortcomings and its assumptions. You will also try out another clustering algorithm: the Gaussian mixture model.
(a)
One of the shortcomings of the k-means algorithm is that it requires the number of clusters ($K$) into which it will group the data to be given in advance. That information is often unavailable to us (just as the example labels are), so we have to somehow choose the best value of the hyperparameter $K$. One of the more naive approaches is the elbow method, which you will try out in this task.
In your solutions, use the built-in implementation of the k-means algorithm available in the cluster.KMeans class.
NB: The objective function of the k-means algorithm is also called inertia. For a trained model, the value of the objective $J$ is available through the class attribute inertia_.
End of explanation
Ks = range(1,16)
from sklearn.cluster import KMeans
Js = []
for K in Ks:
J = KMeans(n_clusters=K).fit(Xp).inertia_
Js.append(J)
plt.plot(Ks, Js)
Explanation: Use the dataset Xp given above. Try out values of the hyperparameter $K$ from $[1,\ldots,15]$. You don't need to touch any model hyperparameters other than $K$. Plot the curve of $J$ as a function of the number of clusters $K$. Use the elbow method to determine the value of the hyperparameter $K$.
End of explanation
# Your code here...
Explanation: Q: Which value of the hyperparameter $K$ would you choose based on this plot? Why? Is that choice optimal? How do you know?
Q: Is this method robust?
Q: Can we simply choose the $K$ that minimizes the error $J$? Explain.
(b)
Choosing the value of the hyperparameter $K$ can be done in a multitude of ways. Besides the elbow method, the same can also be achieved with silhouette analysis. For this we have prepared the function mlutils.plot_silhouette, which, for a given number of clusters and the data, plots the average silhouette coefficient and the coefficient value of every example (per cluster).
Your task is to try out different values of the hyperparameter $K$, $K \in \{2, 3, 5\}$, and decide on the optimal $K$ based on the resulting plots.
End of explanation
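The exact signature of mlutils.plot_silhouette is not shown here, so as a hedge this sketch uses scikit-learn's silhouette_score to obtain the average silhouette coefficient for each candidate K; the provided plotting helper would additionally show the per-example coefficients:
from sklearn.metrics import silhouette_score
for K in [2, 3, 5]:
    labels = KMeans(n_clusters=K).fit_predict(Xp)
    print(K, silhouette_score(Xp, labels))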
from sklearn.datasets import make_blobs
X1, y1 = make_blobs(n_samples=1000, n_features=2, centers=[[0, 0], [1.3, 1.3]], cluster_std=[0.15, 0.5], random_state=96)
plt.scatter(X1[:,0], X1[:,1], c=y1, cmap=plt.get_cmap("cool"), s=20)
Explanation: Q: Looking at these plots, how would you decide on $K$?
Q: What are the problems with this approach?
(c)
In this and the following subtasks we will focus on the fundamental assumptions of the k-means algorithm and on what happens when those assumptions are not satisfied. Additionally, we will also try clustering with the Gaussian mixture model (GMM), which does not share some of those assumptions.
First, start from the data X1, which was generated using the datasets.make_blobs function, which creates clusters of data using isotropic Gaussian distributions.
End of explanation
# Your code here...
Explanation: Train a k-means model (ideally assuming $K=2$) on the data above and display the resulting clustering (study the scatter function, especially its c argument).
End of explanation
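A minimal sketch of the requested fit-and-plot step; the same pattern carries over to the datasets in the next two subtasks, with the appropriate K:
labels = KMeans(n_clusters=2).fit_predict(X1)
plt.scatter(X1[:, 0], X1[:, 1], c=labels, cmap=plt.get_cmap("cool"), s=20)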
from sklearn.datasets import make_circles
X2, y2 = make_circles(n_samples=1000, noise=0.15, factor=0.05, random_state=96)
plt.scatter(X2[:,0], X2[:,1], c=y2, cmap=plt.get_cmap("cool"), s=20)
Explanation: Q: What happened? Which assumption of the k-means algorithm is violated here?
Q: What would you have to ensure for the algorithm to find the correct clusters?
(d)
Try the k-means algorithm on data generated with the datasets.make_circles function, which creates two clusters of data such that one lies inside the other.
End of explanation
# Your code here...
Explanation: Again, train a k-means model (ideally assuming $K=2$) on the data above and display the resulting clustering (study the scatter function, especially its c argument).
End of explanation
X31, y31 = make_blobs(n_samples=1000, n_features=2, centers=[[0, 0]], cluster_std=[0.2], random_state=69)
X32, y32 = make_blobs(n_samples=50, n_features=2, centers=[[0.7, 0.5]], cluster_std=[0.15], random_state=69)
X33, y33 = make_blobs(n_samples=600, n_features=2, centers=[[0.8, -0.4]], cluster_std=[0.2], random_state=69)
plt.scatter(X31[:,0], X31[:,1], c="#00FFFF", s=20)
plt.scatter(X32[:,0], X32[:,1], c="#F400F4", s=20)
plt.scatter(X33[:,0], X33[:,1], c="#8975FF", s=20)
# Just join all the groups in a single X.
X3 = np.vstack([X31, X32, X33])
y3 = np.hstack([y31, y32, y33])
Explanation: Q: What happened? Which assumption of the k-means algorithm is violated here?
Q: What would you have to ensure for the algorithm to find the correct clusters?
(e)
Finally, we will try the algorithm on the following artificially created dataset:
End of explanation
# Your code here...
Explanation: Again, train a k-means model (this time ideally assuming $K=3$) on the data above and display the resulting clustering (study the scatter function, especially its c argument).
End of explanation
from sklearn.mixture import GaussianMixture
# Your code here...
Explanation: Q: What happened? Which assumption of the k-means algorithm is violated here?
Q: What would you have to ensure for the algorithm to find the correct clusters?
(f)
Now that you are familiar with the limitations of the k-means algorithm, you will try clustering with the Gaussian mixture model (GMM), which is a generalization of k-means (that is, k-means is a special case of a GMM). The implementation of this model is available in mixture.GaussianMixture. Try this model (with the same assumptions about the number of clusters) on the data from subtasks (c)-(e). You don't need to change any hyperparameters or settings other than the number of components.
End of explanation
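A possible sketch, reusing the same number of components as before for each of the three datasets (X1, X2 and X3 from subtasks (c)-(e)):
for X, K in [(X1, 2), (X2, 2), (X3, 3)]:
    labels = GaussianMixture(n_components=K).fit(X).predict(X)
    plt.figure()
    plt.scatter(X[:, 0], X[:, 1], c=labels, cmap=plt.get_cmap("cool"), s=20)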
import itertools as it
from scipy.special import comb
def rand_index_score(y_gold, y_predict):
    # NB: this version is hard-coded for exactly three clusters labelled 0, 1 and 2
N = len(y_gold)
grupa1 = ([ y for i, y in enumerate(y_gold) if y_predict[i] == 0])
grupa2 = ([ y for i, y in enumerate(y_gold) if y_predict[i] == 1])
grupa3 = ([ y for i, y in enumerate(y_gold) if y_predict[i] == 2])
n = [[len([y for y in g if y == i])
for i in [0,1,2]]
for g in [grupa1, grupa2, grupa3]]
a = sum([(comb(nnn, 2)) for nn in n for nnn in nn])
b = n[0][0] * (n[1][1] + n[1][2] + n[2][1] + n[2][2]) + \
n[0][1] * (n[1][0] + n[1][2] + n[2][0] + n[2][2]) + \
n[0][2] * (n[1][0] + n[1][1] + n[2][0] + n[2][1]) + \
n[1][0] * (n[2][1] + n[2][2]) + \
n[1][1] * (n[2][0] + n[2][2]) + \
n[1][2] * (n[2][0] + n[2][1])
return (a+b) / comb(N,2)
Explanation: (g)
How do we evaluate the accuracy of a clustering model when we have the true labels of all examples (and in our case we do, since we generated the data ourselves)? A commonly used measure is the Rand index, which is essentially the counterpart of accuracy in classification tasks. Implement the function rand_index_score(y_gold, y_predict) that computes it. The function takes two arguments: the list of true clusters the examples belong to (y_gold) and the list of predicted clusters (y_predict). The itertools.combinations function will come in handy.
End of explanation
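For comparison, a fully general pair-counting sketch that follows the hint about itertools.combinations and works for any number of clusters (given a different name so it does not shadow the implementation above):
def rand_index_pairs(y_gold, y_predict):
    agree = 0
    pairs = list(it.combinations(range(len(y_gold)), 2))
    for i, j in pairs:
        # A pair counts as agreement if both clusterings put it together, or both keep it apart
        if (y_gold[i] == y_gold[j]) == (y_predict[i] == y_predict[j]):
            agree += 1
    return agree / float(len(pairs))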
y_pred = KMeans(n_clusters=3).fit(Xp).predict(Xp)
rand_index_score(yp, y_pred)
Explanation: Q: Why is the Rand index the counterpart of accuracy in classification problems?
Q: What are the main problems with this metric?
Q: How can we evaluate the quality of a clustering when we don't have the true example labels? Is that even possible?
End of explanation |
1,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 8 - Implementing a model in numpy and a survey of machine learning packages for python
This week we will be looking in detail at how to implement a supervised regression model using the base scientific computing packages available with python.
We will also be looking at the different packages available for python that implement many of the algorithms we might want to use.
Regression with numpy
Why implement algorithms from scratch when dedicated packages already exist?
The packages available are very powerful and a real time saver but they can obscure some issues we might encounter if we don't know to look for them. By starting with just numpy these problems will be more obvious. We can address them here and then when we move on we will know what to look for and will be less likely to miss them.
The dedicated machine learning packages implement the different algorithms but we are still responsible for getting our data in a suitable format.
Step1: This is a very simple dataset. There is only one input value for each record and then there is the output value. Our goal is to determine the output value or dependent variable, shown on the y-axis, from the input or independent variable, shown on the x-axis.
Our approach should scale to handle multiple input, or independent, variables. The independent variables can be stored in a vector, a 1-dimensional array
Step2: Numpy contains the linalg module with many common functions for performing linear algebra. Using this module finding a solution is quite simple.
Step3: The values returned are
Step4: Least squares refers to the cost function for this algorithm. The objective is to minimize the residual sum of squares. The difference between the actual and predicted values is calculated, it is squared and then summed over all records. The function is as follows
Step5: Exercise
Plot the residuals. The x axis will be the independent variable (x) and the y axis the residual between our prediction and the true value.
Plot the predictions generated for our model over the entire range of 0-1. One approach is to use the np.linspace method to create equally spaced values over a specified range.
Types of independent variable
The independent variables can be many different types.
Quantitative inputs
Categorical inputs coded using dummy values
Interactions between multiple inputs
Tranformations of other inputs, e.g. logs, raised to different powers, etc.
It is important to note that a linear model is only linear with respect to its inputs. Those input variables can take any form.
One approach we can take to improve the predictions from our model would be to add in the square, cube, etc of our existing variable.
Step6: There is a tradeoff with model complexity. As we add more complexity to our model we can fit our training data increasingly well but eventually will lose our ability to generalize to new data.
Very simple models underfit the data and have high bias.
Very complex models overfit the data and have high variance.
The goal is to detect true sources of variation in the data and ignore variation that is just noise.
How do we know if we have a good model? A common approach is to break up our data into a training set, a validation set, and a test set.
We train models with different parameters on the training set.
We evaluate each model on the validation set, and choose the best
We then measure the performance of our best model on the test set.
What would our best model look like? Because we are using dummy data here we can easily make more.
Step7: Gradient descent
One limitation of our current implementation is that it is resource intensive. For very large datasets an alternative is needed. Gradient descent is often preferred, and particularly stochastic gradient descent for very large datasets.
Gradient descent is an iterative process, repetitively calculating the error and changing the coefficients slightly to reduce that error. It does this by calculating a gradient and then descending to a minimum in small steps.
Stochastic gradient descent calculates the gradient on a small batch of the data, updates the coefficients, loads the next chunk of the data and repeats the process.
We will just look at a basic gradient descent model. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
n = 20
x = np.random.random((n,1))
y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))
plt.plot(x, y, 'b.')
plt.show()
Explanation: Week 8 - Implementing a model in numpy and a survey of machine learning packages for python
This week we will be looking in detail at how to implement a supervised regression model using the base scientific computing packages available with python.
We will also be looking at the different packages available for python that implement many of the algorithms we might want to use.
Regression with numpy
Why implement algorithms from scratch when dedicated packages already exist?
The packages available are very powerful and a real time saver but they can obscure some issues we might encounter if we don't know to look for them. By starting with just numpy these problems will be more obvious. We can address them here and then when we move on we will know what to look for and will be less likely to miss them.
The dedicated machine learning packages implement the different algorithms but we are still responsible for getting our data in a suitable format.
End of explanation
intercept_x = np.hstack((np.ones((n,1)), x))
intercept_x
Explanation: This is a very simple dataset. There is only one input value for each record and then there is the output value. Our goal is to determine the output value or dependent variable, shown on the y-axis, from the input or independent variable, shown on the x-axis.
Our approach should scale to handle multiple input, or independent, variables. The independent variables can be stored in a vector, a 1-dimensional array:
$$X^T = (X_{1}, X_{2}, X_{3})$$
As we have multiple records these can be stacked in a 2-dimensional array. Each record becomes one row in the array. Our x variable is already set up in this way.
In linear regression we can compute the value of the dependent variable using the following formula:
$$f(X) = \beta_{0} + \sum_{j=1}^p X_j\beta_j$$
The $\beta_{0}$ term is the intercept, and represents the value of the dependent variable when the independent variable is zero.
Calculating a solution is easier if we don't treat the intercept as special. Instead of having an intercept co-efficient that is handled separately we can instead add a variable to each of our records with a value of one.
End of explanation
np.linalg.lstsq(intercept_x,y)
Explanation: Numpy contains the linalg module with many common functions for performing linear algebra. Using this module finding a solution is quite simple.
End of explanation
coeff, residuals, rank, sing_vals = np.linalg.lstsq(intercept_x,y)
predictions = np.dot(intercept_x, coeff)
print(np.sum((y - predictions) ** 2), residuals)
Explanation: The values returned are:
The least-squares solution
The sum of squared residuals
The rank of the independent variables
The singular values of the independent variables
Exercise
Calculate the predictions our model would make
Calculate the sum of squared residuals from our predictions. Does this match the value returned by lstsq?
End of explanation
our_coeff = np.dot(np.dot(np.linalg.inv(np.dot(intercept_x.T, intercept_x)), intercept_x.T), y)
print(coeff, '\n', our_coeff)
our_predictions = np.dot(intercept_x, our_coeff)
np.hstack((predictions,
our_predictions
))
plt.plot(x, y, 'ko', label='True values')
plt.plot(x, our_predictions, 'ro', label='Predictions')
plt.legend(numpoints=1, loc=4)
plt.show()
Explanation: Least squares refers to the cost function for this algorithm. The objective is to minimize the residual sum of squares. The difference between the actual and predicted values is calculated, it is squared and then summed over all records. The function is as follows:
$$RSS(\beta) = \sum_{i=1}^{N}(y_i - x_i^T\beta)^2$$
Matrix arithmetic
Within lstsq all the calculations are performed using matrix arithmetic rather than the more familiar element-wise arithmetic numpy arrays generally perform. Numpy does have a matrix type but matrix arithmetic can also be performed on standard arrays using dedicated methods.
Source: Wikimedia Commons (User:Bilou)
In matrix multiplication the resulting value in any position is the sum of multiplying each value in a row in the first matrix by the corresponding value in a column in the second matrix.
The residual sum of squares can be calculated with the following formula:
$$RSS(\beta) = (y - X\beta)^T(y-X\beta)$$
The value of our co-efficients can be calculated with:
$$\hat\beta = (X^TX)^{-1}X^Ty$$
Unfortunately, the result is not as visually appealing as in languages that use matrix arithmetic by default.
End of explanation
x_expanded = np.hstack([x**i for i in range(1, 20)])
b, residuals, rank, s = np.linalg.lstsq(x_expanded, y)
print(b)
plt.plot(x, y, 'ko', label='True values')
plt.plot(x, np.dot(x_expanded, b), 'ro', label='Predictions')
plt.legend(numpoints=1, loc=4)
plt.show()
Explanation: Exercise
Plot the residuals. The x axis will be the independent variable (x) and the y axis the residual between our prediction and the true value.
Plot the predictions generated for our model over the entire range of 0-1. One approach is to use the np.linspace method to create equally spaced values over a specified range.
Types of independent variable
The independent variables can be many different types.
Quantitative inputs
Categorical inputs coded using dummy values
Interactions between multiple inputs
Transformations of other inputs, e.g. logs, raised to different powers, etc.
It is important to note that a linear model is only linear with respect to its inputs. Those input variables can take any form.
One approach we can take to improve the predictions from our model would be to add in the square, cube, etc of our existing variable.
End of explanation
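One possible sketch for the exercise above, reusing the simple two-column model (intercept_x, coeff) fitted earlier:
# Residuals of the simple linear fit against the independent variable
residual = y - np.dot(intercept_x, coeff)
plt.plot(x, residual, 'b.')
plt.axhline(0, color='k')
plt.show()
# Model predictions over the full 0-1 range using np.linspace
grid_x = np.linspace(0, 1, 100).reshape(-1, 1)
grid_with_intercept = np.hstack((np.ones((100, 1)), grid_x))
plt.plot(x, y, 'ko', label='True values')
plt.plot(grid_x, np.dot(grid_with_intercept, coeff), 'r-', label='Model')
plt.legend(loc=4)
plt.show()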
n = 20
p = 12
training = []
val = []
for i in range(1, p):
np.random.seed(0)
x = np.random.random((n,1))
y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))
    x = np.hstack([x**j for j in np.arange(i)])
our_coeff = np.dot(
np.dot(
np.linalg.inv(
np.dot(
x.T, x
)
), x.T
), y
)
our_predictions = np.dot(x, our_coeff)
our_training_rss = np.sum((y - our_predictions) ** 2)
training.append(our_training_rss)
val_x = np.random.random((n,1))
val_y = 5 + 6 * val_x ** 2 + np.random.normal(0,0.5, size=(n,1))
    val_x = np.hstack([val_x**j for j in np.arange(i)])
our_val_pred = np.dot(val_x, our_coeff)
our_val_rss = np.sum((val_y - our_val_pred) ** 2)
val.append(our_val_rss)
#print(i, our_training_rss, our_val_rss)
plt.plot(range(1, p), training, 'ko-', label='training')
plt.plot(range(1, p), val, 'ro-', label='validation')
plt.legend(loc=2)
plt.show()
Explanation: There is a tradeoff with model complexity. As we add more complexity to our model we can fit our training data increasingly well but eventually will lose our ability to generalize to new data.
Very simple models underfit the data and have high bias.
Very complex models overfit the data and have high variance.
The goal is to detect true sources of variation in the data and ignore variation that is just noise.
How do we know if we have a good model? A common approach is to break up our data into a training set, a validation set, and a test set.
We train models with different parameters on the training set.
We evaluate each model on the validation set, and choose the best
We then measure the performance of our best model on the test set.
What would our best model look like? Because we are using dummy data here we can easily make more.
End of explanation
np.random.seed(0)
n = 200
x = np.random.random((n,1))
y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))
intercept_x = np.hstack((np.ones((n,1)), x))
coeff, residuals, rank, sing_vals = np.linalg.lstsq(intercept_x,y)
print('lstsq', coeff)
def gradient_descent(x, y, rounds = 1000, alpha=0.01):
theta = np.zeros((x.shape[1], 1))
costs = []
for i in range(rounds):
prediction = np.dot(x, theta)
error = prediction - y
gradient = np.dot(x.T, error / y.shape[0])
theta -= gradient * alpha
costs.append(np.sum(error ** 2))
return (theta, costs)
theta, costs = gradient_descent(intercept_x, y, rounds=10000)
print(theta, costs[::500])
np.random.seed(0)
n = 200
x = np.random.random((n,1))
y = 5 + 6 * x ** 2 + np.random.normal(0,0.5, size=(n,1))
x = np.hstack([x**j for j in np.arange(20)])
coeff, residuals, rank, sing_vals = np.linalg.lstsq(x,y)
print('lstsq', coeff)
theta, costs = gradient_descent(x, y, rounds=10000)
print(theta, costs[::500])
plt.plot(x[:,1], y, 'ko')
plt.plot(x[:,1], np.dot(x, coeff), 'co')
plt.plot(x[:,1], np.dot(x, theta), 'ro')
plt.show()
Explanation: Gradient descent
One limitation of our current implementation is that it is resource intensive. For very large datasets an alternative is needed. Gradient descent is often preferred, and particularly stochastic gradient descent for very large datasets.
Gradient descent is an iterative process, repetitively calculating the error and changing the coefficients slightly to reduce that error. It does this by calculating a gradient and then descending to a minimum in small steps.
Stochastic gradient descent calculates the gradient on a small batch of the data, updates the coefficients, loads the next chunk of the data and repeats the process.
We will just look at a basic gradient descent model.
End of explanation |
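The stochastic variant described above is not implemented in this notebook; a minimal mini-batch sketch reusing the same update rule might look like this (the epoch count and batch size are illustrative choices, not tuned values):
def stochastic_gradient_descent(x, y, rounds=100, alpha=0.01, batch_size=20):
    theta = np.zeros((x.shape[1], 1))
    n_samples = x.shape[0]
    for _ in range(rounds):
        # Shuffle once per epoch, then step through the data in small batches
        order = np.random.permutation(n_samples)
        for start in range(0, n_samples, batch_size):
            idx = order[start:start + batch_size]
            error = np.dot(x[idx], theta) - y[idx]
            gradient = np.dot(x[idx].T, error / len(idx))
            theta -= gradient * alpha
    return theta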
1,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The data from Kaggle is already here in the "data" folder. Let's take a look at it.
Step1: Naive manual analysis
Obviously a not-so-good algorithm, used primaraly for illustrating IPython
First, check whether a a signal wire can have energy_deposit = 0
Step2: It can't! So far so good.
Step3: Try plotting time vs. energy vs. label. It's too big, so we'll take a sample.
Step4: Looks like we could use a selection rule.
Step5: Also, np.log(0) is -inf. And it is correcly handled.
Step6: Check how good the model describes the data.
Step7: Let's make a predicion for submission. Take note at the format
Step8: Download your predictions from the cluster.
Step9: Naive machine learning
Step10: CV might take some time
Step11: Moral | Python Code:
hits_train = pd.read_csv("data/train.csv", index_col='global_id')
hits_train.head()
hits_test = pd.read_csv("data/test.csv", index_col='global_id')
hits_test.head()
Explanation: The data from Kaggle is already here in the "data" folder. Let's take a look at it.
End of explanation
set(hits_train.loc[(hits_train.energy_deposit == 0)].label)
Explanation: Naive manual analysis
Obviously a not-so-good algorithm, used primarily for illustrating IPython
First, check whether a signal wire can have energy_deposit = 0
End of explanation
candidates = hits_train.loc[(hits_train.energy_deposit > 0)]
Explanation: It can't! So far so good.
End of explanation
plot_sample_indices = np.random.choice(np.arange(len(candidates)), size=50000)
hits_to_plot = candidates.iloc[plot_sample_indices]
fig, ax = plt.subplots()
signal_hits = hits_to_plot.loc[(hits_to_plot.label == 1)]
noise_hits = hits_to_plot.loc[(hits_to_plot.label == 2)]
ax.scatter(noise_hits.energy_deposit, noise_hits.relative_time, c='b', edgecolors='none', alpha=0.3)
ax.scatter(signal_hits.energy_deposit, signal_hits.relative_time, c='r', edgecolors='none')
ax.set_xscale('log')
ax.set_xlim(1e-9, 1e-2)
ax.set_xlabel("energy_deposit")
ax.set_ylabel("relative_time")
Explanation: Try plotting time vs. energy vs. label. It's too big, so we'll take a sample.
End of explanation
fig, ax = plt.subplots()
ax.scatter(np.log(noise_hits.energy_deposit)**2, noise_hits.relative_time**2, c='b', edgecolors='none', alpha=0.3)
ax.scatter(np.log(signal_hits.energy_deposit)**2, signal_hits.relative_time**2, c='r', edgecolors='none')
high_relative_time = 1.35e6
low_relative_time = 256300
low_points = np.array([[160, 0], [194, low_relative_time], [229, low_relative_time], [200, 0]])
high_points = np.array([[164, 1.4e6], [195, 1.4e6], [195, high_relative_time], [164, high_relative_time],
[164, 1.4e6]])
ax.plot(low_points[:, 0], low_points[:, 1], 'g', lw=3)
ax.plot(high_points[:, 0], high_points[:, 1], 'g', lw=3)
ax.set_xlabel(r"$\log(\mathrm{energy\_deposit})^2$")
ax.set_ylabel(r"$\mathrm{relative\_time}^2$")
top_line_coeffs = np.polyfit(low_points[0:2, 0], low_points[0:2, 1], deg=1)
bottom_line_coeffs = np.polyfit(low_points[2:4, 0], low_points[2:4, 1], deg=1)
def is_signal(event):
log_energy_squared = np.log(event.energy_deposit)**2
relative_time_squared = event.relative_time**2
return (((relative_time_squared < low_relative_time) & (
relative_time_squared < np.poly1d(top_line_coeffs)(log_energy_squared)) & (
relative_time_squared > np.poly1d(bottom_line_coeffs)(log_energy_squared))) |
((relative_time_squared > high_relative_time) &
(log_energy_squared > 164) & (log_energy_squared < 195)))
Explanation: Looks like we could use a selection rule.
End of explanation
np.log(0)
hits_train.iloc[1]
is_signal(hits_train.iloc[1])
Explanation: Also, np.log(0) is -inf. And it is correctly handled.
End of explanation
from sklearn.metrics import roc_auc_score
hits_train_is_signal = (hits_train.label == 1)
roc_auc_score(hits_train_is_signal, is_signal(hits_train))
Explanation: Check how good the model describes the data.
End of explanation
prediction = pd.DataFrame({"prediction": is_signal(hits_test.loc[hits_test.energy_deposit > 0]).astype(np.int)})
prediction.to_csv("naive_manual_prediction.csv", index_label='global_id')
Explanation: Let's make a prediction for submission. Note the format: only the events with positive energy.
End of explanation
from IPython.display import FileLink
FileLink("naive_manual_prediction.csv")
Explanation: Download your predictions from the cluster.
End of explanation
from sklearn.tree import DecisionTreeClassifier
Explanation: Naive machine learning
End of explanation
from sklearn.cross_validation import cross_val_score
cv_gini = cross_val_score(DecisionTreeClassifier(criterion='gini'),
hits_train[['energy_deposit', 'relative_time']].values, (hits_train.label == 1).values.astype(np.int),
scoring='roc_auc')
print(cv_gini.mean(), cv_gini.std())
cv_entropy = cross_val_score(DecisionTreeClassifier(criterion='entropy'),
hits_train[['energy_deposit', 'relative_time']].values, (hits_train.label == 1).values.astype(np.int),
scoring='roc_auc')
print(cv_entropy.mean(), cv_entropy.std())
classifier = DecisionTreeClassifier(criterion='gini')
classifier.fit(hits_train[['energy_deposit', 'relative_time']], (hits_train.label == 1))
candidates = hits_test.loc[hits_test.energy_deposit > 0]
ml_prediction = pd.DataFrame({
"prediction": classifier.predict_proba(candidates[[
'energy_deposit', 'relative_time']])[:, 1]}, index=candidates.index)
ml_prediction.to_csv("naive_ml_prediction.csv", index_label='global_id')
Explanation: CV might take some time
End of explanation
FileLink("naive_ml_prediction.csv")
Explanation: Moral: sometimes you can outdo simple machine learning by thinking. Corollary: the best result is achieved by combining the approaches.
End of explanation |
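As one illustration of the corollary (an assumption-laden sketch, not part of the original analysis): the hand-crafted rule can be handed to the tree as an extra feature and cross-validated the same way as before:
combined = hits_train[['energy_deposit', 'relative_time']].copy()
combined['manual_rule'] = is_signal(hits_train).astype(np.int)
cv_combined = cross_val_score(DecisionTreeClassifier(criterion='gini'),
                              combined.values,
                              (hits_train.label == 1).values.astype(np.int),
                              scoring='roc_auc')
print(cv_combined.mean(), cv_combined.std())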
1,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistics
Step1: Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots.
Step2: R | Python Code:
%matplotlib inline
import pandas
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (16.0, 8.0)
df = pandas.read_csv('./stroopdata.csv')
df.describe()
Explanation: Statistics: The Science of Decisions Project Instructions
Background Information
In a Stroop task, participants are presented with a list of words, with each word displayed in a color of ink. The participant’s task is to say out loud the color of the ink in which the word is printed. The task has two conditions: a congruent words condition, and an incongruent words condition. In the congruent words condition, the words being displayed are color words whose names match the colors in which they are printed: for example RED, BLUE. In the incongruent words condition, the words displayed are color words whose names do not match the colors in which they are printed: for example PURPLE, ORANGE. In each case, we measure the time it takes to name the ink colors in equally-sized lists. Each participant will go through and record a time from each condition.
Questions For Investigation
As a general note, be sure to keep a record of any resources that you use or refer to in the creation of your project. You will need to report your sources as part of the project submission.
What is our independent variable? What is our dependent variable?
R: Independent: Words congruence condition. Dependent: Naming time.
What is an appropriate set of hypotheses for this task? What kind of statistical test do you expect to perform? Justify your choices.
R:
Where $\mu_{congruent}$ and $\mu_{incongruent}$ stand for congruent and incongruent population means, respectively:
$H_0: \mu_{congruent} = \mu_{incongruent} $ — The time to name the ink colors doesn't change with the congruency condition
$H_A: \mu_{congruent} \neq \mu_{incongruent} $ — The time to name the ink colors changes with the congruency condition
To perform the test I will use a 2-tailed paired t-test. A t-test is appropriate since we don't know the standard deviations of the population. A paired (dependent-samples) t-test is needed because both conditions are measured on the same participants and we don't know the population means. The sample size is below 30 (N=24), which is compatible with a t-test. I am also assuming that the population is normally distributed.
Now it's your chance to try out the Stroop task for yourself. Go to this link (https://faculty.washington.edu/chudler/java/ready.html), which has a Java-based applet for performing the Stroop task. Record the times that you received on the task (you do not need to submit your times to the site). Now, download this dataset (https://drive.google.com/file/d/0B9Yf01UaIbUgQXpYb2NhZ29yX1U/view?usp=sharing) which contains results from a number of participants in the task. Each row of the dataset contains the performance for one participant, with the first number their results on the congruent task and the second number their performance on the incongruent task.
Report some descriptive statistics regarding this dataset. Include at least one measure of central tendency and at least one measure of variability.
R: Central tendency: mean; measure of variability: standard deviation.
End of explanation
df.hist()
Explanation: Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots.
End of explanation
import math
df['differences'] = df['Incongruent']-df['Congruent']
N =df['differences'].count()
print "Sample size:\t\t%d"% N
print "DoF:\t\t\t%d"%(df['differences'].count()-1)
mean = df['differences'].mean()
std = df['differences'].std()
tscore = mean/(std/math.sqrt(N))
print "Differences Mean:\t%.3f" % mean
print "Differences Std:\t%.3f" % std
print "t-score:\t\t%.3f" %tscore
Explanation: R:
These histograms show that, in this sample, times are longer in the incongruent condition than in the congruent condition.
In the congruent condition, the interval with the most values is approximately between 14 and 16 seconds. In the incongruent condition, the interval with the most values is approximately (20, 22).
Now, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations?
R: I'm going to perform the test for a confidence level of 95%, which means that our t-critical values are {-2.069,2.069}
End of explanation |
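As a quick cross-check of the hand-computed statistic above, scipy's paired t-test gives the same t value along with a two-tailed p-value:
from scipy import stats
t_stat, p_value = stats.ttest_rel(df['Incongruent'], df['Congruent'])
print "t = %.3f, p = %.6f" % (t_stat, p_value)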
1,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data
Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step4: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labeled A through J.
Step6: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step7: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step8: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step9: Problem 4
Convince yourself that the data is still good after shuffling!
Finally, let's save the data for later reuse | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'https://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
data_root = '.' # Change me to store data elsewhere
def download_progress_hook(count, blockSize, totalSize):
A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 5% change in download progress.
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
Download a file if not present, and make sure it's the right size.
dest_filename = os.path.join(data_root, filename)
if force or not os.path.exists(dest_filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(dest_filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', dest_filename)
else:
raise Exception(
'Failed to verify ' + dest_filename + '. Can you get to it with a browser?')
return dest_filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall(data_root)
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labeled A through J.
End of explanation
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
Load the data for a single letter label.
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (imageio.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except (IOError, ValueError) as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
End of explanation
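One possible way to eyeball the raw images for Problem 1, assuming the letter folders extracted above (display and Image were already imported from IPython.display):
import random
# Show one randomly chosen example image from each letter folder
for folder in train_folders:
    sample_file = random.choice(os.listdir(folder))
    display(Image(filename=os.path.join(folder, sample_file)))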
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
Problem 3
Another check: we expect the data to be balanced across classes. Verify that.
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
End of explanation
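A possible sketch for Problems 2 and 3: peek at one letter pickle with matplotlib and count the examples per class to check the balance:
with open(train_datasets[0], 'rb') as f:
    letter_set = pickle.load(f)
plt.imshow(letter_set[0], cmap='gray')
plt.show()
# Class balance: the per-letter counts should be roughly equal
for pickle_name in train_datasets:
    with open(pickle_name, 'rb') as f:
        print(pickle_name, len(pickle.load(f)))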
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
pickle_file = os.path.join(data_root, 'notMNIST.pickle')
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
Explanation: Problem 4
Convince yourself that the data is still good after shuffling!
Finally, let's save the data for later reuse:
End of explanation |
1,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detecting Credit Card Fraud
In this notebook we will use GraphLab Create to identify a large majority of fraud cases in real-world data from an online retailer. Starting by a simple fraud classifier we will optimize it for the best available performance.<br>
The dataset is higly sensitive, thus it is anonymized and <b>cannot be shared</b>.
The notebook is orginaized into the following sections
Step1: We see that the data is highly categorical, and highly unbalanced.<br>
Let's visualize some part of the data.
Step2: <a id="features"></a> Create new features
Date features
Step3: Indicator features
Step4: Count features
Step5: We see that although most transactions have been paid for by a single credit card, some transactions have as much as 29 unique credit cards!<br>
Let's join the counts back into our dataset so we can visualize the number of unique cards per transaction vs fraud.
Step6: In total we created 9 new features. One can create any number of additional features which will be helpful to create a better fraud detector. For example, historical user features such as the number of transaction in a given timeframe.<br>
For the purposes of the webinar these features will be enough.
<a id="split"></a>Split data into train and test sets
First we will have to split the data into a training set and a testing set so we can evaluate our models. We will split it based on the date column, where the test set will be composed of the last six months of transactions.
Step7: <a id="model"></a>Create model to predict if a given transaction is fraudulent
Logistic Regression baseline
Step8: <b>Not a single fraud case was detected by the logistic regression model!</b><br>
As indicated while training the logistic regression model, some features are highly categorical, and when expanded result in <b>many</b> coefficients. We could address this by removing these features from the dataset, or by transforming these features into a more manageable form (e.g. Count Thresholder). For this webinar, we will leave these features as-is and will move on to a stronger classifier.
Boosted Trees Classifier
Step9: <b> 29 out of 33 fraud cases were detected by the boosted trees model.</b>
Let's tune the parameters of the model so we can squeeze extra performance out of it. In this example I chose parameters that were evaluated before hand, but GraphLab offers the functionality to do a distributed search across a grid of parameters. To learn more click here.
Step10: <b> The tuned model found one more fraud case than the previous un-tuned model, at the price of a few more false positives.</b> The desired balance between false positives and false negatives depends on the application. In fraud detection we may want to minimize false negatives so we can save more money, while false positives will just waste more time for a fraud detection expert inspecting transactions flagged by our model.
Step11: <a id="deploy"></a>Deploying the model into a resilient & elastic service
To connect to AWS, you will have to set your own AWS credentials by calling
Step12: RESTfully query the service | Python Code:
import graphlab as gl
data = gl.SFrame('fraud_detection.sf')
data.head(3)
len(data)
data.show()
Explanation: Detecting Credit Card Fraud
In this notebook we will use GraphLab Create to identify a large majority of fraud cases in real-world data from an online retailer. Starting from a simple fraud classifier, we will optimize it for the best available performance.<br>
The dataset is highly sensitive, thus it is anonymized and <b>cannot be shared</b>.
The notebook is organized into the following sections:
- <a href="#load">Load and explore the data</a>
- <a href="#features">Create new features</a>
- <a href="#split">Split data into train and test sets</a>
- <a href="#model">Create different models</a>
- <a href="#deploy">Deploy models as REST service</a>
This notebook is presented in the Detecting Credit Card Fraud webinar, one of many interesting webinars given by Turi. Check out upcoming webinars here.
<a id="load"></a> Load and explore the data
End of explanation
# Tell GraphLab to display canvas in the notebook itself
gl.canvas.set_target('ipynb')
data.show(view='BoxWhisker Plot', x='fraud', y='payment amount')
Explanation: We see that the data is highly categorical, and highly unbalanced.<br>
Let's visualize some part of the data.
End of explanation
# Transform string date into datetime type.
# This will help us further along to compare dates.
data['transaction date'] = data['transaction date'].str_to_datetime(str_format='%d.%m.%Y')
# Split date into its components and set them as categorical features
data.add_columns(data['transaction date'].split_datetime(limit=['year','month','day'], column_name_prefix='transaction'))
data['transaction.year'] = data['transaction.year'].astype(str)
data['transaction.month'] = data['transaction.month'].astype(str)
data['transaction.day'] = data['transaction.day'].astype(str)
# Create day of week feature and set it as a categorical feature
data['transaction week day'] = data['transaction date'].apply(lambda x: x.weekday())
data['transaction week day'] = data['transaction week day'].astype(str)
data.head(3)
Explanation: <a id="features"></a> Create new features
Date features
End of explanation
# Create new features and transform them into true/false indicators
data['same country'] = (data['customer country'] == data['business country']).astype(str)
data['same person'] = (data['customer'] == data['cardholder']).astype(str)
data['expiration near'] = (data['credit card expiration year'] == data['transaction.year']).astype(str)
Explanation: Indicator features
End of explanation
counts = data.groupby('transaction id', {'unique cards per transaction' : gl.aggregate.COUNT_DISTINCT('credit card number'),
'unique cardholders per transaction' : gl.aggregate.COUNT_DISTINCT('cardholder'),
'tries per transaction' : gl.aggregate.COUNT()})
counts.head(3)
counts.show()
Explanation: Count features
End of explanation
data = data.join(counts)
data.show(view='BoxWhisker Plot', x='fraud', y='unique cards per transaction')
print 'Number of columns', len(data.column_names())
Explanation: We see that although most transactions have been paid for by a single credit card, some transactions have as many as 29 unique credit cards!<br>
Let's join the counts back into our dataset so we can visualize the number of unique cards per transaction vs fraud.
End of explanation
from datetime import datetime
split = data['transaction date'] > datetime(2015, 6, 1)
data.remove_column('transaction date')
train = data[split == 0]
test = data[split == 1]
print 'Training set fraud'
train['fraud'].show()
print 'Test set fraud'
test['fraud'].show()
Explanation: In total we created 9 new features. One can create any number of additional features that would help build a better fraud detector, for example historical user features such as the number of transactions a user made in a given timeframe.<br>
For the purposes of the webinar these features will be enough.
<a id="split"></a>Split data into train and test sets
First we will have to split the data into a training set and a testing set so we can evaluate our models. We will split it based on the date column, where the test set will be composed of the last six months of transactions.
End of explanation
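A hedged sketch of the historical user features mentioned above, using the same groupby/aggregate API applied earlier to transaction counts. The new column name is an assumption, and a real implementation would restrict the count to a time window before each transaction and add the feature before the train/test split.
# Sketch only: a simple per-customer transaction count, added before splitting.
# A proper historical feature would count only transactions that happened
# before each transaction's date, e.g. within the last 30 days.
customer_counts = data.groupby('customer',
                               {'transactions per customer': gl.aggregate.COUNT()})
data = data.join(customer_counts, on='customer')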
logreg_model = gl.logistic_classifier.create(train,
target='fraud',
validation_set=None)
print 'Logistic Regression Accuracy', logreg_model.evaluate(test)['accuracy']
print 'Logistic Regression Confusion Matrix\n', logreg_model.evaluate(test)['confusion_matrix']
Explanation: <a id="model"></a>Create model to predict if a given transaction is fraudulent
Logistic Regression baseline
End of explanation
boosted_trees_model = gl.boosted_trees_classifier.create(train,
target='fraud',
validation_set=None)
print 'Boosted trees Accuracy', boosted_trees_model.evaluate(test)['accuracy']
print 'Boosted trees Confusion Matrix\n', boosted_trees_model.evaluate(test)['confusion_matrix']
Explanation: <b>Not a single fraud case was detected by the logistic regression model!</b><br>
As indicated while training the logistic regression model, some features are highly categorical, and when expanded result in <b>many</b> coefficients. We could address this by removing these features from the dataset, or by transforming these features into a more manageable form (e.g. Count Thresholder). For this webinar, we will leave these features as-is and will move on to a stronger classifier.
Boosted Trees Classifier
End of explanation
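The count-thresholding idea mentioned above could look roughly like the sketch below. It is an illustration, not code from the webinar: the transformer interface, the chosen feature columns, and the threshold of 10 are all assumptions.
# Sketch only: collapse rare category values before refitting the linear model.
# Assumes the CountThresholder transformer in gl.feature_engineering with roughly
# this interface; the feature list and threshold value are illustrative.
thresholder = gl.feature_engineering.create(
    train,
    gl.feature_engineering.CountThresholder(features=['customer', 'cardholder'],
                                            threshold=10))
train_small = thresholder.transform(train)
test_small = thresholder.transform(test)
logreg_small = gl.logistic_classifier.create(train_small, target='fraud',
                                             validation_set=None)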
boosted_trees_model = gl.boosted_trees_classifier.create(train,
target='fraud',
validation_set=None,
max_iterations=40,
max_depth=9,
class_weights='auto')
print 'Boosted trees Accuracy', boosted_trees_model.evaluate(test)['accuracy']
print 'Boosted trees Confusion Matrix\n', boosted_trees_model.evaluate(test)['confusion_matrix']
Explanation: <b> 29 out of 33 fraud cases were detected by the boosted trees model.</b>
Let's tune the parameters of the model so we can squeeze extra performance out of it. In this example I chose parameters that were evaluated beforehand, but GraphLab offers the functionality to do a distributed search across a grid of parameters. To learn more click here.
End of explanation
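The distributed grid search mentioned above might be set up roughly as follows; the model_parameter_search interface and the grid values are assumptions, and in practice you would search against a held-out validation set rather than the test set.
# Sketch only: parameter search over a small grid of boosted-trees settings.
# Assumes gl.model_parameter_search with roughly this interface; the values
# below are illustrative, not the ones chosen for the webinar.
params = {'target': 'fraud',
          'max_iterations': [20, 40, 80],
          'max_depth': [6, 9, 12],
          'class_weights': 'auto'}
job = gl.model_parameter_search.create((train, test),
                                       gl.boosted_trees_classifier.create,
                                       params)
search_results = job.get_results()   # one row per parameter combination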
# Inspect the features most used by the boosted trees model
boosted_trees_model.get_feature_importance()
Explanation: <b> The tuned model found one more fraud case than the previous untuned model, at the price of a few more false positives.</b> The desired balance between false positives and false negatives depends on the application. In fraud detection we may want to minimize false negatives so we lose less money to fraud, while a false positive only costs a fraud analyst extra time reviewing a flagged transaction.
End of explanation
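One hedged way to act on that false-positive/false-negative trade-off is to score probabilities and choose the decision threshold explicitly. The sketch below assumes the 'fraud' column is encoded as 0/1; the 0.3 threshold is purely illustrative.
# Sketch only: lower the decision threshold to miss fewer fraud cases at the
# cost of more false alarms. Sweep the threshold and inspect the confusion
# matrix to find the balance you want. Assumes 'fraud' is a 0/1 column.
probs = boosted_trees_model.predict(test, output_type='probability')
flagged = (probs > 0.3).astype(int)
print gl.evaluation.confusion_matrix(test['fraud'], flagged)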
state_path = 's3://gl-demo-usw2/predictive_service/demolab/ps-1.8.5'
ps = gl.deploy.predictive_service.load(state_path)
# Pickle and send the model over to the server.
ps.add('fraud', boosted_trees_model)
ps.apply_changes()
# Predictive services must be displayed in a browser
gl.canvas.set_target('browser')
ps.show()
Explanation: <a id="deploy"></a>Deploying the model into a resilient & elastic service
To connect to AWS, you will have to set your own AWS credentials by calling:
gl.aws.set_credentials(<your public key>,
                       <your private key>)
End of explanation
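The code above loads an existing predictive service; standing up your own might look roughly like the sketch below. The Ec2Config arguments, region, service name, and S3 state path are placeholders, not the configuration used in the webinar.
# Sketch only: creating a fresh predictive service rather than loading one.
# Region, instance type, service name and S3 state path are placeholders.
ec2 = gl.deploy.Ec2Config(region='us-west-2', instance_type='m3.xlarge')
my_ps = gl.deploy.predictive_service.create('fraud-detection-ps', ec2,
                                            's3://my-bucket/fraud-ps-state')
my_ps.add('fraud', boosted_trees_model)
my_ps.apply_changes()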
ps.query('fraud', method='predict', data={'dataset' : test[0]})
test[0]['fraud']
Explanation: RESTfully query the service
End of explanation |
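Outside of GraphLab, the same service can be queried over plain HTTP from any client. The endpoint path, the api_key field, and the payload layout below are assumptions about the REST interface; check the deployed service's documentation for the exact URL and key.
# Sketch only: hitting the deployed model over HTTP. The URL, API key field
# and payload shape are assumptions about the service's REST interface.
import requests

payload = {'api_key': '<your consumer api key>',
           'data': {'dataset': test[0]}}
resp = requests.post('http://<service-dns-name>/query/fraud', json=payload)
print resp.json()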
1,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Bioinformatics
This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics
Note
Step1: Chapter 12
Step2: Listing 12.1
Step3: Listing 12.2
Step4: Listing 12.3 | Python Code:
!curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/samples.tar.bz2 -o samples.tar.bz2
!mkdir samples
!tar xvfj samples.tar.bz2 -C samples
!wget https://raw.githubusercontent.com/Serulab/Py4Bio/master/code/ch12/PythonU.sql
!apt-get -y install mysql-server
!/etc/init.d/mysql start
!mysql -e 'create database PythonU;'
!mysql PythonU < PythonU.sql
!mysql -e "UPDATE mysql.user SET authentication_string=password('mypassword'),host='%',plugin='mysql_native_password' WHERE user='root';flush privileges;"
Explanation: Python for Bioinformatics
This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics
Note: The code in this chapter requires database servers to run (MySQL and MongoDB), so you should provide them and then change the appropriate parameters in the connection string. The SQLite example can run in this Jupyter Notebook.
End of explanation
!pip install PyMySQL
import pymysql
db = pymysql.connect(host="localhost", user="root", passwd="mypassword", db="PythonU")
cursor = db.cursor()
cursor.execute("SELECT * FROM Students")
cursor.fetchone()
cursor.fetchone()
cursor.fetchone()
cursor.fetchall()
Explanation: Chapter 12: Python and Databases
End of explanation
!/etc/init.d/mysql stop
get_ipython().system_raw('mysqld_safe --skip-grant-tables &')
!mysql -e "UPDATE mysql.user SET authentication_string=password('secret'),host='%',plugin='mysql_native_password' WHERE user='root';flush privileges;"
import pymysql
db = pymysql.connect(host='localhost',
user='root', passwd='secret', db='PythonU')
cursor = db.cursor()
recs = cursor.execute('SELECT * FROM Students')
for x in range(recs):
print(cursor.fetchone())
Explanation: Listing 12.1: pymysql1.py: Reading results one at a time
End of explanation
import pymysql
db = pymysql.connect(host='localhost',
user='root', passwd='secret', db='PythonU')
cursor = db.cursor()
cursor.execute('SELECT * FROM Students')
for row in cursor:
print(row)
Explanation: Listing 12.2: pymysql2.py: Iterating directly over the DB cursor
End of explanation
import sqlite3
db = sqlite3.connect('samples/PythonU.db')
cursor = db.cursor()
cursor.execute('Select * from Students')
for row in cursor:
print(row)
!apt install mongodb
!/etc/init.d/mongodb start
from pymongo import MongoClient
from pymongo import MongoClient
client = MongoClient('localhost:27017')
client.list_database_names()
db = client.PythonU
client.list_database_names()
client.drop_database('Employee')
students = db.Students
student_1 = {'Name':'Harry', 'LastName':'Wilkinson',
'DateJoined':'2016-02-10', 'OutstandingBalance':False,
'Courses':[('Python 101', 7, '2016/1'), ('Mathematics for CS',
8, '2016/1')]}
student_2 = {'Name':'Jonathan', 'LastName':'Hunt',
'DateJoined':'2014-02-16', 'OutstandingBalance':False,
'Courses':[('Python 101', 6, '2016/1'), ('Mathematics for CS',
9, '2015/2')]}
students.count()
students.insert(student_1)
students.insert(student_2)
students.count()
from bson.objectid import ObjectId
# Replace this hard-coded ObjectId with an _id returned by the inserts above;
# this literal value will not match any document in a freshly built collection.
search_id = {'_id': ObjectId('5ed902d980378228f849a40d')}
my_student = students.find_one(search_id)
my_student['LastName']
my_student['_id'].generation_time
for student in students.find():
print(student['Name'], student['LastName'])
list(students.find())
Explanation: Listing 12.3: sqlite1.py: Same script as 12.2, but with SQLite
End of explanation |