Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k) |
---|---|---|
4,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Object reconstruction from a cloud of points in 2D using $\alpha$-shapes and Vietoris-Rips complexes
Step4: Now construct $\alpha$-shapes and Vietoris-Rips complex from this cloud of points.
Step5: Define functions that produce a drawing of a given simplicial complex. | Python Code:
import dionysus
import math
from random import random
from matplotlib import pyplot
def generate_circle(n, radius, max_noise):
"""Generate n points on a circle with the center in the point (0, 0)
and the given radius.
Noise is added so that the distance from a generated point to the
circle does not exceed radius * max_noise.
Returns the list of generated points.
"""
points = []
for i in range(n):
angle = 2 * math.pi * random()
noise = max_noise * random()
r = radius * (1 + noise)
point = [r * math.cos(angle), r * math.sin(angle)]
points.append(point)
return points
def plot_points(points):
"""Plot the given list of points using matplotlib."""
xs, ys = map(list, zip(*points))
pyplot.axis([min(xs)-1, max(xs)+1,min(ys)-1,max(ys)+1])
pyplot.plot(xs, ys, 'ro')
%matplotlib notebook
circle = generate_circle(70, 3, 0.1)
plot_points(circle)
Explanation: Object reconstruction from a cloud of points in 2D using $\alpha$-shapes and Vietoris-Rips complexes
End of explanation
from dionysus import Rips, PairwiseDistances, StaticPersistence, Filtration, points_file, \
ExplicitDistances, data_dim_cmp
import time
def rips(points, skeleton, max):
"""Generate the Vietoris-Rips complex on the given set of points in 2D.
Only simplices up to dimension skeleton are computed.
The max parameter denotes the radius used in the VR construction.
"""
distances = PairwiseDistances(points)
rips = Rips(distances)
simplices = Filtration()
rips.generate(skeleton, max, simplices.append)
print time.asctime(), "Generated complex: %d simplices" % len(simplices)
# While this step is unnecessary (Filtration below can be passed rips.cmp),
# it greatly speeds up the running times
for s in simplices: s.data = rips.eval(s)
print time.asctime(), simplices[0], '...', simplices[-1]
return [list(simplex.vertices) for simplex in simplices]
Explanation: Now construct $\alpha$-shapes and Vietoris-Rips complex from this cloud of points.
End of explanation
def get_points(points, indices):
return [points[index] for index in indices]
def draw_triangle(triangle):
p1, p2, p3 = triangle
pyplot.plot([p1[0], p2[0]],[p1[1],p2[1]])
pyplot.plot([p1[0], p3[0]],[p1[1],p3[1]])
pyplot.plot([p2[0], p3[0]],[p2[1],p3[1]])
def draw_line(line):
p1, p2 = line
pyplot.plot([p1[0], p2[0]],[p1[1],p2[1]])
def draw_point(point):
p, = point  # unpack the single point returned by get_points
pyplot.plot([p[0]], [p[1]], 'ro')
def draw_simplicial_complex(simplices, points):
handlers = [draw_point, draw_line, draw_triangle]
for simplex in simplices:
handler = handlers[len(simplex)-1]
handler(get_points(points, simplex))
%matplotlib notebook
rips_complex = rips(points=circle, skeleton=2, max=1)
draw_simplicial_complex(rips_complex, circle)
from dionysus import Filtration, fill_alpha_complex
def alpha(points, radius):
f = Filtration()
fill_alpha_complex(points, f)
ret = [list(s.vertices) for s in f if s.data[0] <= radius]
print "Total number of simplices:", len(ret)
return ret
%matplotlib notebook
alpha_shapes = alpha(circle, 0.25)
draw_simplicial_complex(alpha_shapes, circle)
Explanation: Define functions that produce a drawing of a given simplicial complex.
End of explanation |
4,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word Embeddings
Learning Objectives
You will learn
Step1: This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
Step2: Download the IMDb Dataset
You will use the Large Movie Review Dataset through the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the Loading text tutorial.
Download the dataset using Keras file utility and take a look at the directories.
Step3: Take a look at the train/ directory. It has pos and neg folders with movie reviews labelled as positive and negative respectively. You will use reviews from pos and neg folders to train a binary classification model.
Step4: The train directory also has additional folders which should be removed before creating training dataset.
Step5: Next, create a tf.data.Dataset using tf.keras.preprocessing.text_dataset_from_directory. You can read more about using this utility in this text classification tutorial.
Use the train directory to create both train and validation datasets with a split of 20% for validation.
Step6: Take a look at a few movie reviews and their labels (1
Step7: Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.
.cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
.prefetch() overlaps data preprocessing and model execution while training.
You can learn more about both methods, as well as how to cache data to disk in the data performance guide.
Step8: Using the Embedding layer
Keras makes it easy to use word embeddings. Take a look at the Embedding layer.
The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
Step9: When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).
If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table
Step10: For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15).
The returned tensor has one more axis than the input; the embedding vectors are aligned along the new last axis. Pass it a (2, 3) input batch and the output is (2, 3, N)
Step11: When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape (samples, sequence_length, embedding_dimensionality). To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The Text Classification with an RNN tutorial is a good next step.
Text preprocessing
Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the Text Classification tutorial.
Step12: Create a classification model
Use the Keras Sequential API to define the sentiment classification model. In this case it is a "Continuous bag of words" style model.
* The TextVectorization layer transforms strings into vocabulary indices. You have already initialized vectorize_layer as a TextVectorization layer and built its vocabulary by calling adapt on text_ds. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.
* The Embedding layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are
Step13: Compile and train the model
Create a tf.keras.callbacks.TensorBoard.
Step14: Compile and train the model using the Adam optimizer and BinaryCrossentropy loss.
Step15: With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting since training accuracy is higher).
Note
Step16: Visualize the model metrics in TensorBoard.
Step17: Run the following command in Cloud Shell
Step18: Write the weights to disk. To use the Embedding Projector, you will upload two files in tab separated format
Step19: Two files will be created, vectors.tsv and metadata.tsv. Download both files. | Python Code:
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import io
import os
import re
import shutil
import string
import tensorflow as tf
from datetime import datetime
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Embedding, GlobalAveragePooling1D
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
Explanation: Word Embeddings
Learning Objectives
You will learn:
How to use Embedding layer
How to create a classification model
Compile and train the model
How to retrieve the trained word embeddings, save them to disk and visualize it.
Introduction
This notebook contains an introduction to word embeddings. You will train your own word embeddings using a simple Keras model for a sentiment classification task, and then visualize them in the Embedding Projector (shown in the image below).
Representing text as numbers
Machine learning models take vectors (arrays of numbers) as input. When working with text, the first thing you must do is come up with a strategy to convert strings to numbers (or to "vectorize" the text) before feeding it to the model. In this section, you will look at three strategies for doing so.
One-hot encodings
As a first idea, you might "one-hot" encode each word in your vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, you will create a zero vector with length equal to the vocabulary, then place a one in the index that corresponds to the word. This approach is shown in the following diagram.
To create a vector that contains the encoding of the sentence, you could then concatenate the one-hot vectors for each word.
Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning, most indices are zero). Imagine you have 10,000 words in the vocabulary. To one-hot encode each word, you would create a vector where 99.99% of the elements are zero.
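To make the idea concrete, here is a tiny illustration (not part of the original notebook) of one-hot encoding the example vocabulary above:
vocabulary = ['cat', 'mat', 'on', 'sat', 'the']
def one_hot(word):
    vec = [0] * len(vocabulary)        # a zero vector with length equal to the vocabulary
    vec[vocabulary.index(word)] = 1    # place a one at the index that corresponds to the word
    return vec
one_hot('cat')   # [1, 0, 0, 0, 0]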
Encode each word with a unique number
A second approach you might try is to encode each word using a unique number. Continuing the example above, you could assign 1 to "cat", 2 to "mat", and so on. You could then encode the sentence "The cat sat on the mat" as a dense vector like [5, 1, 4, 3, 5, 2]. This approach is efficient. Instead of a sparse vector, you now have a dense one (where all elements are full).
There are two downsides to this approach, however:
The integer-encoding is arbitrary (it does not capture any relationship between words).
An integer-encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful.
Word embeddings
Word embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, you do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), up to 1024-dimensions when working with large datasets. A higher dimensional embedding can capture fine-grained relationships between words, but takes more data to learn.
Above is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating point values. Another way to think of an embedding is as "lookup table". After these weights have been learned, you can encode each word by looking up the dense vector it corresponds to in the table.
Each learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference.
Setup
End of explanation
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
Explanation: This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
End of explanation
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1.tar.gz", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
Explanation: Download the IMDb Dataset
You will use the Large Movie Review Dataset through the tutorial. You will train a sentiment classifier model on this dataset and in the process learn embeddings from scratch. To read more about loading a dataset from scratch, see the Loading text tutorial.
Download the dataset using Keras file utility and take a look at the directories.
End of explanation
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
Explanation: Take a look at the train/ directory. It has pos and neg folders with movie reviews labelled as positive and negative respectively. You will use reviews from pos and neg folders to train a binary classification model.
End of explanation
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
Explanation: The train directory also has additional folders which should be removed before creating training dataset.
End of explanation
batch_size = 1024
seed = 123
train_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='training', seed=seed)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
'aclImdb/train', batch_size=batch_size, validation_split=0.2,
subset='validation', seed=seed)
Explanation: Next, create a tf.data.Dataset using tf.keras.preprocessing.text_dataset_from_directory. You can read more about using this utility in this text classification tutorial.
Use the train directory to create both train and validation datasets with a split of 20% for validation.
End of explanation
for text_batch, label_batch in train_ds.take(1):
for i in range(5):
print(label_batch[i].numpy(), text_batch.numpy()[i])
Explanation: Take a look at a few movie reviews and their labels (1: positive, 0: negative) from the train dataset.
End of explanation
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
Explanation: Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.
.cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
.prefetch() overlaps data preprocessing and model execution while training.
You can learn more about both methods, as well as how to cache data to disk in the data performance guide.
End of explanation
# Embed a 1,000 word vocabulary into 5 dimensions.
# TODO: Your code goes here
Explanation: Using the Embedding layer
Keras makes it easy to use word embeddings. Take a look at the Embedding layer.
The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
End of explanation
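A possible completion of the TODO cell above is sketched below; the variable name embedding_layer is what the next cell expects, and the 1,000-word / 5-dimension sizes come from the comment in that cell.
# Sketch only: embed a 1,000 word vocabulary into 5 dimensions.
embedding_layer = tf.keras.layers.Embedding(1000, 5)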
result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
Explanation: When you create an Embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings will roughly encode similarities between words (as they were learned for the specific problem your model is trained on).
If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table:
End of explanation
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
Explanation: For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed into the embedding layer above batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15).
The returned tensor has one more axis than the input; the embedding vectors are aligned along the new last axis. Pass it a (2, 3) input batch and the output is (2, 3, N)
End of explanation
# Create a custom standardization function to strip HTML break tags '<br />'.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation), '')
# Vocabulary size and number of words in a sequence.
vocab_size = 10000
sequence_length = 100
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Note that the layer uses the custom standardization defined above.
# Set maximum_sequence length as all samples are not of the same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
# Make a text-only dataset (no labels) and call adapt to build the vocabulary.
text_ds = train_ds.map(lambda x, y: x)
vectorize_layer.adapt(text_ds)
Explanation: When given a batch of sequences as input, an embedding layer returns a 3D floating point tensor, of shape (samples, sequence_length, embedding_dimensionality). To convert from this sequence of variable length to a fixed representation there are a variety of standard approaches. You could use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it's the simplest. The Text Classification with an RNN tutorial is a good next step.
Text preprocessing
Next, define the dataset preprocessing steps required for your sentiment classification model. Initialize a TextVectorization layer with the desired parameters to vectorize movie reviews. You can learn more about using this layer in the Text Classification tutorial.
End of explanation
embedding_dim=16
# TODO: Your code goes here
Explanation: Create a classification model
Use the Keras Sequential API to define the sentiment classification model. In this case it is a "Continuous bag of words" style model.
* The TextVectorization layer transforms strings into vocabulary indices. You have already initialized vectorize_layer as a TextVectorization layer and built its vocabulary by calling adapt on text_ds. Now vectorize_layer can be used as the first layer of your end-to-end classification model, feeding transformed strings into the Embedding layer.
* The Embedding layer takes the integer-encoded vocabulary and looks up the embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: (batch, sequence, embedding).
The GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.
The fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units.
The last layer is densely connected with a single output node.
Caution: This model doesn't use masking, so the zero-padding is used as part of the input and hence the padding length may affect the output. To fix this, see the masking and padding guide.
End of explanation
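One possible way to fill in the model TODO above, following the layer-by-layer description; this is a sketch, not the official solution. The variable name model is assumed by the later fit/summary cells, and the layer name "embedding" is an assumption added so the trained weights can be looked up by name later.
model = Sequential([
    vectorize_layer,
    Embedding(vocab_size, embedding_dim, name="embedding"),
    GlobalAveragePooling1D(),
    Dense(16, activation='relu'),
    Dense(1)
])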
# TODO: Your code goes here
Explanation: Compile and train the model
Create a tf.keras.callbacks.TensorBoard.
End of explanation
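A minimal sketch for the TensorBoard TODO above; the name tensorboard_callback is what the model.fit call in the next cell expects, and "logs" is an assumed log directory.
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")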
# TODO: Your code goes here
model.fit(
train_ds,
validation_data=val_ds,
epochs=10,
callbacks=[tensorboard_callback])
Explanation: Compile and train the model using the Adam optimizer and BinaryCrossentropy loss.
End of explanation
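A possible completion of the compile TODO above, assuming the final Dense layer returns raw logits (hence from_logits=True).
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])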
model.summary()
Explanation: With this approach the model reaches a validation accuracy of around 84% (note that the model is overfitting since training accuracy is higher).
Note: Your results may be a bit different, depending on how weights were randomly initialized before training the embedding layer.
You can look into the model summary to learn more about each layer of the model.
End of explanation
!tensorboard --bind_all --port=8081 --logdir logs
Explanation: Visualize the model metrics in TensorBoard.
End of explanation
weights = # TODO: Your code goes here
vocab = # TODO: Your code goes here
Explanation: Run the following command in Cloud Shell:
<code>gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081</code>
Make sure to replace <instance-zone>, <notebook-instance-name> and <project-id>.
In Cloud Shell, click Web Preview > Change Port and insert port number 8081. Click Change and Preview to open the TensorBoard.
To quit the TensorBoard, click Kernel > Interrupt kernel.
Retrieve the trained word embeddings and save them to disk
Next, retrieve the word embeddings learned during training. The embeddings are weights of the Embedding layer in the model. The weights matrix is of shape (vocab_size, embedding_dimension).
Obtain the weights from the model using get_layer() and get_weights(). The get_vocabulary() function provides the vocabulary to build a metadata file with one token per line.
End of explanation
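A sketch of the two TODOs above; the layer name "embedding" is an assumption and must match whatever name the Embedding layer was given when the model was built.
weights = model.get_layer('embedding').get_weights()[0]   # shape: (vocab_size, embedding_dim)
vocab = vectorize_layer.get_vocabulary()                   # one token per row of the weights matrix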
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
Explanation: Write the weights to disk. To use the Embedding Projector, you will upload two files in tab separated format: a file of vectors (containing the embedding), and a file of meta data (containing the words).
End of explanation
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
Explanation: Two files will be created, vectors.tsv and metadata.tsv. Download both files.
End of explanation |
4,202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Key Requirements for the iRF scikit-learn implementation
The following is a documentation of the main requirements for the iRF implementation
Pseudocode iRF implementation
Step 0
Step1: Step 1
Step2: Step 2
Step3: Step 2.2 Display Feature Importances Graphically (just for interest)
Step4: Step 3
Step5: Get the second Decision tree to use for testing
Step6: Write down an efficient Binary Tree Traversal Function | Python Code:
# Setup
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn import tree
import numpy as np
# Define a function to draw the decision trees in IPython
# Adapted from: http://scikit-learn.org/stable/modules/tree.html
from IPython.display import display, Image
import pydotplus
# Custom util functions
from utils import utils
# Set seed for reproducibility
np.random.seed(1015)
Explanation: Key Requirements for the iRF scikit-learn implementation
The following is a documentation of the main requirements for the iRF implementation
Pseudocode iRF implementation
Step 0: Setup
Import required libraries and set up the seed value for reproducibility
Keep all custom functions in utils/utils.py
Inputs:
* $D = \{(X_{i}, Y_{i})\}$ with $X_{i} \in \mathbb{R}^{p}$ and $Y_{i} \in \{0, 1\}$; $C \in \{0, 1\}$, $B$, $K$
End of explanation
# Load the iris data
iris = load_iris()
# Create the train-test datasets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target)
np.random.seed(1039)
# Just fit a simple random forest classifier with 2 decision trees
rf = RandomForestClassifier(n_estimators = 2)
rf.fit(X = X_train, y = y_train)
# Now plot the trees individually
for idx, dtree in enumerate(rf.estimators_):
print(idx)
utils.draw_tree(inp_tree = dtree)
#utils.draw_tree(inp_tree = rf.estimators_[1])
Explanation: Step 1: Fit the Initial Random Forest
Just fit every feature with equal weights per the usual random forest code e.g. RandomForestClassifier in scikit-learn
End of explanation
importances = rf.feature_importances_
std = np.std([dtree.feature_importances_ for dtree in rf.estimators_]
, axis=0)
indices = np.argsort(importances)[::-1]
# Check that the feature importances are standardized to 1
print(sum(importances))
Explanation: Step 2: Get the Gini Importance of Weights
For the first random forest we just need to get the Gini Importance of Weights
Step 2.1 Get them numerically - most important
End of explanation
# Print the feature ranking
print("Feature ranking:")
for f in range(X_train.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), indices)
plt.xlim([-1, X_train.shape[1]])
plt.show()
Explanation: Step 2.2 Display Feature Importances Graphically (just for interest)
End of explanation
feature_names = ["X" + str(i) for i in range(X_train.shape[1])]
target_vals = list(np.sort(np.unique(y_train)))
target_names = ["y" + str(i) for i in target_vals]
print(feature_names)
print(target_names)
Explanation: Step 3: For each Tree get core leaf node features
For each decision tree in the classifier, get:
The list of leaf nodes
Depth of the leaf node
Leaf node predicted class i.e. {0, 1}
Probability of predicting class in leaf node
Number of observations in the leaf node i.e. weight of node
Name the Features
End of explanation
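A sketch (not part of the original notebook) of how the leaf-node properties listed above can be read directly off a fitted sklearn decision tree; leaf depth still needs a traversal such as the path functions written further below.
def leaf_node_summary(dtree):
    tree_ = dtree.tree_
    leaf_ids = np.where(tree_.children_left == -1)[0]   # leaves have no left child (-1, i.e. _tree.TREE_LEAF)
    summary = {}
    for leaf in leaf_ids:
        counts = tree_.value[leaf][0]                    # per-class sample counts in this leaf
        summary[leaf] = {
            'predicted_class': int(np.argmax(counts)),
            'class_probability': float(counts.max()) / counts.sum(),
            'n_samples': int(tree_.n_node_samples[leaf]),
        }
    return summary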
estimator = rf.estimators_[1]
from sklearn.tree import _tree
estimator.tree_.node_count
estimator.tree_.children_left[0]
estimator.tree_.children_right[0]
_tree.TREE_LEAF
Explanation: Get the second Decision tree to use for testing
End of explanation
# Now plot the trees individually
utils.draw_tree(inp_tree = estimator)
def binaryTreePaths(dtree, root_node_id = 0):
# Use these lists to parse the tree structure
children_left = dtree.tree_.children_left
children_right = dtree.tree_.children_right
if root_node_id is None:
return []
if root_node_id == _tree.TREE_LEAF:
raise ValueError("Invalid node_id %s" % _tree.TREE_LEAF)
# if left/right is None we'll get empty list anyway
if children_left[root_node_id] != _tree.TREE_LEAF:
paths = [str(root_node_id) + '->' + str(l)
for l in binaryTreePaths(dtree, children_left[root_node_id]) +
binaryTreePaths(dtree, children_right[root_node_id])]
else:
paths = [root_node_id]
return paths
x1 = binaryTreePaths(rf.estimators_[1], root_node_id = 0)
x1
def binaryTreePaths2(dtree, root_node_id = 0):
# Use these lists to parse the tree structure
children_left = dtree.tree_.children_left
children_right = dtree.tree_.children_right
if root_node_id is None:
return []
if root_node_id == _tree.TREE_LEAF:
raise ValueError("Invalid node_id %s" % _tree.TREE_LEAF)
# if left/right is None we'll get empty list anyway
if children_left[root_node_id] != _tree.TREE_LEAF:
paths = [np.append(root_node_id, l)
for l in binaryTreePaths2(dtree, children_left[root_node_id]) +
binaryTreePaths2(dtree, children_right[root_node_id])]
else:
paths = [root_node_id]
return paths
x = binaryTreePaths2(rf.estimators_[1], root_node_id = 0)
x
leaf_nodes = [y[-1] for y in x]
leaf_nodes
n_node_samples = estimator.tree_.n_node_samples
num_samples = [n_node_samples[y].astype(int) for y in leaf_nodes]
print(n_node_samples)
print(len(n_node_samples))
num_samples
print(num_samples)
print(sum(num_samples))
print(sum(n_node_samples))
X_train.shape
value = estimator.tree_.value
values = [value[y].astype(int) for y in leaf_nodes]
print(values)
# This should match the number of rows in the training feature set
print(sum(values).sum())
values
feature_names = ["X" + str(i) for i in range(X_train.shape[1])]
np.asarray(feature_names)
print(type(feature_names))
print(feature_names[0])
print(feature_names[-2])
feature = estimator.tree_.feature
z = [feature[y].astype(int) for y in x]
z
#[feature_names[i] for i in z]
max_dpth = estimator.tree_.max_depth
max_dpth
max_n_class = estimator.tree_.max_n_classes
max_n_class
print("nodes", np.asarray(a = nodes, dtype = "int64"), sep = ":\n")
print("node_depth", node_depth, sep = ":\n")
print("leaf_node", is_leaves, sep = ":\n")
print("feature_names", used_feature_names, sep = ":\n")
print("feature", feature, sep = ":\n")
Explanation: Write down an efficient Binary Tree Traversal Function
End of explanation |
4,203 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TFLearn Subject Verb Agreement Error Detection 2
This notebook is based off the original fragment detection notebook, but specific to detection of participle phrase fragments.
As our training data we will use 799,675 correct sentences and, of a total 12,743,496 sentences with subject verb agreement errors, we will use a randomly chosen 799,675.
The labels will be either a 1 or 0, where 1 indicates a sentence with a subject verb agreement error and 0 indicates there are no subject verb agreement errors.
Because some libraries used require python 2.7, this jupyter notebook may not be able to run but it is used to document process.
Install Dependencies
Step1: Load Datafiles
Step2: Shuffle the data
Step4: Get verb phrase keys for sentence
Step5: Key counts
Step6: Take the trigrams and index them
Step7: Chunking the data for TF
Step8: Setting up TF
Step9: Initialize
Step10: Training
Step11: Playground
Step12: Save the vocab | Python Code:
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
import spacy
import re
from textstat.textstat import textstat
from pattern.en import lexeme, tenses
from pattern.en import pluralize, singularize
import sqlite3
import hashlib
import textacy  # used below for textacy.Doc and textacy.extract
nlp = spacy.load('en_core_web_lg')
conn = sqlite3.connect('db/mangled_agreement.db')
cursor = conn.cursor()
#from nltk.util import ngrams, trigrams
#import csv
#import pandas as pd
Explanation: TFLearn Subject Verb Agreement Error Detection 2
This notebook is based off the original fragment detection notebook, but specific to detection of participle phrase fragments.
As our training data we will use 799,675 correct sentences and, of a total 12,743,496 sentences with subject verb agreement errors, we will use a randomly chosen 799,675.
The labels will be either a 1 or 0, where 1 indicates a sentence with a subject verb agreement error and 0 indicates there are no subject verb agreement errors.
Because some libraries used require python 2.7, this jupyter notebook may not be able to run but it is used to document process.
Install Dependencies
End of explanation
# TODO: This is kind of memory intensive don'tcha think?
texts = []
labels = []
# add 0 label to correct sentences
for row in cursor.execute("SELECT sentence FROM orignal_sentences"):
texts.append(row[0].strip())
labels.append(0)
# add 1 label to sentences with a subject verb agreement error, limit should match the number of original sentences
for row in cursor.execute("SELECT sentence FROM mangled_sentences ORDER BY RANDOM() LIMIT 799675"):
texts.append(row[0].strip())
labels.append(1)
print(texts[-10:])
conn.close() # done with sqlite connection
Explanation: Load Datafiles
End of explanation
import random
combined = list(zip(texts,labels))
random.shuffle(combined)
texts[:], labels[:] = zip(*combined)
print(texts[-10:])
print(labels[-10:])
Explanation: Shuffle the data
End of explanation
def get_verb_phrases(sentence_doc):
"""Returns an object like,
[(1), (5,6,7)]
where this means 2 verb phrases: a single verb at index 1, another verb phrase 5,6,7.
- Adverbs are not included.
- Infinitive phrases (and verb phrases that are subsets of infinitive phrases) are not included.
"""
pattern = r'<VERB>*<ADV>*<VERB>+' # r'<VERB>?<ADV>*<VERB>+' is suggested by textacy site
verb_phrases = textacy.extract.pos_regex_matches(sentence_doc, pattern)
sentence_str = sentence_doc.text
index_2_word_no = {} # the starting position for each word to its number{0:0, 3:1, 7:2, 12:3}
for word in sentence_doc:
index_2_word_no[word.idx] = word.i  # map each token's character offset to its word number
result = [] # [(1), (5,6,7)] => 2 verb phrases. a single verb at index 1, another verb phrase 5,6,7
for vp in verb_phrases:
word_numbers = []
# return the index of 'could have been happily eating' from 'She could have been happily eating chowder'
str_idx = sentence_str.index(vp.text)
first_word = index_2_word_no[str_idx] # word number for first word of verb phrase
x = first_word
if len(vp) > 1:
for verb_or_adverb in vp:
# filter out adverbs
if not verb_or_adverb.pos_ == 'ADV':
word_numbers.append(x)
x += 1
else:
word_numbers.append(first_word)
# filter out infinitive phrases
if ( (word_numbers[0] - 1) < 0) or (sentence_doc[word_numbers[0] - 1].text.lower() != 'to'):
result.append(word_numbers)
return result
def singular_or_plural(word_string):
if word_string == singularize(word_string):
return 'SG'
else:
return 'PL'
def sentence_to_keys(sentence):
doc = textacy.Doc(sentence, lang='en_core_web_lg')
# [(1), (5,6,7)] => 2 verb phrases. a single verb at index 1, another verb phrase 5,6,7
verb_phrases = get_verb_phrases(doc)
# doc = this could be my sentence
# doc_list = [this, -595002753822348241, 15488046584>THIS, my sentence]
# final_keys = [-595002753822348241:15488046584>THIS]
#
# doc = Jane is only here for tonight
# doc_list = [Jane, 13440080745121162>SG, only, here, for, tonight ]
# final_keys = [13440080745121162>SG]
doc_list = []
for word in doc:
if word.pos_ == 'VERB':
tense_hash = hashlib.sha256((str(tenses(word.text)))).hexdigest()
verb_number_or_pronoun = ''
for child in word.children:
if child.dep_ == 'nsubj':
if child.pos_ == 'PRON':
verb_number_or_pronoun = child.text.upper()
else:
verb_number_or_pronoun = singular_or_plural(child.text)
break
doc_list.append(tense_hash + '>' + verb_number_or_pronoun)
else:
doc_list.append(word.text)
# Get final keys
final_keys = []
for vp in verb_phrases:
vp_key_list = []
for word_no in vp:
vp_key_list.append(doc_list[word_no])
vp_key = ':'.join(vp_key_list)
final_keys.append(vp_key)
return final_keys
sentence_to_keys(texts[3])
Explanation: Get verb phrase keys for sentence
End of explanation
from collections import Counter
c = Counter()
for textString in texts:
c.update(sentence_to_keys(textString))
total_counts = c
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: Key counts
End of explanation
word2idx = {n: i for i, n in enumerate(vocab)}## create the word-to-index dictionary here
print(word2idx)
def text_to_vector(text):
wordVector = np.zeros(len(vocab))
for word in sentence_to_keys(text):
index = word2idx.get(word, None)
if index != None:
wordVector[index] += 1
return wordVector
text_to_vector('Donald, standing on the precipice, began to dance.')[:65]
word_vectors = np.zeros((len(texts), len(vocab)), dtype=np.int_)
for ii, text in enumerate(texts):
word_vectors[ii] = text_to_vector(text)
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Take the trigrams and index them
End of explanation
records = len(labels)
test_fraction = 0.9
train_split, test_split = int(records*test_fraction), int(records*(1-test_fraction))
print(train_split, test_split)
trainX, trainY = word_vectors[:train_split], to_categorical(labels[:train_split], 2)
testX, testY = word_vectors[test_split:], to_categorical(labels[test_split:], 2)
trainX[-1], trainY[-1]
len(trainY), len(testY), len(trainY) + len(testY)
Explanation: Chunking the data for TF
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, len(vocab)]) # Input
net = tflearn.fully_connected(net, 200, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 25, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
len(vocab)
Explanation: Setting up TF
End of explanation
model = build_model()
Explanation: Initialize
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=50)
# Testing
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
w = csv.writer(open("../models/subjectverbagreementindex.csv", "w"))
for key, val in word2idx.items():
w.writerow([key, val])
model.save("../models/subject_verb_agreement_model.tfl")
Explanation: Training
End of explanation
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence)])[0][1]
print('Is this a participle phrase fragment?\n {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Yes' if positive_prob > 0.5 else 'No')
test_sentence("Neglecting to recognize the horrors those people endure allow people to go to war more easily.")
test_sentence("Katherine, gesticulating wildly and dripping in sweat, kissed him on the cheek.")
test_sentence("Working far into the night in an effort to salvage her little boat.")
test_sentence("Working far into the night in an effort to salvage her little boat, she slowly grew tired.")
test_sentence("Rushing to the rescue with his party.")
test_sentence("Isobel was about thirteen now, and as pretty a girl, according to Buzzby, as you could meet with in any part of Britain.")
test_sentence("Being of a modest and retiring disposition, Mr. Hawthorne avoided publicity.")
test_sentence("Clambering to the top of a bridge, he observed a great rainbow")
test_sentence("Clambering to the top of a bridge.")
test_sentence("He observed a great rainbow.")
test_sentence("Sitting on the iron throne, Joffry looked rather fat.")
test_sentence("Worrying that a meteor or chunk of space debris will conk her on the head.")
test_sentence("Aunt Olivia always wears a motorcycle helmet, worrying that a meteor or chunk of space debris will conk her on the head")
test_sentence("Affecting the lives of many students in New York City.")
test_sentence("Quill was a miracle, affecting the lives of many students in New York City.")
test_sentence("Standing on the edge of the cliff looking down.")
test_sentence("Emilia, standing on the edge of the cliff and looking down, began to weep.")
test_sentence("Standing on the edge of the cliff and looking down, Emilia began to weep.")
test_sentence("Tired and needing sleep.")
Explanation: Playground
End of explanation
vocab
Explanation: Save the vocab
End of explanation |
4,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook the initial steps towards solving the capstone project will be taken: some data gathering and other preliminary work.
Step1: Getting the data
Step2: So, Google has a limit of 15 years of data on each query
Step3: Keep dictionary or use multiindex? | Python Code:
import yahoo_finance
import requests
import datetime
def print_unix_timestamp_date(timestamp):
print(
datetime.datetime.fromtimestamp(
int(timestamp)
).strftime('%Y-%m-%d %H:%M:%S')
)
print_unix_timestamp_date("1420077600")
print_unix_timestamp_date("1496113200")
EXAMPLE_QUERY = "http://query1.finance.yahoo.com/v7/finance/download/AMZN?period1=1483585200&period2=1496113200&interval=1d&events=history&crumb=mFcCyf2I8jh"
import urllib2
response = urllib2.urlopen(EXAMPLE_QUERY)
html = response.read()
csv_values = requests.get(EXAMPLE_QUERY)
csv_values
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
%matplotlib inline
%load_ext autoreload
%autoreload 2
pd.__version__
Explanation: In this notebook the initial steps towards solving the capstone project will be taken: some data gathering and other preliminary work.
End of explanation
import pandas_datareader as pdr
pdr.__version__
from pandas_datareader import data, wb
SPY_CREATION_DATE = dt.datetime(1993,1,22)
start = SPY_CREATION_DATE
end = dt.datetime(1995,12,31)
#Let's try to get SPY
SPY_df = data.DataReader(name='SPY',data_source='google',start=start,
end=end)
print(SPY_df.shape)
SPY_df.head()
from yahoo_finance import Share
yahoo = Share('YHOO')
print(yahoo.get_price())
yahoo.get_historical('2005-01-01','2016-12-31')
import pandas_datareader.data as web
SPY_CREATION_DATE = dt.datetime(1993,1,22)
start = SPY_CREATION_DATE
end = dt.datetime(2016,12,31)
tickers = ['SPY','GOOG','AAPL','NVDA']
#Create the (empty) dataframe
dates = pd.date_range(start,end)
data_df = pd.DataFrame(index=dates)
#Let's try to get SPY
SPY_df = web.DataReader(name='SPY',data_source='google',start=start,
end=end)
print(SPY_df.shape)
SPY_df.head()
SPY_df['Close'].plot()
(SPY_df.index[-1]-SPY_df.index[0]).days / 365
Explanation: Getting the data
End of explanation
data_df
# This will add the data of one ticker
def add_ticker(data,ticker_df,ticker_name):
for key in data.keys():
column_df = pd.DataFrame(ticker_df[key]).rename(columns={key:ticker_name})
data[key] = data[key].join(column_df, how='left')
return data
def add_tickers(data, tickers, source):
for name in tickers:
if(not (name in data['Open'].columns)):
ticker_df = web.DataReader(name=name,data_source=source,start=start,end=end)
data = add_ticker(data, ticker_df, name)
print('Added: '+name)
else:
print(name+' was already added')
return data
Explanation: So, Google has a limit of 15 years of data on each query
End of explanation
iterables = [SPY_df.index, SPY_df.columns]
indexes = pd.MultiIndex.from_product(iterables, names=['date', 'feature'])
data_multi = pd.DataFrame(index=indexes)
print(data_multi.shape)
data_multi.head(20)
data_multi.xs('2001-02-08', level='date')
SPY_df.iloc[0]
SPY_df.head()
data_multi['sd'] = np.nan
data_multi.loc['2001-02-05','Open']['sd'] = SPY_df.loc['2001-02-05','Open']
data_multi
SPY_df.reset_index(inplace=True)
SPY_df.head()
SPY_df.set_index(['Date','Open'])
Explanation: Keep dictionary or use multiindex?
End of explanation |
4,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 1
Step1: Question 1
Find the two entries that sum to 2020 and then multiply those two numbers together.
Step2: Question 2
What is the product of the three entries that sum to 2020? | Python Code:
input_f = './input.txt'
# Read expenses
expenses = set()
with open(input_f, 'r') as fd:
for line in fd:
expenses.add(int(line.strip()))
Explanation: Day 1
End of explanation
# Find 2 expenses that add up to 2020 and get their product
stop = 0
for exp1 in expenses:
for exp2 in expenses:
if exp1 + exp2 == 2020:
print(exp1, exp2, exp1 * exp2)
Explanation: Question 1
Find the two entries that sum to 2020 and then multiply those two numbers together.
End of explanation
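Since expenses is stored in a set, a membership test on the complement gives the same answer for Question 1 without the double loop; a possible alternative sketch, assuming the two entries are distinct values:
for exp in expenses:
    complement = 2020 - exp
    if complement in expenses and complement != exp:
        print(exp, complement, exp * complement)
        break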
# First try, not very nice
stop = 0
for exp1 in expenses:
for exp2 in expenses:
for exp3 in expenses:
if exp1 + exp2 + exp3 == 2020:
print(exp1, exp2, exp3, exp1 * exp2 * exp3)
stop = 1
def evaluate_3(a, b, c):
if a + b + c == 2020:
return a * b * c
else:
return 0
# Second option (would also apply to question 1)
solution = 0
for exp1 in expenses:
for exp2 in expenses:
for exp3 in expenses:
solution = evaluate_3(exp1, exp2, exp3)
if solution != 0:
break
if solution != 0:
break
if solution != 0:
break
print(solution)
Explanation: Question 2
What is the product of the three entries that sum to 2020?
End of explanation |
4,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 1
Step1: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Useful SFrame summary functions
In order to make use of the closed form soltion as well as take advantage of graphlab's built in functions we will review some important ones. In particular
Step4: As we see we get the same answer both ways
Step5: Aside
Step6: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line
Step7: Now that we know it works let's build a regression model for predicting price based on sqft_living. Rembember that we train on train_data!
Step8: Predicting Values
Now that we have the model parameters
Step9: Now that we can calculate a prediction given the slop and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estiamted above.
Quiz Question
Step10: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope
Step11: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
Step12: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question
Step13: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Comlplete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
Step14: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that coses $800,000 to be.
Quiz Question
Step15: New Model
Step16: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 1: Simple Linear Regression
In this notebook we will use data on house sales in King County to predict house prices using simple (one input) linear regression. You will:
* Use graphlab SArray and SFrame functions to compute important summary statistics
* Write a function to compute the Simple Linear Regression weights using the closed form solution
* Write a function to make predictions of the output given the input feature
* Turn the regression around to predict the input given the output
* Compare two different models for predicting house prices
In this notebook you will be provided with some already complete code as well as some code that you should complete yourself in order to answer quiz questions. The code we provide to complete is optional and is there to assist you with solving the problems but feel free to ignore the helper code and write your own.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Split data into training and testing
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
# Let's compute the mean of the House Prices in King County in 2 different ways.
prices = sales['price'] # extract the price column of the sales SFrame -- this is now an SArray
# recall that the arithmetic average (the mean) is the sum of the prices divided by the total number of houses:
sum_prices = prices.sum()
num_houses = prices.size() # when prices is an SArray .size() returns its length
avg_price_1 = sum_prices/num_houses
avg_price_2 = prices.mean() # if you just want the average, the .mean() function
print "average price via method 1: " + str(avg_price_1)
print "average price via method 2: " + str(avg_price_2)
Explanation: Useful SFrame summary functions
In order to make use of the closed form solution as well as take advantage of graphlab's built in functions we will review some important ones. In particular:
* Computing the sum of an SArray
* Computing the arithmetic average (mean) of an SArray
* multiplying SArrays by constants
* multiplying SArrays by other SArrays
End of explanation
# if we want to multiply every price by 0.5 it's a simple as:
half_prices = 0.5*prices
# Let's compute the sum of squares of price. We can multiply two SArrays of the same length elementwise also with *
prices_squared = prices*prices
sum_prices_squared = prices_squared.sum() # price_squared is an SArray of the squares and we want to add them up.
print "the sum of price squared is: " + str(sum_prices_squared)
Explanation: As we see we get the same answer both ways
End of explanation
def simple_linear_regression(input_feature, output):
# compute the mean of input_feature and output
avg_input = input_feature.mean()
avg_output = output.mean()
# compute the product of the output and the input_feature and its mean
pdt = input_feature * output
avg_pdt = pdt.mean()
# compute the squared value of the input_feature and its mean
sqr_input = input_feature * input_feature
# use the formula for the slope
slope = (pdt.sum() - input_feature.mean() * output.sum()) / (sqr_input.sum() - input_feature.mean() * input_feature.sum())
# use the formula for the intercept
intercept = avg_output - slope * avg_input
return (intercept, slope)
Explanation: Aside: The python notation x.xxe+yy means x.xx * 10^(yy). e.g 100 = 10^2 = 1*10^2 = 1e2
Build a generic simple linear regression function
Armed with these SArray functions we can use the closed form solution found from lecture to compute the slope and intercept for a simple linear regression on observations stored as SArrays: input_feature, output.
Complete the following function (or write your own) to compute the simple linear regression slope and intercept:
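For reference, the closed-form estimates implemented in simple_linear_regression above can be written as $\hat{b} = \frac{\sum_i x_i y_i - \bar{x}\sum_i y_i}{\sum_i x_i^{2} - \bar{x}\sum_i x_i}$ (the slope) and $\hat{a} = \bar{y} - \hat{b}\,\bar{x}$ (the intercept), where $x$ is the input_feature and $y$ is the output.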
End of explanation
test_feature = graphlab.SArray(range(5))
test_output = graphlab.SArray(1 + 1*test_feature)
(test_intercept, test_slope) = simple_linear_regression(test_feature, test_output)
print "Intercept: " + str(test_intercept)
print "Slope: " + str(test_slope)
Explanation: We can test that our function works by passing it something where we know the answer. In particular we can generate a feature and then put the output exactly on a line: output = 1 + 1*input_feature then we know both our slope and intercept should be 1
End of explanation
sqft_intercept, sqft_slope = simple_linear_regression(train_data['sqft_living'], train_data['price'])
print "Intercept: " + str(sqft_intercept)
print "Slope: " + str(sqft_slope)
Explanation: Now that we know it works let's build a regression model for predicting price based on sqft_living. Remember that we train on train_data!
End of explanation
def get_regression_predictions(input_feature, intercept, slope):
# calculate the predicted values:
predicted_values = input_feature * slope + intercept
return predicted_values
Explanation: Predicting Values
Now that we have the model parameters: intercept & slope we can make predictions. Using SArrays it's easy to multiply an SArray by a constant and add a constant value. Complete the following function to return the predicted output given the input_feature, slope and intercept:
End of explanation
my_house_sqft = graphlab.SArray([2650])
estimated_price = get_regression_predictions(my_house_sqft, sqft_intercept, sqft_slope)
print "The estimated price for a house with %d squarefeet is $%.2f" % (my_house_sqft[0], estimated_price[0])
Explanation: Now that we can calculate a prediction given the slope and intercept let's make a prediction. Use (or alter) the following to find out the estimated price for a house with 2650 squarefeet according to the squarefeet model we estimated above.
Quiz Question: Using your Slope and Intercept from (4), What is the predicted price for a house with 2650 sqft?
End of explanation
def get_residual_sum_of_squares(input_feature, output, intercept, slope):
# First get the predictions
predictions = get_regression_predictions(input_feature, intercept, slope)
# then compute the residuals (since we are squaring it doesn't matter which order you subtract)
residuals = predictions - output
# square the residuals and add them up
RSS = (residuals * residuals).sum()
return(RSS)
Explanation: Residual Sum of Squares
Now that we have a model and can make predictions let's evaluate our model using Residual Sum of Squares (RSS). Recall that RSS is the sum of the squares of the residuals and the residuals is just a fancy word for the difference between the predicted output and the true output.
Complete the following (or write your own) function to compute the RSS of a simple linear regression model given the input_feature, output, intercept and slope:
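In symbols: $RSS = \sum_i (\hat{y}_i - y_i)^2$, where $\hat{y}_i$ is the predicted output and $y_i$ is the true output.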
End of explanation
print get_residual_sum_of_squares(test_feature, test_output, test_intercept, test_slope) # should be 0.0
Explanation: Let's test our get_residual_sum_of_squares function by applying it to the test model where the data lie exactly on a line. Since they lie exactly on a line the residual sum of squares should be zero!
End of explanation
rss_prices_on_sqft = get_residual_sum_of_squares(train_data['sqft_living'], train_data['price'], sqft_intercept, sqft_slope)
print 'The RSS of predicting Prices based on Square Feet is : ' + str(rss_prices_on_sqft)
Explanation: Now use your function to calculate the RSS on training data from the squarefeet model calculated above.
Quiz Question: According to this function and the slope and intercept from the squarefeet model What is the RSS for the simple linear regression using squarefeet to predict prices on TRAINING data?
End of explanation
def inverse_regression_predictions(output, intercept, slope):
# solve output = intercept + slope*input_feature for input_feature, i.e. input_feature = (output - intercept) / slope.
# Re-using get_regression_predictions with intercept -intercept/slope and slope 1/slope computes exactly that:
estimated_feature = get_regression_predictions(output, -intercept / slope, 1.0 / slope)
return estimated_feature
Explanation: Predict the squarefeet given price
What if we want to predict the squarefoot given the price? Since we have an equation y = a + b*x we can solve the function for x. So that if we have the intercept (a) and the slope (b) and the price (y) we can solve for the estimated squarefeet (x).
Complete the following function to compute the inverse regression estimate, i.e. predict the input_feature given the output!
End of explanation
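Equivalently, the inverse prediction can be written directly from the rearranged equation (a small sketch, same algebra as the helper-based version above):
def inverse_regression_predictions_direct(output, intercept, slope):
    # x = (y - a) / b, from y = a + b*x
    return (output - intercept) / slope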
my_house_price = 800000
estimated_squarefeet = inverse_regression_predictions(my_house_price, sqft_intercept, sqft_slope)
print "The estimated squarefeet for a house worth $%.2f is %d" % (my_house_price, estimated_squarefeet)
Explanation: Now that we have a function to compute the squarefeet given the price from our simple regression model let's see how big we might expect a house that costs $800,000 to be.
Quiz Question: According to this function and the regression slope and intercept from (3) what is the estimated square-feet for a house costing $800,000?
End of explanation
# Estimate the slope and intercept for predicting 'price' based on 'bedrooms'
bedrooms_intercept, bedrooms_slope = simple_linear_regression(train_data['bedrooms'], train_data['price'])
print bedrooms_intercept, bedrooms_slope
Explanation: New Model: estimate prices from bedrooms
We have made one model for predicting house prices using squarefeet, but there are many other features in the sales SFrame.
Use your simple linear regression function to estimate the regression parameters from predicting Prices based on number of bedrooms. Use the training data!
End of explanation
# Compute RSS when using bedrooms on TEST data:
rss_bedrooms_on_test = get_residual_sum_of_squares(test_data['bedrooms'], test_data['price'], bedrooms_intercept, bedrooms_slope)
print 'RSS (bedrooms model, TEST data): ' + str(rss_bedrooms_on_test)
# Compute RSS when using squarefeet on TEST data:
rss_sqft_on_test = get_residual_sum_of_squares(test_data['sqft_living'], test_data['price'], sqft_intercept, sqft_slope)
print 'RSS (squarefeet model, TEST data): ' + str(rss_sqft_on_test)
Explanation: Test your Linear Regression Algorithm
Now we have two models for predicting the price of a house. How do we know which one is better? Calculate the RSS on the TEST data (remember this data wasn't involved in learning the model). Compute the RSS from predicting prices using bedrooms and from predicting prices using squarefeet.
Quiz Question: Which model (square feet or bedrooms) has lowest RSS on TEST data? Think about why this might be the case.
End of explanation |
4,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load in the stopwords file. These are common words which we wish to exclude when performing comparisons (a, an, the, etc). Every word is separated by a new line.
Step1: Load in the data from the catalog | Python Code:
stopWordsFile = "en.txt"
with open(stopWordsFile) as f:
stoplist = [x.strip('\n') for x in f.readlines()]
Explanation: Load in the stopwords file. These are common words which we wish to exclude when performing comparisons (a, an, the, etc). Every word is separated by a new line.
End of explanation
# http://stackoverflow.com/questions/956867/how-to-get-string-objects-instead-of-unicode-ones-from-json-in-python
# need this to deal with unicode errors
def byteify(input):
if isinstance(input, dict):
return {byteify(key): byteify(value)
for key, value in input.iteritems()}
elif isinstance(input, list):
return [byteify(element) for element in input]
elif isinstance(input, unicode):
return input.encode('utf-8')
else:
return input
gunzipFile('../catalogs/gabi_2016_professional-database-2016.json.gz',
'../catalogs/gabi_2016_professional-database-2016.json')
gunzipFile('../catalogs/uslci_ecospold.json.gz',
'../catalogs/uslci_ecospold.json')
with open('../catalogs/gabi_2016_professional-database-2016.json') as data_file:
gabi = json.load(data_file, encoding='utf-8')
with open('../catalogs/uslci_ecospold.json') as data_file:
uslci = json.load(data_file, encoding='utf-8')
gabi = byteify(gabi)
uslci = byteify(uslci)
roundwood = [flow for flow in uslci['flows'] if search_tags(flow,'roundwood, softwood')]
roundwoodExample = roundwood[0]
# number of top scores to show
numTopScores = 10
flowNames = []
distValues = []
for flow in gabi['archives'][0]['flows']:
name = flow['tags']['Name']
flowNames.append(name)
dist = jaccardDistance(roundwoodExample['tags']['Name'], name, stoplist)
distValues.append(dist)
len(flowNames)
# figure out top scores
arr = np.array(distValues)
topIndices = arr.argsort()[0:numTopScores]
topScores = np.array(distValues)[topIndices]
print 'Process name to match:'
print roundwoodExample['tags']['Name']
print 'Matches using Jaccard Index:'
for i, s in zip(topIndices, topScores):
if s < 9999:
print(flowNames[i],s)
Explanation: Load in the data from the catalog
End of explanation |
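The gunzipFile, search_tags and jaccardDistance helpers used above are defined earlier in the original notebook. For orientation only, a Jaccard-style dissimilarity between two flow names (after dropping stopwords) could be sketched roughly as follows; the 9999 sentinel mirrors the s < 9999 check above, and the real helper may tokenize differently:
def jaccard_distance_sketch(name_a, name_b, stoplist):
    tokens_a = set(w for w in name_a.lower().split() if w not in stoplist)
    tokens_b = set(w for w in name_b.lower().split() if w not in stoplist)
    union = tokens_a | tokens_b
    if not union:
        return 9999  # nothing comparable left after stopword removal
    return 1.0 - float(len(tokens_a & tokens_b)) / len(union)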
4,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Taller de Python - Estadística en Física Experimental - 1er día
Esta presentación/notebook está disponible
Step1: Aquí hemos guardado en un espacio de memoria llamado por nosotros "x" la información de un valor de tipo entero, 5, en otro espacio de memoria, que nosotros llamamos "y" guardamos el texto "Hola mundo!". En Python, las comillas indican que lo que encerramos con ellas es un texto. x no es un texto, así que Python lo tratará como variable para manipular. "z" es el nombre del espacio de memoria donde se almacena una lista con 3 elementos enteros.
Podemos hacer cosas con esta información. Python es un lenguaje interpretado (a diferencia de otros como Java o C++), eso significa que ni bien nosotros le pedimos algo a Python, éste lo ejecuta. Así es que podremos pedirle por ejemplo que imprima en pantalla el contenido en y, el tipo de valor que es x (entero) entre otras cosas.
Step2: Vamos a utilizar mucho la función type() para entender con qué tipo de variables estamos trabajando. type() es una función predeterminada por Python, y lo que hace es pedir como argumento (lo que va entre los paréntesis) una variable y devuelve inmediatamente el tipo de variable que es.
Ejercicio 1
En el siguiente bloque cree las variables "dato1" y "dato2" y guarde en ellas los textos "estoy programando" y "que emocion!". Con la función type() averigue qué tipo de datos se almacena en esas variables.
Step3: Para las variables integers(enteros) y floats (flotantes) podemos hacer las operaciones matemáticas usuales y esperables. Veamos un poco las compatibilidades entre estos tipos de variables.
Step4: Ejercicio 2
Calcule el resultado de $$ \frac{(2+7.9)^2}{4^{7.4-3.14*9.81}-1} $$ y guárdelo en una variable
Step5: Listas, tuplas y diccionarios
Las listas son cadenas de datos de cualquier tipo, unidos por estar en una misma variable, con posiciones dentro de esa lista, con las cuales nosotros podemos llamarlas. En Python, las listas se enumeran desde el 0 en adelante.
Estas listas también tienen algunas operaciones que le son válidas.
Distintas son las tuplas. Las listas son editables (en jerga, mutables), pero las tuplas no (inmutables). Esto es importante cuando, a lo largo del desarrollo de un código donde necesitamos que ciertas cosas no cambien, no editemos por error valores fundamentales de nuestro problema a resolver.
Step6: Hay formas muy cómodas de hacer listas. Presentamos una que utilizaremos mucho, que es usando la función range. Esta devuelve como una receta de como hacer los numeros; por lo tanto tenemos que decirle al generador que cree la lista, por medio de otra herramienta incorporada de Python, list
Step7: Cómo en general no se hace seguido esto, no existe una forma "rápida" o "más elegante" de hacerlo.
Ejercicio 3
Haga una lista con los resultados de los últimos dos ejercicios y que la imprima en pantalla
Sobreescriba en la misma variable la misma lista pero con sus elementos permutados e imprima nuevamente la lista
Ejemplo de lo que debería mostrarse en pantalla
['estoy programando', 'que emocion!', -98.01]
['estoy programando', -98.01, 'que emocion!']
Step8: Ejercicio 4
Haga una lista con la función range de 15 elementos y sume los elementos 5, 10 y 12
Con la misma lista, haga el producto de los primeros 4 elementos de esa lista
Con la misma lista, reste el último valor con el primero
Step9: Ahora, el titulo hablaba de diccionarios... pero no son los que usamos para buscar el significado de las palabras. ¡Aunque pueden ser parecidos o funcionar igual!.
Un diccionario es un relación entre una variable llamada llave y otra variable llamado valor. Relación en el sentido de función que veíamos en el secundario, pero usualmente de forma discreta.
La magia es que sabiendo la llave, o key, ya tienes el valor, o value, por lo que podés usarlo como una lista pero sin usar indices si no cosas como cadenas. Las keys son únicas, y si quiero crear un diccionario con las mismas keys se van a pisar y queda la última aparición
Veamos un ejemplo
Step10: Es particularmente mágico el diccionario y lo podes usar para muchisimas cosas (y además Python lo usa para casi todo internamente, así que está muy bueno saber usarlos!).
El largo de un diccionario es la cantidad de keys que tiene, por ejemplo
Step11: Ejercicio 5
Haga un diccionario con tal que con el siguiente código
print(tu_dict[1] + tu_dict["FIFA"] + tu_dict[(3,4)])
Imprima "Programador, hola mundo!". Puede tener todas las entradas que quieras, no hay limite de la creatividad acá
Step12: Booleans
Este tipo de variable tiene sólo dos valores posibles
Step13: También podemos comparar listas, donde todas las entradas deberíán ser iguales
Step14: Lo mismo para tuplas (y aplica para diccionarios)
Step15: Con la función id() podemos ver si dos variables apuntan a la misma dirección de memoria, es decir podemos ver si dos variables tienen exactamente el mismo valor (aunque sea filosófico, en Python la diferencia es importante)
Step16: Las listas, tuplas y diccionarios también pueden devolver booleanos cuando se le pregunta si tiene o no algún elemento. Los diccionarios trabajaran sobre las llaves y las listas/tuplas sobre sus indices/valores
Step17: Ejercicio 6
Averigue el resultado de 4!=5==1. ¿Dónde pondría paréntesis para que el resultado fuera distinto?
Step18: Control de flujo
Step19: Ejercicio 7
Haga un programa con un if que imprima la suma de dos números si un tercero es positivo, y que imprima la resta si el tercero es negativo.
Step20: Para que Python repita una misma acción n cantidad de veces, utilizaremos la estructura for. En cada paso, nosotros podemos aprovechar el "número de iteración" como una variable. Eso nos servirá en la mayoría de los casos.
Step21: Ejercicio 8
Haga otra lista con 16 elementos, y haga un programa que con un for imprima solo los primeros 7
Modifique el for anterior y haga que imprima solo los elementos pares de su lista
Step22: La estructura while es poco recomendada en Python pero es importante saber que existe
Step23: Ejercicio 9
Calcule el factorial de N, siendo N la única variable que recibe la función (Se puede pensar usando for o usando while).
Calcule la sumatoria de los elementos de una lista.
Step24: Funciones
Pero si queremos definir nuestra propia manera de calcular algo, o si queremos agrupar una serie de órdenes bajo un mismo nombre, podemos definirnos nuestras propias funciones, pidiendo la cantidad de argumentos que querramos.
Vamos a usar las funciones lambda (también llamadas anonimas) más que nada para funciones matemáticas, aunque también tenga otros usos. Definamos el polinomio $f(x) = x^2 - 5x + 6$ que tiene como raíces $x = 3$ y $x = 2$.
Step25: Las funciones lambda son necesariamente funciones de una sola linea y también tienen que retornar nada; por eso son candidatas para expresiones matemáticas simples.
Las otras funciones, las más generales, se las llama funciones def, y tienen la siguiente forma.
Step26: Algo muy interesante y curioso, es que podemos hacer lo siguiente con las funciones
Step27: Las funciones pueden ser variables y esto abre la puerta a muchas cosas. Si tienen curiosidad, pregunten que está re bueno esto!
Ejercicio 10
Hacer una función que calcule el promedio de $n$ elementos dados en una lista.
Sugerencia
Step28: Ejercicio 11
Usando lo que ya sabemos de funciones matemáticas y las bifurcaciones que puede generar un if, hacer una función que reciba los coeficientes $a, b, c$ de la parábola $f(x) = ax^2 + bx + c$ y calcule las raíces si son reales (es decir, usando el discriminante $\Delta = b^2 - 4ac$ como criterio), y sino que imprima en pantalla una advertencia de que el cálculo no se puede hacer en $\mathbb{R}$.
Step29: Bonus track 1
Modificar la función anterior para que calcule las raíces de todos modos, aunque sean complejas. Python permite usar números complejos escritos de la forma 1 + 4j. Investiguen un poco
Step30: Ejercicio 12
Repitan el ejercicio 8, es decir
1. Hacer una función que calcule el factorial de N, siendo N la única variable que recibe la función (Se puede pensar usando for o usando while).
* Hacer una función que calcule la sumatoria de los elementos de una lista.
¿Se les ocurre otra forma de hacer el factorial? Piensen la definición matemática y escribanla en Python, y prueben calcular el factorial de 100 con esta definición nueva
Step31: Paquetes y módulos
Pero las operaciones básicas de suma, resta, multiplicación y división son todo lo que un lenguaje como Python puede hacer "nativamente". Una potencia o un seno es álgebra no lineal, y para hacerlo, habría que inventarse un algoritmo (una serie de pasos) para calcular por ejemplo sen($\pi$). Pero alguien ya lo hizo, ya lo pensó, ya lo escribió en lenguaje Python y ahora todos podemos usar ese algoritmo sin pensar en él. Solamente hay que decirle a nuestro intérprete de Python dónde está guardado ese algoritmo. Esta posibilidad de usar algoritmos de otros es fundamental en la programación, porque es lo que permite que nuestro problema se limite solamente a entender cómo llamar a estos algoritmos ya pensados y no tener que pensarlos cada vez.
Vamos entonces a llamar a un paquete (como se le llama en Python) llamada math que nos va a extender nuestras posibilididades matemáticas.
Step32: Para entender cómo funcionan estas funciones, es importante recurrir a su documentation. La de esta biblioteca en particular se encuentra en
https
Step33: Crear bibliotecas
Bueno, ahora que sabemos como usar bibliotecas, nos queda saber cómo podemos crearlas. Pero para saber eso, tenemos que saber que es un módulo en Python y cómo se relaciona con un paquete.
Se le llama módulo a los archivos de Python, archivos con la extensión *.py, como por ejemplo taller_python.py (como tal vez algunos hicieron ya). En este archivo se agregan funciones, variables, etc, que pueden ser llamadas desde otro módulo con el nombre sin la extensión, es decir
Step34: Python para buscar estos módulos revisa si el módulo importado (con el comando import) está presente en la misma carpeta del que importa y luego en una serie de lugares estándares de Python (que se pueden alterar y revisar usando sys.path, importando el paquete sys). Si lo encuentra lo importa y podés usar las funciones, y si no puede salta una excepción
Step35: Traten de importar la función __func_oculta. Se puede, pero es un hack de Python y la idea es que no sepa de ella. Es una forma de ocultar y encapsular código, que es uno de los principios de la programación orientada a objetos.
Finalmente, un paquete como math es un conjunto de módulos ordenados en una carpeta con el nombre math, con un archivo especial __init__.py, que hace que la carpeta se comporte como un módulo. Python importa lo que vea en el archivo __init__.py y permite además importar los módulos dentro (o submodulos), si no tienen guiones bajos antes.
Usualmente no es recomendable trabajar en el __init__.py, salvo que se tenga una razón muy necesaria (o simplemente vagancia)
Ejercicio 14
Creen una librería llamada mi_taller_python y agreguen dos funciones, una que devuelva el resultado de $\sqrt{x^2+2x+1}$ para cualquier x y otra que resuelva el resultado de $(x^2+2x+1)^{y}$, para cualquier x e y. Hagan todas las funciones ocultas que requieran (aunque recomendamos siempre minimizarlas)
Step36: Bonus track 2
Ahora que nos animamos a buscar nuevas bibliotecas y definir funciones, buscar la función newton() de la biblioteca scipy.optimize para hallar $x$ tal que se cumpla la siguiente ecuación no lineal $$\frac{1}{x} = ln(x)$$ | Python Code:
x = 5
y = 'Hola mundo!'
z = [1,2,3]
Explanation: Taller de Python - Estadística en Física Experimental - 1er día
Esta presentación/notebook está disponible:
Repositorio Github FIFA BsAs (para descargarlo, usen el botón raw o hagan un fork del repositorio)
Página web de talleres FIFA BsAs
Programar ¿con qué se come?
Programar es dar una lista de tareas concretas a la computadora para que haga. Esencialmente, una computadora sabe:
Leer datos
Escribir datos
Transformar datos
Y nada más que esto. Así, la computadora pasa a ser una gran calculadora que permite hacer cualquier tipo de cuenta de las que necesitemos dentro de la Física (y de la vida también) mientras sepamos cómo decirle a la máquina qué cómputos hacer.
Pero, ¿qué es Python?
Python es un lenguaje para hablarle a la computadora, que se denominan lenguajes de programación. Este lenguaje, que puede ser escrito y entendido por la computadora debe ser transformado a un lenguaje que entieda la computadora (o un intermediario, que se denomina maquina virtual) así se hacen las transformaciones. Todo este modelo de programación lo podemos ver esquematizado en la figura siguiente
<img src="modelo_computacional_python.png" alt="Drawing" style="width: 400px;"/>
Historia
Python nació en 1991, cuando su creador Guido Van Rossum lo hizo público en su versión 0.9. El lenguaje siempre buscó ser fácil de aprender y poder hacer tareas de todo tipo. Es fácil de aprender por su sintaxis, el tipado dinámico (que vamos a ver de que se trata) y además la gran cantidad de librerías/módulos para todo.
Herramientas para el taller
Para trabajar vamos a usar algún editor de texto (recomendamos Visual Studio Code, que viene con Anaconda), una terminal, o directamente el editor Spyder (que pueden buscarlo en las aplicaciones de la computadora si instalaron Anaconda o si lo instalaron en la PC del aula). También, si quieren podemos trabajar en un Jupyter Notebook, que permite hacer archivos como este (y hacer informes con código intercalado)
Esto es a gusto del consumidor, sabemos usar todas esas herramientas. Cada una tiene sus ventajas y desventajas:
- Escribir y ejecutar en consola no necesita instalar nada más que Python. Aprender a usar la consola da muchos beneficios de productividad
- El editor o entorno de desarrollo al tener más funcionalidad es más pesado, y probablemente sea más caro (Pycharm, que es el entorno de desarrollo más completo de Python sale alrededor de 200 dolares... auch)
- Jupyter notebook es un entorno muy interactivo, pero puede traer problemas en la lógica de ejecución. Hay que tener cuidado
Para instalar Python, conviene descargarse Anaconda. Este proyecto corresponde a una distribución de Python, que al tener una interfaz grafica amigable y manejador de paquetes llamado conda te permite instalar todas las librerías científicas de una. En Linux y macOS instalar Python sin Anaconda es más fácil, en Windows diría que es una necesidad sin meterse en asuntos oscuros de compilación (y además que el soporte en Windows de las librerías no es tan amplio).
Existe un proyecto llamado pyenv que en Linux y macOS permite instalar cualquier versión de Python. Si lo quieren tener (aunque para empezar Anaconda es mejor) pregunte que lo configuramos rápidamente.
Datos, memoria y otras yerbas
Para hacer cuentas, primero necesitamos el medio para guardar o almacenar los datos. El sector este se denomina memoria. Nuestros datos se guardan en espacios de memoria, y esos espacios tienen un nombre, un rótulo con el cual los podremos llamar y pedirle a la computadora que los utilice para operar con ellos, los modifique, etc.
Como esos espacios son capaces de variar al avanzar los datos llegamos a llamarlos variables, y el proceso de llenar la variable con un valor se denomina asignación, que en Python se corresponde con el "=".
Hasta ahora sólo tenemos en la cabeza valores numéricos para nuestras variables, considerando la analogía de la super-calculadora. Pero esto no es así, y es más las variables en Python contienen la información adicional del tipo de dato. Este tipo de dato determina las operaciones posibles con la variable (además del tamaño en memoria, pero esto ya era esperable del mismo valor de la variable).
Veamos un par de ejemplos
End of explanation
print(y)
print(type(x))
print(type(y), type(z), len(z))
Explanation: Aquí hemos guardado en un espacio de memoria llamado por nosotros "x" la información de un valor de tipo entero, 5, en otro espacio de memoria, que nosotros llamamos "y" guardamos el texto "Hola mundo!". En Python, las comillas indican que lo que encerramos con ellas es un texto. x no es un texto, así que Python lo tratará como variable para manipular. "z" es el nombre del espacio de memoria donde se almacena una lista con 3 elementos enteros.
Podemos hacer cosas con esta información. Python es un lenguaje interpretado (a diferencia de otros como Java o C++), eso significa que ni bien nosotros le pedimos algo a Python, éste lo ejecuta. Así es que podremos pedirle por ejemplo que imprima en pantalla el contenido en y, el tipo de valor que es x (entero) entre otras cosas.
End of explanation
# Realice el ejercicio 1
Explanation: Vamos a utilizar mucho la función type() para entender con qué tipo de variables estamos trabajando. type() es una función predeterminada por Python, y lo que hace es pedir como argumento (lo que va entre los paréntesis) una variable y devuelve inmediatamente el tipo de variable que es.
Ejercicio 1
En el siguiente bloque cree las variables "dato1" y "dato2" y guarde en ellas los textos "estoy programando" y "que emocion!". Con la función type() averigue qué tipo de datos se almacena en esas variables.
End of explanation
a = 5
b = 7
c = 5.0
d = 7.0
print(a+b, b+c, a*d, a/b, a/d, c**2)
Explanation: Para las variables integers(enteros) y floats (flotantes) podemos hacer las operaciones matemáticas usuales y esperables. Veamos un poco las compatibilidades entre estos tipos de variables.
End of explanation
# Realice el ejercicio 2. El resultado esperado es -98.01
Explanation: Ejercicio 2
Calcule el resultado de $$ \frac{(2+7.9)^2}{4^{7.4-3.14*9.81}-1} $$ y guárdelo en una variable
End of explanation
lista1 = [1, 2, 'saraza']
print(lista1, type(lista1))
print(lista1[1], type(lista1[1]))
print(lista1[2], type(lista1[2]))
print(lista1[-1])
lista2 = [2,3,4]
lista3 = [5,6,7]
#print(lista2+lista3)
print(lista2[2]+lista3[0])
tupla1 = (1,2,3)
lista4 = [1,2,3]
lista4[2] = 0
print(lista4)
#tupla1[0] = 0
print(tupla1)
Explanation: Listas, tuplas y diccionarios
Las listas son cadenas de datos de cualquier tipo, unidos por estar en una misma variable, con posiciones dentro de esa lista, con las cuales nosotros podemos llamarlas. En Python, las listas se enumeran desde el 0 en adelante.
Estas listas también tienen algunas operaciones que le son válidas.
Distintas son las tuplas. Las listas son editables (en jerga, mutables), pero las tuplas no (inmutables). Esto es importante cuando, a lo largo del desarrollo de un código donde necesitamos que ciertas cosas no cambien, no editemos por error valores fundamentales de nuestro problema a resolver.
End of explanation
listilla = list(range(10))
print(listilla, type(listilla))
Explanation: Hay formas muy cómodas de hacer listas. Presentamos una que utilizaremos mucho, que es usando la función range. Esta devuelve como una receta de como hacer los numeros; por lo tanto tenemos que decirle al generador que cree la lista, por medio de otra herramienta incorporada de Python, list
End of explanation
# Realice el ejercicio 3
Explanation: Cómo en general no se hace seguido esto, no existe una forma "rápida" o "más elegante" de hacerlo.
Ejercicio 3
Haga una lista con los resultados de los últimos dos ejercicios y que la imprima en pantalla
Sobreescriba en la misma variable la misma lista pero con sus elementos permutados e imprima nuevamente la lista
Ejemplo de lo que debería mostrarse en pantalla
['estoy programando', 'que emocion!', -98.01]
['estoy programando', -98.01, 'que emocion!']
End of explanation
# Realice el ejercicio 4
Explanation: Ejercicio 4
Haga una lista con la función range de 15 elementos y sume los elementos 5, 10 y 12
Con la misma lista, haga el producto de los primeros 4 elementos de esa lista
Con la misma lista, reste el último valor con el primero
End of explanation
d = {"hola": 1, "mundo": 2, 0: "numero", (0, 1): ["tupla", 0, 1]} # Las llaves pueden ser casi cualquier cosa (lista no)
print(d, type(d))
print(d["hola"])
print(d[0])
print(d[(0, 1)])
# Podés setear una llave (o key) vieja
d[0] = 10
# O podes agregar una nueva. El orden de las llaves no es algo en qué confiar necesariamente, para eso está OrderedDict
d[42] = "La respuesta"
# Cambiamos el diccionario, así que aparecen nuevas keys y cambios de values
print(d)
# Keys repetidas terminan siendo sobreescritas
rep_d = {0: 1, 0: 2}
print(rep_d)
# Otra cosas menor, un diccionario vacío es
empt_d = {}
print(empt_d)
Explanation: Ahora, el titulo hablaba de diccionarios... pero no son los que usamos para buscar el significado de las palabras. ¡Aunque pueden ser parecidos o funcionar igual!.
Un diccionario es un relación entre una variable llamada llave y otra variable llamado valor. Relación en el sentido de función que veíamos en el secundario, pero usualmente de forma discreta.
La magia es que sabiendo la llave, o key, ya tienes el valor, o value, por lo que podés usarlo como una lista pero sin usar indices si no cosas como cadenas. Las keys son únicas, y si quiero crear un diccionario con las mismas keys se van a pisar y queda la última aparición
Veamos un ejemplo
End of explanation
new_d = {0: '0', '0': 0}
print(len(new_d))
# Diccionario vacío
print(len({}))
Explanation: Es particularmente mágico el diccionario y lo podes usar para muchisimas cosas (y además Python lo usa para casi todo internamente, así que está muy bueno saber usarlos!).
El largo de un diccionario es la cantidad de keys que tiene, por ejemplo
End of explanation
# Realice el ejercicio 5
# Descomente esta línea y a trabajar
# print(tu_dict[1] + tu_dict["FIFA"] + tu_dict[(3,4)])
Explanation: Ejercicio 5
Haga un diccionario con tal que con el siguiente código
print(tu_dict[1] + tu_dict["FIFA"] + tu_dict[(3,4)])
Imprima "Programador, hola mundo!". Puede tener todas las entradas que quieras, no hay limite de la creatividad acá
End of explanation
print(5 > 4)
print(4 > 5)
print(4 == 5) #La igualdad matemática se escribe con doble ==
print(4 != 5) #La desigualdad matemática se escribe con !=
print(type(4 > 5))
Explanation: Booleans
Este tipo de variable tiene sólo dos valores posibles: 1 y 0, o True y False. Las utilizaremos escencialmente para que Python reconozca relaciones entre números.
End of explanation
print([1, 2, 3] == [1, 2, 3])
print([1, 2, 3] == [1, 3, 2])
Explanation: También podemos comparar listas, donde todas las entradas deberíán ser iguales
End of explanation
print((0, 1) == (0, 1))
print((1, 3) == (0, 3))
Explanation: Lo mismo para tuplas (y aplica para diccionarios)
End of explanation
a = 5
b = a
print(id(a) == id(b))
a = 12 # Reutilizamos la variable, con un nuevo valor
b = 12
print(id(a) == id(b)) # Python cachea números de 16bits
a = 66000
b = 66000
print(id(a) == id(b))
# No cachea listas, ni strings
a = [1, 2, 3]
b = [1, 2, 3]
print(id(a) == id(b))
a = "Python es lo más"
b = "Python es lo más"
print(id(a) == id(b))
Explanation: Con la función id() podemos ver si dos variables apuntan a la misma dirección de memoria, es decir podemos ver si dos variables tienen exactamente el mismo valor (aunque sea filosófico, en Python la diferencia es importante)
End of explanation
nueva_l = [0, 42, 3]
nueva_t = (2.3, 4.2);
nuevo_d = {"0": -4, (0, 1): "tupla"}
# La frase es
# >>> x in collection
# donde collection es una tupla, lista o diccionario. Parece inglés escrito no?
print(42 in nueva_l)
print(3 in nueva_t)
print((0,1) in nuevo_d)
Explanation: Las listas, tuplas y diccionarios también pueden devolver booleanos cuando se le pregunta si tiene o no algún elemento. Los diccionarios trabajaran sobre las llaves y las listas/tuplas sobre sus indices/valores
End of explanation
# Realice el ejercicio 5
Explanation: Ejercicio 6
Averigue el resultado de 4!=5==1. ¿Dónde pondría paréntesis para que el resultado fuera distinto?
End of explanation
parametro = 5
if parametro > 0: # un if inaugura un nuevo bloque indentado
print('Tu parametro es {} y es mayor a cero'.format(parametro))
print('Gracias')
else: # el else inaugura otro bloque indentado
print('Tu parametro es {} y es menor o igual a cero'.format(parametro))
print('Gracias')
print('Vuelva pronto')
print(' ')
parametro = -5
if parametro > 0: # un if inaugura un nuevo bloque indentado
print('Tu parametro es {} y es mayor a cero'.format(parametro))
print('Gracias')
else: # el else inaugura otro bloque indentado
print('Tu parametro es {} y es menor o igual a cero'.format(parametro))
print('Gracias')
print('Vuelva pronto')
print(' ')
Explanation: Control de flujo: condicionales e iteraciones (if y for para los amigos)
Si en el fondo un programa es una serie de algoritmos que la computadora debe seguir, un conocimiento fundamental para programar es saber cómo pedirle a una computadora que haga operaciones si se cumple una condición y que haga otras si no se cumple. Nos va a permitir hacer programas mucho más complejos. Veamos entonces como aplicar un if.
End of explanation
# Realice el ejercicio 7
Explanation: Ejercicio 7
Haga un programa con un if que imprima la suma de dos números si un tercero es positivo, y que imprima la resta si el tercero es negativo.
End of explanation
nueva_lista = ['nada',1,2,'tres', 'cuatro', 7-2, 2*3, 7/1, 2**3, 3**2]
for i in range(10): # i es una variable que inventamos en el for, y que tomará los valores de la
print(nueva_lista[i]) #lista que se genere con range(10)
Explanation: Para que Python repita una misma acción n cantidad de veces, utilizaremos la estructura for. En cada paso, nosotros podemos aprovechar el "número de iteración" como una variable. Eso nos servirá en la mayoría de los casos.
End of explanation
# Realice el ejercicio 8
Explanation: Ejercicio 8
Haga otra lista con 16 elementos, y haga un programa que con un for imprima solo los primeros 7
Modifique el for anterior y haga que imprima solo los elementos pares de su lista
End of explanation
i = 1
while i < 10: # tener cuidado con los while que se cumplen siempre. Eso daría lugar a los loops infinitos.
i = i+1
print(i)
Explanation: La estructura while es poco recomendada en Python pero es importante saber que existe: consiste en repetir un paso mientras se cumpla una condición. Es como un for mezclado con un if.
End of explanation
# Realice el ejercicio 8
Explanation: Ejercicio 9
Calcule el factorial de N, siendo N la única variable que recibe la función (Se puede pensar usando for o usando while).
Calcule la sumatoria de los elementos de una lista.
End of explanation
f = lambda x: x**2 - 5*x + 6
print(f(3), f(2), f(0))
Explanation: Funciones
Pero si queremos definir nuestra propia manera de calcular algo, o si queremos agrupar una serie de órdenes bajo un mismo nombre, podemos definirnos nuestras propias funciones, pidiendo la cantidad de argumentos que querramos.
Vamos a usar las funciones lambda (también llamadas anonimas) más que nada para funciones matemáticas, aunque también tenga otros usos. Definamos el polinomio $f(x) = x^2 - 5x + 6$ que tiene como raíces $x = 3$ y $x = 2$.
End of explanation
def promedio(a,b,c):
N = a + b + c # Es importante que toda la función tenga su contenido indentado
N = N/3.0
return N
mipromedio = promedio(5,5,7) # Aquí rompimos la indentación
print(mipromedio)
Explanation: Las funciones lambda son necesariamente funciones de una sola linea y también tienen que retornar nada; por eso son candidatas para expresiones matemáticas simples.
Las otras funciones, las más generales, se las llama funciones def, y tienen la siguiente forma.
End of explanation
def otra_funcion(a, b):
return a + b * 2
# Es un valor!
otra_f = otra_funcion
print(otra_f)
print(type(otra_f))
print(otra_f(2, 3))
Explanation: Algo muy interesante y curioso, es que podemos hacer lo siguiente con las funciones
End of explanation
# Realice el ejercicio 9
Explanation: Las funciones pueden ser variables y esto abre la puerta a muchas cosas. Si tienen curiosidad, pregunten que está re bueno esto!
Ejercicio 10
Hacer una función que calcule el promedio de $n$ elementos dados en una lista.
Sugerencia: utilizar las funciones len() y sum() como auxiliares.
End of explanation
# Realice el ejercicio 10
Explanation: Ejercicio 11
Usando lo que ya sabemos de funciones matemáticas y las bifurcaciones que puede generar un if, hacer una función que reciba los coeficientes $a, b, c$ de la parábola $f(x) = ax^2 + bx + c$ y calcule las raíces si son reales (es decir, usando el discriminante $\Delta = b^2 - 4ac$ como criterio), y sino que imprima en pantalla una advertencia de que el cálculo no se puede hacer en $\mathbb{R}$.
End of explanation
# Bonus track 1
Explanation: Bonus track 1
Modificar la función anterior para que calcule las raíces de todos modos, aunque sean complejas. Python permite usar números complejos escritos de la forma 1 + 4j. Investiguen un poco
End of explanation
# Realice el ejercicio 12
Explanation: Ejercicio 12
Repitan el ejercicio 8, es decir
1. Hacer una función que calcule el factorial de N, siendo N la única variable que recibe la función (Se puede pensar usando for o usando while).
* Hacer una función que calcule la sumatoria de los elementos de una lista.
¿Se les ocurre otra forma de hacer el factorial? Piensen la definición matemática y escribanla en Python, y prueben calcular el factorial de 100 con esta definición nueva
End of explanation
import math # Llamamos a una biblioteca
r1 = math.pow(2,4)
r2 = math.cos(math.pi)
r3 = math.log(100,10)
r4 = math.log(math.e)
print(r1, r2, r3, r4)
Explanation: Paquetes y módulos
Pero las operaciones básicas de suma, resta, multiplicación y división son todo lo que un lenguaje como Python puede hacer "nativamente". Una potencia o un seno es álgebra no lineal, y para hacerlo, habría que inventarse un algoritmo (una serie de pasos) para calcular por ejemplo sen($\pi$). Pero alguien ya lo hizo, ya lo pensó, ya lo escribió en lenguaje Python y ahora todos podemos usar ese algoritmo sin pensar en él. Solamente hay que decirle a nuestro intérprete de Python dónde está guardado ese algoritmo. Esta posibilidad de usar algoritmos de otros es fundamental en la programación, porque es lo que permite que nuestro problema se limite solamente a entender cómo llamar a estos algoritmos ya pensados y no tener que pensarlos cada vez.
Vamos entonces a llamar a un paquete (como se le llama en Python) llamada math que nos va a extender nuestras posibilididades matemáticas.
End of explanation
# Realice el ejercicio 13
Explanation: Para entender cómo funcionan estas funciones, es importante recurrir a su documentation. La de esta biblioteca en particular se encuentra en
https://docs.python.org/2/library/math.html
Ejercicio 13
Use Python como calculadora y halle los resultados de
$\log(\cos(2\pi))$
$\text{atanh}(2^{\cos(e)} -1) $
$\sqrt{x^2+2x+1}$ con $x = 125$
End of explanation
import taller_python # Vean el repositorio!
Explanation: Crear bibliotecas
Bueno, ahora que sabemos como usar bibliotecas, nos queda saber cómo podemos crearlas. Pero para saber eso, tenemos que saber que es un módulo en Python y cómo se relaciona con un paquete.
Se le llama módulo a los archivos de Python, archivos con la extensión *.py, como por ejemplo taller_python.py (como tal vez algunos hicieron ya). En este archivo se agregan funciones, variables, etc, que pueden ser llamadas desde otro módulo con el nombre sin la extensión, es decir
End of explanation
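Un posible contenido mínimo (hipotético) para el archivo taller_python.py sería el siguiente; las funciones func y __func_oculta reales están definidas en el repositorio del taller, así que esto es sólo un esquema ilustrativo:
# taller_python.py (esquema ilustrativo, no es el archivo real del repositorio)
def func(a, b):
    # Suposición ilustrativa: suma dos números y devuelve el resultado
    return a + b
def __func_oculta(a):
    # El guión bajo doble señala que la función es interna al módulo
    return a * 2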
print(taller_python.func(5, 6))
# Veamos la documentación
help(taller_python.func)
Explanation: Python para buscar estos módulos revisa si el módulo importado (con el comando import) está presente en la misma carpeta del que importa y luego en una serie de lugares estándares de Python (que se pueden alterar y revisar usando sys.path, importando el paquete sys). Si lo encuentra lo importa y podés usar las funciones, y si no puede salta una excepción
End of explanation
# Realice el ejercicio 14
Explanation: Traten de importar la función __func_oculta. Se puede, pero es un hack de Python y la idea es que no sepa de ella. Es una forma de ocultar y encapsular código, que es uno de los principios de la programación orientada a objetos.
Finalmente, un paquete como math es un conjunto de módulos ordenados en una carpeta con el nombre math, con un archivo especial __init__.py, que hace que la carpeta se comporte como un módulo. Python importa lo que vea en el archivo __init__.py y permite además importar los módulos dentro (o submodulos), si no tienen guiones bajos antes.
Usualmente no es recomendable trabajar en el __init__.py, salvo que se tenga una razón muy necesaria (o simplemente vagancia)
Ejercicio 14
Creen una librería llamada mi_taller_python y agreguen dos funciones, una que devuelva el resultado de $\sqrt{x^2+2x+1}$ para cualquier x y otra que resuelva el resultado de $(x^2+2x+1)^{y}$, para cualquier x e y. Hagan todas las funciones ocultas que requieran (aunque recomendamos siempre minimizarlas)
End of explanation
#Acá va el bonus track 2, para ya saborear la próxima clase
Explanation: Bonus track 2
Ahora que nos animamos a buscar nuevas bibliotecas y definir funciones, buscar la función newton() de la biblioteca scipy.optimize para hallar $x$ tal que se cumpla la siguiente ecuación no lineal $$\frac{1}{x} = ln(x)$$
End of explanation |
4,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Delay Embedding and the MFPT
Here, we give an example script, showing the effect of Delay Embedding on a Brownian motion on the Muller-Brown potential, projected onto its y-axis. This script may take a long time to run, as considerable data is required to accurately reconstruct the hidden degrees of freedom.
Step1: Load Data and set Hyperparameters
We first load in the pre-sampled data. The data consists of 400 short trajectories, each with 30 datapoints. The precise sampling procedure is described in "Galerkin Approximation of Dynamical Quantities using Trajectory Data" by Thiede et al. Note that this is a smaller dataset than in the paper. We use a smaller dataset to ensure the diffusion map basis construction runs in a reasonably short time.
Set Hyperparameters
Here we specify a few hyperparameters. These can be varied to study the behavior of the scheme in various limits by the user.
Step2: Load and format the data
Step3: We also convert the data into the flattened format. This converts the data into a 2D array, which allows the data to be passed into many ML packages that require a two-dimensional dataset. In particular, this is the format accepted by the Diffusion Atlas object. Trajectory start/stop points are then stored in the traj_edges array.
Step4: Construct DGA MFPT by increasing lag times
We first construct the MFPT with increasing lag times.
Step5: Construct DGA MFPT with increasing Delay Embedding
We now construct the MFPT using delay embedding. To accelerate the process, we will only use every fifth value of the delay length.
Step6: Plot the Results
We plot the results of our calculation, against the true value (black line, with the standard deviation in stateB given by the dotted lines). We see that increasing the lag time causes the mean-first-passage time to grow unboundedly. In contrast, with delay embedding the mean-first-passage time converges. We do, however, see one bad fluction at a delay length of 16, and that as the the delay length gets sufficiently long, the calculation blows up. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pyedgar
from pyedgar.data_manipulation import tlist_to_flat, flat_to_tlist, delay_embed, lift_function
%matplotlib inline
Explanation: Delay Embedding and the MFPT
Here, we give an example script, showing the effect of Delay Embedding on a Brownian motion on the Muller-Brown potential, projected onto its y-axis. This script may take a long time to run, as considerable data is required to accurately reconstruct the hidden degrees of freedom.
End of explanation
ntraj = 700
trajectory_length = 40
lag_values = np.arange(1, 37, 2)
embedding_values = lag_values[1:] - 1
Explanation: Load Data and set Hyperparameters
We first load in the pre-sampled data. The data consists of 400 short trajectories, each with 30 datapoints. The precise sampling procedure is described in "Galerkin Approximation of Dynamical Quantities using Trajectory Data" by Thiede et al. Note that this is a smaller dataset than in the paper. We use a smaller dataset to ensure the diffusion map basis construction runs in a reasonably short time.
Set Hyperparameters
Here we specify a few hyperparameters. These can be varied to study the behavior of the scheme in various limits by the user.
End of explanation
trajs_2d = np.load('data/muller_brown_trajs.npy')[:ntraj, :trajectory_length] # Raw trajectory
trajs = trajs_2d[:, :, 1] # Only keep y coordinate
stateA = (trajs > 1.15).astype('float')
stateB = (trajs < 0.15).astype('float')
# Convert to list of trajectories format
trajs = [traj_i.reshape(-1, 1) for traj_i in trajs]
stateA = [A_i for A_i in stateA]
stateB = [B_i for B_i in stateB]
# Load the true results
true_mfpt = np.load('data/htAB_1_0_0_1.npy')
Explanation: Load and format the data
End of explanation
flattened_trajs, traj_edges = tlist_to_flat(trajs)
flattened_stateA = np.hstack(stateA)
flattened_stateB = np.hstack(stateB)
print("Flattened Shapes are: ", flattened_trajs.shape, flattened_stateA.shape, flattened_stateB.shape,)
Explanation: We also convert the data into the flattened format. This converts the data into a 2D array, which allows the data to be passed into many ML packages that require a two-dimensional dataset. In particular, this is the format accepted by the Diffusion Atlas object. Trajectory start/stop points are then stored in the traj_edges array.
End of explanation
# Build the basis set
diff_atlas = pyedgar.basis.DiffusionAtlas.from_sklearn(alpha=0, k=500, bandwidth_type='-1/d', epsilon='bgh_generous')
diff_atlas.fit(flattened_trajs)
flat_basis = diff_atlas.make_dirichlet_basis(200, in_domain=(1. - flattened_stateA))
basis = flat_to_tlist(flat_basis, traj_edges)
flat_basis_no_boundaries = diff_atlas.make_dirichlet_basis(200)
basis_no_boundaries = flat_to_tlist(flat_basis_no_boundaries, traj_edges)
# Perform DGA calculation
mfpt_BA_lags = []
for lag in lag_values:
mfpt = pyedgar.galerkin.compute_mfpt(basis, stateA, lag=lag)
pi = pyedgar.galerkin.compute_change_of_measure(basis_no_boundaries, lag=lag)
flat_pi = np.array(pi).ravel()
flat_mfpt = np.array(mfpt).ravel()
mfpt_BA = np.mean(flat_mfpt * flat_pi * np.array(stateB).ravel()) / np.mean(flat_pi * np.array(stateB).ravel())
mfpt_BA_lags.append(mfpt_BA)
Explanation: Construct DGA MFPT by increasing lag times
We first construct the MFPT with increasing lag times.
End of explanation
mfpt_BA_embeddings = []
for lag in embedding_values:
# Perform delay embedding
debbed_traj = delay_embed(trajs, n_embed=lag)
lifted_A = lift_function(stateA, n_embed=lag)
lifted_B = lift_function(stateB, n_embed=lag)
flat_debbed_traj, embed_edges = tlist_to_flat(debbed_traj)
flat_lifted_A = np.hstack(lifted_A)
# Build the basis
diff_atlas = pyedgar.basis.DiffusionAtlas.from_sklearn(alpha=0, k=500, bandwidth_type='-1/d',
epsilon='bgh_generous', neighbor_params={'algorithm':'brute'})
diff_atlas.fit(flat_debbed_traj)
flat_deb_basis = diff_atlas.make_dirichlet_basis(200, in_domain=(1. - flat_lifted_A))
deb_basis = flat_to_tlist(flat_deb_basis, embed_edges)
flat_pi_basis = diff_atlas.make_dirichlet_basis(200)
pi_basis = flat_to_tlist(flat_deb_basis, embed_edges)
# Construct the Estimate
deb_mfpt = pyedgar.galerkin.compute_mfpt(deb_basis, lifted_A, lag=1)
pi = pyedgar.galerkin.compute_change_of_measure(pi_basis)
flat_pi = np.array(pi).ravel()
flat_mfpt = np.array(deb_mfpt).ravel()
deb_mfpt_BA = np.mean(flat_mfpt * flat_pi * np.array(lifted_B).ravel()) / np.mean(flat_pi * np.array(lifted_B).ravel())
mfpt_BA_embeddings.append(deb_mfpt_BA)
Explanation: Construct DGA MFPT with increasing Delay Embedding
We now construct the MFPT using delay embedding. To accelerate the process, we will only use every fifth value of the delay length.
End of explanation
plt.plot(embedding_values, mfpt_BA_embeddings, label="Delay Embedding")
plt.plot(lag_values, mfpt_BA_lags, label="Lags")
plt.axhline(true_mfpt[0] * 10, color='k', label='True')
plt.axhline((true_mfpt[0] + true_mfpt[1]) * 10., color='k', linestyle=':')
plt.axhline((true_mfpt[0] - true_mfpt[1]) * 10., color='k', linestyle=':')
plt.legend()
plt.ylim(0, 100)
plt.xlabel("Lag / Delay Length")
plt.ylabel("Estimated MFPT")
Explanation: Plot the Results
We plot the results of our calculation, against the true value (black line, with the standard deviation in stateB given by the dotted lines). We see that increasing the lag time causes the mean-first-passage time to grow unboundedly. In contrast, with delay embedding the mean-first-passage time converges. We do, however, see one bad fluctuation at a delay length of 16, and that as the delay length gets sufficiently long, the calculation blows up.
End of explanation |
4,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 4c
Step1: Set your bucket
Step2: Verify CSV files exist
In the seventh lab of this series 1b_prepare_data_babyweight, we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
Step3: Create Keras model
Lab Task #1
Step6: Lab Task #2
Step8: Lab Task #3
Step10: Lab Task #4
Step12: Lab Task #5
Step14: Lab Task #6
Step16: Lab Task #7
Step17: We can visualize the wide and deep network using the Keras plot_model utility.
Step18: Run and evaluate model
Lab Task #8
Step19: Visualize loss curve
Step20: Save the model | Python Code:
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
print(tf.__version__)
Explanation: LAB 4c: Create Keras Wide and Deep model.
Learning Objectives
Set CSV Columns, label column, and column defaults
Make dataset of features and label from CSV files
Create input layers for raw features
Create feature columns for inputs
Create wide layer, deep dense hidden layers, and output layer
Create custom evaluation metric
Build wide and deep model tying all of the pieces together
Train and evaluate
Introduction
In this notebook, we'll be using Keras to create a wide and deep model to predict the weight of a baby before it is born.
We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a wide and deep neural network in Keras. We'll create a custom evaluation metric and build our wide and deep model. Finally, we'll train and evaluate our model.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Load necessary libraries
End of explanation
BUCKET = "your-bucket-name"  # REPLACE BY YOUR BUCKET
os.environ['BUCKET'] = BUCKET
Explanation: Set your bucket:
End of explanation
TRAIN_DATA_PATH = "gs://{bucket}/babyweight/data/train*.csv".format(bucket=BUCKET)
EVAL_DATA_PATH = "gs://{bucket}/babyweight/data/eval*.csv".format(bucket=BUCKET)
!gsutil ls $TRAIN_DATA_PATH
!gsutil ls $EVAL_DATA_PATH
Explanation: Verify CSV files exist
In the seventh lab of this series 1b_prepare_data_babyweight, we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
End of explanation
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = [""]
# TODO: Add string name for label column
LABEL_COLUMN = ""
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = []
Explanation: Create Keras model
Lab Task #1: Set CSV Columns, label column, and column defaults.
Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* CSV_COLUMNS are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files
* LABEL_COLUMN is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* DEFAULTS is a list with the same length as CSV_COLUMNS, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
End of explanation
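One way the TODOs above could be filled in is sketched here; treat the header names, ordering, and defaults as placeholders that must match the CSVs produced in the earlier data-prep lab:
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age",
               "plurality", "gestation_weeks"]
LABEL_COLUMN = "weight_pounds"
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]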
def features_and_labels(row_data):
Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode='eval'):
Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: 'eval' | 'train' to determine if training or evaluating.
Returns:
`Dataset` object.
# TODO: Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset()
# TODO: Map dataset to features and label
dataset = dataset.map() # features, label
# Shuffle and repeat for training
if mode == 'train':
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
Explanation: Lab Task #2: Make dataset of features and label from CSV files.
Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However, we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
End of explanation
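As a sketch, the two TODOs inside load_dataset could be completed along these lines (argument names follow the tf.data.experimental.make_csv_dataset API; CSV_COLUMNS and DEFAULTS are whatever was set in the previous task):
dataset = tf.data.experimental.make_csv_dataset(
    file_pattern=pattern,
    batch_size=batch_size,
    column_names=CSV_COLUMNS,
    column_defaults=DEFAULTS)
dataset = dataset.map(map_func=features_and_labels)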
def create_input_layers():
Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
# TODO: Create dictionary of tf.keras.layers.Input for each dense feature
deep_inputs = {}
# TODO: Create dictionary of tf.keras.layers.Input for each sparse feature
wide_inputs = {}
inputs = {**wide_inputs, **deep_inputs}
return inputs
Explanation: Lab Task #3: Create input layers for raw features.
We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.keras.layers.Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
End of explanation
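For example, the two input dictionaries could be built with comprehensions like the sketch below, following the numeric vs. categorical split described in the next task (the scalar shapes and dtypes here are assumptions, not the only valid choice):
deep_inputs = {
    colname: tf.keras.layers.Input(name=colname, shape=(), dtype="float32")
    for colname in ["mother_age", "gestation_weeks"]}
wide_inputs = {
    colname: tf.keras.layers.Input(name=colname, shape=(), dtype="string")
    for colname in ["is_male", "plurality"]}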
def create_feature_columns(nembeds):
Creates wide and deep dictionaries of feature columns from inputs.
Args:
nembeds: int, number of dimensions to embed categorical column down to.
Returns:
Wide and deep dictionaries of feature columns.
# TODO: Create deep feature columns for numeric features
deep_fc = {}
# TODO: Create wide feature columns for categorical features
wide_fc = {}
# TODO: Bucketize the float fields. This makes them wide
# TODO: Cross all the wide cols, have to do the crossing before we one-hot
# TODO: Embed cross and add to deep feature columns
return wide_fc, deep_fc
Explanation: Lab Task #4: Create feature columns for inputs.
Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
End of explanation
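A minimal sketch of the numeric and categorical columns follows; the vocabulary lists are placeholders, and the remaining TODOs (bucketizing the numeric fields, crossing the buckets, and embedding the cross down to nembeds dimensions) would use tf.feature_column.bucketized_column, crossed_column, and embedding_column in the same style:
deep_fc = {
    colname: tf.feature_column.numeric_column(key=colname)
    for colname in ["mother_age", "gestation_weeks"]}
wide_fc = {
    colname: tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            key=colname, vocabulary_list=vocab))
    for colname, vocab in [("is_male", ["True", "False", "Unknown"]),
                           ("plurality", ["Single(1)", "Multiple(2+)"])]}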
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
Creates model architecture and returns outputs.
Args:
wide_inputs: Dense tensor used as inputs to wide side of model.
deep_inputs: Dense tensor used as inputs to deep side of model.
dnn_hidden_units: List of integers where length is number of hidden
layers and ith element is the number of neurons at ith layer.
Returns:
Dense tensor output from the model.
# Hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
# TODO: Create DNN model for the deep side
deep_out =
# TODO: Create linear model for the wide side
wide_out =
# Concatenate the two sides
both = tf.keras.layers.concatenate(
inputs=[deep_out, wide_out], name="both")
# TODO: Create final output layer
return output
Explanation: Lab Task #5: Create wide and deep model and output layer.
So we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. We need to create a wide and deep model now. The wide side will just be a linear regression or dense layer. For the deep side, let's create some hidden dense layers. All of this will end with a single dense output layer. This is regression so make sure the output layer activation is correct and that the shape is right.
End of explanation
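The TODO pieces of get_model_outputs could be filled in roughly as below (a sketch: the 10 units and ReLU on the wide side and the "weight" output name are arbitrary choices, and the concatenate step already appears in the skeleton above):
# Deep side: stack of dense ReLU hidden layers over the numeric inputs
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
    deep = tf.keras.layers.Dense(
        units=numnodes, activation="relu",
        name="dnn_{}".format(layerno + 1))(deep)
deep_out = deep
# Wide side: a single dense layer over the sparse/categorical inputs
wide_out = tf.keras.layers.Dense(
    units=10, activation="relu", name="linear")(wide_inputs)
# Final regression output: one unit with a linear activation
output = tf.keras.layers.Dense(
    units=1, activation="linear", name="weight")(both)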
def rmse(y_true, y_pred):
Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
# TODO: Calculate RMSE from true and predicted labels
pass
Explanation: Lab Task #6: Create custom evaluation metric.
We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.
End of explanation
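A straightforward way to write this metric is to return the root of the mean squared error, then pass the function to model.compile via metrics=[rmse, "mse"] so that the "rmse" history key used in the plotting cell below exists; for example:
def rmse(y_true, y_pred):
    # Root mean squared error between predicted and true baby weights
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))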
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
Builds wide and deep model using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
# Create input layers
inputs = create_input_layers()
# Create feature columns
wide_fc, deep_fc = create_feature_columns(nembeds)
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
# TODO: Add wide and deep feature columns
wide_inputs = tf.keras.layers.DenseFeatures(
feature_columns=#TODO, name="wide_inputs")(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(
feature_columns=#TODO, name="deep_inputs")(inputs)
# Get output of model given inputs
output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
# TODO: Add custom eval metrics to list
model.compile(optimizer="adam", loss="mse", metrics=["mse"])
return model
print("Here is our wide and deep architecture so far:\n")
model = build_wide_deep_model()
print(model.summary())
Explanation: Lab Task #7: Build wide and deep model tying all of the pieces together.
Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is NOT a simple feedforward model with no branching, side inputs, etc. so we can't use Keras' Sequential Model API. We're instead going to use Keras' Functional Model API. Here we will build the model using tf.keras.models.Model giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
End of explanation
tf.keras.utils.plot_model(
model=model, to_file="wd_model.png", show_shapes=False, rankdir="LR")
Explanation: We can visualize the wide and deep network using the Keras plot_model utility.
End of explanation
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
# TODO: Load training dataset
trainds = load_dataset()
# TODO: Load evaluation dataset
evalds = load_dataset().take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
# Fit model on training dataset and evaluate every so often
history = model.fit(
    trainds,
    validation_data=evalds,
    epochs=NUM_EVALS,
    steps_per_epoch=steps_per_epoch,
    callbacks=[tensorboard_callback])
Explanation: Run and evaluate model
Lab Task #8: Train and evaluate.
We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.
End of explanation
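The dataset-loading TODOs above depend on the load_dataset helper defined earlier in the notebook. Assuming, as the text implies, that it accepts a file pattern, a batch size, and a mode, the loading code might look roughly like this — the argument names and CSV patterns are placeholders, not the helper's confirmed signature:
trainds = load_dataset(
    pattern="train*.csv",           # assumed file pattern for training CSVs
    batch_size=TRAIN_BATCH_SIZE,
    mode="train")                   # assumed flag selecting shuffle + repeat
evalds = load_dataset(
    pattern="eval*.csv",
    batch_size=1000,
    mode="eval").take(count=NUM_EVAL_EXAMPLES // 1000)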
# Plot
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
Explanation: Visualize loss curve
End of explanation
OUTPUT_DIR = "babyweight_trained_wd"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
Explanation: Save the model
End of explanation |
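As a quick sanity check on the export, the SavedModel can be loaded back with the standard TensorFlow API and its serving signatures listed:
loaded_model = tf.saved_model.load(EXPORT_PATH)
print(list(loaded_model.signatures.keys()))  # usually ['serving_default']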
4,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Detailed RBC Model Example
Consider the equilibrium conditions for a basic RBC model without labor
Step1: Initializing the model in linearsolve
To initialize the model, we need to first set the model's parameters. We do this by creating a Pandas Series variable called parameters
Step2: Next, we need to define a function that returns the equilibrium conditions of the model. The function will take as inputs two vectors
Step3: Notice that inside the function we have to define the variables of the model form the elements of the input vectors variables_forward and variables_current.
Initializing the model
To initialize the model, we need to specify the total number of state variables in the model, the number of state variables with exogenous shocks, the names of the endogenous variables, and the parameters of the model.
It is essential that the variable names are ordered in the following way
Step4: Steady state
Next, we need to compute the nonstochastic steady state of the model. The .compute_ss() method can be used to compute the steady state numerically. The method's default is to use scipy's fsolve() function, but other scipy root-finding functions can be used
Step5: Note that the steady state is returned as a Pandas Series. Alternatively, you could compute the steady state directly and then sent the rbc.ss attribute
Step6: Log-linearization and solution
Now we use the .log_linear() method to find the log-linear appxoximation to the model's equilibrium conditions. That is, we'll transform the nonlinear model into a linear model in which all variables are expressed as log-deviations from the steady state. Specifically, we'll compute the matrices $A$ and $B$ that satisfy
Step7: Finally, we need to obtain the solution to the log-linearized model. The solution is a pair of matrices $F$ and $P$ that specify
Step8: Impulse responses
One the model is solved, use the .impulse() method to compute impulse responses to exogenous shocks to the state. The method creates the .irs attribute which is a dictionary with keys equal to the names of the exogenous shocks and the values are Pandas DataFrames with the computed impulse respones. You can supply your own values for the shocks, but the default is 0.01 for each exogenous shock.
Step9: Plotting is easy.
Step10: Stochastic simulation
Creating a stochastic simulation of the model is straightforward with the .stoch_sim() method. In the following example, I create a 151 period (including t=0) simulation by first simulating the model for 251 periods and then dropping the first 100 values. The standard deviation of the shock to $A_t$ is set to 0.00763. The seed for the numpy random number generator is set to 0. | Python Code:
# Import numpy, pandas, linearsolve, matplotlib.pyplot
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
Explanation: A Detailed RBC Model Example
Consider the equilibrium conditions for a basic RBC model without labor:
\begin{align}
C_t^{-\sigma} & = \beta E_t \left[C_{t+1}^{-\sigma}(\alpha A_{t+1} K_{t+1}^{\alpha-1} + 1 - \delta)\right]\\
Y_t & = A_t K_t^{\alpha}\\
I_t & = K_{t+1} - (1-\delta)K_t\\
Y_t & = C_t + I_t\\
\log A_t & = \rho_a \log A_{t-1} + \epsilon_t
\end{align}
In the nonstochastic steady state, we have:
\begin{align}
K & = \left(\frac{\alpha A}{1/\beta+\delta-1}\right)^{\frac{1}{1-\alpha}}\\
Y & = AK^{\alpha}\\
I & = \delta K\\
C & = Y - I
\end{align}
Given values for the parameters $\beta$, $\sigma$, $\alpha$, $\delta$, and $A$, steady state values of capital, output, investment, and consumption are easily computed.
Import requisite modules
End of explanation
# Input model parameters
parameters = pd.Series(dtype=float)
parameters['alpha'] = .35
parameters['beta'] = 0.99
parameters['delta'] = 0.025
parameters['rhoa'] = .9
parameters['sigma'] = 1.5
parameters['A'] = 1
Explanation: Initializing the model in linearsolve
To initialize the model, we need to first set the model's parameters. We do this by creating a Pandas Series variable called parameters:
End of explanation
# Define function to compute equilibrium conditions
def equations(variables_forward, variables_current, parameters):
    # Parameters
    p = parameters
    # Variables
    fwd = variables_forward
    cur = variables_current
    # Household Euler equation
    euler_eqn = p.beta*fwd.c**-p.sigma*(p.alpha*fwd.y/fwd.k+1-p.delta) - cur.c**-p.sigma
    # Production function
    production_function = cur.a*cur.k**p.alpha - cur.y
    # Capital evolution
    capital_evolution = fwd.k - (1-p.delta)*cur.k - cur.i
    # Goods market clearing
    market_clearing = cur.c + cur.i - cur.y
    # Exogenous technology
    technology_proc = cur.a**p.rhoa - fwd.a
    # Stack equilibrium conditions into a numpy array
    return np.array([
        euler_eqn,
        production_function,
        capital_evolution,
        market_clearing,
        technology_proc
    ])
Explanation: Next, we need to define a function that returns the equilibrium conditions of the model. The function will take as inputs two vectors: one vector of "current" variables and another of "forward-looking" or one-period-ahead variables. The function will return an array that represents the equilibrium conditions of the model. We'll enter each equation with all variables moved to one side of the equals sign. For example, here's how we'll enter the production function:
production_function = technology_current*capital_current**alpha - output_current
Here the variable production_function stores the production function equation set equal to zero. We can enter the equations in almost any way we want. For example, we could also have entered the production function this way:
production_function = 1 - output_current/technology_current/capital_current**alpha
One more thing to consider: the natural log in the equation describing the evolution of total factor productivity will create problems for the solution routine later on. So rewrite the equation as:
\begin{align}
A_{t+1} & = A_{t}^{\rho_a}e^{\epsilon_{t+1}}
\end{align}
So the complete system of equations that we enter into the program looks like:
\begin{align}
C_t^{-\sigma} & = \beta E_t \left[C_{t+1}^{-\sigma}(\alpha Y_{t+1} /K_{t+1}+ 1 - \delta)\right]\\
Y_t & = A_t K_t^{\alpha}\\
I_t & = K_{t+1} - (1-\delta)K_t\\
Y_t & = C_t + I_t\\
A_{t+1} & = A_{t}^{\rho_a}e^{\epsilon_{t+1}}
\end{align}
Now let's define the function that returns the equilibrium conditions:
End of explanation
# Initialize the model
rbc = ls.model(equations = equations,
n_states=2,
n_exo_states=1,
var_names=['a','k','c','y','i'],
parameters=parameters)
Explanation: Notice that inside the function we have to define the variables of the model from the elements of the input vectors variables_forward and variables_current.
Initializing the model
To initialize the model, we need to specify the total number of state variables in the model, the number of state variables with exogenous shocks, the names of the endogenous variables, and the parameters of the model.
It is essential that the variable names are ordered in the following way: First the names of the endogenous variables with the state variables with exogenous shocks, then the state variables without shocks, and finally the control variables. Ordering within the groups doesn't matter.
End of explanation
# Compute the steady state numerically
guess = [1,1,1,1,1]
rbc.compute_ss(guess)
print(rbc.ss)
Explanation: Steady state
Next, we need to compute the nonstochastic steady state of the model. The .compute_ss() method can be used to compute the steady state numerically. The method's default is to use scipy's fsolve() function, but other scipy root-finding functions can be used: root, broyden1, and broyden2. The optional argument options lets the user pass keywords directly to the optimization function. Check out the documentation for Scipy's nonlinear solvers here: http://docs.scipy.org/doc/scipy/reference/optimize.html
End of explanation
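As a quick check that the solver converged to the right place, the numerical steady state can be compared with the closed-form expression for capital given earlier; the label 'k' below assumes rbc.ss is indexed by the variable names passed to ls.model:
p = parameters
K_check = (p.alpha*p.A/(1/p.beta+p.delta-1))**(1/(1-p.alpha))
print('Analytical K:', K_check, '  Numerical K:', rbc.ss['k'])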
# Steady state solution
p = parameters
K = (p.alpha*p.A/(1/p.beta+p.delta-1))**(1/(1-p.alpha))
C = p.A*K**p.alpha - p.delta*K
Y = p.A*K**p.alpha
I = Y - C
rbc.set_ss([p.A,K,C,Y,I])
print(rbc.ss)
Explanation: Note that the steady state is returned as a Pandas Series. Alternatively, you could compute the steady state directly and then sent the rbc.ss attribute:
End of explanation
# Find the log-linear approximation around the non-stochastic steady state
rbc.log_linear_approximation()
print('The matrix A:\n\n',np.around(rbc.a,4),'\n\n')
print('The matrix B:\n\n',np.around(rbc.b,4))
Explanation: Log-linearization and solution
Now we use the .log_linear_approximation() method to find the log-linear approximation to the model's equilibrium conditions. That is, we'll transform the nonlinear model into a linear model in which all variables are expressed as log-deviations from the steady state. Specifically, we'll compute the matrices $A$ and $B$ that satisfy:
\begin{align}
A E_t\left[ x_{t+1} \right] & = B x_t + \left[ \begin{array}{c} \epsilon_{t+1} \\ 0 \end{array} \right],
\end{align}
where the vector $x_{t}$ denotes the log deviation of the endogenous variables from their steady state values.
End of explanation
# Solve the model
rbc.solve_klein(rbc.a,rbc.b)
# Display the output
print('The matrix F:\n\n',np.around(rbc.f,4),'\n\n')
print('The matrix P:\n\n',np.around(rbc.p,4))
Explanation: Finally, we need to obtain the solution to the log-linearized model. The solution is a pair of matrices $F$ and $P$ that specify:
The current values of the non-state variables $u_{t}$ as a linear function of the previous values of the state variables $s_t$.
The future values of the state variables $s_{t+1}$ as a linear function of the previous values of the state variables $s_t$ and the future realisation of the exogenous shock process $\epsilon_{t+1}$.
\begin{align}
u_t & = Fs_t\\
s_{t+1} & = Ps_t + \epsilon_{t+1}.
\end{align}
We use the .solve_klein() method to find the solution.
End of explanation
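Since $F$ and $P$ are just matrices, the decision rule can also be applied by hand. Assuming rbc.f is stored as a NumPy array with rows ordered like the controls ['c', 'y', 'i'] and columns ordered like the states ['a', 'k'], a 1% technology shock at the steady-state capital stock implies:
s = np.array([0.01, 0.0])                    # log deviations of [a, k]
print('Responses of [c, y, i]:', rbc.f @ s)  # u_t = F s_t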
# Compute impulse responses and plot
rbc.impulse(T=41,t0=1,shocks=None,percent=True)
print('Impulse responses to a 0.01 unit shock to A:\n\n',rbc.irs['e_a'].head())
Explanation: Impulse responses
Once the model is solved, use the .impulse() method to compute impulse responses to exogenous shocks to the state. The method creates the .irs attribute, which is a dictionary with keys equal to the names of the exogenous shocks and values that are Pandas DataFrames with the computed impulse responses. You can supply your own values for the shocks, but the default is 0.01 for each exogenous shock.
End of explanation
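Because .irs is just a dictionary of DataFrames, its contents can be inspected directly before plotting:
for shock_name, irf in rbc.irs.items():
    print(shock_name, irf.shape)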
rbc.irs['e_a'][['a','k','c','y','i']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=2)
rbc.irs['e_a'][['e_a','a']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=2)
Explanation: Plotting is easy.
End of explanation
rbc.stoch_sim(T=121,drop_first=100,cov_mat=np.array([0.00763**2]),seed=0,percent=True)
rbc.simulated[['k','c','y','i']].plot(lw='5',alpha=0.5,grid=True).legend(loc='upper right',ncol=4)
rbc.simulated[['a']].plot(lw='5',alpha=0.5,grid=True).legend(ncol=4)
rbc.simulated['e_a'].plot(lw='5',alpha=0.5,grid=True).legend(ncol=4)
Explanation: Stochastic simulation
Creating a stochastic simulation of the model is straightforward with the .stoch_sim() method. In the following example, I create the simulation by passing T=121 and drop_first=100, so the first 100 simulated values are discarded before the series is stored. The standard deviation of the shock to $A_t$ is set to 0.00763. The seed for the numpy random number generator is set to 0.
End of explanation |
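A natural follow-up, since rbc.simulated is a Pandas DataFrame, is to compute a few business-cycle moments from the simulated (percent log-deviation) series:
print(rbc.simulated[['y', 'c', 'i']].std())
print(rbc.simulated[['y', 'c', 'i']].corr())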
4,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
Returns accuracy score for input truth and predictions.
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
Model with no features. Always predicts a passenger did not survive.
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Sex')
Explanation: Answer: 61.62%
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
Model with one feature:
- Predict a passenger survived if they are female.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append( passenger['Sex'] == 'female' )
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: 78.68%
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append( ( passenger['Sex'] == 'female' or passenger['Age'] < 10 ) )
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, "Age", ["Sex == 'female'", "Pclass == 3" ] )
Explanation: Answer: 79.35%
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
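Before settling on a final rule, it can help to check a few more cuts of the data with the same survival_stats call signature used above — for example, class and family-size splits similar to the conditions used in predictions_3 below:
survival_stats(data, outcomes, 'Pclass', ["Sex == 'female'"])
survival_stats(data, outcomes, 'SibSp', ["Sex == 'male'", "Age < 10"])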
def predictions_3(data):
Model with multiple features. Makes a prediction with an accuracy of at least 80%.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append( (passenger['Sex'] == 'female' and passenger['Pclass']<3 ) or
(passenger['Sex'] == 'female' and passenger['Pclass']==3 and passenger['SibSp']<2.5 and passenger['Age']< 40 ) or
(passenger['Sex'] == 'male' and passenger['Age'] < 10 and passenger['SibSp']<2.5 ) )
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation |
4,213 | Given the following text description, write Python code to implement the functionality described.
Description:
Program to find the Excenters of a Triangle
Python3 program for the above approach ; Function to calculate the distance between a pair of points ; Function to calculate the coordinates of the excenters of a triangle ; Length of the sides of the triangle ; Stores the coordinates of the excenters of the triangle ; For I1 ; For I2 ; For I3 ; Print the excenters of the triangle ; Driver Code
| Python Code:
from math import sqrt
def distance(m , n , p , q ) :
return(sqrt(pow(n - m , 2 ) + pow(q - p , 2 ) * 1.0 ) )
def Excenters(x1 , y1 , x2 , y2 , x3 , y3 ) :
a = distance(x2 , x3 , y2 , y3 )
b = distance(x3 , x1 , y3 , y1 )
c = distance(x1 , x2 , y1 , y2 )
excenter =[[ 0 , 0 ] for i in range(4 ) ]
excenter[1 ][0 ] =(( -(a * x1 ) +(b * x2 ) +(c * x3 ) ) //(- a + b + c ) )
excenter[1 ][1 ] =(( -(a * y1 ) +(b * y2 ) +(c * y3 ) ) //(- a + b + c ) )
excenter[2 ][0 ] =(((a * x1 ) -(b * x2 ) +(c * x3 ) ) //(a - b + c ) )
excenter[2 ][1 ] =(((a * y1 ) -(b * y2 ) +(c * y3 ) ) //(a - b + c ) )
excenter[3 ][0 ] =(((a * x1 ) +(b * x2 ) -(c * x3 ) ) //(a + b - c ) )
excenter[3 ][1 ] =(((a * y1 ) +(b * y2 ) -(c * y3 ) ) //(a + b - c ) )
for i in range(1 , 4 ) :
print(int(excenter[i ][0 ] ) , int(excenter[i ][1 ] ) )
if __name__== ' __main __' :
x1 = 0
x2 = 3
x3 = 0
y1 = 0
y2 = 0
y3 = 4
Excenters(x1 , y1 , x2 , y2 , x3 , y3 )
|
4,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis of match thread
To run this yourself. Two things have to be done manually
Step1: Some parameters
These need to be changed every match
Step2: More parameters
These parameters and definitions that don't need to change each game
Step4: Function definitions
Six funcitons that do most of the work
Step5: Get comments and sentiment score
If the data already exists, it loads that (remember to run in main directory, not notebook directory)
Step6: Get match report
Step7: Parse match report
I have no idea if these are universal ways the match thread are, but works at least for this game (needs to be check in other games).
So this will be improved in future.
Step8: Get positive/negative difference
Sort into number of positive and number of negative comments
Step9: Plot figure (unweighted)
Step10: Plot figure (weighted) | Python Code:
import praw
import datetime
import pandas as pd
import nltk.sentiment.vader
import matplotlib.pyplot as plt
# Import all relevant packages
from bs4 import BeautifulSoup
from selenium import webdriver
import numpy as np
import os
Explanation: Sentiment analysis of match thread
To run this yourself. Two things have to be done manually:
Set up PhantomJS with Selenium (This has gotten easier to install (not sure if PhantomJS now comes with Selenium by default or you still have to download it separately). Used to be a pain.)
Get a client_id / client_secret set up with PRAW / Reddit. In this code it is assumed that there is a file called: praw.json which contains client_id, client_secret, password, user_agent, and username.
Tweaks will need to be made before the match events are fully automatic.
Notebooks are run in the main directory of the repository (and just archived in the notebook folder). So paths will have to be modified if you run the notebook in the notebook folder
Import packages
End of explanation
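For reference, the praw.json file can be created with a few lines of Python. The placeholder values obviously need replacing, and the list-valued layout is only inferred from how the script later indexes praw_info['client_id'][0] after pd.read_json:
import json
praw_credentials = {
    "client_id": ["YOUR_CLIENT_ID"],
    "client_secret": ["YOUR_CLIENT_SECRET"],
    "password": ["YOUR_REDDIT_PASSWORD"],
    "user_agent": ["match-thread-sentiment script"],
    "username": ["YOUR_REDDIT_USERNAME"],
}
with open("praw.json", "w") as f:
    json.dump(praw_credentials, f)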
url = 'http://www.telegraph.co.uk/football/2017/08/12/watford-vs-liverpool-premier-league-live-score-updates-team/'
thread_id = '6t7tx3'
analysis_name = 'League_1_Watford'
Explanation: Some parameters
These need to be changed every match
End of explanation
# Define some objects to be used later
# set up driver for scraping
driver = webdriver.PhantomJS()
# Define NLTK object
vader = nltk.sentiment.vader.SentimentIntensityAnalyzer()
# set matplotlib style
plt.style.use('ggplot')
# Change this to 0 if you have downloaded the data and want to redownload
use_saved_data = 1
Explanation: More parameters
These parameters and definitions that don't need to change each game
End of explanation
def get_match_report(url):
Function gets all times and titles of telegraph match report
#Open page and make a soup object
driver.get(url)
r = driver.page_source
soup = BeautifulSoup(r, 'lxml')
#This finds a list of all links which are connected to a concept
updates = soup.findAll('div', class_='live-post js-live-post component')
titles = soup.findAll('h3', class_='live-post__title')
titles = [t.text.lower() for t in titles]
tele_times = [u.find('a').text for u in updates]
tele_times_dt = []
for t in tele_times:
t = t.split(':')
if t[1][-2:] == 'AM' or t[0] == '12':
tele_times_dt.append(datetime.time(int(t[0]),int(t[1][:-2])))
else:
tele_times_dt.append(datetime.time(int(t[0])+12,int(t[1][:-2])))
return titles,tele_times_dt
def get_comments(thread_id,praw_info):
reddit = praw.Reddit(client_id=praw_info['client_id'][0],
client_secret=praw_info['client_secret'][0],
password=praw_info['password'][0],
user_agent=praw_info['user_agent'][0],
username=praw_info['username'][0])
submission = reddit.submission(id=thread_id)
submission.comments.replace_more(limit=None, threshold = 0)
return submission
def comment_time_and_sentiment(submission):
time = []
sentiment = []
score = []
# Loop through top comments and add to time and sentiment list
for top_level_comment in submission.comments:
time.append((datetime.datetime.fromtimestamp(top_level_comment.created_utc) - datetime.timedelta(hours=1)))
sentiment.append(vader.polarity_scores(top_level_comment.body)['compound'])
score.append(top_level_comment.score)
# Make time format
pd_time = pd.to_datetime(time)
# Make to dateframe
df = pd.DataFrame(data={'sentiment': sentiment,'score':score}, index = pd_time)
return df
def posneg_sentiment_difference(df,bins='1min'):
# Find comments with positive > 0 and negative < 0 sentiment
pdf = df[df['sentiment'] > 0]
ndf = df[df['sentiment'] < 0]
# Bin
pgdf = pdf.groupby(pd.TimeGrouper(freq=bins)).count()
ngdf = ndf.groupby(pd.TimeGrouper(freq=bins)).count()
diff_df = (pgdf['sentiment']-ngdf['sentiment']).dropna()
return diff_df
def weighted_posneg_sentiment_difference(df,bins='1min'):
# Find comments with positive > 0 and negative < 0 sentiment
df = pd.DataFrame(df[df['score']>0])
pdf = df[df['sentiment'] > 0]
ndf = df[df['sentiment'] < 0]
# Bin
pgdf = pdf.groupby(pd.TimeGrouper(freq=bins)).count()
ngdf = ndf.groupby(pd.TimeGrouper(freq=bins)).count()
# Take the difference
diff_df = (pgdf['sentiment']*pgdf['score']-ngdf['sentiment']*ngdf['score']).dropna()
return diff_df
def plot_figure(df,ax):
# Main line
ax.plot(df.index.time,df,linewidth=2,color='firebrick')
# Scale y axis (make even -/+ directions)
ax.set_ylim([-np.max(np.abs(ax.get_ylim())),np.max(np.abs(ax.get_ylim()))])
# Make axis ticks and labels correct
ax.set_xlim(datetime.time(12,00),datetime.time(15,00))
ax.set_xticks([ax.get_xlim()[0]+m*60 for m in range(0,181,30)])
ax.set_xlabel('Time (GMT/BST)')
return ax
Explanation: Function definitions
Six functions that do most of the work
End of explanation
# If data doesn't exist, download it. If data exists, load it.
if use_saved_data == 1 and os.path.exists('./data/' + analysis_name + '.csv'):
df = pd.read_csv('./data/' + analysis_name + '.csv', index_col=0, parse_dates=[0])
else:
# read in reddit api info
praw_info = pd.read_json('praw.json')
# do the sentiment analysis
submission = get_comments(thread_id,praw_info)
df = comment_time_and_sentiment(submission)
df.to_csv('./data/' + analysis_name + '.csv')
# Delete reddit api info
praw_info = {}
Explanation: Get comments and sentiment score
If the data already exists, it loads that (remember to run in main directory, not notebook directory)
End of explanation
titles,matchevents = get_match_report(url)
Explanation: Get match report
End of explanation
goal = [matchevents[i] for i,t in enumerate(titles) if t == 'goal!']
penalty = [matchevents[i] for i,t in enumerate(titles) if t[:7] == 'penalty']
halftime = [matchevents[i] for i,t in enumerate(titles) if t[:2] == 'ht']
fulltime = [matchevents[i] for i,t in enumerate(titles) if t[:2] == 'ft']
kickoff = [matchevents[i] for i,t in enumerate(titles) if t[:9] == 'we\'re off']
Explanation: Parse match report
I have no idea whether this is a universal way the match report is structured, but it works at least for this game (it needs to be checked on other games).
So this will be improved in future.
End of explanation
posneg_df = posneg_sentiment_difference(df,bins='2min')
weighted_posneg_df = weighted_posneg_sentiment_difference(df,bins='1min')
Explanation: Get positive/negative difference
Sort into number of positive and number of negative comments
End of explanation
fig,ax = plt.subplots(1)
ax = plot_figure(posneg_df,ax)
ax.set_ylabel('# Pos Comments - # Neg Comments')
# MATCH EVENTS (BELOW HERE) MIGHT HAVE TO CHANGE
# Get y axis lims to place events
scatter_y_min, scatter_y_max = ax.get_ylim()
# Place match events
ax.scatter(goal,np.tile(scatter_y_max,len(goal)),color='black',s=15)
ax.scatter(penalty,np.tile(scatter_y_max,len(penalty)),color='black',s=15)
# Define first and second half
ax.fill_between([kickoff[1],halftime[0]],scatter_y_min,scatter_y_max+np.abs(scatter_y_max*0.05),facecolor='dimgray',alpha=0.25,zorder=0)
ax.fill_between([kickoff[0],fulltime[0]],scatter_y_min,scatter_y_max+np.abs(scatter_y_max*0.05),facecolor='dimgray',alpha=0.25,zorder=0)
ax.text(datetime.time(kickoff[1].hour,kickoff[1].minute+3),scatter_y_min+np.abs(scatter_y_min*0.05),'First Half')
ax.text(datetime.time(kickoff[0].hour,kickoff[0].minute+3),scatter_y_min+np.abs(scatter_y_min*0.05),'Second Half')
# Rescale ylim to encmpass match events
ax.set_ylim([scatter_y_min,scatter_y_max+np.abs(scatter_y_max*0.05)])
# Save
fig.savefig('./figures/' + analysis_name + '.png',dpi=300)
fig.savefig('./figures/' + analysis_name + '.pdf',dpi=300)
Explanation: Plot figure (unweighted)
End of explanation
# Plot weighted figure
fig,ax = plt.subplots(1)
ax = plot_figure(weighted_posneg_df,ax)
ax.set_ylabel('# Pos - Neg Comments (weighted by upvotes)')
# MATCH EVENTS (BELOW HERE) MIGHT HAVE TO CHANGE
# Get y axis lims to place events
scatter_y_min, scatter_y_max = ax.get_ylim()
# Place match events
ax.scatter(goal,np.tile(scatter_y_max,len(goal)),color='black',s=15)
ax.scatter(penalty,np.tile(scatter_y_max,len(penalty)),color='black',s=15)
# Define first and second half
ax.fill_between([kickoff[1],halftime[0]],scatter_y_min,scatter_y_max+np.abs(scatter_y_max*0.05),facecolor='dimgray',alpha=0.25,zorder=0)
ax.fill_between([kickoff[0],fulltime[0]],scatter_y_min,scatter_y_max+np.abs(scatter_y_max*0.05),facecolor='dimgray',alpha=0.25,zorder=0)
ax.text(datetime.time(kickoff[1].hour,kickoff[1].minute+3),scatter_y_min+np.abs(scatter_y_min*0.05),'First Half')
ax.text(datetime.time(kickoff[0].hour,kickoff[0].minute+3),scatter_y_min+np.abs(scatter_y_min*0.05),'Second Half')
# Rescale ylim to encmpass match events
ax.set_ylim([scatter_y_min,scatter_y_max+np.abs(scatter_y_max*0.05)])
# Save
fig.savefig('./figures/weighted_' + analysis_name + '.png',dpi=300)
fig.savefig('./figures/weighted_' + analysis_name + '.pdf',dpi=300)
Explanation: Plot figure (weighted)
End of explanation |
4,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras debugging tips
Author
Step1: Now, rather than using it in a end-to-end model directly, let's try to call the layer on
some test data
Step2: We get the following
Step3: Now our code works fine
Step4: Tip 2
Step5: Calling summary() can help you check the output shape of each layer
Step6: You can also visualize the entire network topology alongside output shapes using
plot_model
Step7: With this plot, any connectivity-level error becomes immediately obvious.
Tip 3
Step8: Let's train a one-layer model on MNIST with this custom training loop.
We pick, somewhat at random, a batch size of 1024 and a learning rate of 0.1. The general
idea being to use larger batches and a larger learning rate than usual, since our
"improved" gradients should lead us to quicker convergence.
Step9: Oh no, it doesn't converge! Something is not working as planned.
Time for some step-by-step printing of what's going on with our gradients.
We add various print statements in the train_step method, and we make sure to pass
run_eagerly=True to compile() to run our code step-by-step, eagerly.
Step10: What did we learn?
The first order and second order gradients can have values that differ by orders of
magnitudes.
Sometimes, they may not even have the same sign.
Their values can vary greatly at each step.
This leads us to an obvious idea
Step11: Now, training converges! It doesn't work well at all, but at least the model learns
something.
After spending a few minutes tuning parameters, we get to the following configuration
that works somewhat well (achieves 97% validation accuracy and seems reasonably robust to
overfitting) | Python Code:
import tensorflow as tf
from tensorflow.keras import layers
class MyAntirectifier(layers.Layer):
def build(self, input_shape):
output_dim = input_shape[-1]
self.kernel = self.add_weight(
shape=(output_dim * 2, output_dim),
initializer="he_normal",
name="kernel",
trainable=True,
)
def call(self, inputs):
# Take the positive part of the input
pos = tf.nn.relu(inputs)
# Take the negative part of the input
neg = tf.nn.relu(-inputs)
# Concatenate the positive and negative parts
concatenated = tf.concat([pos, neg], axis=0)
# Project the concatenation down to the same dimensionality as the input
return tf.matmul(concatenated, self.kernel)
Explanation: Keras debugging tips
Author: fchollet<br>
Date created: 2020/05/16<br>
Last modified: 2020/05/16<br>
Description: Four simple tips to help you debug your Keras code.
Introduction
It's generally possible to do almost anything in Keras without writing code per se:
whether you're implementing a new type of GAN or the latest convnet architecture for
image segmentation, you can usually stick to calling built-in methods. Because all
built-in methods do extensive input validation checks, you will have little to no
debugging to do. A Functional API model made entirely of built-in layers will work on
first try -- if you can compile it, it will run.
However, sometimes, you will need to dive deeper and write your own code. Here are some
common examples:
Creating a new Layer subclass.
Creating a custom Metric subclass.
Implementing a custom train_step on a Model.
This document provides a few simple tips to help you navigate debugging in these
situations.
Tip 1: test each part before you test the whole
If you've created any object that has a chance of not working as expected, don't just
drop it in your end-to-end process and watch sparks fly. Rather, test your custom object
in isolation first. This may seem obvious -- but you'd be surprised how often people
don't start with this.
If you write a custom layer, don't call fit() on your entire model just yet. Call
your layer on some test data first.
If you write a custom metric, start by printing its output for some reference inputs.
Here's a simple example. Let's write a custom layer with a bug in it:
End of explanation
class MyAntirectifier(layers.Layer):
def build(self, input_shape):
output_dim = input_shape[-1]
self.kernel = self.add_weight(
shape=(output_dim * 2, output_dim),
initializer="he_normal",
name="kernel",
trainable=True,
)
def call(self, inputs):
pos = tf.nn.relu(inputs)
neg = tf.nn.relu(-inputs)
print("pos.shape:", pos.shape)
print("neg.shape:", neg.shape)
concatenated = tf.concat([pos, neg], axis=0)
print("concatenated.shape:", concatenated.shape)
print("kernel.shape:", self.kernel.shape)
return tf.matmul(concatenated, self.kernel)
Explanation: Now, rather than using it in an end-to-end model directly, let's try to call the layer on
some test data:
x = tf.random.normal(shape=(2, 5))
y = MyAntirectifier()(x)
We get the following error:
...
1 x = tf.random.normal(shape=(2, 5))
----> 2 y = MyAntirectifier()(x)
...
17 neg = tf.nn.relu(-inputs)
18 concatenated = tf.concat([pos, neg], axis=0)
---> 19 return tf.matmul(concatenated, self.kernel)
...
InvalidArgumentError: Matrix size-incompatible: In[0]: [4,5], In[1]: [10,5] [Op:MatMul]
Looks like our input tensor in the matmul op may have an incorrect shape.
Let's add a print statement to check the actual shapes:
End of explanation
class MyAntirectifier(layers.Layer):
def build(self, input_shape):
output_dim = input_shape[-1]
self.kernel = self.add_weight(
shape=(output_dim * 2, output_dim),
initializer="he_normal",
name="kernel",
trainable=True,
)
def call(self, inputs):
pos = tf.nn.relu(inputs)
neg = tf.nn.relu(-inputs)
print("pos.shape:", pos.shape)
print("neg.shape:", neg.shape)
concatenated = tf.concat([pos, neg], axis=1)
print("concatenated.shape:", concatenated.shape)
print("kernel.shape:", self.kernel.shape)
return tf.matmul(concatenated, self.kernel)
Explanation: We get the following:
pos.shape: (2, 5)
neg.shape: (2, 5)
concatenated.shape: (4, 5)
kernel.shape: (10, 5)
Turns out we had the wrong axis for the concat op! We should be concatenating neg and
pos alongside the feature axis 1, not the batch axis 0. Here's the correct version:
End of explanation
x = tf.random.normal(shape=(2, 5))
y = MyAntirectifier()(x)
Explanation: Now our code works fine:
End of explanation
from tensorflow import keras
num_tags = 12 # Number of unique issue tags
num_words = 10000 # Size of vocabulary obtained when preprocessing text data
num_departments = 4 # Number of departments for predictions
title_input = keras.Input(
shape=(None,), name="title"
) # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name="body") # Variable-length sequence of ints
tags_input = keras.Input(
shape=(num_tags,), name="tags"
) # Binary vectors of size `num_tags`
# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the text into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)
# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)
# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])
# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name="priority")(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name="department")(x)
# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(
inputs=[title_input, body_input, tags_input],
outputs=[priority_pred, department_pred],
)
Explanation: Tip 2: use model.summary() and plot_model() to check layer output shapes
If you're working with complex network topologies, you're going to need a way
to visualize how your layers are connected and how they transform the data that passes
through them.
Here's an example. Consider this model with three inputs and two outputs (lifted from the
Functional API
guide):
End of explanation
model.summary()
Explanation: Calling summary() can help you check the output shape of each layer:
End of explanation
keras.utils.plot_model(model, show_shapes=True)
Explanation: You can also visualize the entire network topology alongside output shapes using
plot_model:
End of explanation
class MyModel(keras.Model):
def train_step(self, data):
inputs, targets = data
trainable_vars = self.trainable_variables
with tf.GradientTape() as tape2:
with tf.GradientTape() as tape1:
preds = self(inputs, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compiled_loss(targets, preds)
# Compute first-order gradients
dl_dw = tape1.gradient(loss, trainable_vars)
# Compute second-order gradients
d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)
# Combine first-order and second-order gradients
grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]
# Update weights
self.optimizer.apply_gradients(zip(grads, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(targets, preds)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
Explanation: With this plot, any connectivity-level error becomes immediately obvious.
Tip 3: to debug what happens during fit(), use run_eagerly=True
The fit() method is fast: it runs a well-optimized, fully-compiled computation graph.
That's great for performance, but it also means that the code you're executing isn't the
Python code you've written. This can be problematic when debugging. As you may recall,
Python is slow -- so we use it as a staging language, not as an execution language.
Thankfully, there's an easy way to run your code in "debug mode", fully eagerly:
pass run_eagerly=True to compile(). Your call to fit() will now get executed line
by line, without any optimization. It's slower, but it makes it possible to print the
value of intermediate tensors, or to use a Python debugger. Great for debugging.
Here's a basic example: let's write a really simple model with a custom train_step. Our
model just implements gradient descent, but instead of first-order gradients, it uses a
combination of first-order and second-order gradients. Pretty trivial so far.
Can you spot what we're doing wrong?
End of explanation
import numpy as np
# Construct an instance of MyModel
def get_model():
inputs = keras.Input(shape=(784,))
intermediate = layers.Dense(256, activation="relu")(inputs)
outputs = layers.Dense(10, activation="softmax")(intermediate)
model = MyModel(inputs, outputs)
return model
# Prepare data
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784)) / 255
model = get_model()
model.compile(
optimizer=keras.optimizers.SGD(learning_rate=1e-2),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=3, batch_size=1024, validation_split=0.1)
Explanation: Let's train a one-layer model on MNIST with this custom training loop.
We pick, somewhat at random, a batch size of 1024 and a learning rate of 0.1. The general
idea being to use larger batches and a larger learning rate than usual, since our
"improved" gradients should lead us to quicker convergence.
End of explanation
class MyModel(keras.Model):
def train_step(self, data):
print()
print("----Start of step: %d" % (self.step_counter,))
self.step_counter += 1
inputs, targets = data
trainable_vars = self.trainable_variables
with tf.GradientTape() as tape2:
with tf.GradientTape() as tape1:
preds = self(inputs, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compiled_loss(targets, preds)
# Compute first-order gradients
dl_dw = tape1.gradient(loss, trainable_vars)
# Compute second-order gradients
d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)
print("Max of dl_dw[0]: %.4f" % tf.reduce_max(dl_dw[0]))
print("Min of dl_dw[0]: %.4f" % tf.reduce_min(dl_dw[0]))
print("Mean of dl_dw[0]: %.4f" % tf.reduce_mean(dl_dw[0]))
print("-")
print("Max of d2l_dw2[0]: %.4f" % tf.reduce_max(d2l_dw2[0]))
print("Min of d2l_dw2[0]: %.4f" % tf.reduce_min(d2l_dw2[0]))
print("Mean of d2l_dw2[0]: %.4f" % tf.reduce_mean(d2l_dw2[0]))
# Combine first-order and second-order gradients
grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]
# Update weights
self.optimizer.apply_gradients(zip(grads, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(targets, preds)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
model = get_model()
model.compile(
optimizer=keras.optimizers.SGD(learning_rate=1e-2),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
run_eagerly=True,
)
model.step_counter = 0
# We pass epochs=1 and steps_per_epoch=10 to only run 10 steps of training.
model.fit(x_train, y_train, epochs=1, batch_size=1024, verbose=0, steps_per_epoch=10)
Explanation: Oh no, it doesn't converge! Something is not working as planned.
Time for some step-by-step printing of what's going on with our gradients.
We add various print statements in the train_step method, and we make sure to pass
run_eagerly=True to compile() to run our code step-by-step, eagerly.
End of explanation
class MyModel(keras.Model):
def train_step(self, data):
inputs, targets = data
trainable_vars = self.trainable_variables
with tf.GradientTape() as tape2:
with tf.GradientTape() as tape1:
preds = self(inputs, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compiled_loss(targets, preds)
# Compute first-order gradients
dl_dw = tape1.gradient(loss, trainable_vars)
# Compute second-order gradients
d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)
dl_dw = [tf.math.l2_normalize(w) for w in dl_dw]
d2l_dw2 = [tf.math.l2_normalize(w) for w in d2l_dw2]
# Combine first-order and second-order gradients
grads = [0.5 * w1 + 0.5 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]
# Update weights
self.optimizer.apply_gradients(zip(grads, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(targets, preds)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
model = get_model()
model.compile(
optimizer=keras.optimizers.SGD(learning_rate=1e-2),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=5, batch_size=1024, validation_split=0.1)
Explanation: What did we learn?
The first order and second order gradients can have values that differ by orders of
magnitudes.
Sometimes, they may not even have the same sign.
Their values can vary greatly at each step.
This leads us to an obvious idea: let's normalize the gradients before combining them.
End of explanation
class MyModel(keras.Model):
def train_step(self, data):
inputs, targets = data
trainable_vars = self.trainable_variables
with tf.GradientTape() as tape2:
with tf.GradientTape() as tape1:
preds = self(inputs, training=True) # Forward pass
# Compute the loss value
# (the loss function is configured in `compile()`)
loss = self.compiled_loss(targets, preds)
# Compute first-order gradients
dl_dw = tape1.gradient(loss, trainable_vars)
# Compute second-order gradients
d2l_dw2 = tape2.gradient(dl_dw, trainable_vars)
dl_dw = [tf.math.l2_normalize(w) for w in dl_dw]
d2l_dw2 = [tf.math.l2_normalize(w) for w in d2l_dw2]
# Combine first-order and second-order gradients
grads = [0.2 * w1 + 0.8 * w2 for (w1, w2) in zip(d2l_dw2, dl_dw)]
# Update weights
self.optimizer.apply_gradients(zip(grads, trainable_vars))
# Update metrics (includes the metric that tracks the loss)
self.compiled_metrics.update_state(targets, preds)
# Return a dict mapping metric names to current value
return {m.name: m.result() for m in self.metrics}
model = get_model()
lr = learning_rate = keras.optimizers.schedules.InverseTimeDecay(
initial_learning_rate=0.1, decay_steps=25, decay_rate=0.1
)
model.compile(
optimizer=keras.optimizers.SGD(lr),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=50, batch_size=2048, validation_split=0.1)
Explanation: Now, training converges! It doesn't work well at all, but at least the model learns
something.
After spending a few minutes tuning parameters, we get to the following configuration
that works somewhat well (achieves 97% validation accuracy and seems reasonably robust to
overfitting):
Use 0.2 * w1 + 0.8 * w2 for combining gradients.
Use a learning rate schedule that decays over time (inverse time decay).
I'm not going to say that the idea works -- this isn't at all how you're supposed to do
second-order optimization (pointers: see the Newton & Gauss-Newton methods, quasi-Newton
methods, and BFGS). But hopefully this demonstration gave you an idea of how you can
debug your way out of uncomfortable training situations.
Remember: use run_eagerly=True for debugging what happens in fit(). And when your code
is finally working as expected, make sure to remove this flag in order to get the best
runtime performance!
Here's our final training run:
End of explanation |
4,216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PDF is garbage
In this example, we are looking for a link to some source code
Step1: PDF is garbage, continued
If we remove line breaks to fix URLs that have been wrapped, we discover
that the visible line breaks in the document do not correspond to actual
line breaks in the represented text. The result is random garbage.
Step2: Nope.
At this point, the author elects to flip a table.
Let's try looking at the HTML version. I'll swipe some code from
Dive into Python here, because
finding URLs in a HTML document is what is known as a "Solved Problem."
Step3: Here are all the URLs in the document...
Step4: Bleh. That is mostly links in the references, ads and navigation cruft
from the journal's content mismanagement system. Because their system
is heinously ad hoc, there is no base URL. So, we're forced to use an
ad hoc exclusion list.
Step5: Much better. Now, let's see if these exist...
Step6: Looks like this will work, though we'll need to make a hand-curated list of
excluded URLs. Otherwise, the counts of dead links could be badly skewed by
any issues within the journal's content mismanagement system, ad servers and
other irrelevant crud.
Walking through Zotero
Let's try walking through the publications in a Zotero library...
Step7: So far so good. Let's have a look at the url attribute...
Step8: Well, it looks like not all resources have URLs. Let's try looping over
some of these and extracting links...
Step9: Clearly, we need to expand the excluded URL list. And we need to match
domains, not URLs.
Step10: This excluded list is getting sloppy as the author slowly lapses into
a vegetative state, but we'll push on anyway.
Step11: Some journals aggressively ban and throttle IPs, so this process gets slow
and awful, but it works. Let's check these for dead links... | Python Code:
urlre = re.compile( '(?P<url>https?://[^\s]+)' )
for page in doc :
print urlre.findall( page )
Explanation: PDF is garbage
In this example, we are looking for a link to some source code :
http://prodege.jgi-psf.org//downloads/src
However, in the PDF, the URL is line wrapped, so the src is lost.
End of explanation
urlre = re.compile( '(?P<url>https?://[^\s]+)' )
for page in doc :
print urlre.findall( page.replace('\n','') )
Explanation: PDF is garbage, continued
If we remove line breaks to fix URLs that have been wrapped, we discover
that the visible line breaks in the document do not correspond to actual
line breaks in the represented text. The result is random garbage.
End of explanation
from sgmllib import SGMLParser
class URLLister(SGMLParser):
def reset(self):
SGMLParser.reset(self)
self.urls = []
def start_a(self, attrs):
href = [v for k, v in attrs if k=='href']
if href:
self.urls.extend(href)
def get_urls_from(url):
url_list = []
import urllib
usock = urllib.urlopen(url)
parser = URLLister()
parser.feed(usock.read())
usock.close()
parser.close()
map(url_list.append,
[item for item in parser.urls if item.startswith(('http', 'ftp', 'www'))])
return url_list
Explanation: Nope.
At this point, the author elects to flip a table.
Let's try looking at the HTML version. I'll swipe some code from
Dive into Python here, because
finding URLs in a HTML document is what is known as a "Solved Problem."
End of explanation
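As an aside (not part of the original notebook): sgmllib only ships with Python 2, so the same "Solved Problem" can also be handled with BeautifulSoup if this ever needs to run elsewhere. The function name below is made up for illustration.
from bs4 import BeautifulSoup
import urllib

def get_urls_from_bs4(url):
    # Fetch the page and let BeautifulSoup pull out every <a href="...">
    usock = urllib.urlopen(url)  # Python 2; urllib.request.urlopen in Python 3
    soup = BeautifulSoup(usock.read(), 'lxml')
    usock.close()
    return [a['href'] for a in soup.find_all('a', href=True)
            if a['href'].startswith(('http', 'ftp', 'www'))]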
urls = get_urls_from('http://www.nature.com/ismej/journal/v10/n1/full/ismej2015100a.html')
urls
Explanation: Here are all the URLs in the document...
End of explanation
excluded = [ 'http://www.nature.com',
'http://dx.doi.org',
'http://www.ncbi.nlm.nih.gov',
'http://creativecommons.org',
'https://s100.copyright.com',
'http://mts-isme.nature.com',
'http://www.isme-microbes.org',
'http://ad.doubleclick.net',
'http://mse.force.com',
'http://links.isiglobalnet2.com',
'http://www.readcube.com',
'http://chemport.cas.org',
'http://publicationethics.org/',
'http://www.natureasia.com/'
]
def novel_url( url ) :
for excluded_url in excluded :
if url.startswith( excluded_url ) :
return False
return True
filter( novel_url, urls )
Explanation: Bleh. That is mostly links in the references, ads and navigation cruft
from the journal's content mismanagement system. Because their system
is heinously ad hoc, there is no base URL. So, we're forced to use an
ad hoc exclusion list.
End of explanation
import requests
for url in filter( novel_url, urls ) :
request = requests.get( url )
if request.status_code == 200:
print 'Good : ', url
else:
print 'Fail : ', url
Explanation: Much better. Now, let's see if these exist...
End of explanation
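A hedged aside, not from the original notebook: requests.get() with no timeout can hang on a dead host rather than fail cleanly, so a slightly more defensive version of the same check might look like this (the ten-second timeout is an arbitrary choice).
def check_url(url, timeout=10):
    try:
        # Treat anything other than a clean 200 (or a network error) as a failure
        return requests.get(url, timeout=timeout, allow_redirects=True).status_code == 200
    except requests.exceptions.RequestException:
        return False

for url in filter(novel_url, urls):
    if check_url(url):
        print 'Good : ', url
    else:
        print 'Fail : ', url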
from pyzotero import zotero
api_key = open( 'zotero_api_key.txt' ).read().strip()
library_id = open( 'zotero_api_userID.txt' ).read().strip()
library_type = 'group'
group_id = '405341' # microBE.net group ID
zot = zotero.Zotero(group_id, library_type, api_key)
items = zot.top(limit=5)
# we've retrieved the latest five top-level items in our library
# we can print each item's item type and ID
for item in items:
#print('Item: %s | Key: %s') % (item['data']['itemType'], item['data']['key'])
print item['data']['key'], ':', item['data']['title']
Explanation: Looks like this will work, though we'll need to make a hand-curated list of
excluded URLs. Otherwise, the counts of dead links could be badly skewed by
any issues within the journal's content mismanagement system, ad servers and
other irrelevant crud.
Walking through Zotero
Let's try walking through the publications in a Zotero library...
End of explanation
for item in items:
print item['data']['key'], ':', item['data']['url']
Explanation: So far so good. Let's have a look at the url attribute...
End of explanation
for item in items:
paper_url = item['data']['url']
if paper_url.startswith( 'http' ) :
link_urls = get_urls_from( paper_url )
print item['data']['key']
for url in filter( novel_url, link_urls ) :
print ' ', url
Explanation: Well, it looks like not all resources have URLs. Let's try looping over
some of these and extracting links...
End of explanation
excluded = [ 'nature.com',
'doi.org',
'ncbi.nlm.nih.gov',
'creativecommons.org',
'copyright.com',
'isme-microbes.org',
'doubleclick.net',
'force.com',
'isiglobalnet2.com',
'readcube.com',
'cas.org',
'publicationethics.org',
'natureasia.com',
'uq.edu.au',
'edx.org',
'facebook.com',
'instagram.com',
'youtube.com',
'flickr.com',
'twitter.com',
'go8.edu.au',
'google.com',
'vimeo.com',
'peerj.com',
'mendeley.com',
'cloudfront.net',
'webofknowledge.com',
'sciencedirect.com',
'aol.com',
'pinterest.com',
'scopus.com',
'live.com',
'exlibrisgroup.com',
'usyd.edu.au',
'academicanalytics.com',
'microbiomedigest.com',
'ask.com',
'sogou.com',
'ou.com',
'du.edu',
'ru.nl',
'freshdesk.com',
'caltech.edu',
'traackr.com',
'adobe.com',
'linkedin.com',
'feedly.com',
'google.co.uk',
'glgoo.org',
'library.wisc.edu',
'lib.fsu.edu',
'library.illinois.edu',
'exchange.ou.edu',
'lib.noaa.gov',
'innocentive.com',
'sfx.kcl.ac.uk',
'sfx.unimi.it',
'lib.utexas.edu',
'orcid.org',
]
def novel_url( url ) :
for excluded_url in excluded :
if url.__contains__( excluded_url ) :
return False
return True
Explanation: Clearly, we need to expand the excluded URL list. And we need to match
domains, not URLs.
End of explanation
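A minimal sketch of what real domain matching could look like (an addition, not part of the original notebook): urlparse isolates the hostname, so 'nature.com' matches 'www.nature.com' without also matching any URL that merely contains that string somewhere in its path.
from urlparse import urlparse  # urllib.parse in Python 3

def novel_url_by_domain(url):
    hostname = urlparse(url).hostname or ''
    for excluded_domain in excluded:
        # Exact host or any subdomain of an excluded domain is filtered out
        if hostname == excluded_domain or hostname.endswith('.' + excluded_domain):
            return False
    return True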
for item in items:
paper_url = item['data']['url']
if paper_url.startswith( 'http' ) :
try :
link_urls = get_urls_from( paper_url )
print item['data']['key']
for url in list(set(filter( novel_url, link_urls ))) :
print ' ', url
except IOError :
print item['data']['key'], 'FAILED'
Explanation: This excluded list is getting sloppy as the author slowly lapses into
a vegetative state, but we'll push on anyway.
End of explanation
for item in items:
paper_url = item['data']['url']
if paper_url.startswith( 'http' ) :
try :
link_urls = get_urls_from( paper_url )
print item['data']['key']
for url in list(set(filter( novel_url, link_urls ))) :
request = requests.get( url )
if request.status_code == 200:
print ' Good : ', url
else:
print ' Fail : ', url
except IOError :
print item['data']['key'], 'FAILED'
Explanation: Some journals aggressively ban and throttle IPs, so this process gets slow
and awful, but it works. Let's check these for dead links...
End of explanation |
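One hedged way to make the walk less painful for journals that throttle (not in the original notebook) is simply to pause between requests; the two-second delay and ten-second timeout below are arbitrary numbers.
import time

def check_links_politely(link_urls, delay=2.0):
    results = {}
    for url in link_urls:
        try:
            results[url] = requests.get(url, timeout=10).status_code == 200
        except requests.exceptions.RequestException:
            results[url] = False
        time.sleep(delay)  # pause so the target server sees a slow, polite crawler
    return results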
4,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topic Modeling with MALLET
We'd like to test how Taylor Salo integrated MALLET into NeuroSynth, and whether that integration works in a docker container.
First, let's import some dependencies and text to work with.
For testing, we'll use an XML file separately downloaded from PubMed. In the spirit of NeuroSynth, we downloaded Tal Yarkoni's bibliography. Thanks, Tal!
Step1: Three articles do not have abstracts
Step2: We have a test dataset! Let's see how it plays with MALLET. | Python Code:
from bs4 import BeautifulSoup
import pandas as pd
with open('../neurosynth/tests/data/yarkoni_pubmed.xml') as infile:
xml_file = infile.read()
soup = BeautifulSoup(xml_file, 'lxml')
try:
assert type(soup) == BeautifulSoup
except AssertionError:
print('Check file type! Must be HTML or XML.')
titles = soup.find_all('articletitle')
abstracts = soup.find_all('abstract')
if len(titles) != len(abstracts):
print('Warning: Some articles do not have abstracts on PubMed!')
print('Only articles with complete data will be included.')
Explanation: Topic Modeling with MALLET
We'd like to test how Taylor Salo integrated MALLET into NeuroSynth, and whether that integration works in a docker container.
First, let's import some dependencies and text to work with.
For testing, we'll use an XML file separately downloaded from PubMed. In the spirit of NeuroSynth, we downloaded Tal Yarkoni's bibliography. Thanks, Tal!
End of explanation
abstracts = []
pmids = []
articles = soup.find_all('pubmedarticle')
for a in articles:
if a.find_all('abstract')!= []:
# This is a little messy, but pulls out the
# results in plain text without another loop.
abstracts.append(a.find_all('abstracttext')[0].get_text())
pmids.append(a.find_all(idtype='pubmed')[0].get_text())
df = pd.DataFrame({'pmid': pmids,
'abstract': abstracts})
df.head()
Explanation: Three articles do not have abstracts:
1. Pain in the ACC?
2. Introduction to the special issue on reliability and
replication in cognitive and affective neuroscience research.
3. Establishing homology between monkey and human brains.
Maybe because they're commentaries? We'll need to filter the results to only consider articles with abstracts. Then, import any matching articles into a pandas dataframe.
End of explanation
import os
import subprocess
import shutil
import sys
sys.path.append(os.path.abspath('..'))
from neurosynth.analysis.reduce import topic_models
weights_df, keys_df = topic_models(df)
keys_df.head()
Explanation: We have a test dataset! Let's see how it plays with MALLET.
End of explanation |
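A small sanity check, assuming topic_models() returns pandas DataFrames whose rows line up with the abstracts (the column names depend on its implementation, so this only inspects shapes and a few rows).
print('abstracts collected:', len(df))
print('weights_df shape:', weights_df.shape)
print('keys_df shape:', keys_df.shape)
weights_df.head()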
4,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CollateX and XML, Part 2
David J. Birnbaum (djbpitt@gmail.com, http
Step1: The WitnessSet class represents all of the witnesses being collated. The generate_json_input() method returns a JSON object that is suitable for input into CollateX.
At the moment each witness contains just one line (<l> element), so the entire witness is treated as a line. In future parts of this tutorial, the lines will be processed individually, segmenting the collation task into subtasks that collate just one line at a time.
Step3: The Line class contains methods applied to individual lines (note that each witness in this part of the tutorial consists of only a single line). The XSLT stylesheets and the functions to use them have been moved into the Line class, since they apply to individual lines. The siglum() method returns the manuscript identifier and the tokens() method returns a list of JSON objects, one for each word token.
With a witness that contained more than one line, the siglum would be a property of the witness and the tokens would be a property of each line of the witness. In this part of the tutorial, since each witness has only one line, the siglum is recorded as an attribute of the line, rather than of an XML ancestor that contains all of the lines of the witness.
Step4: The Word class contains methods that apply to individual words. unwrap() and normalize() are private; they are used by createToken() to return a JSON object with the "t" and "n" properties for a word token.
Step9: Create XML data and assign to a witnessSet variable
Step10: Generate JSON from the data and examine it
Step11: Collate and output the results as a plain-text alignment table, as JSON, and as colored HTML | Python Code:
from collatex import *
from lxml import etree
import json,re
Explanation: CollateX and XML, Part 2
David J. Birnbaum (djbpitt@gmail.com, http://www.obdurodon.org), 2015-06-29
This example collates a single line of XML from four witnesses. In Part 1 we spelled out the details step by step in a way that would not be used in a real project, but that made it easy to see how each step moves toward the final result. In Part 2 we employ three classes (WitnessSet, Line, Word) to make the code more extensible and adaptable.
The sample input is still a single line for four witnesses, given as strings within the Python script. This time, though, the witness identifier (siglum) is given as an attribute on the XML input line.
Load libraries. Unchanged from Part 1.
End of explanation
class WitnessSet:
def __init__(self,witnessList):
self.witnessList = witnessList
def generate_json_input(self):
json_input = {}
witnesses = []
json_input['witnesses'] = witnesses
for witness in self.witnessList:
line = Line(witness)
witnessData = {}
witnessData['id'] = line.siglum()
witnessTokens = {}
witnessData['tokens'] = line.tokens()
witnesses.append(witnessData)
return json_input
Explanation: The WitnessSet class represents all of the witnesses being collated. The generate_json_input() method returns a JSON object that is suitable for input into CollateX.
At the moment each witness contains just one line (<l> element), so the entire witness is treated as a line. In future parts of this tutorial, the lines will be processed individually, segmenting the collation task into subtasks that collate just one line at a time.
End of explanation
class Line:
addWMilestones = etree.XML(
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" indent="no" encoding="UTF-8" omit-xml-declaration="yes"/>
<xsl:template match="*|@*">
<xsl:copy>
<xsl:apply-templates select="node() | @*"/>
</xsl:copy>
</xsl:template>
<xsl:template match="/*">
<xsl:copy>
<xsl:apply-templates select="@*"/>
<!-- insert a <w/> milestone before the first word -->
<w/>
<xsl:apply-templates/>
</xsl:copy>
</xsl:template>
<!-- convert <add>, <sic>, and <crease> to milestones (and leave them that way)
CUSTOMIZE HERE: add other elements that may span multiple word tokens
-->
<xsl:template match="add | sic | crease ">
<xsl:element name="{name()}">
<xsl:attribute name="n">start</xsl:attribute>
</xsl:element>
<xsl:apply-templates/>
<xsl:element name="{name()}">
<xsl:attribute name="n">end</xsl:attribute>
</xsl:element>
</xsl:template>
<xsl:template match="note"/>
<xsl:template match="text()">
<xsl:call-template name="whiteSpace">
<xsl:with-param name="input" select="translate(.,'
',' ')"/>
</xsl:call-template>
</xsl:template>
<xsl:template name="whiteSpace">
<xsl:param name="input"/>
<xsl:choose>
<xsl:when test="not(contains($input, ' '))">
<xsl:value-of select="$input"/>
</xsl:when>
<xsl:when test="starts-with($input,' ')">
<xsl:call-template name="whiteSpace">
<xsl:with-param name="input" select="substring($input,2)"/>
</xsl:call-template>
</xsl:when>
<xsl:otherwise>
<xsl:value-of select="substring-before($input, ' ')"/>
<w/>
<xsl:call-template name="whiteSpace">
<xsl:with-param name="input" select="substring-after($input,' ')"/>
</xsl:call-template>
</xsl:otherwise>
</xsl:choose>
</xsl:template>
</xsl:stylesheet>
)
transformAddW = etree.XSLT(addWMilestones)
xsltWrapW = etree.XML('''
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="xml" indent="no" omit-xml-declaration="yes"/>
<xsl:template match="/*">
<xsl:copy>
<xsl:apply-templates select="w"/>
</xsl:copy>
</xsl:template>
<xsl:template match="w">
<!-- faking <xsl:for-each-group> as well as the "<<" and except" operators -->
<xsl:variable name="tooFar" select="following-sibling::w[1] | following-sibling::w[1]/following::node()"/>
<w>
<xsl:copy-of select="following-sibling::node()[count(. | $tooFar) != count($tooFar)]"/>
</w>
</xsl:template>
</xsl:stylesheet>
''')
transformWrapW = etree.XSLT(xsltWrapW)
def __init__(self,line):
self.line = line
def siglum(self):
return str(etree.XML(self.line).xpath('/l/@wit')[0])
def tokens(self):
return [Word(token).createToken() for token in Line.transformWrapW(Line.transformAddW(etree.XML(self.line))).xpath('//w')]
Explanation: The Line class contains methods applied to individual lines (note that each witness in this part of the tutorial consists of only a single line). The XSLT stylesheets and the functions to use them have been moved into the Line class, since they apply to individual lines. The siglum() method returns the manuscript identifier and the tokens() method returns a list of JSON objects, one for each word token.
With a witness that contained more than one line, the siglum would be a property of the witness and the tokens would be a property of each line of the witness. In this part of the tutorial, since each witness has only one line, the siglum is recorded as an attribute of the line, rather than of an XML ancestor that contains all of the lines of the witness.
End of explanation
class Word:
unwrapRegex = re.compile('<w>(.*)</w>')
stripTagsRegex = re.compile('<.*?>')
def __init__(self,word):
self.word = word
def unwrap(self):
return Word.unwrapRegex.match(etree.tostring(self.word,encoding='unicode')).group(1)
def normalize(self):
return Word.stripTagsRegex.sub('',self.unwrap().lower())
def createToken(self):
token = {}
token['t'] = self.unwrap()
token['n'] = self.normalize()
return token
Explanation: The Word class contains methods that apply to individual words. unwrap() and normalize() are private; they are used by createToken() to return a JSON object with the "t" and "n" properties for a word token.
End of explanation
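A quick illustrative check, not part of the original tutorial: feeding a hand-built <w> element to createToken() should show the markup preserved in the "t" value and a lowercased, tag-stripped form in "n".
sample_w = etree.XML("<w>p<abbrev>er</abbrev>dent</w>")
print(Word(sample_w).createToken())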
A = """<l wit='A'><abbrev>Et</abbrev>cil i partent seulement</l>"""
B = """<l wit='B'><abbrev>Et</abbrev>cil i p<abbrev>er</abbrev>dent ausem<abbrev>en</abbrev>t</l>"""
C = """<l wit='C'><abbrev>Et</abbrev>cil i p<abbrev>ar</abbrev>tent seulema<abbrev>n</abbrev>t</l>"""
D = """<l wit='D'>E cil i partent sulement</l>"""
witnessSet = WitnessSet([A,B,C,D])
Explanation: Create XML data and assign to a witnessSet variable
End of explanation
json_input = witnessSet.generate_json_input()
print(json_input)
Explanation: Generate JSON from the data and examine it
End of explanation
collationText = collate_pretokenized_json(json_input,output='table',layout='vertical')
print(collationText)
collationJSON = collate_pretokenized_json(json_input,output='json')
print(collationJSON)
collationHTML2 = collate_pretokenized_json(json_input,output='html2')
Explanation: Collate and output the results as a plain-text alignment table, as JSON, and as colored HTML
End of explanation |
4,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Hub Authors.
Step1: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: You will use the AdamW optimizer from tensorflow/models to fine-tune BERT, which you will install as well.
Step3: Next, configure TFHub to read checkpoints directly from TFHub's Cloud Storage buckets. This is only recommended when running TFHub models on TPU.
Without this setting TFHub would download the compressed file and extract the checkpoint locally. Attempting to load from these local files will fail with the following error
Step4: Connect to the TPU worker
The following code connects to the TPU worker and changes TensorFlow's default device to the CPU device on the TPU worker. It also defines a TPU distribution strategy that you will use to distribute model training onto the 8 separate TPU cores available on this one TPU worker. See TensorFlow's TPU guide for more information.
Step5: Loading models from TensorFlow Hub
Here you can choose which BERT model you will load from TensorFlow Hub and fine-tune.
There are multiple BERT models available to choose from.
BERT-Base, Uncased and seven more models with trained weights released by the original BERT authors.
Small BERTs have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.
ALBERT
Step6: Preprocess the text
On the Classify text with BERT colab the preprocessing model is used directly embedded with the BERT encoder.
This tutorial demonstrates how to do preprocessing as part of your input pipeline for training, using Dataset.map, and then merge it into the model that gets exported for inference. That way, both training and inference can work from raw text inputs, although the TPU itself requires numeric inputs.
TPU requirements aside, it can help performance to have preprocessing done asynchronously in an input pipeline (you can learn more in the tf.data performance guide).
This tutorial also demonstrates how to build multi-input models, and how to adjust the sequence length of the inputs to BERT.
Let's demonstrate the preprocessing model.
Step7: Each preprocessing model also provides a method, .bert_pack_inputs(tensors, seq_length), which takes a list of tokens (like tok above) and a sequence length argument. This packs the inputs to create a dictionary of tensors in the format expected by the BERT model.
Step9: Here are some details to pay attention to
Step10: Let's demonstrate the preprocessing model. You will create a test with two sentences input (input1 and input2). The output is what a BERT model would expect as input
Step11: Let's take a look at the model's structure, paying attention to the two inputs you just defined.
Step12: To apply the preprocessing in all the inputs from the dataset, you will use the map function from the dataset. The result is then cached for performance.
Step13: Define your model
You are now ready to define your model for sentence or sentence pair classification by feeding the preprocessed inputs through the BERT encoder and putting a linear classifier on top (or other arrangement of layers as you prefer), and using dropout for regularization.
Step14: Let's try running the model on some preprocessed inputs.
Step15: Choose a task from GLUE
You are going to use a TensorFlow DataSet from the GLUE benchmark suite.
Colab lets you download these small datasets to the local filesystem, and the code below reads them entirely into memory, because the separate TPU worker host cannot access the local filesystem of the colab runtime.
For bigger datasets, you'll need to create your own Google Cloud Storage bucket and have the TPU worker read the data from there. You can learn more in the TPU guide.
It's recommended to start with the CoLa dataset (for single sentence) or MRPC (for multi sentence) since these are small and don't take long to fine tune.
Step16: The dataset also determines the problem type (classification or regression) and the appropriate loss function for training.
Step17: Train your model
Finally, you can train the model end-to-end on the dataset you chose.
Distribution
Recall the set-up code at the top, which has connected the colab runtime to
a TPU worker with multiple TPU devices. To distribute training onto them, you will create and compile your main Keras model within the scope of the TPU distribution strategy. (For details, see Distributed training with Keras.)
Preprocessing, on the other hand, runs on the CPU of the worker host, not the TPUs, so the Keras model for preprocessing as well as the training and validation datasets mapped with it are built outside the distribution strategy scope. The call to Model.fit() will take care of distributing the passed-in dataset to the model replicas.
Note
Step18: Export for inference
You will create a final model that has the preprocessing part and the fine-tuned BERT we've just created.
At inference time, preprocessing needs to be part of the model (because there is no longer a separate input queue as for training data that does it). Preprocessing is not just computation; it has its own resources (the vocab table) that must be attached to the Keras Model that is saved for export.
This final assembly is what will be saved.
You are going to save the model on colab and later you can download to keep it for the future (View -> Table of contents -> Files).
Step19: Test the model
The final step is testing the results of your exported model.
Just to make some comparison, let's reload the model and test it using some inputs from the test split from the dataset.
Note
Step20: Test
Step21: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. Notice there are some small differences in the input. In Python, you can test them as follows | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Hub Authors.
End of explanation
!pip install -q -U tensorflow-text
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/bert_glue"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/bert_glue.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/bert_glue.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/bert_glue.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/collections/bert/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
Solve GLUE tasks using BERT on TPU
BERT can be used to solve many problems in natural language processing. You will learn how to fine-tune BERT for many tasks from the GLUE benchmark:
CoLA (Corpus of Linguistic Acceptability): Is the sentence grammatically correct?
SST-2 (Stanford Sentiment Treebank): The task is to predict the sentiment of a given sentence.
MRPC (Microsoft Research Paraphrase Corpus): Determine whether a pair of sentences are semantically equivalent.
QQP (Quora Question Pairs2): Determine whether a pair of questions are semantically equivalent.
MNLI (Multi-Genre Natural Language Inference): Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral).
QNLI (Question-answering Natural Language Inference): The task is to determine whether the context sentence contains the answer to the question.
RTE (Recognizing Textual Entailment): Determine if a sentence entails a given hypothesis or not.
WNLI (Winograd Natural Language Inference): The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence.
This tutorial contains complete end-to-end code to train these models on a TPU. You can also run this notebook on a GPU, by changing one line (described below).
In this notebook, you will:
Load a BERT model from TensorFlow Hub
Choose one of GLUE tasks and download the dataset
Preprocess the text
Fine-tune BERT (examples are given for single-sentence and multi-sentence datasets)
Save the trained model and use it
Key point: The model you develop will be end-to-end. The preprocessing logic will be included in the model itself, making it capable of accepting raw strings as input.
Note: This notebook should be run using a TPU. In Colab, choose Runtime -> Change runtime type and verify that a TPU is selected.
Setup
You will use a separate model to preprocess text before using it to fine-tune BERT. This model depends on tensorflow/text, which you will install below.
End of explanation
!pip install -q -U tf-models-official
!pip install -U tfds-nightly
import os
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import tensorflow_text as text # A dependency of the preprocessing model
import tensorflow_addons as tfa
from official.nlp import optimization
import numpy as np
tf.get_logger().setLevel('ERROR')
Explanation: You will use the AdamW optimizer from tensorflow/models to fine-tune BERT, which you will install as well.
End of explanation
os.environ["TFHUB_MODEL_LOAD_FORMAT"]="UNCOMPRESSED"
Explanation: Next, configure TFHub to read checkpoints directly from TFHub's Cloud Storage buckets. This is only recommended when running TFHub models on TPU.
Without this setting TFHub would download the compressed file and extract the checkpoint locally. Attempting to load from these local files will fail with the following error:
InvalidArgumentError: Unimplemented: File system scheme '[local]' not implemented
This is because the TPU can only read directly from Cloud Storage buckets.
Note: This setting is automatic in Colab.
End of explanation
import os
if os.environ['COLAB_TPU_ADDR']:
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.TPUStrategy(cluster_resolver)
print('Using TPU')
elif tf.config.list_physical_devices('GPU'):
strategy = tf.distribute.MirroredStrategy()
print('Using GPU')
else:
raise ValueError('Running on CPU is not recommended.')
Explanation: Connect to the TPU worker
The following code connects to the TPU worker and changes TensorFlow's default device to the CPU device on the TPU worker. It also defines a TPU distribution strategy that you will use to distribute model training onto the 8 separate TPU cores available on this one TPU worker. See TensorFlow's TPU guide for more information.
End of explanation
#@title Choose a BERT model to fine-tune
bert_model_name = 'bert_en_uncased_L-12_H-768_A-12' #@param ["bert_en_uncased_L-12_H-768_A-12", "bert_en_uncased_L-24_H-1024_A-16", "bert_en_wwm_uncased_L-24_H-1024_A-16", "bert_en_cased_L-12_H-768_A-12", "bert_en_cased_L-24_H-1024_A-16", "bert_en_wwm_cased_L-24_H-1024_A-16", "bert_multi_cased_L-12_H-768_A-12", "small_bert/bert_en_uncased_L-2_H-128_A-2", "small_bert/bert_en_uncased_L-2_H-256_A-4", "small_bert/bert_en_uncased_L-2_H-512_A-8", "small_bert/bert_en_uncased_L-2_H-768_A-12", "small_bert/bert_en_uncased_L-4_H-128_A-2", "small_bert/bert_en_uncased_L-4_H-256_A-4", "small_bert/bert_en_uncased_L-4_H-512_A-8", "small_bert/bert_en_uncased_L-4_H-768_A-12", "small_bert/bert_en_uncased_L-6_H-128_A-2", "small_bert/bert_en_uncased_L-6_H-256_A-4", "small_bert/bert_en_uncased_L-6_H-512_A-8", "small_bert/bert_en_uncased_L-6_H-768_A-12", "small_bert/bert_en_uncased_L-8_H-128_A-2", "small_bert/bert_en_uncased_L-8_H-256_A-4", "small_bert/bert_en_uncased_L-8_H-512_A-8", "small_bert/bert_en_uncased_L-8_H-768_A-12", "small_bert/bert_en_uncased_L-10_H-128_A-2", "small_bert/bert_en_uncased_L-10_H-256_A-4", "small_bert/bert_en_uncased_L-10_H-512_A-8", "small_bert/bert_en_uncased_L-10_H-768_A-12", "small_bert/bert_en_uncased_L-12_H-128_A-2", "small_bert/bert_en_uncased_L-12_H-256_A-4", "small_bert/bert_en_uncased_L-12_H-512_A-8", "small_bert/bert_en_uncased_L-12_H-768_A-12", "albert_en_base", "albert_en_large", "albert_en_xlarge", "albert_en_xxlarge", "electra_small", "electra_base", "experts_pubmed", "experts_wiki_books", "talking-heads_base", "talking-heads_large"]
map_name_to_handle = {
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3',
'bert_en_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_uncased_L-24_H-1024_A-16/3',
'bert_en_wwm_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_wwm_uncased_L-24_H-1024_A-16/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_L-12_H-768_A-12/3',
'bert_en_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_cased_L-24_H-1024_A-16/3',
'bert_en_wwm_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_wwm_cased_L-24_H-1024_A-16/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-128_A-2/1',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-512_A-8/1',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-768_A-12/1',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-128_A-2/1',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-256_A-4/1',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-768_A-12/1',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-128_A-2/1',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-256_A-4/1',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-512_A-8/1',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-6_H-768_A-12/1',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-128_A-2/1',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-256_A-4/1',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-768_A-12/1',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-128_A-2/1',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-256_A-4/1',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-512_A-8/1',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-10_H-768_A-12/1',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/1',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-256_A-4/1',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-512_A-8/1',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-768_A-12/1',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_base/2',
'albert_en_large':
'https://tfhub.dev/tensorflow/albert_en_large/2',
'albert_en_xlarge':
'https://tfhub.dev/tensorflow/albert_en_xlarge/2',
'albert_en_xxlarge':
'https://tfhub.dev/tensorflow/albert_en_xxlarge/2',
'electra_small':
'https://tfhub.dev/google/electra_small/2',
'electra_base':
'https://tfhub.dev/google/electra_base/2',
'experts_pubmed':
'https://tfhub.dev/google/experts/bert/pubmed/2',
'experts_wiki_books':
'https://tfhub.dev/google/experts/bert/wiki_books/2',
'talking-heads_base':
'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_base/1',
'talking-heads_large':
'https://tfhub.dev/tensorflow/talkheads_ggelu_bert_en_large/1',
}
map_model_to_preprocess = {
'bert_en_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_en_wwm_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'bert_en_cased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'bert_en_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_cased_preprocess/3',
'bert_en_wwm_uncased_L-24_H-1024_A-16':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-2_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-4_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-6_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-8_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-10_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-128_A-2':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-256_A-4':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-512_A-8':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'small_bert/bert_en_uncased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'bert_multi_cased_L-12_H-768_A-12':
'https://tfhub.dev/tensorflow/bert_multi_cased_preprocess/3',
'albert_en_base':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'albert_en_large':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'albert_en_xlarge':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'albert_en_xxlarge':
'https://tfhub.dev/tensorflow/albert_en_preprocess/3',
'electra_small':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'electra_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_pubmed':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'experts_wiki_books':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'talking-heads_base':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
'talking-heads_large':
'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3',
}
tfhub_handle_encoder = map_name_to_handle[bert_model_name]
tfhub_handle_preprocess = map_model_to_preprocess[bert_model_name]
print('BERT model selected :', tfhub_handle_encoder)
print('Preprocessing model auto-selected:', tfhub_handle_preprocess)
Explanation: Loading models from TensorFlow Hub
Here you can choose which BERT model you will load from TensorFlow Hub and fine-tune.
There are multiple BERT models available to choose from.
BERT-Base, Uncased and seven more models with trained weights released by the original BERT authors.
Small BERTs have the same general architecture but fewer and/or smaller Transformer blocks, which lets you explore tradeoffs between speed, size and quality.
ALBERT: four different sizes of "A Lite BERT" that reduces model size (but not computation time) by sharing parameters between layers.
BERT Experts: eight models that all have the BERT-base architecture but offer a choice between different pre-training domains, to align more closely with the target task.
Electra has the same architecture as BERT (in three different sizes), but gets pre-trained as a discriminator in a set-up that resembles a Generative Adversarial Network (GAN).
BERT with Talking-Heads Attention and Gated GELU [base, large] has two improvements to the core of the Transformer architecture.
See the model documentation linked above for more details.
In this tutorial, you will start with BERT-base. You can use larger and more recent models for higher accuracy, or smaller models for faster training times. To change the model, you only need to switch a single line of code (shown below). All the differences are encapsulated in the SavedModel you will download from TensorFlow Hub.
End of explanation
bert_preprocess = hub.load(tfhub_handle_preprocess)
tok = bert_preprocess.tokenize(tf.constant(['Hello TensorFlow!']))
print(tok)
Explanation: Preprocess the text
On the Classify text with BERT colab the preprocessing model is used directly embedded with the BERT encoder.
This tutorial demonstrates how to do preprocessing as part of your input pipeline for training, using Dataset.map, and then merge it into the model that gets exported for inference. That way, both training and inference can work from raw text inputs, although the TPU itself requires numeric inputs.
TPU requirements aside, it can help performance to have preprocessing done asynchronously in an input pipeline (you can learn more in the tf.data performance guide).
This tutorial also demonstrates how to build multi-input models, and how to adjust the sequence length of the inputs to BERT.
Let's demonstrate the preprocessing model.
End of explanation
text_preprocessed = bert_preprocess.bert_pack_inputs([tok, tok], tf.constant(20))
print('Shape Word Ids : ', text_preprocessed['input_word_ids'].shape)
print('Word Ids : ', text_preprocessed['input_word_ids'][0, :16])
print('Shape Mask : ', text_preprocessed['input_mask'].shape)
print('Input Mask : ', text_preprocessed['input_mask'][0, :16])
print('Shape Type Ids : ', text_preprocessed['input_type_ids'].shape)
print('Type Ids : ', text_preprocessed['input_type_ids'][0, :16])
Explanation: Each preprocessing model also provides a method, .bert_pack_inputs(tensors, seq_length), which takes a list of tokens (like tok above) and a sequence length argument. This packs the inputs to create a dictionary of tensors in the format expected by the BERT model.
End of explanation
def make_bert_preprocess_model(sentence_features, seq_length=128):
Returns Model mapping string features to BERT inputs.
Args:
sentence_features: a list with the names of string-valued features.
seq_length: an integer that defines the sequence length of BERT inputs.
Returns:
A Keras Model that can be called on a list or dict of string Tensors
(with the order or names, resp., given by sentence_features) and
returns a dict of tensors for input to BERT.
input_segments = [
tf.keras.layers.Input(shape=(), dtype=tf.string, name=ft)
for ft in sentence_features]
# Tokenize the text to word pieces.
bert_preprocess = hub.load(tfhub_handle_preprocess)
tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name='tokenizer')
segments = [tokenizer(s) for s in input_segments]
# Optional: Trim segments in a smart way to fit seq_length.
# Simple cases (like this example) can skip this step and let
# the next step apply a default truncation to approximately equal lengths.
truncated_segments = segments
# Pack inputs. The details (start/end token ids, dict of output tensors)
# are model-dependent, so this gets loaded from the SavedModel.
packer = hub.KerasLayer(bert_preprocess.bert_pack_inputs,
arguments=dict(seq_length=seq_length),
name='packer')
model_inputs = packer(truncated_segments)
return tf.keras.Model(input_segments, model_inputs)
Explanation: Here are some details to pay attention to:
- input_mask The mask allows the model to cleanly differentiate between the content and the padding. The mask has the same shape as the input_word_ids, and contains a 1 anywhere the input_word_ids is not padding.
- input_type_ids has the same shape as input_mask, but inside the non-padded region, contains a 0 or a 1 indicating which sentence the token is a part of.
Next, you will create a preprocessing model that encapsulates all this logic. Your model will take strings as input, and return appropriately formatted objects which can be passed to BERT.
Each BERT model has a specific preprocessing model, make sure to use the proper one described on the BERT's model documentation.
Note: BERT adds a "position embedding" to the token embedding of each input, and these come from a fixed-size lookup table. That imposes a max seq length of 512 (which is also a practical limit, due to the quadratic growth of attention computation). For this Colab 128 is good enough.
End of explanation
test_preprocess_model = make_bert_preprocess_model(['my_input1', 'my_input2'])
test_text = [np.array(['some random test sentence']),
np.array(['another sentence'])]
text_preprocessed = test_preprocess_model(test_text)
print('Keys : ', list(text_preprocessed.keys()))
print('Shape Word Ids : ', text_preprocessed['input_word_ids'].shape)
print('Word Ids : ', text_preprocessed['input_word_ids'][0, :16])
print('Shape Mask : ', text_preprocessed['input_mask'].shape)
print('Input Mask : ', text_preprocessed['input_mask'][0, :16])
print('Shape Type Ids : ', text_preprocessed['input_type_ids'].shape)
print('Type Ids : ', text_preprocessed['input_type_ids'][0, :16])
Explanation: Let's demonstrate the preprocessing model. You will create a test with two sentences input (input1 and input2). The output is what a BERT model would expect as input: input_word_ids, input_masks and input_type_ids.
End of explanation
tf.keras.utils.plot_model(test_preprocess_model, show_shapes=True, show_dtype=True)
Explanation: Let's take a look at the model's structure, paying attention to the two inputs you just defined.
End of explanation
AUTOTUNE = tf.data.AUTOTUNE
def load_dataset_from_tfds(in_memory_ds, info, split, batch_size,
bert_preprocess_model):
is_training = split.startswith('train')
dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[split])
num_examples = info.splits[split].num_examples
if is_training:
dataset = dataset.shuffle(num_examples)
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
dataset = dataset.map(lambda ex: (bert_preprocess_model(ex), ex['label']))
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
return dataset, num_examples
Explanation: To apply the preprocessing in all the inputs from the dataset, you will use the map function from the dataset. The result is then cached for performance.
End of explanation
def build_classifier_model(num_classes):
class Classifier(tf.keras.Model):
def __init__(self, num_classes):
super(Classifier, self).__init__(name="prediction")
self.encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True)
self.dropout = tf.keras.layers.Dropout(0.1)
self.dense = tf.keras.layers.Dense(num_classes)
def call(self, preprocessed_text):
encoder_outputs = self.encoder(preprocessed_text)
pooled_output = encoder_outputs["pooled_output"]
x = self.dropout(pooled_output)
x = self.dense(x)
return x
model = Classifier(num_classes)
return model
Explanation: Define your model
You are now ready to define your model for sentence or sentence pair classification by feeding the preprocessed inputs through the BERT encoder and putting a linear classifier on top (or other arrangement of layers as you prefer), and using dropout for regularization.
End of explanation
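As a hedged illustration of the "other arrangement of layers" remark above, the sketch below adds one extra hidden Dense layer between the pooled BERT output and the final projection; the 64-unit width and relu activation are arbitrary choices, not tuned values from this tutorial.
def build_classifier_model_with_hidden(num_classes, hidden_units=64):
  class Classifier(tf.keras.Model):
    def __init__(self, num_classes):
      super(Classifier, self).__init__(name="prediction")
      self.encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True)
      self.dropout = tf.keras.layers.Dropout(0.1)
      self.hidden = tf.keras.layers.Dense(hidden_units, activation='relu')
      self.dense = tf.keras.layers.Dense(num_classes)

    def call(self, preprocessed_text):
      encoder_outputs = self.encoder(preprocessed_text)
      x = self.dropout(encoder_outputs["pooled_output"])
      x = self.hidden(x)
      return self.dense(x)

  return Classifier(num_classes)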
test_classifier_model = build_classifier_model(2)
bert_raw_result = test_classifier_model(text_preprocessed)
print(tf.sigmoid(bert_raw_result))
Explanation: Let's try running the model on some preprocessed inputs.
End of explanation
tfds_name = 'glue/cola' #@param ['glue/cola', 'glue/sst2', 'glue/mrpc', 'glue/qqp', 'glue/mnli', 'glue/qnli', 'glue/rte', 'glue/wnli']
tfds_info = tfds.builder(tfds_name).info
sentence_features = list(tfds_info.features.keys())
sentence_features.remove('idx')
sentence_features.remove('label')
available_splits = list(tfds_info.splits.keys())
train_split = 'train'
validation_split = 'validation'
test_split = 'test'
if tfds_name == 'glue/mnli':
validation_split = 'validation_matched'
test_split = 'test_matched'
num_classes = tfds_info.features['label'].num_classes
num_examples = tfds_info.splits.total_num_examples
print(f'Using {tfds_name} from TFDS')
print(f'This dataset has {num_examples} examples')
print(f'Number of classes: {num_classes}')
print(f'Features {sentence_features}')
print(f'Splits {available_splits}')
with tf.device('/job:localhost'):
# batch_size=-1 is a way to load the dataset into memory
in_memory_ds = tfds.load(tfds_name, batch_size=-1, shuffle_files=True)
# The code below is just to show some samples from the selected dataset
print(f'Here are some sample rows from {tfds_name} dataset')
sample_dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[train_split])
labels_names = tfds_info.features['label'].names
print(labels_names)
print()
sample_i = 1
for sample_row in sample_dataset.take(5):
samples = [sample_row[feature] for feature in sentence_features]
print(f'sample row {sample_i}')
for sample in samples:
print(sample.numpy())
sample_label = sample_row['label']
print(f'label: {sample_label} ({labels_names[sample_label]})')
print()
sample_i += 1
Explanation: Choose a task from GLUE
You are going to use a TensorFlow DataSet from the GLUE benchmark suite.
Colab lets you download these small datasets to the local filesystem, and the code below reads them entirely into memory, because the separate TPU worker host cannot access the local filesystem of the colab runtime.
For bigger datasets, you'll need to create your own Google Cloud Storage bucket and have the TPU worker read the data from there. You can learn more in the TPU guide.
It's recommended to start with the CoLA dataset (for single sentences) or MRPC (for sentence pairs) since these are small and don't take long to fine-tune.
End of explanation
def get_configuration(glue_task):
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
if glue_task == 'glue/cola':
metrics = tfa.metrics.MatthewsCorrelationCoefficient(num_classes=2)
else:
metrics = tf.keras.metrics.SparseCategoricalAccuracy(
'accuracy', dtype=tf.float32)
return metrics, loss
Explanation: The dataset also determines the problem type (classification or regression) and the appropriate loss function for training.
End of explanation
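A hedged aside: the picker above only wires up classification tasks, but for a regression-style GLUE task such as glue/stsb the configuration would swap in a regression loss and metric, roughly like this (not used anywhere below).
def get_regression_configuration():
  # Sketch only: mean-squared-error loss and metric for a real-valued target
  loss = tf.keras.losses.MeanSquaredError()
  metrics = tf.keras.metrics.MeanSquaredError('mse', dtype=tf.float32)
  return metrics, loss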
epochs = 3
batch_size = 32
init_lr = 2e-5
print(f'Fine tuning {tfhub_handle_encoder} model')
bert_preprocess_model = make_bert_preprocess_model(sentence_features)
with strategy.scope():
# metric have to be created inside the strategy scope
metrics, loss = get_configuration(tfds_name)
train_dataset, train_data_size = load_dataset_from_tfds(
in_memory_ds, tfds_info, train_split, batch_size, bert_preprocess_model)
steps_per_epoch = train_data_size // batch_size
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = num_train_steps // 10
validation_dataset, validation_data_size = load_dataset_from_tfds(
in_memory_ds, tfds_info, validation_split, batch_size,
bert_preprocess_model)
validation_steps = validation_data_size // batch_size
classifier_model = build_classifier_model(num_classes)
optimizer = optimization.create_optimizer(
init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw')
classifier_model.compile(optimizer=optimizer, loss=loss, metrics=[metrics])
classifier_model.fit(
x=train_dataset,
validation_data=validation_dataset,
steps_per_epoch=steps_per_epoch,
epochs=epochs,
validation_steps=validation_steps)
Explanation: Train your model
Finally, you can train the model end-to-end on the dataset you chose.
Distribution
Recall the set-up code at the top, which has connected the colab runtime to
a TPU worker with multiple TPU devices. To distribute training onto them, you will create and compile your main Keras model within the scope of the TPU distribution strategy. (For details, see Distributed training with Keras.)
Preprocessing, on the other hand, runs on the CPU of the worker host, not the TPUs, so the Keras model for preprocessing as well as the training and validation datasets mapped with it are built outside the distribution strategy scope. The call to Model.fit() will take care of distributing the passed-in dataset to the model replicas.
Note: The single TPU worker host already has the resource objects (think: a lookup table) needed for tokenization. Scaling up to multiple workers requires use of Strategy.experimental_distribute_datasets_from_function with a function that loads the preprocessing model separately onto each worker.
Optimizer
Fine-tuning follows the optimizer set-up from BERT pre-training (as in Classify text with BERT): It uses the AdamW optimizer with a linear decay of a notional initial learning rate, prefixed with a linear warm-up phase over the first 10% of training steps (num_warmup_steps). In line with the BERT paper, the initial learning rate is smaller for fine-tuning (best of 5e-5, 3e-5, 2e-5).
End of explanation
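To make the warm-up fraction concrete, here is a rough back-of-the-envelope calculation (the 8,500 figure is only a stand-in for the size of the CoLA training split, not a value read from the dataset).
illustrative_train_size = 8500
illustrative_steps_per_epoch = illustrative_train_size // batch_size  # about 265 with batch_size=32
illustrative_train_steps = illustrative_steps_per_epoch * epochs      # about 795 with epochs=3
illustrative_warmup_steps = illustrative_train_steps // 10            # warm-up covers the first ~10% of steps
print(illustrative_steps_per_epoch, illustrative_train_steps, illustrative_warmup_steps)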
main_save_path = './my_models'
bert_type = tfhub_handle_encoder.split('/')[-2]
saved_model_name = f'{tfds_name.replace("/", "_")}_{bert_type}'
saved_model_path = os.path.join(main_save_path, saved_model_name)
preprocess_inputs = bert_preprocess_model.inputs
bert_encoder_inputs = bert_preprocess_model(preprocess_inputs)
bert_outputs = classifier_model(bert_encoder_inputs)
model_for_export = tf.keras.Model(preprocess_inputs, bert_outputs)
print('Saving', saved_model_path)
# Save everything on the Colab host (even the variables from TPU memory)
save_options = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model_for_export.save(saved_model_path, include_optimizer=False,
options=save_options)
Explanation: Export for inference
You will create a final model that has the preprocessing part and the fine-tuned BERT we've just created.
At inference time, preprocessing needs to be part of the model (because there is no longer a separate input queue as for training data that does it). Preprocessing is not just computation; it has its own resources (the vocab table) that must be attached to the Keras Model that is saved for export.
This final assembly is what will be saved.
You are going to save the model on Colab, and later you can download it to keep for the future (View -> Table of contents -> Files).
End of explanation
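If you also want to keep the exported model outside of Colab, one optional approach (assuming a Colab runtime; not part of the original tutorial) is to archive the SavedModel directory and download it:
# !zip -r exported_model.zip {saved_model_path}
# from google.colab import files
# files.download('exported_model.zip')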
with tf.device('/job:localhost'):
reloaded_model = tf.saved_model.load(saved_model_path)
#@title Utility methods
def prepare(record):
model_inputs = [[record[ft]] for ft in sentence_features]
return model_inputs
def prepare_serving(record):
model_inputs = {ft: record[ft] for ft in sentence_features}
return model_inputs
def print_bert_results(test, bert_result, dataset_name):
bert_result_class = tf.argmax(bert_result, axis=1)[0]
if dataset_name == 'glue/cola':
print('sentence:', test[0].numpy())
if bert_result_class == 1:
print('This sentence is acceptable')
else:
print('This sentence is unacceptable')
elif dataset_name == 'glue/sst2':
print('sentence:', test[0])
if bert_result_class == 1:
print('This sentence has POSITIVE sentiment')
else:
print('This sentence has NEGATIVE sentiment')
elif dataset_name == 'glue/mrpc':
print('sentence1:', test[0])
print('sentence2:', test[1])
if bert_result_class == 1:
print('Are a paraphrase')
else:
print('Are NOT a paraphrase')
elif dataset_name == 'glue/qqp':
print('question1:', test[0])
print('question2:', test[1])
if bert_result_class == 1:
print('Questions are similar')
else:
print('Questions are NOT similar')
elif dataset_name == 'glue/mnli':
print('premise :', test[0])
print('hypothesis:', test[1])
if bert_result_class == 1:
print('This premise is NEUTRAL to the hypothesis')
elif bert_result_class == 2:
print('This premise CONTRADICTS the hypothesis')
else:
print('This premise ENTAILS the hypothesis')
elif dataset_name == 'glue/qnli':
print('question:', test[0])
print('sentence:', test[1])
if bert_result_class == 1:
print('The question is NOT answerable by the sentence')
else:
print('The question is answerable by the sentence')
elif dataset_name == 'glue/rte':
print('sentence1:', test[0])
print('sentence2:', test[1])
if bert_result_class == 1:
print('Sentence1 DOES NOT entail sentence2')
else:
print('Sentence1 entails sentence2')
elif dataset_name == 'glue/wnli':
print('sentence1:', test[0])
print('sentence2:', test[1])
if bert_result_class == 1:
print('Sentence1 DOES NOT entail sentence2')
else:
print('Sentence1 entails sentence2')
print('BERT raw results:', bert_result[0])
print()
Explanation: Test the model
The final step is testing the results of your exported model.
Just to make some comparison, let's reload the model and test it using some inputs from the test split from the dataset.
Note: The test is done on the colab host, not the TPU worker that it has connected to, so it appears below with explicit device placements. You can omit those when loading the SavedModel elsewhere.
End of explanation
with tf.device('/job:localhost'):
test_dataset = tf.data.Dataset.from_tensor_slices(in_memory_ds[test_split])
for test_row in test_dataset.shuffle(1000).map(prepare).take(5):
if len(sentence_features) == 1:
result = reloaded_model(test_row[0])
else:
result = reloaded_model(list(test_row))
print_bert_results(test_row, result, tfds_name)
Explanation: Test
End of explanation
with tf.device('/job:localhost'):
serving_model = reloaded_model.signatures['serving_default']
for test_row in test_dataset.shuffle(1000).map(prepare_serving).take(5):
result = serving_model(**test_row)
# The 'prediction' key is the classifier's defined model name.
print_bert_results(list(test_row.values()), result['prediction'], tfds_name)
Explanation: If you want to use your model on TF Serving, remember that it will call your SavedModel through one of its named signatures. Notice there are some small differences in the input. In Python, you can test them as follows:
End of explanation |
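To check which signatures (and input/output tensors) the exported SavedModel exposes, you can list them from Python or use the saved_model_cli tool that ships with TensorFlow:
print(list(reloaded_model.signatures.keys()))  # e.g. ['serving_default']
# !saved_model_cli show --dir {saved_model_path} --tag_set serve --signature_def serving_default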
4,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working With Sessions
Import the LArray library
Step1: Three Kinds Of Sessions
There are three ways to group objects in LArray
Step2: CheckedSession
The syntax to define a checked-session is a bit specific
Step3: Loading and Dumping Sessions
One of the main advantages of grouping arrays, axes and groups in session objects is that you can load and save all of them in one shot. Like arrays, it is possible to associate metadata to a session. These can be saved and loaded in all file formats.
Loading Sessions (CSV, Excel, HDF5)
To load the items of a session, you have two options
Step4: 2) Call the load method on an existing session and pass the path to the Excel/HDF5 file or to the directory containing CSV files as first argument
Step5: The load method offers some options
Step6: 2) Setting the display argument to True, the load method will print a message each time a new item is loaded
Step7: Dumping Sessions (CSV, Excel, HDF5)
To save a session, you need to call the save method. The first argument is the path to a Excel/HDF5 file or to a directory if items are saved to CSV files
Step8: <div class="alert alert-info">
Note
Step9: 2) By default, dumping a session to an Excel or HDF5 file will overwrite it. By setting the overwrite argument to False, you can choose to update the existing Excel or HDF5 file
Step10: 3) Setting the display argument to True, the save method will print a message each time an item is dumped
Step11: Exploring Content
To get the list of items names of a session, use the names shortcut (be careful that the list is sorted alphabetically and does not follow the internal order!)
Step12: To get more information of items of a session, the summary will provide not only the names of items but also the list of labels in the case of axes or groups and the list of axes, the shape and the dtype in the case of arrays
Step13: Selecting And Filtering Items
Session objects work like ordinary dict Python objects. To select an item, use the usual syntax <session_var>['<item_name>']
Step14: A simpler way consists in the use the syntax <session_var>.<item_name>
Step15: <div class="alert alert-warning">
**Warning
Step16: <div class="alert alert-warning">
**Warning
Step17: The filter method allows you to select all items of the same kind (i.e. all axes, or groups or arrays) or all items with names satisfying a given pattern
Step18: <div class="alert alert-warning">
**Warning
Step19: Iterating over Items
Like the built-in Python dict objects, Session objects provide methods to iterate over items
Step20: Manipulating Checked Sessions
Note
Step21: One of the specificities of checked-sessions is that the type of the contained objects is protected (it cannot change). Any attempt to assign a value of different type will raise an error
Step22: The death array has been declared as a CheckedArray.
As a consequence, its axes are protected.
Trying to assign a value with incompatible axes raises an error
Step23: The deaths array is also constrained by its declared dtype int. This means that if you try to assign a value of type float instead of int, the value will be converted to int if possible
Step24: or raise an error
Step25: It is possible to add a new variable after the checked-session has been initialized but in that case, a warning message is printed (in case you misspelled the name of variable while trying to modify it)
Step26: Arithmetic Operations On Sessions
Session objects accept binary operations with a scalar
Step27: with an array (please read the documentation of the random.choice function first if you don't know it)
Step28: with another session
Step29: Applying Functions On All Arrays
In addition to the classical arithmetic operations, the apply method can be used to apply the same function on all arrays. This function should take a single element argument and return a single value
Step30: It is possible to pass a function with additional arguments
Step31: It is also possible to apply a function on non-Array objects of a session. Please refer the documentation of the apply method.
Comparing Sessions
Being able to compare two sessions may be useful when you want to compare two different models expected to give the same results or when you have updated your model and want to see what are the consequences of the recent changes.
Session objects provide the two methods to compare two sessions
Step32: The == operator return a new session with boolean arrays with elements compared element-wise
Step33: This also works for axes and groups
Step34: The != operator does the opposite of == operator | Python Code:
%xmode Minimal
from larray import *
Explanation: Working With Sessions
Import the LArray library:
End of explanation
# define some scalars, axes and arrays
variant = 'baseline'
country = Axis('country=Belgium,France,Germany')
gender = Axis('gender=Male,Female')
time = Axis('time=2013..2017')
population = zeros([country, gender, time])
births = zeros([country, gender, time])
deaths = zeros([country, gender, time])
# create an empty session and objects one by one after
s = Session()
s.variant = variant
s.country = country
s.gender = gender
s.time = time
s.population = population
s.births = births
s.deaths = deaths
print(s.summary())
# or create a session in one step by passing all objects to the constructor
s = Session(variant=variant, country=country, gender=gender, time=time,
population=population, births=births, deaths=deaths)
print(s.summary())
Explanation: Three Kinds Of Sessions
There are three ways to group objects in LArray:
Session: is an ordered dict-like container with special I/O methods. Although the autocomplete* feature on the objects stored in the session is available in the larray-editor, it is not available in development tools like PyCharm making it cumbersome to use.
CheckedSession: provides the same methods as Session objects but are defined in a completely different way (see example below). The autocomplete* feature is both available in the larray-editor and in development tools (PyCharm). In addition, the type of each stored object is protected. Optionally, it is possible to constrain the axes and dtype of arrays using CheckedArray.
CheckedParameters: is a special version of CheckedSession in which the value of all stored objects (parameters) is frozen after initialization (a minimal sketch follows below).
* Autocomplete is the feature in which development tools try to predict the variable or function a user intends to enter after only a few characters have been typed (like word completion in cell phones).
Creating Sessions
Session
Create a session:
End of explanation
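CheckedParameters is not used elsewhere in this tutorial. As a minimal sketch — assuming it is imported with the rest of larray and, as described above, freezes the declared values after initialization:
class Parameters(CheckedParameters):
    FIRST_YEAR = 2013
    LAST_YEAR = 2017

P = Parameters()
# P.FIRST_YEAR = 2014   # would raise an error: parameter values are frozen after initialization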
class Demography(CheckedSession):
# (convention is to declare parameters (read-only objects) in capital letters)
# Declare 'VARIANT' parameter as of type string.
# 'VARIANT' will be initialized when a 'Demography' session will be created
VARIANT: str
# declare variables with an initialization value.
# Their type is deduced from their initialization value.
COUNTRY = Axis('country=Belgium,France,Germany')
GENDER = Axis('gender=Male,Female')
TIME = Axis('time=2013..2017')
population = zeros([COUNTRY, GENDER, TIME], dtype=int)
births = zeros([COUNTRY, GENDER, TIME], dtype=int)
# declare 'deaths' with constrained axes and dtype.
# Its type (Array), axes and dtype are not modifiable.
# It will be initialized with 0
deaths: CheckedArray([COUNTRY, GENDER, TIME], int) = 0
d = Demography(VARIANT='baseline')
print(d.summary())
Explanation: CheckedSession
The syntax to define a checked-session is a bit specific:
python
class MySession(CheckedSession):
# Variables can be declared in two ways:
# a) by specifying only the type of the variable (to be initialized later)
var1: Type
# b) by giving an initialization value.
# In that case, the type is deduced from the initialization value
var2 = initialization value
# Additionally, axes and dtype of Array variables can be constrained
# using the special type CheckedArray
arr1: CheckedArray([list, of, axes], dtype) = initialization value
Check the example below:
End of explanation
# create a new Session object and load all arrays, axes, groups and metadata
# from all CSV files located in the passed directory
csv_dir = get_example_filepath('demography_eurostat')
s = Session(csv_dir)
# create a new Session object and load all arrays, axes, groups and metadata
# stored in the passed Excel file
filepath_excel = get_example_filepath('demography_eurostat.xlsx')
s = Session(filepath_excel)
# create a new Session object and load all arrays, axes, groups and metadata
# stored in the passed HDF5 file
filepath_hdf = get_example_filepath('demography_eurostat.h5')
s = Session(filepath_hdf)
print(s.summary())
Explanation: Loading and Dumping Sessions
One of the main advantages of grouping arrays, axes and groups in session objects is that you can load and save all of them in one shot. Like arrays, it is possible to associate metadata to a session. These can be saved and loaded in all file formats.
Loading Sessions (CSV, Excel, HDF5)
To load the items of a session, you have two options:
1) Instantiate a new session and pass the path to the Excel/HDF5 file or to the directory containing CSV files to the Session constructor:
End of explanation
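The metadata mentioned above is attached through the meta attribute; a small sketch (assuming Session.meta behaves like Array.meta, which is not shown in this excerpt):
s.meta.title = 'demography model run'
s.meta.origin = 'demography_eurostat.h5'
print(s.meta)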
# create a session containing 3 axes, 2 groups and one array 'population'
filepath = get_example_filepath('population_only.xlsx')
s = Session(filepath)
print(s.summary())
# call the load method on the previous session and add the 'births' and 'deaths' arrays to it
filepath = get_example_filepath('births_and_deaths.xlsx')
s.load(filepath)
print(s.summary())
Explanation: 2) Call the load method on an existing session and pass the path to the Excel/HDF5 file or to the directory containing CSV files as first argument:
End of explanation
births_and_deaths_session = Session()
# use the names argument to only load births and deaths arrays
births_and_deaths_session.load(filepath_hdf, names=['births', 'deaths'])
print(births_and_deaths_session.summary())
Explanation: The load method offers some options:
1) Using the names argument, you can specify which items to load:
End of explanation
s = Session()
# with display=True, the load method will print a message
# each time a new item is loaded
s.load(filepath_hdf, display=True)
Explanation: 2) Setting the display argument to True, the load method will print a message each time a new item is loaded:
End of explanation
# save items of a session in CSV files.
# Here, the save method will create a 'demography' directory in which CSV files will be written
s.save('demography')
# save the session to an HDF5 file
s.save('demography.h5')
# save the session to an Excel file
s.save('demography.xlsx')
Explanation: Dumping Sessions (CSV, Excel, HDF5)
To save a session, you need to call the save method. The first argument is the path to a Excel/HDF5 file or to a directory if items are saved to CSV files:
End of explanation
# use the names argument to only save births and deaths arrays
s.save('demography.h5', names=['births', 'deaths'])
# load session saved in 'demography.h5' to see its content
Session('demography.h5').names
Explanation: <div class="alert alert-info">
Note: Concerning the CSV and Excel formats, the metadata is saved in one Excel sheet (CSV file) named `__metadata__(.csv)`. This sheet (CSV file) name cannot be changed.
</div>
The save method has several arguments:
1) Using the names argument, you can specify which items to save:
End of explanation
population = read_csv('./demography/population.csv')
pop_ses = Session([('population', population)])
# by setting overwrite to False, the destination file is updated instead of overwritten.
# The items already stored in the file but not present in the session are left intact.
# On the contrary, the items that exist in both the file and the session are completely overwritten.
pop_ses.save('demography.h5', overwrite=False)
# load session saved in 'demography.h5' to see its content
Session('demography.h5').names
Explanation: 2) By default, dumping a session to an Excel or HDF5 file will overwrite it. By setting the overwrite argument to False, you can choose to update the existing Excel or HDF5 file:
End of explanation
# with display=True, the save method will print a message
# each time an item is dumped
s.save('demography.h5', display=True)
Explanation: 3) Setting the display argument to True, the save method will print a message each time an item is dumped:
End of explanation
# load a session representing the results of a demographic model
filepath_hdf = get_example_filepath('demography_eurostat.h5')
s = Session(filepath_hdf)
# print the content of the session
print(s.names)
Explanation: Exploring Content
To get the list of items names of a session, use the names shortcut (be careful that the list is sorted alphabetically and does not follow the internal order!):
End of explanation
# print the content of the session
print(s.summary())
Explanation: To get more information of items of a session, the summary will provide not only the names of items but also the list of labels in the case of axes or groups and the list of axes, the shape and the dtype in the case of arrays:
End of explanation
s['population']
Explanation: Selecting And Filtering Items
Session objects work like ordinary dict Python objects. To select an item, use the usual syntax <session_var>['<item_name>']:
End of explanation
s.population
Explanation: A simpler way consists in the use the syntax <session_var>.<item_name>:
End of explanation
s_selected = s['population', 'births', 'deaths']
s_selected.names
Explanation: <div class="alert alert-warning">
**Warning:** The syntax ``session_var.item_name`` will work as long as you don't use any special character like ``, ; :`` in the item's name.
</div>
To return a new session with selected items, use the syntax <session_var>[list, of, item, names]:
End of explanation
d_selected = d['births', 'deaths']
# test if d_selected is still a checked-session
print('is still a checked-session?', isinstance(d_selected, CheckedSession))
# test if d_selected is a normal session
print('is now a normal session?', isinstance(d_selected, Session))
Explanation: <div class="alert alert-warning">
**Warning:** The same selection as above can be applied on a checked-session **but the returned object is a normal session and NOT a checked-session**. This means that you will loose all the benefits (autocomplete, protection on type, axes and dtype) of checked-sessions.
</div>
End of explanation
# select only arrays of a session
s.filter(kind=Array)
# selection all items with a name starting with a letter between a and k
s.filter(pattern='[a-k]*')
Explanation: The filter method allows you to select all items of the same kind (i.e. all axes, or groups or arrays) or all items with names satisfying a given pattern:
End of explanation
d_filtered = d.filter(pattern='[a-k]*')
# test if d_filtered is still a checked-session
print('is still a checked-session?', isinstance(d_filtered, CheckedSession))
# test if d_filtered is a normal session
print('is now a normal session?', isinstance(d_filtered, Session))
Explanation: <div class="alert alert-warning">
**Warning:** Using the *filter()* method on a checked-session **will return a normal session and NOT a checked-session**. This means that you will loose all the benefits (autocomplete, protection on type, axes and dtype) of checked-sessions.
</div>
End of explanation
# iterate over item names
for key in s.keys():
print(key)
# iterate over items
for value in s.values():
if isinstance(value, Array):
print(value.info)
else:
print(repr(value))
print()
# iterate over names and items
for key, value in s.items():
if isinstance(value, Array):
print(key, ':')
print(value.info)
else:
print(key, ':', repr(value))
print()
Explanation: Iterating over Items
Like the built-in Python dict objects, Session objects provide methods to iterate over items:
End of explanation
class Demography(CheckedSession):
COUNTRY = Axis('country=Belgium,France,Germany')
GENDER = Axis('gender=Male,Female')
TIME = Axis('time=2013..2017')
population = zeros([COUNTRY, GENDER, TIME], dtype=int)
# declare the deaths array with constrained axes and dtype
deaths: CheckedArray([COUNTRY, GENDER, TIME], int) = 0
d = Demography()
print(d.summary())
Explanation: Manipulating Checked Sessions
Note: this section only concerns objects declared in checked-sessions.
Let's create a simplified version of the Demography checked-session we have defined above:
End of explanation
# The population variable was initialized with the zeros() function which returns an Array object.
# The declared type of the population variable is Array and is protected
d.population = Axis('population=child,teenager,adult,elderly')
Explanation: One of the specificities of checked-sessions is that the type of the contained objects is protected (it cannot change). Any attempt to assign a value of different type will raise an error:
End of explanation
AGE = Axis('age=0..100')
d.deaths = zeros([d.COUNTRY, AGE, d.GENDER, d.TIME])
Explanation: The death array has been declared as a CheckedArray.
As a consequence, its axes are protected.
Trying to assign a value with incompatible axes raises an error:
End of explanation
d.deaths = 1.2
d.deaths
Explanation: The deaths array is also constrained by its declared dtype int. This means that if you try to assign a value of type float instead of int, the value will be converted to int if possible:
End of explanation
d.deaths = 'undead'
Explanation: or raise an error:
End of explanation
# misspell population (forgot the 'a')
d.popultion = 0
Explanation: It is possible to add a new variable after the checked-session has been initialized but in that case, a warning message is printed (in case you misspelled the name of variable while trying to modify it):
End of explanation
# get population, births and deaths in millions
s_div = s / 1e6
s_div.population
Explanation: Arithmetic Operations On Sessions
Session objects accept binary operations with a scalar:
End of explanation
from larray import random
random_increment = random.choice([-1, 0, 1], p=[0.3, 0.4, 0.3], axes=s.population.axes) * 1000
random_increment
# add some variables of a session by a common array
s_rand = s['population', 'births', 'deaths'] + random_increment
s_rand.population
Explanation: with an array (please read the documentation of the random.choice function first if you don't know it):
End of explanation
# compute the difference between each array of the two sessions
s_diff = s - s_rand
s_diff.births
Explanation: with another session:
End of explanation
# add the next year to all arrays
def add_next_year(array):
if 'time' in array.axes.names:
last_year = array.time.i[-1]
return array.append('time', 0, last_year + 1)
else:
return array
s_with_next_year = s.apply(add_next_year)
print('population array before calling apply:')
print(s.population)
print()
print('population array after calling apply:')
print(s_with_next_year.population)
Explanation: Applying Functions On All Arrays
In addition to the classical arithmetic operations, the apply method can be used to apply the same function on all arrays. This function should take a single array argument and return a single array or value:
End of explanation
# add the next year to all arrays.
# Use the 'copy_values_from_last_year flag' to indicate
# whether or not to copy values from the last year
def add_next_year(array, copy_values_from_last_year):
if 'time' in array.axes.names:
last_year = array.time.i[-1]
value = array[last_year] if copy_values_from_last_year else 0
return array.append('time', value, last_year + 1)
else:
return array
s_with_next_year = s.apply(add_next_year, True)
print('population array before calling apply:')
print(s.population)
print()
print('population array after calling apply:')
print(s_with_next_year.population)
Explanation: It is possible to pass a function with additional arguments:
End of explanation
# load a session representing the results of a demographic model
filepath_hdf = get_example_filepath('demography_eurostat.h5')
s = Session(filepath_hdf)
# create a copy of the original session
s_copy = s.copy()
# 'element_equals' compare arrays one by one
s.element_equals(s_copy)
# 'equals' returns True if all items of the two sessions are identical
s.equals(s_copy)
# slightly modify the 'population' array for some labels combination
s_copy.population += random_increment
# the 'population' array is different between the two sessions
s.element_equals(s_copy)
# 'equals' returns False if at least one item of the two sessions is different in values or axes
s.equals(s_copy)
# reset the 'copy' session as a copy of the original session
s_copy = s.copy()
# add an array to the 'copy' session
s_copy.gender_ratio = s_copy.population.ratio('gender')
# the 'gender_ratio' array is not present in the original session
s.element_equals(s_copy)
# 'equals' returns False if at least one item is not present in the two sessions
s.equals(s_copy)
Explanation: It is also possible to apply a function on non-Array objects of a session. Please refer to the documentation of the apply method.
Comparing Sessions
Being able to compare two sessions may be useful when you want to compare two different models expected to give the same results or when you have updated your model and want to see what are the consequences of the recent changes.
Session objects provide the two methods to compare two sessions: equals and element_equals:
The equals method will return True if all items from both sessions are identical, False otherwise.
The element_equals method will compare items of two sessions one by one and return an array of boolean values.
End of explanation
# reset the 'copy' session as a copy of the original session
s_copy = s.copy()
# slightly modify the 'population' array for some labels combination
s_copy.population += random_increment
s_check_same_values = s == s_copy
s_check_same_values.population
Explanation: The == operator returns a new session containing boolean arrays whose elements are compared element-wise:
End of explanation
s_check_same_values.time
Explanation: This also works for axes and groups:
End of explanation
s_check_different_values = s != s_copy
s_check_different_values.population
Explanation: The != operator does the opposite of == operator:
End of explanation |
4,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is part of the clifford documentation
Step1: We'll create copies of the point and line reflected in the circle, using $X = C\hat X\tilde C$, where $\hat X$ is the grade involution.
Step2: pyganja
pyganja is a python interface to the ganja.js (github) library.
To use it, typically we need to import two names from the library
Step3: GanjaScene lets us build scenes out of geometric objects, with attached labels and RGB colors
Step4: Once we've built our scene, we can draw it, specifying a scale (which here we use to zoom out), and the signature of our algebra (which defaults to conformal 3D)
Step5: A cool feature of GanjaScene is the ability to use + to draw both scenes together
Step6: mpl_toolkits.clifford
While ganja.js produces great diagrams, it's hard to combine them with other plotting tools.
mpl_toolkits.clifford works within matplotlib.
Step7: Assembling the plot is a lot more work, but we also get much more control
Step8: G3C
Let's repeat the above, but with 3D Conformal Geometric Algebra.
Note that if you're viewing these docs in a jupyter notebook, the lines below will replace all your 2d variables with 3d ones
Step9: pyganja
Once again, we can create a pair of scenes exactly as before
Step10: But this time, when we draw them we don't need to pass sig.
Better yet, we can rotate the 3D world around using left click, pan with right click, and zoom with the scroll wheel.
Step11: Some more example of using pyganja to visualize 3D CGA can be found in the interpolation and clustering notebooks.
mpl_toolkits.clifford
The 3D approach for matplotlib is much the same.
Note that due to poor handling of rounding errors in clifford.tools.classify, a call to .normal() is needed.
Along with explicit grade selection, this is a useful trick to try and get something to render which otherwise would not. | Python Code:
from clifford.g2c import *
point = up(2*e1+e2)
line = up(3*e1 + 2*e2) ^ up(3*e1 - 2*e2) ^ einf
circle = up(e1) ^ up(-e1 + 2*e2) ^ up(-e1 - 2*e2)
Explanation: This notebook is part of the clifford documentation: https://clifford.readthedocs.io/.
Visualization tools
In this example we will look at some external tools that can be used with clifford to help visualize geometric objects.
The two tools available are:
pyganja (github)
mpl_toolkits.clifford (github)
Both of these can be installed with pip install followed by the package name above.
G2C
Let's start by creating some objects in 2d Conformal Geometric Algebra to visualize:
End of explanation
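If the two packages are not installed yet, a notebook cell along these lines should work (using the package names given above):
# !pip install pyganja
# !pip install mpl_toolkits.clifford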
point_refl = circle * point.gradeInvol() * ~circle
line_refl = circle * line.gradeInvol() * ~circle
Explanation: We'll create copies of the point and line reflected in the circle, using $X = C\hat X\tilde C$, where $\hat X$ is the grade involution.
End of explanation
from pyganja import GanjaScene, draw
import pyganja; pyganja.__version__
Explanation: pyganja
pyganja is a python interface to the ganja.js (github) library.
To use it, typically we need to import two names from the library:
End of explanation
sc = GanjaScene()
sc.add_object(point, color=(255, 0, 0), label='point')
sc.add_object(line, color=(0, 255, 0), label='line')
sc.add_object(circle, color=(0, 0, 255), label='circle')
sc_refl = GanjaScene()
sc_refl.add_object(point_refl, color=(128, 0, 0), label='point_refl')
sc_refl.add_object(line_refl, color=(0, 128, 0), label='line_refl')
Explanation: GanjaScene lets us build scenes out of geometric objects, with attached labels and RGB colors:
End of explanation
draw(sc, sig=layout.sig, scale=0.5)
Explanation: Once we've built our scene, we can draw it, specifying a scale (which here we use to zoom out), and the signature of our algebra (which defaults to conformal 3D):
End of explanation
draw(sc + sc_refl, sig=layout.sig, scale=0.5)
Explanation: A cool feature of GanjaScene is the ability to use + to draw both scenes together:
End of explanation
from matplotlib import pyplot as plt
plt.ioff() # we'll ask for plotting when we want it
# if you're editing this locally, you'll get an interactive UI if you uncomment the following
#
# %matplotlib notebook
from mpl_toolkits.clifford import plot
import mpl_toolkits.clifford; mpl_toolkits.clifford.__version__
Explanation: mpl_toolkits.clifford
While ganja.js produces great diagrams, it's hard to combine them with other plotting tools.
mpl_toolkits.clifford works within matplotlib.
End of explanation
# standard matplotlib stuff - construct empty plots side-by-side, and set the scaling
fig, (ax_before, ax_both) = plt.subplots(1, 2, sharex=True, sharey=True)
ax_before.set(xlim=[-4, 4], ylim=[-4, 4], aspect='equal')
ax_both.set(xlim=[-4, 4], ylim=[-4, 4], aspect='equal')
# plot the objects before reflection on both plots
for ax in (ax_before, ax_both):
plot(ax, [point], color='tab:blue', label='point', marker='x', linestyle=' ')
plot(ax, [line], color='tab:green', label='line')
plot(ax, [circle], color='tab:red', label='circle')
# plot the objects after reflection, with thicker lines
plot(ax_both, [point_refl], color='tab:blue', label='point_refl', marker='x', linestyle=' ', markeredgewidth=2)
plot(ax_both, [line_refl], color='tab:green', label='line_refl', linewidth=2)
fig.tight_layout()
ax_both.legend()
# show the figure
fig
Explanation: Assembling the plot is a lot more work, but we also get much more control:
End of explanation
from clifford.g3c import *
point = up(2*e1+e2)
line = up(3*e1 + 2*e2) ^ up(3*e1 - 2*e2) ^ einf
circle = up(e1) ^ up(-e1 + 1.6*e2 + 1.2*e3) ^ up(-e1 - 1.6*e2 - 1.2*e3)
sphere = up(3*e1) ^ up(e1) ^ up(2*e1 + e2) ^ up(2*e1 + e3)
# note that due to floating point rounding, we need to truncate back to a single grade here, with ``(grade)``
point_refl = homo((circle * point.gradeInvol() * ~circle)(1))
line_refl = (circle * line.gradeInvol() * ~circle)(3)
sphere_refl = (circle * sphere.gradeInvol() * ~circle)(4)
Explanation: G3C
Let's repeat the above, but with 3D Conformal Geometric Algebra.
Note that if you're viewing these docs in a jupyter notebook, the lines below will replace all your 2d variables with 3d ones
End of explanation
sc = GanjaScene()
sc.add_object(point, color=(255, 0, 0), label='point')
sc.add_object(line, color=(0, 255, 0), label='line')
sc.add_object(circle, color=(0, 0, 255), label='circle')
sc.add_object(sphere, color=(0, 255, 255), label='sphere')
sc_refl = GanjaScene()
sc_refl.add_object(point_refl, color=(128, 0, 0), label='point_refl')
sc_refl.add_object(line_refl.normal(), color=(0, 128, 0), label='line_refl')
sc_refl.add_object(sphere_refl.normal(), color=(0, 128, 128), label='sphere_refl')
Explanation: pyganja
Once again, we can create a pair of scenes exactly as before
End of explanation
draw(sc + sc_refl, scale=0.5)
Explanation: But this time, when we draw them we don't need to pass sig.
Better yet, we can rotate the 3D world around using left click, pan with right click, and zoom with the scroll wheel.
End of explanation
# standard matplotlib stuff - construct empty plots side-by-side, and set the scaling
fig, (ax_before, ax_both) = plt.subplots(1, 2, subplot_kw=dict(projection='3d'), figsize=(8, 4))
ax_before.set(xlim=[-4, 4], ylim=[-4, 4], zlim=[-4, 4])
ax_both.set(xlim=[-4, 4], ylim=[-4, 4], zlim=[-4, 4])
# plot the objects before reflection on both plots
for ax in (ax_before, ax_both):
plot(ax, [point], color='tab:red', label='point', marker='x', linestyle=' ')
plot(ax, [line], color='tab:green', label='line')
plot(ax, [circle], color='tab:blue', label='circle')
plot(ax, [sphere], color='tab:cyan') # labels do not work for spheres: pygae/mpl_toolkits.clifford#5
# plot the objects after reflection
plot(ax_both, [point_refl], color='tab:red', label='point_refl', marker='x', linestyle=' ', markeredgewidth=2)
plot(ax_both, [line_refl.normal()], color='tab:green', label='line_refl', linewidth=2)
plot(ax_both, [sphere_refl], color='tab:cyan')
fig.tight_layout()
ax_both.legend()
# show the figure
fig
Explanation: Some more example of using pyganja to visualize 3D CGA can be found in the interpolation and clustering notebooks.
mpl_toolkits.clifford
The 3D approach for matplotlib is much the same.
Note that due to poor handling of rounding errors in clifford.tools.classify, a call to .normal() is needed.
Along with explicit grade selection, this is a useful trick to try and get something to render which otherwise would not.
End of explanation |
4,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parsing tsv files and populating the database
Step1: Inspecting the first few lines of the file, we get a feel for this data schema.
Mongo Considerations
Step2: |Number|Name| Name | Position | Something | Height | Weight | Something | Year | Hometown |
|------|----|------|----------|-----------|--------|--------|-----------|------|----------|
| ... | ...| .... | ... | ... | ... | ... | ... | ... | ... | | Python Code:
osu_roster_filepath = '../data/osu_roster.csv'
Explanation: Parsing tsv files and populating the database
End of explanation
!head {osu_roster_filepath}
Explanation: Inspecting the first few lines of the file, we get a feel for this data schema.
Mongo Considerations:
- can specify categories for validation
End of explanation
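The Team and Athlete classes used below are defined elsewhere in the original project. As a rough sketch of what they might look like — assuming MongoEngine is the ODM, which the .save() calls suggest but this excerpt does not confirm:
from mongoengine import Document, StringField, IntField, connect

connect('rosters')  # hypothetical database name

class Team(Document):
    city = StringField()
    name = StringField()
    state = StringField()

class Athlete(Document):
    number = IntField()
    name = StringField()
    first = StringField()
    last = StringField()
    position = StringField()
    height = StringField()
    weight = StringField()
    year = StringField()
    hometown = StringField()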
ohio_state = Team()
ohio_state.city = 'Columbus'
ohio_state.name = 'Buckeyes'
ohio_state.state = 'OH'
ohio_state.save()
with open(osu_roster_filepath, 'r') as f:
for line in f.readlines():
items = line.split('\t')
number = int(items[0])
name = items[1]
first, last = items[2].split(', ')
position = items[3]
something = items[4]
height = items[5]
weight = items[6]
something = items[7]
year = items[8]
hometown = items[9]
athlete = Athlete()
athlete.number = number
athlete.name = name
athlete.first = first
athlete.last = last
athlete.position = position
athlete.height = height
athlete.weight = weight
athlete.year = year
athlete.hometown = hometown
# athlete.save()
print(number, name, first, last, position, something, height, weight, something, year, hometown)
Explanation: |Number|Name| Name | Position | Something | Height | Weight | Something | Year | Hometown |
|------|----|------|----------|-----------|--------|--------|-----------|------|----------|
| ... | ...| .... | ... | ... | ... | ... | ... | ... | ... |
End of explanation |
4,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NER using Data Programming
Project Mars Target Encyclopedia
This notebook does not explain much; the explanations can be found in the original notebook(s): https://github.com/HazyResearch/snorkel/tree/master/tutorials/intro
Step2: Load all data to snorkel db
Step3: Split the corpus into train, development and testing
Here we use the same split we used for previous setup with CoreNLP CRF classifier
Step4: Extract candidates
Here recall should be high, precision can be bad
NOTE
Step5: Extract Develop and Test Sets
NOTE
Step6: Labelling Functions
Targets
1. From a known dictionary
Step7: 2. From wikipedia page
Step8: Generative model | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
from snorkel import SnorkelSession
import os
import numpy as np
import re, string
import codecs
# Open Session
session = SnorkelSession()
Explanation: NER using Data Programming
Project Mars Target Encyclopedia
This notebook does not explain much; the explanations can be found in the original notebook(s): https://github.com/HazyResearch/snorkel/tree/master/tutorials/intro
Setup:
Follow instructions in https://github.com/HazyResearch/snorkel
Start jupyter notebook server using ./run.sh as described in snorkel README
copy this notebook to a place accessible from the jupyter server started in previous step. Perhaps you may symlink your directory
End of explanation
parent = "/Users/thammegr/work/mte/data/newcorpus/workspace"
# these lists are as per our previous experiments
traing_list_file = parent + "/train_62r15_685k14_384k15.list"
dev_list_file = parent + "/development.list"
test_list_file = parent + "/test.list"
# combine all the above using
# cat train_62r15_685k14_384k15.list development.list test.list > all.list
all_list_file = parent + "/all.list"
# FIXME: overwriting training split, only 70 docs are chosen here
traing_list_file = parent + "/train_head70.list"
# and all list also overriden here
all_list_file = parent + "/all-small.list"
from snorkel.parser import CSVPathsPreprocessor
from snorkel.parser import TextDocPreprocessor
class CustomTextDocPreprocessor(TextDocPreprocessor):
"""It customizes the following:
- generates a custom doc_id which includes the parent directory name,
  because the file names alone are not unique
- injects the file path into the metadata (required for a later stage)
"""
def parse_file(self, fp, file_name):
res = list(super(CustomTextDocPreprocessor, self).parse_file(fp, file_name))
assert len(res) == 1 # parent class must produce one record per file
doc, content = res[0]
doc.name = "/".join(fp.split("/")[-2:]).rsplit('.', 1)[0]
doc.stable_id = self.get_stable_id(doc.name)
print(doc.stable_id)
doc.meta['file_path'] = fp
yield doc, content
doc_preprocessor = CSVPathsPreprocessor(path=all_list_file, column=0, delim=',',
parser_factory=CustomTextDocPreprocessor)
# Corpus parser to get features
from snorkel.parser import CorpusParser
corpus_parser = CorpusParser()
%time corpus_parser.apply(doc_preprocessor)
from snorkel.models import Document, Sentence
print "Documents:", session.query(Document).count()
print "Sentences:", session.query(Sentence).count()
# Schema for Minerals
from snorkel.models import candidate_subclass
Mineral = candidate_subclass('Mineral', ['name'])
Target = candidate_subclass('Target', ['name'])
Element = candidate_subclass('Element', ['name'])
from snorkel.candidates import Ngrams, CandidateExtractor
from snorkel.matchers import RegexMatchEach
name_matcher = RegexMatchEach(attrib='pos_tags', rgx="NN.*")
#Elements and Minerals are unigrams
element_cand_extr = CandidateExtractor(Element, [Ngrams(n_max=1)],[name_matcher])
mineral_cand_extr = CandidateExtractor(Mineral, [Ngrams(n_max=1)],[name_matcher])
# Target names can be unto 4 gram longer
target_cand_extr = CandidateExtractor(Target, [Ngrams(n_max=4)],[name_matcher])
# Counts number of nouns in a sentence => could be used for filtering
def number_of_nouns(sentence):
active_sequence = False
count = 0
last_tag = ''
for tag in sentence.pos_tags:
if tag.startswith('NN') and not active_sequence:
active_sequence = True
count += 1
elif not tag.startswith('NN') and active_sequence:
active_sequence = False
return count
Explanation: Load all data to snorkel db
End of explanation
def load_paths(fp):
with open(fp) as fp:
return set(map(lambda x: x.strip().split(',')[0], fp.readlines()))
train_files = load_paths(traing_list_file)
dev_files = load_paths(dev_list_file)
test_files = load_paths(test_list_file)
splits = [train_files, dev_files, test_files]
print("Docs:: Training size:", len(train_files),
"Dev Size:", len(dev_files),
"Test Size", len(test_files))
from snorkel.models import Document
docs = session.query(Document).order_by(Document.name).all()
train_sents = set()
dev_sents = set()
test_sents = set()
for i, doc in enumerate(docs):
fp = doc.meta['file_path']
group_name = []
for j, split in enumerate(splits):
group_name.append('1' if fp in split else '0')
group_name = ''.join(group_name)
if group_name == '000':
raise Exception("Document %s is not part of any split" % doc.name )
elif group_name == '100':
group = train_sents
elif group_name == '010':
group = dev_sents
elif group_name == '001':
group = test_sents
else:
raise Exception("Document %s is in multiple splits %s" % (doc.name, group_name))
for s in doc.sentences:
if number_of_nouns(s) > 0: # atleast one name in sentence
group.add(s)
print("Sentence:: Training size:", len(train_sents),
"Dev Size:", len(dev_sents),
"Test Size", len(test_sents))
Explanation: Split the corpus into train, development and testing
Here we use the same split we used for previous setup with CoreNLP CRF classifier
End of explanation
dataset = train_sents
element_cand_extr.apply(dataset, split=0, clear=True)
mineral_cand_extr.apply(dataset, split=0, clear=False)
target_cand_extr.apply(dataset, split=0, clear=False)
train_elements = session.query(Element).filter(Element.split == 0).all()
print "Number of candidate elements:", len(train_elements)
train_minerals = session.query(Mineral).filter(Mineral.split == 0).all()
print "Number of candidate Minerals:", len(train_minerals)
train_targets = session.query(Target).filter(Target.split == 0).all()
print "Number of candidate targets:", len(train_targets)
Explanation: Extract candidates
Here recall should be high, precison can be bad
NOTE: Dont run this second time... Use the next cell to resume
End of explanation
for i, sents in enumerate([dev_sents, test_sents]):
element_cand_extr.apply(sents, split=i+1, clear=True)
mineral_cand_extr.apply(sents, split=i+1, clear=False)
target_cand_extr.apply(sents, split=i+1, clear=False)
print "Number of Elements:", session.query(Element).filter(Element.split == i+1).count()
print "Number of Minerals:", session.query(Mineral).filter(Mineral.split == i+1).count()
print "Number of Targets:", session.query(Target).filter(Target.split == i+1).count()
Explanation: Extract Develop and Test Sets
NOTE: Do not run this a second time...
End of explanation
def load_set(path, lower=True):
with codecs.open(path, 'r', 'utf-8') as f:
lines = f.readlines()
lines = map(lambda x: x.strip(), lines)
lines = filter(lambda x: x and not x.startswith('#'), lines)
if lower:
lines = map(lambda x: x.lower(), lines)
return set(lines)
mte_targets = load_set("/Users/thammegr/work/mte/git/ref/MER-targets-pruned.txt", lower=False)
print("Found %d target names in MTE dictionary" % len(mte_targets))
mte_targets = set(map(lambda x: x.replace('_', ' ').title(), mte_targets))
##
def LF_mte_targets_dict(c):
return 1 if c.name.get_span().title() in mte_targets else -1
Explanation: Labelling Functions
Targets
1. From a known dictionary
End of explanation
from lxml import etree
# lxml supports XPath 1.0 which doesn't've regex match, so extending it
ns = etree.FunctionNamespace(None)
ns['matches'] = lambda _, val, patrn: re.match(patrn, str(val[0]) if val else "") is not None
import requests
mars_rocks_page = "https://en.wikipedia.org/wiki/List_of_rocks_on_Mars"
tree = etree.HTML(requests.get(mars_rocks_page).text)
names = tree.xpath('//h2[matches(span/@id, "^[0-9]{4}_.*")]/following-sibling::div[2]/ul/li//text()')
names = map(lambda x: re.sub("\(.*\)", "", x), names) # remove explanations in ()
names = map(lambda x: re.sub(r'[^\w\s]','', x), names) # remove punctuations
names = map(lambda x: x.strip(), names) # remove whitespaces
names = filter(lambda x: re.match("^\d+$", x) is None, names) # remove the number citations which were [num] originally
names = filter(lambda x: x and x[0].isupper(), names) # name should start with capital letter
names = map(lambda x: x.title(), names) # map to title case
wikipedia_targets = set(names)
print("Found %d target names on wikipedia %s" %(len(wikipedia_targets), mars_rocks_page))
def LF_wikip_targets_dict(c):
return 1 if c.name.get_span().title() in wikipedia_targets else 0
# this list is not exhaustive, so return 0 for missing
# Debugging label functions
from pprint import pprint
labeled = []
for c in session.query(Target).filter(Target.split == 0).all():
if LF_wikip_targets_dict(c) != 0: # function
labeled.append(c)
print "Number labeled:", len(labeled)
# Sample labeled
labeled[:10]
LFs = [
LF_mte_targets_dict,
LF_wikip_targets_dict
]
print("We have %d labeling functions" % len(LFs))
from snorkel.annotations import LabelAnnotator
import numpy as np
labeler = LabelAnnotator(f=LFs)
#Let us label the training set
np.random.seed(1701)
%time L_train = labeler.apply(split=0)
L_train
# Loading it again -- resume from here
L_train = labeler.load_matrix(session, split=0)
L_train
L_train.get_candidate(session, 10)
# Get stats of LFs
L_train.lf_stats(session)
Explanation: 2. From wikipedia page
End of explanation
from snorkel.learning import GenerativeModel
gen_model = GenerativeModel()
gen_model.train(L_train, epochs=500, decay=0.95, step_size=0.1/L_train.shape[0], reg_param=1e-6)
train_marginals = gen_model.marginals(L_train)
# visualize
import matplotlib.pyplot as plt
plt.hist(train_marginals, bins=20)
plt.show()
gen_model.weights.lf_accuracy()
L_dev = labeler.apply_existing(split=1)
L_dev
# development split
dev_cands = session.query(Target).filter(Target.split == 1).all()
len(dev_cands)
dev_cands[:10]
from snorkel.viewer import SentenceNgramViewer
sv = SentenceNgramViewer(dev_cands, session)
sv
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name=os.environ['USER'], split=1)
L_gold_dev
type(L_gold_dev)
from snorkel.annotations import (csr_LabelMatrix, load_matrix, GoldLabelKey, GoldLabel)
from snorkel.models import StableLabel
from snorkel.db_helpers import reload_annotator_labels
# NOTE: this is a shortcut for labeling
# Ideally we should use labels from the SentenceNgramViewer
true_labeller = LF_mte_targets_dict
def load_gold_labels(cand_set, candidate_class, annotator_name="gold"):
count = 0
for cand in cand_set:
ctx_stable_ids = cand.name.get_span()
query = session.query(StableLabel).filter(StableLabel.context_stable_ids == ctx_stable_ids)
query = query.filter(StableLabel.annotator_name == annotator_name)
if query.count() == 0:
count += 1
true_label = true_labeller(cand)
session.add(StableLabel(
context_stable_ids=ctx_stable_ids,
annotator_name=annotator_name,
value=true_label))
# Commit session
session.commit()
# Reload annotator labels
reload_annotator_labels(session, candidate_class, annotator_name, split=1, filter_label_split=False)
reload_annotator_labels(session, candidate_class, annotator_name, split=2, filter_label_split=False)
load_gold_labels(dev_cands, Target)
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
tot = L_gold_dev.shape[0]
n = len(filter(lambda x: x == 1, L_gold_dev))
print("Found %d positive labels out of %d" % (n, tot))
tp, fp, tn, fn = gen_model.score(session, L_dev, L_gold_dev)
dev_cands[0].name.get_span()
Explanation: Generative model
End of explanation |
4,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Two-Level
Step2: We'll just check that the pulse area is what we want.
Step3: Solve the Problem
Step4: Plot Output
Step5: Analysis
The $6 \pi$ sech pulse breaks up into three $2 \pi$ pulses, which travel at speeds determined by their widths.
Movie | Python Code:
import numpy as np
SECH_FWHM_CONV = 1./2.6339157938
t_width = 1.0*SECH_FWHM_CONV # [τ]
print('t_width', t_width)
mb_solve_json = """
{
"atom": {
"fields": [
{
"coupled_levels": [[0, 1]],
"rabi_freq_t_args": {
"n_pi": 6.0,
"centre": 0.0,
"width": %f
},
"rabi_freq_t_func": "sech"
}
],
"num_states": 2
},
"t_min": -2.0,
"t_max": 10.0,
"t_steps": 240,
"z_min": -0.5,
"z_max": 1.5,
"z_steps": 100,
"z_steps_inner": 2,
"interaction_strengths": [
10.0
],
"savefile": "mbs-two-sech-6pi"
}
""" % (t_width)
from maxwellbloch import mb_solve
mbs = mb_solve.MBSolve().from_json_str(mb_solve_json)
Explanation: Two-Level: Sech Pulse 6π — Pulse Breakup
Define the Problem
First we need to define a sech pulse with the area we want. We'll fix the width of the pulse and the area to find the right amplitude.
The full-width at half maximum (FWHM) $t_s$ of the sech pulse is related to the FWHM of a Gaussian by a factor of $1/2.6339157938$. (See §3.2.2 of my PhD thesis).
End of explanation
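For reference: a sech pulse $\Omega(t) = \Omega_0 \, \mathrm{sech}(t / t_w)$ has pulse area
$$
\theta = \int_{-\infty}^{\infty} \Omega(t) \, dt = \pi \, \Omega_0 \, t_w,
$$
so once the width $t_w$ is fixed, choosing the area (here $6\pi$, via the n_pi argument) fixes the amplitude $\Omega_0 = \theta / (\pi t_w)$.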
print('The input pulse area is {0:.3f}.'.format(
np.trapz(mbs.Omegas_zt[0,0,:].real, mbs.tlist)/np.pi))
Explanation: We'll just check that the pulse area is what we want.
End of explanation
Omegas_zt, states_zt = mbs.mbsolve(recalc=False)
Explanation: Solve the Problem
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
sns.set_style('darkgrid')
fig = plt.figure(1, figsize=(16, 6))
ax = fig.add_subplot(111)
cmap_range = np.linspace(0.0, 4.0, 11)
cf = ax.contourf(mbs.tlist, mbs.zlist,
np.abs(mbs.Omegas_zt[0]/(2*np.pi)),
cmap_range, cmap=plt.cm.Blues)
ax.set_title('Rabi Frequency ($\Gamma / 2\pi $)')
ax.set_xlabel('Time ($1/\Gamma$)')
ax.set_ylabel('Distance ($L$)')
for y in [0.0, 1.0]:
ax.axhline(y, c='grey', lw=1.0, ls='dotted')
plt.colorbar(cf);
fig, ax = plt.subplots(figsize=(16, 5))
ax.plot(mbs.zlist, mbs.fields_area()[0]/np.pi, clip_on=False)
ax.set_ylim([0.0, 8.0])
ax.set_xlabel('Distance ($L$)')
ax.set_ylabel('Pulse Area ($\pi$)');
Explanation: Plot Output
End of explanation
# C = 0.1 # speed of light
# Y_MIN = 0.0 # Y-axis min
# Y_MAX = 4.0 # y-axis max
# ZOOM = 2 # level of linear interpolation
# FPS = 60 # frames per second
# ATOMS_ALPHA = 0.2 # Atom indicator transparency
# FNAME = "images/mb-solve-two-sech-6pi"
# FNAME_JSON = FNAME + '.json'
# with open(FNAME_JSON, "w") as f:
# f.write(mb_solve_json)
# !make-mp4-fixed-frame.py -f $FNAME_JSON -c $C --fps $FPS --y-min $Y_MIN --y-max $Y_MAX \
# --zoom $ZOOM --atoms-alpha $ATOMS_ALPHA #--peak-line --c-line
# FNAME_MP4 = FNAME + '.mp4'
# !make-gif-ffmpeg.sh -f $FNAME_MP4 --in-fps $FPS
# from IPython.display import Image
# Image(url=FNAME_MP4 +'.gif', format='gif')
Explanation: Analysis
The $6 \pi$ sech pulse breaks up into three $2 \pi$ pulses, which travel at speeds determined by their widths.
Movie
End of explanation |
4,225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spectral Line Data Cubes in Astronomy - Part 1
In this notebook we will introduce spectral line data cubes in astronomy. They are a convenient way to store many spectra at points in the sky. Much like having a spectrum at every pixel in a CCD. In this Part 1 we will keep it as much "pure python", and not use astronomical units and just work in "pixel" or "voxel" space. In Part2 we will repeat the process with a more astronomy rich set of modules that you will have to install.
They normally are presented as a FITS file, with two sky coordinates (often Right Ascension and Declination) and one spectral coordinate (either an observing frequency or wavelength, and when there is a known spectral line, you can reference using this line with a velocity using the doppler effect). For radio data, such as ALMA and the VLA, we often use frequency, in GHz or MHz. For optical data we often use the wavelength, in Angstrom (the visible range is around 4000 - 8000 Angstrom, or 400 - 800 nm).
Outline
Main Goal
Step1: This first line of code is actually not real python code, but a magic ipython command, to make sure that the standard plotting commands are going to be displayed within the browser. You will see that happen below. The cube figure about is just a static PNG file.
As we will progress learning about the data and how to explore it further, you will notice this decision making process throughout this notebook..
1. Reading the data
Step2: The astropy package has an I/O package to simplify reading and writing a number of popular formats common in astronomy.
Step3: A FITS file consists of a series of Header-Data-Units (HDU). Usually there is only one, representing the image. But this file has two. For now, we're going to ignore the second, which is a special table and in this example happens to be empty anyways. Each HDU has a header, and data. The data in this case is a numpy array, and represents the image (cube)
Step4: From the shape (1,89,251,371) we can see this image is actually 4 dimensional, although the 4th dimension is dummy. There are 371 pixels along X, 251 along Y, and 89 slices or spectral channels. It looks like the noise is around 0.00073 and a peak value 0.017, thus a signal to noise of a little over 23, so quite strong.
In python you can remove that dummy 4th axis, since we are not going to use it any further. Otherwise we have to keep addressing this dummy 4th axis.
Step5: In case you were wondering about that 4th redundant axis. In astronomy we sometimes observe more than one type of radiation. Since waves are polarized, we can have up to 4 so called Stokes parameters, describing the waves as e.g. linear or circular polarized radiation. We will ignore that here, but they are sometimes stored in that 4th dimension. Sometimes they are stored as separate cubes. YMMV.
2. Plotting some basics
Step6: There are 89 channels (slices) in this cube, numbered 0 through 88 in the usual python sense. Pick a few other slices by changing the value in
z= and notice that the first few and last few appear to be just noise and that the V-shaped signal changes shape through the channels. Perhaps you should not be surprised that these are referred to as butterfly diagrams.
Step7: Notice that the histogram is on the left in the plot, and we already saw the maximum data point is 0.0169835.
So let us plot the vertical axis logarithmically, so we can better see what is going on.
Step8: Exercise
Step9: Question
Step10: Next we are interested in the Signal/Noise per channel where there is no signal. This is clear in the first few and last channels. Recall that in the absence of real signal the peak will always be a few times sigma, purely based on the error function behavior of the distribution of gaussian noise. In our case something like $4\sigma$. For small maps more like $3\sigma$, for really big maps or cubes $5\sigma$.
Step12: Gaussian noise probability distribution is given by
$$
P(x) = { 1 \over {\sigma \sqrt{2\pi}}} {e^{- { x^2 \over {2 \sigma^2}}}}
$$
where the mean is 0 and the RMS is $\sigma$. This function is normalized: integrating over x gives 1.
Let's do a simulation to see if we can understand the S/N in this plot. We will need the error function to compute the chance of being in the tail part of the gaussian. The error function is defined as
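$$
\mathrm{erf}(x) = {2 \over \sqrt{\pi}} \int_0^x e^{-t^2}\, dt
$$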
Step13: Is the noise correlated? Hanning smoothing is often used to increase the S/N. Test this by taking the differences between neighboring signals and computing the RMS of this "signal". If noise is normal and not correlated, the ratio of this RMS to the original RMS of the signal should be $\sqrt{2}$. Pick a point where there is no obvious signal, such as the (310,50) position.
Step14: The ratio of the noise you see here should be $\sqrt{2}$, but let's see for a typical normal distribution how close we are to $\sqrt{2}$
Step15: 4. Smoothing a cube to enhance the signal to noise
Step16: Notice that the noise is indeed lower than your earlier value of sigma0. We only smoothed one single slice, but we actually need to smooth the whole cube: each slice is smoothed with the same sigma, but we can optionally also smooth in the spectral dimension a little bit. Since we have 89 channels, let's smooth by only 1 (FWHM = 2.355 * sigma)
Step17: Notice that, although the peak value was lowered a bit due to the smoothing, the signal to noise has increased from the original cube. So, the signal should stand out a lot better.
Exercise
Step18: 6. Velocity fields
The mean velocity is defined as the first moment
$$
V = {\Sigma{(v.I(v))} \over \Sigma{(I(v))} }
$$
also known as an intensity weighted mean velocity. You can also fit a gaussian, or use the peak, or any method to derive a velocity from a spectrum $I(v)$. We do this for pixels in the map, and we have a velocity field.
Step19: Although we can recognize an area of coherent motions (the red and blue shifted sides of the galaxy), there is a lot of noise in this image. Looking at the math, we are dividing two numbers, both of which can be noise, so the outcome can be "anything". If anything, it should be a value between 0 and 88, so we could mask for that and see how that looks.
Let us first try to see how the smoothed cube looked.
Step20: Although more coherent, there are still bogus values outside the image of the galaxy. So we are looking for a hybrid of the two methods. In the smooth cube we saw the signal to noise is a lot better defined, so we will define areas in the cube where the signal to noise is high enough and use those in the original high resolution cube.
Step21: And voila, now this looks a lot better, although only velocities between 0 and 88 are possible. Any other values are no doubt due to a noisy division of two numbers. Experiment with a different value of nsigma here.
We need to do one final correction
Step22: This also means the first channel is the red-shifted (receding) side. So the colors in the previous plot are "wrong"!
Saving your output
This result is now stored in the vmean numpy array. But how do we make this information persistent?
The answer is again in FITS format. Where the fits.open() function would retrieve a Header and Data
(or series of), we need to construct a Header with this Data and write it using fits.writeto().
Step23: Rotation Curves
The simplest way perhaps is to measure the velocities along the major axis. Here's a simple way to extract it along the whole major axis, using interpolation
Step24: Receding and Approaching rotation curve
And here is a version of the rotation curve extracted separately along the receding and approaching sides. This is generally a better idea, not least to check whether the two sides are compatible.
N6503, according to http
Step25: Compare these two rotation curves with Figure 12 in the Greisen et al. (2009) paper | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Spectral Line Data Cubes in Astronomy - Part 1
In this notebook we will introduce spectral line data cubes in astronomy. They are a convenient way to store many spectra at points in the sky, much like having a spectrum at every pixel in a CCD. In this Part 1 we will keep it as much "pure python" as possible, not use astronomical units, and just work in "pixel" or "voxel" space. In Part 2 we will repeat the process with a more astronomy-rich set of modules that you will have to install.
They are normally presented as a FITS file, with two sky coordinates (often Right Ascension and Declination) and one spectral coordinate (either an observing frequency or a wavelength; when there is a known spectral line, the spectral axis can be referenced to that line and expressed as a velocity via the Doppler effect). For radio data, such as ALMA and the VLA, we often use frequency, in GHz or MHz. For optical data we often use wavelength, in Angstrom (the visible range is around 4000 - 8000 Angstrom, or 400 - 800 nm).
Outline
Main Goal: To introduce the concepts of spectral line data cubes
Definition of image cube
Data representation image cube
Introduction to galaxy rotating disks
End of explanation
import numpy as np
import math
from astropy.io import fits
Explanation: This first line of code is actually not real python code, but a magic ipython command, to make sure that the standard plotting commands are going to be displayed within the browser. You will see that happen below. The cube figure above is just a static PNG file.
As we progress through learning about the data and how to explore it further, you will notice this decision-making process throughout this notebook.
1. Reading the data
End of explanation
hdu = fits.open('data/ngc6503.cube.fits')
print(len(hdu))
print(hdu[0])
print(hdu[1])
Explanation: The astropy package has an I/O package to simplify reading and writing a number of popular formats common in astronomy.
End of explanation
h = hdu[0].header
d = hdu[0].data
print(d.shape, d.min(), d.max(), d.mean(), np.median(d), d.std())
print("Signal/Noise (S/N):",d.max()/d.std())
Explanation: A FITS file consists of a series of Header-Data-Units (HDU). Usually there is only one, representing the image. But this file has two. For now, we're going to ignore the second, which is a special table and in this example happens to be empty anyway. Each HDU has a header and data. The data in this case is a numpy array, and represents the image (cube):
End of explanation
# printing out the (dictionary) header:
print(list(h.keys()))
# for example we can thus get the value for one of those header variables:
print('crval3 =',h['CRVAL3'])
print('restfreq =',h['RESTFREQ'])
d = d.squeeze()
print(d.shape)
# nz=d.shape[0]
Explanation: From the shape (1,89,251,371) we can see this image is actually 4 dimensional, although the 4th dimension is dummy. There are 371 pixels along X, 251 along Y, and 89 slices or spectral channels. It looks like the noise is around 0.00073 and a peak value 0.017, thus a signal to noise of a little over 23, so quite strong.
In python you can remove that dummy 4th axis, since we are not going to use it any further. Otherwise we have to keep addressing this dummy 4th axis.
End of explanation
z = 38 # pick a channel (z from the XYZ cube)
z = 45 # the mystery blob
z = 0
im = d[z,:,:] # im = d[z] also works
#im = d[z, 50:110, 210:270] # how to select a sub region
#im = d[z, 100:150, 140:180]
plt.imshow(im,origin='lower')
plt.colorbar()
print(im.shape)
Explanation: In case you were wondering about that 4th redundant axis. In astronomy we sometimes observe more than one type of radiation. Since waves are polarized, we can have up to 4 so called Stokes parameters, describing the waves as e.g. linear or circular polarized radiation. We will ignore that here, but they are sometimes stored in that 4th dimension. Sometimes they are stored as separate cubes. YMMV.
2. Plotting some basics
End of explanation
# look at a histogram of all the data (histogram needs a 1D array)
d1 = d.ravel() # ravel() doesn't make a new copy of the array, saving memory
print(d1.shape)
(n,b,p) = plt.hist(d1, bins=100)
Explanation: There are 89 channels (slices) in this cube, numbered 0 through 88 in the usual python sense. Pick a few other slices by changing the value in
z= and notice that the first few and last few appear to be just noise and that the V-shaped signal changes shape through the channels. Perhaps you should not be surprised that these are referred to as butterfly diagrams.
End of explanation
(n,b,p) = plt.hist(d1,bins=100,log=True)
# pick a slice and make a histogram and print the mean and standard deviation of the signal in that slice
z=0
imz = d[z,:,:].flatten()
(n,b,p) = plt.hist(imz,bins=100)
print(imz.mean(), imz.std())
Explanation: Notice that the histogram is on the left in the plot, and we already saw the maximum data point is 0.0169835.
So let us plot the vertical axis logarithmically, so we can better see what is going on.
End of explanation
nchan = d.shape[0]
channel = np.arange(nchan)
rms = np.zeros(nchan)
peak = np.zeros(nchan)
cuberms = np.zeros(nchan) + d.std()
for z in range(nchan):
imz = d[z,:,:].flatten()
rms[z] = imz.std()
peak[z] = imz.max()
plt.plot(channel,rms,label='chan_rms')
#plt.plot(channel,peak,label='peak')
plt.plot(channel,cuberms,label='cube_rms',color='black')
plt.legend(loc='best')
plt.xlabel("Channel")
plt.ylabel("RMS")
plt.title("Noise for each channel")
Explanation: Exercise : observe by picking some values of z that the noise seems to vary a little bit from one end of the band to the other. Store the noise in channel 0 and 88 in variables sigma0 and sigma88:
3. Statistics
Now that we have computed the RMS in a channel, we might as well compute them for all channels!
We are also comparing this channel based RMS with the single cube RMS that we determined earlier.
End of explanation
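One possible answer to the exercise above, a sketch that simply takes the standard deviation of the first and the last slice as the per-channel noise:
# possible answer to the exercise: noise in the first and the last channel
sigma0 = d[0,:,:].std()
sigma88 = d[88,:,:].std()
print("sigma0 =", sigma0, "  sigma88 =", sigma88)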
# helper function for slice statistics
import numpy.ma as ma
def robust(d, method=0, ns=4.0, rf=1.5):
if method==0:
return d.std()
elif method==1:
m = d.mean()
s = d.std()
d1 = ma.masked_outside(d,m-ns*s,m+ns*s)
return d1.std()
elif method==2:
# assume mean is close enough to zero and no absorbtion
m = d.min()
d1 = ma.masked_outside(d,m,-m)
return d1.std()
elif method==3:
n = len(d)
d.sort()
q1 = d[n//4]
q3 = d[(3*n)//4]
D = q3-q1
d1 = ma.masked_outside(d,q1-rf*D,q3+rf*D)
return d1.std()
else:
return d.std()
nchan = d.shape[0]
channel = np.arange(nchan)
rms0 = np.zeros(nchan)
rms1 = np.zeros(nchan)
rms2 = np.zeros(nchan)
rms3 = np.zeros(nchan)
rms4 = np.zeros(nchan)
rms5 = np.zeros(nchan)
peak = np.zeros(nchan)
cuberms = np.zeros(nchan) + d.std()
for z in range(nchan):
imz = d[z,:,:].flatten()
imz4 = d[z,0:80,280:355].flatten()
imz5 = d[z,170:250,0:120].flatten()
rms0[z] = robust(imz,0)
rms1[z] = robust(imz,1,ns=4.0)
rms2[z] = robust(imz,2)
rms3[z] = robust(imz,3,rf=1.5)
rms4[z] = robust(imz4,0)
rms5[z] = robust(imz5,0)
peak[z] = imz.max()
plt.plot(channel,rms0,label='chan_rms0')
plt.plot(channel,rms1,label='chan_rms1')
plt.plot(channel,rms2,label='chan_rms2')
plt.plot(channel,rms3,label='chan_rms3')
plt.plot(channel,rms4,label='chan_rms_lr')
plt.plot(channel,rms5,label='chan_rms_tl')
# plt.plot(channel,peak,label='peak')
plt.plot(channel,cuberms,label='cube_rms',color='black')
plt.legend(loc='best',fontsize='small')
plt.xlabel("Channel")
plt.ylabel("RMS")
plt.savefig("n6503_rms.png")
Explanation: Question: can you think of a better way to compute the RMS as function of channel (the blue line) and not have it depend so much on where there is signal?
End of explanation
n0 = 15 # the first few channels
n1 = 13 # the last few channels
rms0 = rms[:n0].mean()
rms1 = rms[-n1:].mean()
cuberms = np.zeros(nchan) + 0.5*(rms0+rms1)
sn0 = peak/rms0
sn1 = peak/rms1
plt.plot(channel,sn0,label='S/N(low)')
plt.plot(channel,sn1,label='S/N(high)')
plt.plot(channel[:n0],np.zeros(n0)+1,color='black',label='edge')
plt.plot(channel[-n1:],np.zeros(n1)+1,color='black')
plt.legend(loc='best')
print(rms0,rms1)
s1=peak[0:15]/rms[0:15]
s2=peak[75:88]/rms[75:88]
print("First few channels:",s1.mean(),s1.std())
print("Last few channels:",s2.mean(),s2.std())
Explanation: Next we are interested in the Signal/Noise per channel where there is no signal. This is clear in the first few and last channels. Recall that in the absence of real signal the peak will always be a few times sigma, purely based on the error function behavior of the distribution of gaussian noise. In our case something like $4\sigma$. For small maps more like $3\sigma$, for really big maps or cubes $5\sigma$.
End of explanation
def pnoise(n):
    """chance of measuring noise of n * sigma"""
return 0.5*math.erfc(n/math.sqrt(2.0))
nsample = 10000
g = np.random.normal(size=nsample)
sn = g.max()/g.std()
print("S/N: ",sn)
print("1/P(S/N)=",1/pnoise(sn))
# 1/chance for a +1,2,3 sigma detection
print(1/pnoise(1.0))
print(1/pnoise(2.0))
print(1/pnoise(3.0))
print(1/pnoise(4.0))
print(1/pnoise(5.0))
nxy = d.shape[1]*d.shape[2]
print("Number of pixels in a map:",nxy)
peakpos = (175,125) # some strong point in the disk of the galaxy
peakpos = (231,80) # the mystery blob?
#peakpos = (310,50) # no signal
spectrum = d[:,peakpos[1],peakpos[0]]
sns = spectrum.max()/rms[0:15].mean()
zero = spectrum * 0.0
plt.plot(channel,spectrum,'o-',markersize=2)
plt.plot(channel,zero)
plt.plot(channel,cuberms,'r--',label='1$\sigma$')
plt.plot(channel,-cuberms,'r--')
plt.title("Spectrum at position %s S/N=%.3g" % (str(peakpos),sns))
plt.legend();
Explanation: Gaussian noise probability distribution is given by
$$
P(x) = { 1 \over {\sigma \sqrt{2\pi}}} {e^{- { x^2 \over {2 \sigma^2}}}}
$$
where the mean is 0 and RMS is $\sigma$. This function is normalized, integrated over x results in 1.
Lets do a simulation to see if we can understand the S/N in this plot. We will need the error function to compute the chance of being in the tail part of the gaussian. The error function is defined as:
$$
erf(x) = { {2}\over{\sqrt{\pi}}} \int_0^x e^{-t^2} dt
$$
End of explanation
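As a rough check of those numbers, the pnoise() function defined above also tells us how many pure-noise pixels we should expect above a given sigma level in a single map; this sketch ignores correlations between neighboring pixels.
# expected number of pure-noise pixels above a few sigma levels in one map
nxy = d.shape[1]*d.shape[2]
for n in [3.0, 4.0, 5.0]:
    print("above %g sigma we expect about %.2f noise pixels per map" % (n, nxy*pnoise(n)))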
cdelt3 = h['CDELT3']
crval3 = h['CRVAL3']
crpix3 = h['CRPIX3']
restfreq=h['RESTFREQ']
freq = (channel+1-crpix3)*cdelt3 + crval3 # at the ref. pixel we get the ref. value
c = 299792.458 # speed of light in km/s
channelv = (1.0-freq/restfreq) * c # convert to doppler velocity in km/s
print("min/max/dv:",channelv[0],channelv[nchan-1],channelv[0]-channelv[1])
plt.plot(channelv,spectrum,'o-',markersize=2)
plt.plot(channelv,zero)
plt.plot(channelv,cuberms,'r--',label='1$\sigma$')
plt.plot(channelv,-cuberms,'r--')
plt.title("Spectrum at position %s S/N=%.3g" % (str(peakpos),sns))
plt.legend()
plt.xlabel("velocity (km/s)");
# saving a descriptive spectrum using pickle
try:
import cPickle as pickle
except:
import pickle
# construct a descriptive spectrum
sp = {}
sp['z'] = channelv
sp['i'] = spectrum
sp['zunit'] = 'km/s'
sp['iunit'] = h['BUNIT']
sp['xpos'] = peakpos[0]
sp['ypos'] = peakpos[1]
# write it
pfile = "n6503-sp.p"
pickle.dump(sp,open(pfile,"wb"))
print("Wrote spectrum",pfile)
dspectrum = spectrum[1:] - spectrum[:-1]
# dspectrum = np.diff(spectrum) # this also works (but look up documentation!)
rms1 = dspectrum.std()
rms0 = spectrum.std()
print(rms1,"/",rms0,"=",rms1/rms0)
Explanation: Is the noise correlated? Hanning smoothing is often used to increase the S/N. Test this by taking the differences between neighboring signals and computing the RMS of this "signal". If noise is normal and not correlated, the ratio of this RMS to the original RMS of the signal should be $\sqrt{2}$. Pick a point where there is no obvious signal, such as the (310,50) position.
End of explanation
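For reference, Hanning smoothing of a spectrum is just a convolution with the kernel [0.25, 0.5, 0.25]; this is what correlates neighboring channels and lowers the apparent noise. A minimal sketch on the spectrum extracted above:
# minimal sketch of Hanning smoothing the spectrum from the previous cells
hanning = np.array([0.25, 0.5, 0.25])
spectrum_h = np.convolve(spectrum, hanning, mode='same')
print("rms before:", spectrum.std(), "  after:", spectrum_h.std())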
%%time
nsample = 100000
g = np.random.normal(10.0,5.0,nsample)
delta = np.diff(g)
gh=plt.hist([g,delta],32)
print(g.std(),delta.std(),delta.std()/g.std()/math.sqrt(2))
Explanation: The ratio of the noise you see here should be $\sqrt{2}$, but let's see for a typical normal distribution how close we are to $\sqrt{2}$:
End of explanation
import scipy.signal
import scipy.ndimage.filters as filters
z = 0
sigma = 2.0
ds1 = filters.gaussian_filter(d[z],sigma) # ds1 is a single smoothed slice
print("new mean/std:", ds1.mean(), ds1.std())
print("old mean/std:", d[z].mean(),d[z].std())
plt.imshow(ds1,origin='lower')
plt.colorbar();
Explanation: 4. Smoothing a cube to enhance the signal to noise
End of explanation
ds = filters.gaussian_filter(d,[1.0,sigma,sigma]) # ds is now a smoothed cube
plt.imshow(ds[z],origin='lower')
plt.colorbar()
print(ds[z].std())
print(ds.max(),ds.max()/ds1.std())
Explanation: Notice that the noise is indeed lower than your earlier value of sigma0. We only smoothed one single slice, but we actually need to smooth the whole cube: each slice is smoothed with the same sigma, but we can optionally also smooth in the spectral dimension a little bit. Since we have 89 channels, let's smooth by only 1 (FWHM = 2.355 * sigma)
End of explanation
import numpy.ma as ma
# sigma0 is the noise in the original cube
sigma0 = rms0
nsigma = 0.0
dm = ma.masked_inside(d,-nsigma*sigma0,nsigma*sigma0)
print("Percentage of unmasked voxels:",dm.count()/len(d.ravel()) * 100)
mom0 = dm.sum(axis=0)
plt.imshow(mom0,origin='lower')
plt.colorbar()
#
(ypeak,xpeak) = np.unravel_index(mom0.argmax(),mom0.shape)
print("PEAK at location:",xpeak,ypeak,mom0.argmax())
spectrum1 = d[:,ypeak,xpeak]
spectrum2 = ds[:,ypeak,xpeak]
plt.plot(channel,spectrum1)
plt.plot(channel,spectrum2)
plt.plot(channel,zero)
mom0s = ds.sum(axis=0)
plt.imshow(mom0s,origin='lower')
plt.colorbar()
Explanation: Notice that, although the peak value was lowered a bit due to the smoothing, the signal to noise has increased from the original cube. So, the signal should stand out a lot better.
Exercise : Observe a subtle difference in the last two plots. Can you see what happened here?
5. Masking
End of explanation
nz = d.shape[0]
vchan = np.arange(nz).reshape(nz,1,1)
vsum = vchan * d
vmean = vsum.sum(axis=0)/d.sum(axis=0)
print("MINMAX",vmean.min(),vmean.max())
plt.imshow(vmean,origin='lower',vmin=0,vmax=88)
#plt.imshow(vmean,origin='lower')
plt.colorbar();
Explanation: 6. Velocity fields
The mean velocity is defined as the first moment
$$
V = {\Sigma{(v.I(v))} \over \Sigma{(I(v))} }
$$
also known as an intensity weighted mean velocity. You can also fit a gaussian, or use the peak, or any method to derive a velocity from a spectrum $I(v)$. We do this for pixels in the map, and we have a velocity field.
End of explanation
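The second moment gives an intensity weighted velocity dispersion in the same way. A sketch, in channel units (it will be just as noisy as the first moment until we mask the cube):
# sketch: intensity weighted velocity dispersion (second moment), in channel units
mom2 = ((vchan - vmean)**2 * d).sum(axis=0) / d.sum(axis=0)
vdisp = np.sqrt(np.abs(mom2))    # abs() because noisy divisions can go negative
print("dispersion (channels) min/max:", vdisp.min(), vdisp.max())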
nz = ds.shape[0]
vchan = np.arange(nz).reshape(nz,1,1)
# vchan = channelv.reshape(nz,1,1)
vsum = vchan * ds
vmean = vsum.sum(axis=0)/ds.sum(axis=0)
print(vmean.shape,vmean.min(),vmean.max())
plt.imshow(vmean,origin='lower',vmin=0,vmax=88)
plt.colorbar();
Explanation: Although we can recognize an area of coherent motions (the red and blue shifted sides of the galaxy), there is a lot of noise in this image. Looking at the math, we are dividing two numbers, both of which can be noise, so the outcome can be "anything". If anything, it should be a value between 0 and 88, so we could mask for that and see how that looks.
Let us first try to see how the smoothed cube looked.
End of explanation
# this is all messy , we need a better solution, a hybrid of the two:
noise = ds[0:5].flatten()
(n,b,p) = plt.hist(noise,bins=100)
print(noise.mean(), noise.std())
sigma0 = noise.std()
nsigma = 5.0
cutoff = sigma0*nsigma
dm = ma.masked_inside(ds,-cutoff,cutoff) # assumes mean is close to 0
print(cutoff,dm.count())
dm2=ma.masked_where(ma.getmask(dm),d)
vsum = vchan * dm2
vmean = vsum.sum(axis=0)/dm2.sum(axis=0)
print("min/max velocity:",vmean.min(),vmean.max())
plt.imshow(vmean,origin='lower',vmin=0,vmax=88)
plt.colorbar()
# some guess of where the major axis of the galaxy is
#s = [50,55,300,205]
#plt.plot([s[0],s[2]],[s[1],s[3]], 'k-', lw=2)
# print(vmean.shape)
Explanation: Although more coherent, there are still bogus values outside the image of the galaxy. So we are looking for a hybrid of the two methods. In the smooth cube we saw the signal to noise is a lot better defined, so we will define areas in the cube where the signal to noise is high enough and use those in the original high resolution cube.
End of explanation
print(channelv[0],channelv[88])
Explanation: And voila, now this looks a lot better, although only velocities between 0 and 88 are possible. Any other values are no doubt due to a noisy division of two numbers. Experiment with a different value of nsigma here.
We need to do one final correction: the velocity field is in channels, not in km/s... In a previous cell we created an array channelv which contains the Doppler-shifted velocities in each of these channels:
End of explanation
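A sketch of that correction: since the channels are evenly spaced in velocity, a linear relation converts the channel-based field to km/s.
# sketch: convert the channel-based velocity field to km/s
dv = channelv[1] - channelv[0]        # km/s per channel (negative for this cube)
vmean_kms = channelv[0] + vmean * dv
print("velocity field in km/s, min/max:", vmean_kms.min(), vmean_kms.max())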
# the old hdu[0] is still available, but points to a 3D cube, so lets just try and make it 2D
hv = h.copy()
hv['NAXIS'] = 2
# cannot write yet: complains about illegal axes
hv.remove('NAXIS3')
hv.remove('NAXIS4')
print(type(vmean))
print(vmean.shape)
print(h['BITPIX'])
# cannot write yet: complains about masking
vmean0 = ma.filled(vmean,0.0)
# finally write it successfully
fits.writeto('n6503-vmean.fits',vmean0,hv,overwrite=True)
Explanation: This also means the first channel is the red-shifted (receding) side. So the colors in the previous plot are "wrong"!
Saving your output
This result is now stored in the vmean numpy array. But how do we make this information persistent?
The answer is again in FITS format. Where the fits.open() function would retrieve a Header and Data
(or series of), we need to construct a Header with this Data and write it using fits.writeto().
End of explanation
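A quick sanity check is to read the file back in and compare it with the array we wrote (this assumes the file ended up in the current working directory):
# read the velocity field back in and verify it round-trips
vcheck = fits.open('n6503-vmean.fits')[0].data
print(vcheck.shape, np.allclose(vcheck, vmean0))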
import scipy.ndimage
s = [20,30,300,210] # define a line from (x0,y0) to (x1,y1)
x0, y0 = s[0],s[1]
x1, y1 = s[2],s[3]
num = 200
x, y = np.linspace(x0, x1, num), np.linspace(y0, y1, num)
z = scipy.ndimage.map_coordinates(np.transpose(vmean), np.vstack((x,y)))
plt.imshow(vmean,origin='lower',vmin=0,vmax=88)
plt.plot([x0, x1], [y0, y1], 'ro-')
plt.colorbar()
plt.title("Velocity field with slice");
plt.plot(x,z);
plt.title("Slice along the major axis");
Explanation: Rotation Curves
The simplest way perhaps is to measure the velocities along the major axis. Here's a simple way to extract it along the whole major axis, using interpolation
End of explanation
# here is the reference pixel, which you will see is not where the galaxy is
c1=h['crpix1']
c2=h['crpix2']
pixel_size = h['cdelt2']*3600.0
print(c1,c2,pixel_size)
import scipy.ndimage
center = [163.5,123.2] # orig: 5.4
center = [164.5,124.2] # 10
center = [162.5,122.2] # wow, about same
center = [161.5,121.2]
center = [162.3,122.1]
length = 150 # in pixels
pa = 119.9 # in degrees
cosp = math.cos(pa*math.pi/180.0)
sinp = math.sin(pa*math.pi/180.0)
x0,y0 = center
#
x1 = x0 - length * sinp # blue shifted side
y1 = y0 + length * cosp
x2 = x0 + length * sinp # red shifted side
y2 = y0 - length * cosp
num = length
x1s, y1s = np.linspace(x0, x1, num), np.linspace(y0, y1, num)
x2s, y2s = np.linspace(x0, x2, num), np.linspace(y0, y2, num)
r = np.arange(num)*pixel_size
z1 = scipy.ndimage.map_coordinates(np.transpose(vmean), np.vstack((x1s,y1s)))
z2 = scipy.ndimage.map_coordinates(np.transpose(vmean), np.vstack((x2s,y2s)))
plt.imshow(vmean,origin='lower',vmin=0,vmax=89)
plt.colorbar()
plt.plot([x0, x1], [y0, y1], 'ro-') # blue shifted
plt.plot([x0, x2], [y0, y2], 'bo-') # red shifted
plt.plot([c1,c1],[c2,c2],'yo-') # peakpos
plt.title("velocity field");
print(x1,y1)
plt.plot(r,z1,c='b')
plt.plot(r,z2,c='r')
vsys = 28.5
v1 = z1 - z1[0]
v2 = z2[0]-z2
print("Vsys (literature) = ",vsys)
print("Offsets: v1,v2=",z1[0],z2[0])
plt.plot(r,abs(v1),c='b')
plt.plot(r,abs(v2),c='r')
iflat = 50
print(v1[iflat:].mean(), v1[iflat:].std())
print(v2[iflat:].mean(), v2[iflat:].std())
print(v1[iflat:].mean()-v2[iflat:].mean())
Explanation: Receding and Approaching rotation curve
And here is a version of the rotation curve extracted separately along the receding and approaching sides. This is generally a better idea, not least to check whether the two sides are compatible.
N6503, according to http://ned.ipac.caltech.edu/ is located at 17h49m26.4s +70d08m40s
We can either use ds9 to use the cursor at this RA/DEC and read off the pixel value (but we should note each pixel is 4")
By zooming in a bit I got about : 163.4 123.1
This is based on the first pixel in the lower left of the image being (1,1)
For python we need a (0,0) based system, so this would be (162.3,122.1)
End of explanation
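Instead of reading off the cursor position in ds9, we could also convert the NED position to pixel coordinates with astropy's WCS machinery. A sketch, assuming the header WCS is well formed; we keep only the celestial part of the 4-dimensional header:
# sketch: convert the NED position of NGC 6503 to 0-based pixel coordinates
from astropy.wcs import WCS
from astropy.coordinates import SkyCoord
w = WCS(h).celestial
pos = SkyCoord('17h49m26.4s', '+70d08m40s', frame='icrs')
xpix, ypix = w.world_to_pixel(pos)
print("pixel position:", xpix, ypix)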
# replotting to compare with figure 12
# fold in the inclination, since observations get v * sin(i)
inc = 75.1
sini = math.sin(inc * math.pi/180)
print(sini)
#
plt.plot(r,abs(v1)/sini,c='b')
plt.plot(r,abs(v2)/sini,c='r')
plt.xlim(0,950)
plt.ylim(88,132)
plt.xlabel("Radius (arcsec)")
plt.ylabel("Rotation Velocity (km/s)")
plt.title("Comparing with Greisen et al. Figure 12");
Explanation: Compare these two rotation curves with Figure 12 in the Greisen et al. (2009) paper:
End of explanation |
4,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Tensors and operations
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: Tensors
A Tensor is a multi-dimensional array. Similar to NumPy ndarray objects, tf.Tensor objects have a data type and a shape. Additionally, tf.Tensors can reside in accelerator memory (like a GPU). TensorFlow offers a rich library of operations (tf.add, tf.matmul, tf.linalg.inv, etc.) that consume and produce tf.Tensors. These operations automatically convert built-in Python types. For example
Step3: Each tf.Tensor has a shape and a data type.
Step4: The most obvious differences between NumPy arrays and tf.Tensors are
Tensors can be backed by accelerator memory (such as GPU or TPU)
Tensors are immutable
NumPy compatibility
Converting between a TensorFlow tf.Tensor and a NumPy ndarray is easy.
TensorFlow operations automatically convert NumPy ndarrays to Tensors
NumPy operations automatically convert Tensors to NumPy ndarrays
Tensors are explicitly converted to NumPy ndarrays using their .numpy() method. These conversions are typically cheap because the NumPy ndarray and the tf.Tensor share the underlying memory representation whenever possible. However, sharing the underlying representation is not always possible, since the tf.Tensor may be hosted in GPU memory while NumPy arrays live in host memory, so the conversion can involve a copy from GPU to host memory.
Step5: GPU acceleration
Many TensorFlow operations are accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or the CPU for an operation, copying the tensor between GPU and CPU memory if necessary. Tensors produced by an operation are typically placed in the memory of the device on which the operation executed. Let us look at an example.
Step6: Device names
The Tensor.device property provides the fully qualified string name of the device holding the contents of the tensor. This name encodes details such as the network address of the host the program is running on and the device within that host. This information is required for distributed execution of a TensorFlow program. If the tensor is placed on the N-th GPU of the host, the string ends with GPU
Step8: Datasets
This section uses the tf.data.Dataset API to build a pipeline for feeding data to a model. The tf.data.Dataset API is used to build performant, complex input pipelines from simple, reusable pieces that feed a model's training or evaluation loops.
Create a source Dataset
Create a source dataset using one of the factory functions such as Dataset.from_tensors or Dataset.from_tensor_slices, or using objects that read from files such as TextLineDataset or TFRecordDataset. See the TensorFlow Dataset guide for details.
Step9: Apply transformations
Use transformation functions such as map, batch, and shuffle to apply transformations to the dataset records.
Step10: Iterate
tf.data.Dataset objects support iteration to loop over the records they contain.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
Explanation: Tensors and operations
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/customization/basics"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/customization/basics.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
This is an introductory TensorFlow tutorial that shows how to:
Import the required package
Create and use tensors
Use GPU acceleration
Demonstrate tf.data.Dataset
Import TensorFlow
To get started, import the tensorflow module. As of TensorFlow 2, eager execution is turned on by default. Eager execution enables a more interactive frontend to TensorFlow; we will discuss the details later.
End of explanation
print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))
# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
Explanation: Tensors
A Tensor is a multi-dimensional array. Similar to NumPy ndarray objects, tf.Tensor objects have a data type and a shape. Additionally, tf.Tensors can reside in accelerator memory (like a GPU). TensorFlow offers a rich library of operations (tf.add, tf.matmul, tf.linalg.inv, etc.) that consume and produce tf.Tensors. These operations automatically convert built-in Python types. For example:
End of explanation
x = tf.matmul([[1]], [[2, 3]])
print(x)
print(x.shape)
print(x.dtype)
Explanation: Each tf.Tensor has a shape and a data type.
End of explanation
import numpy as np
ndarray = np.ones([3, 3])
print("TensorFlow operations convert numpy arrays to Tensors automatically")
tensor = tf.multiply(ndarray, 42)
print(tensor)
print("And NumPy operations convert Tensors to numpy arrays automatically")
print(np.add(tensor, 1))
print("The .numpy() method explicitly converts a Tensor to a numpy array")
print(tensor.numpy())
Explanation: The most obvious differences between NumPy arrays and tf.Tensors are
Tensors can be backed by accelerator memory (such as GPU or TPU)
Tensors are immutable
NumPy compatibility
Converting between a TensorFlow tf.Tensor and a NumPy ndarray is easy.
TensorFlow operations automatically convert NumPy ndarrays to Tensors
NumPy operations automatically convert Tensors to NumPy ndarrays
Tensors are explicitly converted to NumPy ndarrays using their .numpy() method. These conversions are typically cheap because the NumPy ndarray and the tf.Tensor share the underlying memory representation whenever possible. However, sharing the underlying representation is not always possible, since the tf.Tensor may be hosted in GPU memory while NumPy arrays live in host memory, so the conversion can involve a copy from GPU to host memory.
End of explanation
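The second difference listed above, that tensors are immutable, is easy to demonstrate: a tf.Tensor cannot be assigned to in place, while a tf.Variable can. A small sketch:
t = tf.constant([1, 2, 3])
try:
    t[0] = 99                  # tf.Tensor does not support item assignment
except TypeError as e:
    print("Tensors are immutable:", e)
v = tf.Variable([1, 2, 3])
v.assign([99, 2, 3])           # tf.Variable supports in-place updates
print(v.numpy())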
x = tf.random.uniform([3, 3])
print("Is there a GPU available: "),
print(tf.config.list_physical_devices("GPU"))
print("Is the Tensor on GPU #0: "),
print(x.device.endswith('GPU:0'))
Explanation: GPU acceleration
Many TensorFlow operations are accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or the CPU for an operation, copying the tensor between GPU and CPU memory if necessary. Tensors produced by an operation are typically placed in the memory of the device on which the operation executed. Let us look at an example.
End of explanation
import time
def time_matmul(x):
start = time.time()
for loop in range(10):
tf.matmul(x, x)
result = time.time()-start
print("10 loops: {:0.2f}ms".format(1000*result))
# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("CPU:0")
time_matmul(x)
# Force execution on GPU #0 if available
if tf.config.list_physical_devices("GPU"):
print("On GPU:")
with tf.device("GPU:0"): # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
x = tf.random.uniform([1000, 1000])
assert x.device.endswith("GPU:0")
time_matmul(x)
Explanation: Device names
The Tensor.device property provides the fully qualified string name of the device holding the contents of the tensor. This name encodes details such as the network address of the host the program is running on and the device within that host. This information is required for distributed execution of a TensorFlow program. If the tensor is placed on the N-th GPU of the host, the string ends with GPU:<N>.
Explicit device placement
In TensorFlow, placement refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when no explicit guidance is given, TensorFlow automatically decides which device to execute an operation on, copying tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the tf.device context manager. Let us look at an example.
End of explanation
ds_tensors = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
# Create a CSV file
import tempfile
_, filename = tempfile.mkstemp()
with open(filename, 'w') as f:
    f.write("""Line 1
Line 2
Line 3
""")
ds_file = tf.data.TextLineDataset(filename)
Explanation: Datasets
This section uses the tf.data.Dataset API to build a pipeline for feeding data to a model. The tf.data.Dataset API is used to build performant, complex input pipelines from simple, reusable pieces that feed a model's training or evaluation loops.
Create a source Dataset
Create a source dataset using one of the factory functions such as Dataset.from_tensors or Dataset.from_tensor_slices, or using objects that read from files such as TextLineDataset or TFRecordDataset. See the TensorFlow Dataset guide for details.
End of explanation
ds_tensors = ds_tensors.map(tf.square).shuffle(2).batch(2)
ds_file = ds_file.batch(2)
Explanation: Apply transformations
Use transformation functions such as map, batch, and shuffle to apply transformations to the dataset records.
End of explanation
print('Elements of ds_tensors:')
for x in ds_tensors:
print(x)
print('\nElements in ds_file:')
for x in ds_file:
print(x)
Explanation: Iterate
tf.data.Dataset objects support iteration to loop over the records they contain.
End of explanation |
4,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using deep features to build an image classifier
Fire up GraphLab Create
Step1: Load a common image analysis dataset
We will use a popular benchmark dataset in computer vision called CIFAR-10.
(We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.)
This dataset is already split into a training set and test set.
Step2: Exploring the image data
Step3: Train a classifier on the raw image pixels
We first start by training a classifier on just the raw pixels of the image.
Step4: Make a prediction with the simple model based on raw pixels
Step5: The model makes wrong predictions for all three images.
Evaluating raw pixel model on test data
Step6: The accuracy of this model is poor, getting only about 46% accuracy.
Can we improve the model using deep features
We only have 2005 data points, so it is not possible to train a deep neural network effectively with so little data. Instead, we will use transfer learning
Step7: Computing deep features for our images
The two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded.
(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
Step8: As we can see, the column deep_features already contains the pre-computed deep features for this data.
Step9: Given the deep features, let's train a classifier
Step10: Apply the deep features model to first few images of test set
Step11: The classifier with deep features gets all of these images right!
Compute test_data accuracy of deep_features_model
As we can see, deep features provide us with significantly better accuracy (about 78%) | Python Code:
import graphlab
Explanation: Using deep features to build an image classifier
Fire up GraphLab Create
End of explanation
image_train = graphlab.SFrame('image_train_data/')
image_test = graphlab.SFrame('image_test_data/')
Explanation: Load a common image analysis dataset
We will use a popular benchmark dataset in computer vision called CIFAR-10.
(We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.)
This dataset is already split into a training set and test set.
End of explanation
graphlab.canvas.set_target('ipynb')
image_train['image'].show()
Explanation: Exploring the image data
End of explanation
raw_pixel_model = graphlab.logistic_classifier.create(image_train,target='label',
features=['image_array'])
Explanation: Train a classifier on the raw image pixels
We first start by training a classifier on just the raw pixels of the image.
End of explanation
image_test[0:3]['image'].show()
image_test[0:3]['label']
raw_pixel_model.predict(image_test[0:3])
Explanation: Make a prediction with the simple model based on raw pixels
End of explanation
raw_pixel_model.evaluate(image_test)
Explanation: The model makes wrong predictions for all three images.
Evaluating raw pixel model on test data
End of explanation
len(image_train)
Explanation: The accuracy of this model is poor, getting only about 46% accuracy.
Can we improve the model using deep features
We only have 2005 data points, so it is not possible to train a deep neural network effectively with so little data. Instead, we will use transfer learning: using deep features trained on the full ImageNet dataset, we will train a simple model on this small dataset.
End of explanation
#deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')
#image_train['deep_features'] = deep_learning_model.extract_features(image_train)
Explanation: Computing deep features for our images
The two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded.
(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
End of explanation
image_train.head()
Explanation: As we can see, the column deep_features already contains the pre-computed deep features for this data.
End of explanation
deep_features_model = graphlab.logistic_classifier.create(image_train,
features=['deep_features'],
target='label')
Explanation: Given the deep features, let's train a classifier
End of explanation
image_test[0:3]['image'].show()
deep_features_model.predict(image_test[0:3])
Explanation: Apply the deep features model to first few images of test set
End of explanation
deep_features_model.evaluate(image_test)
image_train['label'].sketch_summary()
auto_data = image_train[image_train['label'] == 'automobile']
cat_data = image_train[image_train['label'] == 'cat']
dog_data = image_train[image_train['label'] == 'dog']
bird_data = image_train[image_train['label'] == 'bird']
len(auto_data)
Explanation: The classifier with deep features gets all of these images right!
Compute test_data accuracy of deep_features_model
As we can see, deep features provide us with significantly better accuracy (about 78%)
End of explanation |
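These per-category subsets are handy for retrieval experiments with the same deep features. For example, a sketch of a nearest-neighbours model that finds images similar to a given cat image (this assumes the image_train SFrame has an 'id' column to label the rows):
knn_model = graphlab.nearest_neighbors.create(image_train, features=['deep_features'], label='id')
knn_model.query(cat_data[0:1])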
4,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matérn Spectral Mixture (MSM) kernel
Gaussian process priors for pitch detection in polyphonic music
Learning kernels in frequency domain
Written by Pablo A. Alvarado, Centre for Digital Music, Queen Mary University of London.
Last updated Friday, 12 May 2017.
The aim of this notebook is to illustrate the approach proposed in (link) for pitch detection in polyphonic signals, applying Gaussian process (GP) models in Python using GPflow. We first outline the mathematical formulation of the proposed model. Then we introduce how to learn the hyperparameters in the frequency domain. Finally we present an example of detecting two pitches in a polyphonic music signal.
The dataset used in this tutorial corresponds to the electric guitar mixture signal in http
Step1: Learning frequency content of each component process $\left\lbrace w_{m}(t)\right\rbrace_{m=1}^{M}$
Example fitting one frequency component
Step2: Learning hyperparameters of activation processes $\left\lbrace \phi_{m}(t)\right\rbrace_{m=1}^{M}$
So far we have learnt the harmonic content of the isolated events. Now we try to learn the parameters of the GP that describes the amplitude envelope.
Step3: Using the learnt MSM kernel for pitch detection
We keep the same form for the learnt variances $\sigma^{2}$, but we modify the lengthscale because we learnt the inverse, i.e. $l = \lambda^{-1}$. Also, we learnt the frequency vector in radians, that is why we convert it to Hz.
Step4: Piano-roll | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (16, 4)
import numpy as np
import scipy as sp
import scipy.io as sio
import scipy.io.wavfile as wav
from scipy import signal
from scipy.fftpack import fft
import gpflow
import GPitch
sf, y = wav.read('guitar.wav') #loading dataset
y = y.astype(np.float64)
yaudio = y / np.max(np.abs(y))
N = np.size(yaudio)
Xaudio = np.linspace(0, (N-1.)/sf, N)
X1, y1 = Xaudio[0:2*sf], yaudio[0:2*sf] # define training data for component 1 and 2
X2, y2 = Xaudio[2*sf:4*sf], yaudio[2*sf:4*sf]
y1f, y2f = sp.fftpack.fft(y1), sp.fftpack.fft(y2) # get spectral density for each isolated sound event
N1 = y1.size # Number of sample points
T = 1.0 / sf # sample period
F = np.linspace(0.0, 1.0/(2.0*T), N1//2)
S1 = 2.0/N1 * np.abs(y1f[0:N1//2])
S2 = 2.0/N1 * np.abs(y2f[0:N1//2])
plt.figure()
plt.subplot(1,2,1), plt.title('Waveform sound event with pitch C4'), plt.xlabel('Time (s)'),
plt.plot(X1, y1),
plt.subplot(1,2,2), plt.title('Waveform sound event with pitch E4'), plt.xlabel('Time (s)'),
plt.plot(X2, y2)
plt.figure()
plt.subplot(1,2,1), plt.title('Spectral density sound event with pitch C4'), plt.xlabel('Frequency (Hz)'),
plt.plot(F, S1, lw=2), plt.xlim([0, 5000])
plt.subplot(1,2,2), plt.title('Spectral density sound event with pitch E4'), plt.xlabel('Frequency (Hz)'),
plt.plot(F, S2, lw=2), plt.xlim([0, 5000]);
Explanation: Matérn Spectral Mixture (MSM) kernel
Gaussian process priors for pitch detection in polyphonic music
Learning kernels in frequency domain
Written by Pablo A. Alvarado, Centre for Digital Music, Queen Mary University of London.
Last updated Friday, 12 May 2017.
The aim of this notebook is to illustrate the approach proposed in (link) for pitch detection in polyphonic signals, applying Gaussian process (GP) models in Python using GPflow. We first outline the mathematical formulation of the proposed model. Then we introduce how to learn the hyperparameters in the frequency domain. Finally we present an example of detecting two pitches in a polyphonic music signal.
The dataset used in this tutorial corresponds to the electric guitar mixture signal in http://winnie.kuis.kyoto-u.ac.jp/~yoshii/psdtf/. The first 4 seconds of the data were used for training, this segment corresponds to 2 isolated sound events, with pitch C4 and E4 respectively. The test data was made of three sound events with pitches C4, E4, C4+E4, i.e. the training data in addition to an event with both pitches.
GP additive model for pitch detection
Automatic music transcription aims to infer a symbolic representation, such as piano-roll or score, given an observed audio recording. From a Bayesian latent variable perspective, transcription consists in updating our beliefs about the symbolic description for a certain piece of music, after observing a corresponding audio recording.
We approach the transcription problem from a time-domain source separation perspective. That is, given an audio recording $\mathcal{D}=\left\lbrace y_n, t_n \right\rbrace_{n=1}^{N}$, we seek to formulate a generative probabilistic model that describes how the observed polyphonic signal (mixture of sources) was generated and, moreover, that allows us to infer the latent variables associated with the piano-roll representation. To do so, we use the regression model
\begin{align}
y_n = f(t_n) + \epsilon,
\end{align}
where $y_n$ is the value of the analysed polyphonic signal at time $t_n$, the noise follows a normal distribution $\epsilon \sim \mathcal{N}(0,\sigma^2)$, and the function $f(t)$ is a random process composed by a linear combination of $M$ sources $\left\lbrace f_m(t) \right\rbrace _{m=1}^{M} $, that is
\begin{align}
f(t) = \sum_{m=1}^{M} f_{m}(t).
\end{align}
Each source is decomposed into the product of two factors, an amplitude-envelope or activation function $\phi_m(t)$, and a quasi-periodic or component function $w_{m}(t)$. The overall model is then
\begin{align}
y(t) = \sum_{m=1}^{M} \phi_{m}(t) w_{m}(t) + \epsilon.
\end{align}
We can interpret the set $\left\lbrace w_{m}(t)\right\rbrace_{m=1}^{M}$ as a dictionary where each component $ w_{m}(t)$ is a quasi-periodic stochastic function with a defined pitch. Likewise, each stochastic function in $\left\lbrace \phi_{m}(t)\right\rbrace_{m=1}^{M}$ represents a row of the piano roll-matrix, i.e the time dependent non-negative activation of a specific pitch throughout the analysed piece of music.
Components $\left\lbrace w_{m}(t)\right\rbrace_{m=1}^{M}$ follow
\begin{align}
w_m(t) \sim \mathcal{GP}(0,k_m(t,t')),
\end{align}
where the covariance $k_m(t,t')$ reflect the frequency content of the $m^{\text{th}}$ component, and has the form of a Matérn spectral mixture (MSM) kernel.
To guarantee the activations to be non-negative we apply non-linear transformations to GPs. To do so, we use the sigmoid function
\begin{align}
\sigma(x) = \left[ 1 + \exp(-x) \right]^{-1},
\end{align}
Then, activations are defined as
\begin{align}
\phi_m(t) = \sigma( {g_{m}(t)} ),
\end{align}
where $ \left\lbrace g_{m}(t)\right\rbrace_{m=1}^{M} $ are GPs. The sigmoid model follows
\begin{align}
y(t)=
\sum_{m = 1}^{M}
\sigma( {g_{m}(t)} )
\
w_{m}(t)
+ \epsilon.
\end{align}
Learning hyperparameters in frequency domain
In this section we describe how to learn the hyperparameters for the components $\left\lbrace w_{m}(t)\right\rbrace_{m=1}^{M}$, and the activations $\left\lbrace g_{m}(t)\right\rbrace_{m=1}^{M}$.
End of explanation
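To make the MSM kernel concrete: in the frequency domain each component is modelled below as a mixture of Lorentzians, which in the time domain corresponds to a sum of Matérn-1/2 terms modulated by cosines. A minimal sketch of such a kernel, with variances s, decay rates l and centre frequencies f (in rad/s); the exact parameterisation used inside the GPitch helpers may differ.
# minimal sketch of a Matern-1/2 spectral mixture kernel evaluated on lags tau
def msm_kernel(tau, s, l, f):
    k = np.zeros_like(tau, dtype=float)
    for si, li, fi in zip(s, l, f):
        k = k + si * np.exp(-li * np.abs(tau)) * np.cos(fi * tau)
    return k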
# example fitting one harmonic
idx = np.argmax(S1)
a, b = idx - 50, idx + 50
X, y = F[a: b,].reshape(-1,), S1[a: b,].reshape(-1,)
p0 = np.array([1., 1., 2*np.pi*F[idx]])
phat = sp.optimize.minimize(GPitch.Lloss, p0, method='L-BFGS-B', args=(X, y), tol = 1e-10, options={'disp': True})
pstar = phat.x
Xaux = np.linspace(X.min(), X.max(), 1000)
L = GPitch.Lorentzian(pstar,Xaux)
plt.figure(), plt.xlim([X.min(), X.max()])
plt.plot(Xaux, L, lw=2)
plt.plot(X, y, '.', ms=8);
Nh = 15 # maximum number of harmonics
s_1, l_1, f_1 = GPitch.learnparams(F, S1, Nh)
Nh1 = s_1.size
s_2, l_2, f_2 = GPitch.learnparams(F, S2, Nh)
Nh2 = s_2.size
plt.figure()
for i in range(0, Nh1):
idx = np.argmin(np.abs(F - f_1[i]))
a = idx - 50
b = idx + 50
pstar = np.array([s_1[i], 1./l_1[i], 2.*np.pi*f_1[i]])
learntfun = GPitch.Lorentzian(pstar, F)
plt.subplot(1,Nh,i+1)
plt.plot(F[a:b],learntfun[a:b],'', lw = 2)
plt.plot(F[a:b],S1[a:b],'.', ms = 3)
plt.axis('off')
plt.ylim([S1.min(), S1.max()])
plt.figure()
for i in range(0, Nh2):
idx = np.argmin(np.abs(F - f_2[i]))
a = idx - 50
b = idx + 50
pstar = np.array([s_2[i], 1./l_2[i], 2.*np.pi*f_2[i]])
learntfun = GPitch.Lorentzian(pstar, F)
plt.subplot(1,Nh,i+1)
plt.plot(F[a:b],learntfun[a:b],'', lw = 2)
plt.plot(F[a:b],S2[a:b],'.', ms = 3)
plt.axis('off')
plt.ylim([S2.min(), S2.max()])
S1k = GPitch.LorM(F, s=s_1, l=1./l_1, f=2*np.pi*f_1)
S2k = GPitch.LorM(F, s=s_2, l=1./l_2, f=2*np.pi*f_2)
plt.figure(), plt.title('Approximate spectrum using Lorentzian mixture')
plt.subplot(1,2,1), plt.plot(F, S1, lw=2), plt.plot(F, S1k, lw=2)
plt.legend(['FT source 1', 'S kernel 1']), plt.xlabel('Frequency (Hz)')
plt.subplot(1,2,2), plt.plot(F, S2, lw=2), plt.plot(F, S2k, lw=2)
plt.legend(['FT source 2', 'S kernel 2']), plt.xlabel('Frequency (Hz)');
Xk = np.linspace(-1.,1.,2*16000).reshape(-1,1)
IFT_y1 = np.fft.ifft(np.abs(y1f))
IFT_y2 = np.fft.ifft(np.abs(y2f))
k1 = GPitch.MaternSM(Xk, s=s_1, l=1./l_1, f=2*np.pi*f_1)
k2 = GPitch.MaternSM(Xk, s=s_2, l=1./l_2, f=2*np.pi*f_2)
plt.figure()
plt.subplot(1,2,1), plt.plot(Xk, k1, lw=2), plt.xlim([-0.005, 0.005])
plt.subplot(1,2,2), plt.plot(Xk, k2, lw=2), plt.xlim([-0.005, 0.005])
plt.figure()
plt.subplot(1,2,1), plt.plot(Xk, k1)
plt.subplot(1,2,2), plt.plot(Xk, k2)
plt.figure(),
plt.subplot(1,2,1), plt.plot(X1[0:16000],IFT_y1[0:16000], lw=1), plt.plot(X1[0:16000],k1[16000:32000], lw=1)
plt.xlim([0, 0.03])
plt.subplot(1,2,2), plt.plot(X1[0:16000],IFT_y2[0:16000], lw=1), plt.plot(X1[0:16000],k2[16000:32000], lw=1)
plt.xlim([0, 0.03]);
Explanation: Learning frequency content of each component process $\left\lbrace w_{m}(t)\right\rbrace_{m=1}^{M}$
Example fitting one frequency component
End of explanation
win = signal.hann(200)
env1 = signal.convolve(np.abs(y1), win, mode='same') / sum(win)
env1 = np.max(np.abs(y1))*(env1 / np.max(env1))
env1 = env1.reshape(-1,)
env2 = signal.convolve(np.abs(y2), win, mode='same') / sum(win)
env2 = np.max(np.abs(y2))*(env2 / np.max(env2))
env2 = env2.reshape(-1,)
plt.figure()
plt.subplot(1,2,1), plt.plot(X1, y1, ''), plt.plot(X1, env1, '', lw = 2)
plt.subplot(1,2,2), plt.plot(X2, y2, ''), plt.plot(X2, env2, '', lw = 2);
yf = fft(env1)
xf = np.linspace(-1.0/(2.0*T), 1.0/(2.0*T), N1)
S = 1.0/N1 * np.abs(yf)
sht = np.fft.fftshift(S)
a = np.argmin(np.abs(xf - (-10.)))
b = np.argmin(np.abs(xf - ( 10.)))
X, y = xf[a:b].reshape(-1,), sht[a:b].reshape(-1,)
p0 = np.array([1., 1., 0.])
phat = sp.optimize.minimize(GPitch.Lloss, p0, method='L-BFGS-B', args=(X, y), tol = 1e-10, options={'disp': True})
X2 = np.linspace(X.min(), X.max(), 1000)
L = GPitch.Lorentzian(phat.x, X2)
plt.figure()
plt.subplot(1,2,1), plt.plot(X2, L), plt.plot(X, y, '.', ms=8)
plt.subplot(1,2,2), plt.plot(X2, L), plt.plot(X, y, '.', ms=8)
s_env, l_env, f_env = np.hsplit(phat.x, 3)
Explanation: Learning hyperparameters of activation processes $\left\lbrace \phi_{m}(t)\right\rbrace_{m=1}^{M}$
So far we have learnt the harmonic content of the isolated events. Now we try to learn the parameters of the GP that describes the amplitude envelope.
End of explanation
# To run the pitch detection change: RunExperiment = True
RunExperiment = False
if RunExperiment == True:
GPitch.pitch('guitar.wav', windowsize=16000)
else:
results = np.load('SIG_FL_results.npz')
X = results['X']
g1 = results['mu_g1']
g2 = results['mu_g2']
phi1 = GPitch.logistic(g1)
phi2 = GPitch.logistic(g2)
w1 = results['mu_f1']
w2 = results['mu_f2']
f1 = phi1*w1
f2 = phi2*w2
Xtest1, ytest1 = Xaudio[0:4*sf], yaudio[0:4*sf]
Xtest2, ytest2 = Xaudio[6*sf:8*sf], yaudio[6*sf:8*sf]
Xtest = np.hstack((Xtest1, Xtest2)).reshape(-1,1)
ytest = np.hstack((ytest1, ytest2)).reshape(-1,1)
sf, sourceC = wav.read('source_C.wav')
sourceC = sourceC.astype(np.float64)
sourceC = sourceC / np.max(np.abs(sourceC))
sf, sourceE = wav.read('source_E.wav')
sourceE = sourceE.astype(np.float64)
sourceE = sourceE / np.max(np.abs(sourceE))
plt.figure()
plt.plot(Xtest, ytest)
plt.figure()
plt.subplot(1,2,1), plt.plot(Xaudio, sourceC)
plt.xlim([X.min(), X.max()])
plt.subplot(1,2,2), plt.plot(Xaudio, sourceE)
plt.xlim([X.min(), X.max()])
plt.figure()
plt.subplot(1,2,1), plt.plot(X, f1)
plt.subplot(1,2,2), plt.plot(X, f2)
plt.figure()
plt.subplot(1,2,1), plt.plot(X, phi1)
plt.subplot(1,2,2), plt.plot(X, phi2)
plt.figure()
plt.subplot(1,2,1), plt.plot(X, w1)
plt.subplot(1,2,2), plt.plot(X, w2)
Explanation: Using the learnt MSM kernel for pitch detection
We keep the same form for the learnt variances $\sigma^{2}$, but we modify the lengthscale because we learnt the inverse, i.e. $l = \lambda^{-1}$. Also, we learnt the frequency vector in radians, that is why we convert it to Hz.
End of explanation
#%% generate piano-roll ground truth
jump = 100 #downsample
Xsubset = X[::jump]
oct1 = 24
oct2 = 84
Y = np.arange(oct1,oct2).reshape(-1,1)
Ns = Xsubset.size
Phi = np.zeros((Y.size, Ns))
#Phi[47-oct1, (Xsubset> 0.050 and Xsubset< 1.973)] = 1.
C4_a1 = np.argmin(np.abs(Xsubset-0.050))
C4_b1 = np.argmin(np.abs(Xsubset-1.973))
C4_a2 = np.argmin(np.abs(Xsubset-6.050))
C4_b2 = np.argmin(np.abs(Xsubset-7.979))
Phi[47-oct1, C4_a1:C4_b1] = 1.
Phi[47-oct1, C4_a2:C4_b2] = 1.
E4_a1 = np.argmin(np.abs(Xsubset-2.050))
E4_b1 = np.argmin(np.abs(Xsubset-3.979))
E4_a2 = np.argmin(np.abs(Xsubset-6.050))
E4_b2 = np.argmin(np.abs(Xsubset-7.979))
Phi[51-oct1, E4_a1:E4_b1] = 1.
Phi[51-oct1, E4_a2:E4_b2] = 1.
Phi = np.abs(Phi-1)
[Xm, Ym] = np.meshgrid(Xsubset,Y)
# inferred piano roll
Phi_i = np.zeros((Y.size, Ns))
Phi_i[47-oct1, :] = phi1[::jump].reshape(-1,)
Phi_i[51-oct1, :] = phi2[::jump].reshape(-1,)
Phi_i = np.abs(Phi_i-1)
plt.figure()
plt.ylabel('')
plt.xlabel('Time (s)')
plt.pcolormesh(Xm, Ym, Phi, cmap='gray')
plt.ylim([oct1, oct2])
plt.xlim([0, 8])
plt.xlabel('Time (seconds)')
plt.figure()
plt.ylabel('')
plt.xlabel('Time (s)')
plt.pcolormesh(Xm, Ym, Phi_i, cmap='gray')
plt.ylim([oct1, oct2])
plt.xlim([0, 8])
plt.xlabel('Time (seconds)')
Explanation: Piano-roll
End of explanation |
4,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1: Following the complete code for computing $\sum_{i=1}^{m} i + \sum_{i=1}^{n} i + \sum_{i=1}^{k} i$, write a program that computes $m! + n! + k!$.
Step1: Exercise 2: Write a function that returns the sum of the first n terms of 1 - 1/3 + 1/5 - 1/7 .... In the main program, set n = 1000 and n = 100000 and print 4 times the value returned by the function.
Step2: Exercise 3: Rewrite Exercises 1 and 4 from task3 as functions and call them.
Step3: Challenge exercise: Write a program that sums the integers from m to n with step k. The summation must be implemented as a function, and the main program should read m, n and k from the user and call the function to verify that it is correct.
def product_sum(end):
i = 1
total_n = 1
while i < end:
i += 1
total_n *= i
return total_n
m = int(input("请输入第1个整数,以回车结束:"))
n = int(input("请输入第2个整数,以回车结束:"))
k = int(input("请输入第3个整数,以回车结束:"))
print("最终的和是:",product_sum(m)+product_sum(n)+product_sum(k))
Explanation: Exercise 1: Following the complete code for computing $\sum_{i=1}^{m} i + \sum_{i=1}^{n} i + \sum_{i=1}^{k} i$, write a program that computes $m! + n! + k!$.
End of explanation
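A quick sanity check of product_sum() against the standard library:
# product_sum(n) should agree with math.factorial(n)
import math
for n in (1, 5, 10):
    assert product_sum(n) == math.factorial(n)
print("product_sum matches math.factorial")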
def sum(n):
i = 1
total_n = 0
while i <= n:
        total_n += (-1)**(i-1)*(1.0/(2*i-1))
i += 1
return total_n
print(4*sum(1000))
print(4*sum(100000))
Explanation: Exercise 2: Write a function that returns the sum of the first n terms of 1 - 1/3 + 1/5 - 1/7 .... In the main program, set n = 1000 and n = 100000 and print 4 times the value returned by the function.
End of explanation
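Since 1 - 1/3 + 1/5 - 1/7 + ... converges to pi/4, four times the partial sum should approach math.pi as n grows:
import math
print("4*sum(100000) =", 4*sum(100000), "  math.pi =", math.pi)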
# Exercise 1
def star():
name = input("please enter your name:")
date = input("please enter your date of birth:")
date=float(date)
if 3.21 <= date <= 4.19:
print(name,",你是非常有性格的白羊座!")
elif 4.20 <= date <= 5.20:
print(name,",你是非常有性格的金牛座!")
elif 5.21 <= date <= 6.21:
print(name,",你是非常有性格的双子座!")
elif 6.22 <= date <= 7.22:
print(name,",你是非常有性格的巨蟹座!")
elif 7.23 <= date <= 8.22:
print(name,",你是非常有性格的狮子座!")
elif 8.23 <= date <= 9.22:
print(name,",你是非常有性格的处女座!")
elif 9.23 <= date <= 10.23:
print(name,",你是非常有性格的天秤座!")
elif 10.24 <= date <= 11.22:
print(name,",你是非常有性格的天蝎座!")
elif 11.23 <= date <= 12.21:
print(name,",你是非常有性格的射手座!")
elif 1.20 <= date <= 2.18:
print(name,",你是非常有性格的水瓶座!")
elif 2.19 <= date <= 3.20:
print(name,",你是非常有性格的双鱼座!")
else:
print(name,",你是非常有性格的摩羯座!")
star()
# Exercise 4
def conversion():
word = input("please enter a word:")
if word.endswith("x"):
print(word,"es",sep = "")
elif word.endswith("sh"):
print(word,"es",sep = "")
else:
print(word,"s",sep = "")
conversion()
Explanation: Exercise 3: Rewrite Exercises 1 and 4 from task3 as functions and call them.
End of explanation
def sum():
i = m
total = 0
while i <= n:
total += i
i += k
return total
m = int(input("please enter an integer:"))
n = int(input("please enter a biger integer:"))
k = int(input("please enter interval between two numbers:"))
print("最终的和是:",sum())
Explanation: Challenge exercise: Write a program that sums the integers from m to n with step k. The summation must be implemented as a function, and the main program should read m, n and k from the user and call the function to verify that it is correct.
End of explanation |
4,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The focus of this notebook is refactoring a loop that
- gets user input
- quits if that input matches some sentinel value
- processes the user input
The interesting part starts around cell #4.
Step1: Below is a typical loop for
- getting user input
- quitting the loop if the user enters a special value
- processing the input
Step2: It works as shown below.
Step3: Below is a different way of writing that loop.
How would you apply it to the loop at the bottom of
2016-04/2016-Apr-Gutenberg.py?
Step4: It can be reduced to a generator expression.
Step5: 2017-10-06 More thoughts about partial(input, prompt) and alternatives to it. | Python Code:
from functools import partial
def convert(s):
converters = (int, float)
for converter in converters:
try:
value = converter(s)
except ValueError:
pass
else:
return value
return s
def process_input(s):
value = convert(s)
print('%r becomes %r' % (s, value))
Explanation: The focus of this notebook is refactoring a loop that
- gets user input
- quits if that input matches some sentinel value
- processes the user input
The interesting part starts around cell #4.
End of explanation
def main():
prompt = 'gimme: '
while True:
s = input(prompt)
if s == 'quit':
break
process_input(s)
Explanation: Below is a typical loop for
- getting user input
- quitting the loop if the user enters a special value
- processing the input
End of explanation
main()
Explanation: It works as shown below.
End of explanation
def main():
prompt = 'gimme: '
for s in iter(partial(input, prompt), 'quit'):
process_input(s)
main()
Explanation: Below is a different way of writing that loop.
How would you apply it to the loop at the bottom of
2016-04/2016-Apr-Gutenberg.py?
End of explanation
prompt = 'gimme: '
get_values = (convert(s) for s in iter(partial(input, prompt), 'quit'))
for value in get_values:
print(value)
Explanation: It can be reduced to a generator expression.
End of explanation
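The same two-argument iter() trick works with any zero-argument callable and sentinel, not just input(). A small sketch that runs without any typing, popping canned values from a list:
items = ['1', '2.5', 'hello', 'quit', 'never reached']
for s in iter(partial(items.pop, 0), 'quit'):
    process_input(s)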
prompt = 'gimme: '
def get_input():
return input(prompt)
def main():
for s in iter(get_input, 'quit'):
process_input(s)
main()
def main():
prompt = 'gimme: '
for s in iter(lambda : input(prompt), 'quit'):
process_input(s)
main()
def my_partial(function, *args, **kwargs):
def helper():
return function(*args, **kwargs)
return helper
def main():
prompt = 'gimme: '
for s in iter(my_partial(input, prompt), 'quit'):
process_input(s)
main()
Explanation: 2017-10-06 More thoughts about partial(input, prompt) and alternatives to it.
End of explanation |
4,231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Set up data
We're working with the movielens data, which contains one rating per row, like this
Step1: Just for display purposes, let's read in the movie names too.
Step2: We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.
Step3: This is the number of latent factors in each embedding.
Step4: Randomly split into training and validation.
Step5: Create subset for Excel
We create a crosstab of the most popular movies and most movie-addicted users which we'll copy into Excel for creating a simple example. This isn't necessary for any of the modeling below however.
Step6: Dot product
The most basic model is a dot product of a movie embedding and a user embedding. Let's see how well that works
Step7: The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well...
Bias
The problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. We can add that easily by simply creating an embedding with one output for each movie and each user, and adding it to our output.
Step8: This result is quite a bit better than the best benchmarks that we could find with a quick google search - so looks like a great approach!
Step9: We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.
Step10: Analyze results
To make the analysis of the factors more interesting, we'll restrict it to the top 2000 most popular movies.
Step11: First, we'll look at the movie bias term. We create a 'model' - which in keras is simply a way of associating one or more inputs with one or more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).
Step12: Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.
Step13: We can now do the same thing for the embeddings.
Step14: Because it's hard to interpret 50 embeddings, we use PCA to simplify them down to just 3 vectors.
Step15: Here's the 1st component. It seems to be 'critically acclaimed' or 'classic'.
Step16: The 2nd is 'hollywood blockbuster'.
Step17: The 3rd is 'violent vs happy'.
Step18: We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
Step19: Neural net
Rather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net. | Python Code:
ratings = pd.read_csv(path+'ratings.csv')
ratings.head()
len(ratings)
Explanation: Set up data
We're working with the movielens data, which contains one rating per row, like this:
End of explanation
movie_names = pd.read_csv(path+'movies.csv').set_index('movieId')['title'].to_dict()
users = ratings.userId.unique()
movies = ratings.movieId.unique()
userid2idx = {o:i for i,o in enumerate(users)}
movieid2idx = {o:i for i,o in enumerate(movies)}
Explanation: Just for display purposes, let's read in the movie names too.
End of explanation
ratings.movieId = ratings.movieId.apply(lambda x: movieid2idx[x])
ratings.userId = ratings.userId.apply(lambda x: userid2idx[x])
user_min, user_max, movie_min, movie_max = (ratings.userId.min(),
ratings.userId.max(), ratings.movieId.min(), ratings.movieId.max())
user_min, user_max, movie_min, movie_max
n_users = ratings.userId.nunique()
n_movies = ratings.movieId.nunique()
n_users, n_movies
Explanation: We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.
End of explanation
n_factors = 50
np.random.seed(42)
Explanation: This is the number of latent factors in each embedding.
End of explanation
msk = np.random.rand(len(ratings)) < 0.8
trn = ratings[msk]
val = ratings[~msk]
Explanation: Randomly split into training and validation.
End of explanation
g=ratings.groupby('userId')['rating'].count()
topUsers=g.sort_values(ascending=False)[:15]
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:15]
top_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')
top_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')
pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)
Explanation: Create subset for Excel
We create a crosstab of the most popular movies and most movie-addicted users which we'll copy into Excel for creating a simple example. This isn't necessary for any of the modeling below however.
End of explanation
user_in = Input(shape=(1,), dtype='int64', name='user_in')
u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-4))(user_in)
movie_in = Input(shape=(1,), dtype='int64', name='movie_in')
m = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-4))(movie_in)
x = merge([u, m], mode='dot')
x = Flatten()(x)
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.summary()
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=3, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6,verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
Explanation: Dot product
The most basic model is a dot product of a movie embedding and a user embedding. Let's see how well that works:
End of explanation
def embedding_input(name, n_in, n_out, reg):
inp = Input(shape=(1,), dtype='int64', name=name)
return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp)
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
def create_bias(inp, n_in):
x = Embedding(n_in, 1, input_length=1)(inp)
return Flatten()(x)
ub = create_bias(user_in, n_users)
mb = create_bias(movie_in, n_movies)
x = merge([u, m], mode='dot')
x = Flatten()(x)
x = merge([x, ub], mode='sum')
x = merge([x, mb], mode='sum')
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=10, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=5, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
Explanation: The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well...
Bias
The problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. We can add that easily by simply creating an embedding with one output for each movie and each user, and adding it to our output.
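For intuition, the prediction this bias model makes for one (user, movie) pair is just the dot product of the two embeddings plus the two bias terms; a rough numpy sketch (the names are illustrative, not part of the Keras model above):

```python
import numpy as np

def predict_rating(user_vec, movie_vec, user_bias, movie_bias):
    # embedding dot product plus per-user and per-movie bias
    return float(np.dot(user_vec, movie_vec) + user_bias + movie_bias)
```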
End of explanation
model.save_weights(model_path+'bias.h5')
model.load_weights(model_path+'bias.h5')
Explanation: This result is quite a bit better than the best benchmarks that we could find with a quick google search - so looks like a great approach!
End of explanation
model.predict([np.array([3]), np.array([6])])
Explanation: We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.
End of explanation
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:2000]
topMovies = np.array(topMovies.index)
Explanation: Analyze results
To make the analysis of the factors more interesting, we'll restrict it to the top 2000 most popular movies.
End of explanation
get_movie_bias = Model(movie_in, mb)
movie_bias = get_movie_bias.predict(topMovies)
movie_ratings = [(b[0], movie_names[movies[i]]) for i,b in zip(topMovies,movie_bias)]
Explanation: First, we'll look at the movie bias term. We create a 'model' - which in keras is simply a way of associating one or more inputs with one or more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).
End of explanation
sorted(movie_ratings, key=itemgetter(0))[:15]
sorted(movie_ratings, key=itemgetter(0), reverse=True)[:15]
Explanation: Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.
End of explanation
get_movie_emb = Model(movie_in, m)
movie_emb = np.squeeze(get_movie_emb.predict([topMovies]))
movie_emb.shape
Explanation: We can now do the same thing for the embeddings.
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
movie_pca = pca.fit(movie_emb.T).components_
fac0 = movie_pca[0]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac0, topMovies)]
Explanation: Because it's hard to interpret 50 embeddings, we use PCA to simplify them down to just 3 vectors.
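As an optional sanity check, the pca object fitted above also reports how much of the embedding variance the three components keep:

```python
print(pca.explained_variance_ratio_)        # per-component share of variance
print(pca.explained_variance_ratio_.sum())  # total variance retained by the 3 components
```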
End of explanation
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac1 = movie_pca[1]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac1, topMovies)]
Explanation: Here's the 1st component. It seems to be 'critically acclaimed' or 'classic'.
End of explanation
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac2 = movie_pca[2]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac2, topMovies)]
Explanation: The 2nd is 'hollywood blockbuster'.
End of explanation
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
Explanation: The 3rd is 'violent vs happy'.
End of explanation
import sys
stdout, stderr = sys.stdout, sys.stderr # save notebook stdout and stderr
reload(sys)
sys.setdefaultencoding('utf-8')
sys.stdout, sys.stderr = stdout, stderr # restore notebook stdout and stderr
start=50; end=100
X = fac0[start:end]
Y = fac2[start:end]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(topMovies[start:end], X, Y):
plt.text(x,y,movie_names[movies[i]], color=np.random.rand(3)*0.7, fontsize=14)
plt.show()
Explanation: We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
End of explanation
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
x = merge([u, m], mode='concat')
x = Flatten()(x)
x = Dropout(0.3)(x)
x = Dense(70, activation='relu')(x)
x = Dropout(0.75)(x)
x = Dense(1)(x)
nn = Model([user_in, movie_in], x)
nn.compile(Adam(0.001), loss='mse')
nn.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=8, verbose=2,
validation_data=([val.userId, val.movieId], val.rating))
Explanation: Neural net
Rather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net.
End of explanation |
4,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with HPE IMC API for Custom Views
In this notebook, we will be covering the basics of using the pyhpimc python module to access the RESTFUL interface ( eAPI ) of the HPE IMC Network Management Server.
The python library is currently available at HPE Github repository
Import Required Modules from pyhpeimc
Before we get started, we must import the required libraries into the ipython interpreter . For this notebook, you will require the Auth and Operator modules from the pyhpimc library, as well as the standard csv module.
Step1: Input IMC Credentials
In this section, we will first create a authentication object which will contain
the protocol
ip address
port number
username
password
If you are running this on your local network, please change the appropriate values in the following field to connect to your local IMC server.
Step2: Create Print View List Helper Function
This function will gather the current list of custom views from the HPE IMC NMS and print them out to the screen.
Step3: Display the current views
This section will count the total number of existing views as well as display the first view returned.
Notice the 'upLevelSymbolId" is 3 for this custom view. The three designates that this is a level one custom view, meaning that it's at the root of the custom view tree.
Step4: Create New Custom View
In this step, we will create two custom views Canada and Alberta using the create_custom_views() function.
For this example, we will create
- the Canada view which is a L1 custom view
- the Alberta custom view which will be a child of the Canada Custom view.
- the Calgary custom view which will be a child of the Alberta Custom View
|Name| Upperview |
|:-----|-----|
|Canada| |
|Alberta| Canada |
|Calgary | Alberta |
Step5: Canada View
Notice that the upLevelSymbolId for this view is the same as the default My Network View shown above. We can see that this is a L1 custom view.
Step6: Alberta View
Notice that upLevelSymbolId for this view is equal to the symbolId of the Canada view shown in the previous example. we can see that this is a child of the Canada Custom View.
Step7: Calgary View
Notice that upLevelSymbolId for this view is equal to the symbolId of the Alberta view shown in the previous example. we can see that this is a child of the Canada Custom View.
Step8: Deleting Custom Views
In this section we will delete individual custom views.
Step9: Display Contents of Custom Views.csv
We have prepared a CSV file which contains the new custom views that we wish to create. This can contain both parent and child custom views.
| name | upperview |
| ------ | ----------- |
| Branches | |
| Branch1 | Branches |
| Branch2 | Branches |
| WAN | |
Step11: Create Import Custom Views Function
Here we create a new function which will take a CSV file shown above as an input to the create_custom_views function
Step12: Import Custom Views from CSV File
Step14: Cleaning up After Ourselves | Python Code:
import csv
import time
from pyhpeimc.auth import *
from pyhpeimc.plat.groups import *
from pyhpeimc.version import *
2+34
Explanation: Working with HPE IMC API for Custom Views
In this notebook, we will be covering the basics of using the pyhpimc python module to access the RESTFUL interface ( eAPI ) of the HPE IMC Network Management Server.
The python library is currently available at HPE Github repository
Import Required Modules from pyhpeimc
Before we get started, we must import the required libraries into the IPython interpreter. For this notebook, you will require the Auth and Groups modules from the pyhpeimc library, as well as the standard csv module.
End of explanation
auth = IMCAuth("http://", "10.101.0.203", "8080", "admin", "admin")
Explanation: Input IMC Credentials
In this section, we will first create a authentication object which will contain
the protocol
ip address
port number
username
password
If you are running this on your local network, please change the appropriate values in the following field to connect to your local IMC server.
End of explanation
def print_views():
views_list = get_custom_views(url=auth.url, auth=auth.creds)
print ("There are a total of " + str(len(views_list)) + " views currently")
for view in views_list:
print (view['name'])
print (json.dumps(views_list[0], indent = 4))
Explanation: Create Print View List Helper Function
This function will gather the current list of custom views from the HPE IMC NMS and print them out to the screen.
End of explanation
print_views()
Explanation: Display the current views
This section will count the total number of existing views as well as display the first view returned.
Notice the 'upLevelSymbolId' is 3 for this custom view. The value 3 indicates that this is a level-one custom view, meaning that it's at the root of the custom view tree.
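As an optional illustration, the level-one views can be picked out of the full list by that field (a sketch; it assumes the root symbol id is 3, as observed in this environment):

```python
views_list = get_custom_views(url=auth.url, auth=auth.creds)
l1_views = [v for v in views_list if str(v.get('upLevelSymbolId')) == '3']
print("Level-1 custom views:", [v['name'] for v in l1_views])
```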
End of explanation
create_custom_views(auth=auth.creds, url=auth.url, name="Canada")
create_custom_views(auth=auth.creds, url=auth.url, name="Alberta",upperview='Canada')
create_custom_views(auth=auth.creds, url=auth.url, name="Calgary",upperview='Alberta')
print_views()
Explanation: Create New Custom View
In this step, we will create two custom views Canada and Alberta using the create_custom_views() function.
For this example, we will create
- the Canada view which is a L1 custom view
- the Alberta custom view which will be a child of the Canada Custom view.
- the Calgary custom view which will be a child of the Alberta Custom View
|Name| Upperview |
|:-----|-----|
|Canada| |
|Alberta| Canada |
|Calgary | Alberta |
End of explanation
get_custom_views(url=auth.url, auth=auth.creds, name="Canada")
Explanation: Canada View
Notice that the upLevelSymbolId for this view is the same as the default My Network View shown above. We can see that this is a L1 custom view.
End of explanation
get_custom_views(url=auth.url, auth=auth.creds, name="Alberta")
Explanation: Alberta View
Notice that upLevelSymbolId for this view is equal to the symbolId of the Canada view shown in the previous example. we can see that this is a child of the Canada Custom View.
End of explanation
get_custom_views(url=auth.url, auth=auth.creds, name="Calgary")
Explanation: Calgary View
Notice that upLevelSymbolId for this view is equal to the symbolId of the Alberta view shown in the previous example. we can see that this is a child of the Canada Custom View.
End of explanation
delete_custom_view(url=auth.url, auth=auth.creds, name='Canada')
delete_custom_view(url=auth.url, auth=auth.creds, name='Alberta')
delete_custom_view(url=auth.url, auth=auth.creds, name='Calgary')
print_views()
Explanation: Deleting Custom Views
In this section we will delete individual custom views.
End of explanation
with open('custom_views.csv') as f:
s = f.read()
print (s)
Explanation: Display Contents of Custom Views.csv
We have prepared a CSV file which contains the new custom views that we wish to create. This can contain both parent and child custom views.
| name | upperview |
| ------ | ----------- |
| Branches | |
| Branch1 | Branches |
| Branch2 | Branches |
| WAN | |
End of explanation
def import_custom_views(filename):
Function which takes in a csv files as input to the create_custom_views function from the pyhpimc python module
available at https://github.com/HPNetworking/HP-Intelligent-Management-Center
:param filename: user-defined filename which contains two column named "name" and "upperview" as input into the
create_custom_views function from the pyhpimc module.
:return: returns output of the create_custom_views function (str) for each item in the CSV file.
with open(filename) as csvfile:
# decodes file as csv as a python dictionary
reader = csv.DictReader(csvfile)
for view in reader:
# loads each row of the CSV as a JSON string
name = view['name']
upperview = view['upperview']
if len(upperview) == 0:
upperview = None
create_custom_views(auth=auth.creds, url=auth.url,name=name,upperview=upperview)
Explanation: Create Import Custom Views Function
Here we create a new function which will take a CSV file shown above as an input to the create_custom_views function
End of explanation
start_time = time.time()
import_custom_views('custom_views.csv')
print("--- %s seconds ---" % (time.time() - start_time))
views_list = get_custom_views(url=auth.url, auth=auth.creds)
print_views()
print ("There are a total of " + str(len(views_list)) + " views currently")
Explanation: Import Custom Views from CSV File
End of explanation
def delete_custom_views_csv(filename):
Function which takes in a csv files as input to the delete_custom_view function from the pyhpeimc python module
available at https://github.com/HPENetworking/HP-Intelligent-Management-Center
:param filename: user-defined filename which contains two column named "name" and "upperview" as input into the
create_custom_views function from the pyhpimc module.
:return: returns output of the delete_custom_view function (str) for each item in the CSV file.
with open(filename) as csvfile:
# decodes file as csv as a python dictionary
reader = csv.DictReader(csvfile)
for view in reader:
# loads each row of the CSV as a JSON string
name = view['name']
delete_custom_view(url=auth.url, auth=auth.creds, name=name)
start_time = time.time()
delete_custom_views_csv('custom_views.csv')
print("--- %s seconds ---" % (time.time() - start_time))
Explanation: Cleaning up After Ourselves
End of explanation |
4,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's play iPython and BASH a bit
count number of paths in $PATH
Step1: which is the same as the following command in BASH shell
Step2: change the language environment
Step3: look for files
Step4: where [!aeiou] means
Step5: regular expression
Step6: the above BASH command is the same as the following code in Python
Step7: get the line which is not started by any capital or lower-case alphabets
Step8: get the line which is ended by '!'
Step9: get the line which is ended by '.'
Step10: using 'ooo*' will lead to the same result as using 'oo'
Step11: -E flag has to be added if one want to use expressions such as + or |.
Step12: Remark
?
Step13: learn a bit of os.walk()
Step14: bash
chweng@chweng-VirtualBox
Step15: the "top" command
http
Step16: In BASH, a line is executed successfully if the exit status is 0.
Step17: The above result is wrong. It's necessary to wrap == with spaces, as follows
Step18: the command test can be replaced by its synonym [ ]
Step19: example
Step20: example
Step21: See if an integer A is greater than or equal to another integer B
Step22: alternatively, one can write it like this (with the help of (( )))
Step23: arithmetic calculations are enclosed by (())
Step24: Another script which does exactly the same thing
Step25: therefore, it is an arithmetic shift in the above script.
17112016
local variables can be declared as readonly
Step26: switch case
Step27: in the above example, \$(pwd) can be replaced by $PWD if the env variable PWD exists
the "for each" loop
Step28: another way to create a loop, as what people normally do in Java
Step29: review
Step30: while loop
Step31: break & continue
Step32: function
Step33: bash
chweng@chweng-VirtualBox
Step34: list open files
Step35: library trace
Step36: awk
Step37: ```bash
1. ls -al /etc | awk '$1 ~ /^d/ {print "dir: ",$9}'
```
Step38: select ipv4's ip
Step39: bash
chweng@VirtualBox
Step40: bash
chweng@ubuntu221
Step41: bash
strace -c -f java tw.loop.TestDiceThrowEx1
sort
Step42: ```bash
input data from nbafile file
awk '$3 == 82 {print $1," \t",$5}' nbafile
awk '$3 < 78' nbafile
awk '$2 ~ /c.*l/' nbafile
awk '$1 ~ /^s/ && $4 > 80 {print $1 "\t\t" $4}' nbafile
``` | Python Code:
path=!echo $PATH
print path
path[0].split(":")
print len(path[0].split(":"))
Explanation: Let's play iPython and BASH a bit
count number of paths in $PATH:
End of explanation
!echo $PATH|tr ":" " "|wc -w
Explanation: which is the same as the following command in BASH shell:
End of explanation
!locale
!export LANG='en_US.UTF-8'
!locale
Explanation: change the language environment
End of explanation
!ls -ld /etc/p*
!ls -ld /etc/p* | wc -l
!ls -ld /etc/p????
!ls -ld /etc/p???? | wc -l
!ls -ld /etc/p[aeiou]*
!ls -ld /etc/p[aeiou]* | wc -l
!ls -ld /etc/p[!aeiou]*
!ls -ld /etc/p[^aeiou]*
Explanation: look for files:
the -l flag of "ls" gives a long listing, and -d lists the directory entry itself rather than its contents.
End of explanation
!touch d{m,n,o}t
!ls
Explanation: where [!aeiou] means: not a or e or i or u.
Indeed, in this case, the regular expression [^aeiou] also works.
End of explanation
import re
!wget http://linux.vbird.org/linux_basic/0330regularex/regular_express.txt
!cat -n regular_express.txt
!cat regular_express.txt |wc -l
!grep -n 'the' regular_express.txt
Explanation: regular expression
End of explanation
pattern = re.compile('the')
for line in open("regular_express.txt", "r"):
if pattern.search(line) is not None:
print line
!grep -nv 'the' regular_express.txt
!grep -ni 'the' regular_express.txt
!grep -n 'air' regular_express.txt
!grep -ni 't[ae]st' regular_express.txt
!grep -n 't[ae]st' regular_express.txt
!grep -n '[^g]oo' regular_express.txt
!grep -n '[[:digit:]]' regular_express.txt
!grep -n '[[:lower:]]' regular_express.txt
Explanation: the above BASH command is the same as the following code in Python:
End of explanation
!grep -n '^[^A-Za-z]' regular_express.txt
Explanation: get the line which is not started by any capital or lower-case alphabets:
End of explanation
!grep -n '!$' regular_express.txt
Explanation: get the line which is ended by '!':
End of explanation
!grep -n '.$' regular_express.txt
!grep -n 's.c' regular_express.txt
!grep -n 's[a-zA-Z]c' regular_express.txt
!grep -n 'oo' regular_express.txt
!grep -n 'ooo*' regular_express.txt
!grep -n 'g*g' regular_express.txt
Explanation: get the line which is ended by '.' :
End of explanation
!grep -nE 'goo+g' regular_express.txt
!ps -el |grep -E '^[0-9]+ R'
Explanation: using 'ooo*' will lead to the same result as using 'oo'
End of explanation
!dpkg -L iproute2 | grep -E '/bin|/sbin'
!dpkg -L iproute2 | grep -E '/bin|/sbin' | wc -l
!dpkg -L iproute2 | grep -E '/s?bin' | wc -l
Explanation: The -E flag has to be added if one wants to use extended expressions such as + or |.
End of explanation
f=open("regular_express.txt", "r")
file=f.read()
print repr(file)
for line in open("regular_express.txt", "r"):
print line
os.mkdir("tmp")
os.listdir(os.getcwd())
ls
os.removedirs("tmp")
os.listdir(".")
Explanation: Remark
?: showing once or 0 times
*: showing for any times (including 0 times)
+: showing at least once
Remark:
-i, --ignore-case
-v --invert-match
-n, --line-number
-E: extended
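A quick illustration of the three quantifiers against the same test file used above (a sketch):

```python
!grep -nE 'go?d' regular_express.txt   # 'o' appears zero or one time
!grep -nE 'go*d' regular_express.txt   # 'o' appears zero or more times
!grep -nE 'go+d' regular_express.txt   # 'o' appears one or more times
```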
End of explanation
for root,dirs,files in os.walk(os.getcwd()):
print root,dirs,files
print
for root, dirs, files in os.walk(os.getcwd()):
for file in files:
print os.path.join(root,file)
for root, dirs, files in os.walk(os.getcwd()):
for file in files:
if file.endswith('.txt'):
print file
!whereis regex
!find / -name 'ifconfig'
!find -type f -user chweng -name '*.txt'
!find -type d -user chweng -name '0*'
!sudo fdisk -l | grep -nE "^Disk /dev/[hs]d"
!sudo find /etc -type f | wc -l
!du -hl /etc/
Explanation: learn a bit of os.walk():
End of explanation
!ps -el
Explanation: bash
chweng@chweng-VirtualBox:~$ jobs
[1]+ Running sleep 100 &
chweng@chweng-VirtualBox:~$ fg 1 # move process 1 to the foreground
sleep 100
then, we can press ctrl+z, which will move the process to the background and pause it.
then, type "bg 1" in order to start it again in the background
process
flags:
-e:all processes
-l:state
-f: also print UID(user ID) and PPID(parent process ID)
End of explanation
%%bash
var=12345
echo "The length of var1=$var is ${#var}."
%%bash
set $(eval du -sh ~$user);dir_sz=$1
echo "$1,$2"
echo "${dir_sz}"
echo ""
set $(eval df -h|grep " /$");fs_sz=$2
echo "$1,$2"
echo "${fs_sz}"
%%bash
set $(eval du -sh ~$user);dir_sz=$1
set $(eval df -h|grep " /$");fs_sz=$2
echo "Size of my home directory is ${dir_sz}."
echo "Size of my file system size is ${fs_sz}."
Explanation: the "top" command
http://mugurel.sumanariu.ro/linux/the-difference-among-virt-res-and-shr-in-top-output/
RES stands for the resident size, which is an accurate representation of how much actual physical memory a process is consuming. (This also corresponds directly to the %MEM column.) This will virtually always be less than the VIRT size, since most programs depend on the C library.
SHR indicates how much of the VIRT size is actually sharable (memory or libraries). In the case of libraries, it does not necessarily mean that the entire library is resident. For example, if a program only uses a few functions in a library, the whole library is mapped and will be counted in VIRT and SHR, but only the parts of the library file containing the functions being used will actually be loaded in and be counted under RES.
use "top -o RES" to sort by resident size(RES)
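A non-interactive way to see the same resident-size ordering is ps (a sketch; the rss and vsz columns are reported in KiB):

```python
!ps -eo pid,comm,rss,vsz --sort=-rss | head -5
```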
16.11.2016
The following cell is the content of the script testPS.sh:
```BASH
!/bin/bash
ps -f
read
```
Now, we execute it:
```BASH
chweng@chweng-VirtualBox:~/code/exercises$ ./testPS.sh
UID PID PPID C STIME TTY TIME CMD
chweng 9960 2374 0 16:26 pts/31 00:00:00 bash
chweng 9974 9960 0 16:26 pts/31 00:00:00 /bin/bash ./testPS.sh
chweng 9975 9974 0 16:26 pts/31 00:00:00 ps -f
chweng@chweng-VirtualBox:~/code/exercises$ source testPS.sh
UID PID PPID C STIME TTY TIME CMD
chweng 9960 2374 0 16:26 pts/31 00:00:00 bash
chweng 9980 9960 0 16:26 pts/31 00:00:00 ps -f
```
```BASH
chweng@chweng-VirtualBox:~$ var1=1111
chweng@chweng-VirtualBox:~$ var2=3333
chweng@chweng-VirtualBox:~$ echo "${var1}222"
1111222
chweng@chweng-VirtualBox:~$ set | grep "var1"
var1=1111
chweng@chweng-VirtualBox:~$ set | grep "var2"
var2=3333
```
BASH
chweng@chweng-VirtualBox:$ export var1
chweng@chweng-VirtualBox:$ bash
chweng@chweng-VirtualBox:$ echo $var1
1111
chweng@chweng-VirtualBox:$ echo $var2
Some examples that uses BASH variables:
End of explanation
%%bash
:
echo "exit status=$?"
%%bash
ls /dhuoewyr242q
echo "exit status=$?"
%%bash
true
echo "exit status=$?"
%%bash
false
echo "exit status=$?"
%%bash
value=123
test $value=="123"
echo "exit status=$?"
echo""
test $value=="456"
echo "exit status=$?"
Explanation: In BASH, a line is executed successfully if the exit status is 0.
End of explanation
%%bash
value=123
test $value == "123"
echo "exit status=$?"
echo""
test $value == "456"
echo "exit status=$?"
Explanation: The above result is wrong. It's necessary to wrap == with spaces, as follows:
End of explanation
%%bash
help [
%%bash
value=123
[ $value == "123" ]
echo "exit status=$?"
echo""
[ $value == "456" ]
echo "exit status=$?"
%%bash
/usr/bin/[ 0 == 1 ]
echo "exit status=$?"
Explanation: the command test can be replaced by its synonym [ ]:
End of explanation
%%bash
#!/bin/bash
# using [ and [[
file=/etc/passwd
if [[ -e $file ]]
then
echo "Password file exists."
fi
# [[ Octal and hexadecimal evaluation ]]
# Thank you, Moritz Gronbach, for pointing this out.
decimal=15
octal=017 # = 15 (decimal)
hex=0x0f # = 15 (decimal)
if [ "$decimal" -eq "$octal" ]
then
echo "$decimal equals $octal"
else
echo "$decimal is not equal to $octal" # 15 is not equal to 017
fi # Doesn't evaluate within [ single brackets ]!
if [[ "$decimal" -eq "$octal" ]]
then
echo "$decimal equals $octal" # 15 equals 017
else
echo "$decimal is not equal to $octal"
fi # Evaluates within [[ double brackets ]]!
if [[ "$decimal" -eq "$hex" ]]
then
echo "$decimal equals $hex" # 15 equals 0x0f
else
echo "$decimal is not equal to $hex"
fi # [[ $hexadecimal ]] also evaluates!
Explanation: example: ex4-4.sh:
use the advanced-test:[[]]
to compare different integer strings in different forms (it could be that one in decimal and another in octal format)
End of explanation
!mkdir /home/chweng/a
!touch /home/chweng/a/123.txt
%%bash
#!/bin/bash
# using file test operator
DEST="~/b"
SRC="~/a"
# Make sure backup dir exits
if [ ! -d $DEST ]
then
mkdir -p $DEST
fi
# If source directory does not exits, die...
if [ ! -d $SRC ]
then
echo "$SRC directory not found. Cannot make backup to $DEST"
exit 1
fi
# Okay, dump backup using tar
echo "Backup directory $DEST..."
echo "Source directory $SRC..."
/bin/tar -Jcf $DEST/backup.tar.xz $SRC 2>/dev/null
# Find out if backup failed or not
if [ $? -eq 0 ]
then
echo "Backup done!"
else
echo "Backup failed"
fi
Explanation: example: ex4-4.sh:
End of explanation
%%bash
i=5
if [ $i -ge 0 ];then echo "$i >= 0";fi
Explanation: See if an integer A is greater than or equal to another integer B:
End of explanation
%%bash
i=5
if (($i >= 0));then echo "$i >= 0";fi
%%bash
i=5
if [ $i >= 0 ];then echo "$i >= 0";fi   # caution: inside [ ], '>' is parsed as redirection, not a numeric comparison; use -ge or (( )) instead
%%bash
i=05
if [ $i -ge 0 ];then echo "$i >= 0";fi
Explanation: alternatively, one can write it like this (with the help of (( )))
End of explanation
%%bash
echo $((7**2))
%%bash
echo $((7%3))
%%bash
#!/bin/bash
# calculate the available % of disk space
echo "Current Mount Points:"
mount | grep -E 'ext[234]|xfs' | cut -f 3 -d ' '
#read -p "Enter a Mount Point: " mntpnt
mntpnt="/home"
sizekb=$(df $mntpnt | tail -1 | tr -s ' ' | cut -f 2 -d ' ')
availkb=$(df $mntpnt | tail -1 | tr -s ' ' | cut -f 4 -d ' ')
availpct=$(echo "scale=4; $availkb/$sizekb * 100" | bc)
printf "There is %5.2f%% available in %s\n" $availpct $mntpnt
exit 0
Explanation: arithmetic calculations are enclosed by (()):
End of explanation
%%bash
#!/bin/bash
# calculate the available % of disk space
echo "Current Mount Points:"
mount | egrep 'ext[234]|xfs' | cut -f 3 -d ' ' # -f: field; -d:delimiter
#read -p "Enter a Mount Point: " mntpnt
mntpnt="/home"
df_out="$(df $mntpnt | tail -1)"
set $df_out
availpct=$(echo "scale=4; ${4}/${2} * 100" | bc)
printf "There is %5.2f%% available in %s\n" $availpct $mntpnt
exit 0
%%bash
#!/bin/bash
# shift left is double, shift right is half
declare -i number
#read -p "Enter a number: " number
number=-4
echo " Double $number is: $((number << 1))"
echo " Half of $number is: $((number >> 1))"
exit 0
Explanation: Another script which does exactly the same thing:
End of explanation
%%bash
#!/bin/bash
# declare constants variables
readonly DATA=/home/sales/data/feb09.dat
echo $DATA
echo
DATA=/tmp/foo
# Error ... readonly variable
echo $DATA
exit 0
Explanation: therefore, it is an arithmetic shift in the above script.
17112016
local variables can be declared as readonly
End of explanation
%%bash
#!/bin/bash
# Testing ranges of characters.
Keypress=5
#echo; echo "Hit a key, then hit return."
#read Keypress
case "$Keypress" in
[[:lower:]] ) echo "Lowercase letter";;
[[:upper:]] ) echo "Uppercase letter";;
[0-9] ) echo "Digit";;
* ) echo "Punctuation, whitespace, or other";;
esac # Allows ranges of characters in [square brackets],
#+ or POSIX ranges in [[double square brackets.
%%bash
#!/bin/bash
# menu case
echo -n "
Menu of available commands:
=================================
1. full directory listing
2. display current directory name
3. display the date
q. quit
=================================
Select a number from the list: "
#read answer
answer=2
case "$answer" in
q*|exit|bye ) echo "Quitting!" ; exit ;;
1) echo "The contents of the current directory:"
ls -al ;;
2) echo "The name of the current directory is $(pwd)" ;;
3) echo -n "The current date is: "
date +%m/%d/%Y ;;
*) echo "Only choices 1, 2, 3 or q are valid" ;;
esac
exit
Explanation: switch case
End of explanation
for planet in "Mercury 36" "Venus 67" "Earth 93" "Mars 142" "Jupiter 483"
%%bash
#!/bin/bash
# Planets revisited.
# Associate the name of each planet with its distance from the sun.
for planet in "Mercury 36" "Venus 67" "Earth 93" "Mars 142" "Jupiter 483"
do
set -- $planet # Parses variable "planet"
#+ and sets positional parameters.
# The "--" prevents nasty surprises if $planet is null or
#+ begins with a dash.
# May need to save original positional parameters,
#+ since they get overwritten.
# One way of doing this is to use an array,
# original_params=("$@")
echo "$1 $2,000,000 miles from the sun"
#-------two tabs---concatenate zeroes onto parameter $2
done
# (Thanks, S.C., for additional clarification.)
exit 0
Explanation: in the above example, \$(pwd) can be replaced by $PWD if the env variable PWD exists
the "for each" loop
End of explanation
%%bash
#!/bin/bash
#echo $1
#file=$1
cd /home/chweng/code/exercises/
file="hello.sh"
if [ -f $file ]
then
echo "the file $file exists"
fi
for((j=1;j<=5;j++))
do
echo $j, "Hello World"
done
Explanation: another way to create a loop, as people normally do in Java:
End of explanation
%%bash
#!/bin/bash
#echo $1
#file=$1
cd /home/chweng/code/exercises/
file="hello.sh"
if [[ -f $file && true ]]
then
echo "the file $file exists"
fi
for((j=1;j<=5;j++))
do
echo $j, "Hello World"
done
Explanation: review: when to use the enhanced-test [[ ]]?
A: when && or || operator is used
End of explanation
%%bash
#!/bin/bash
# increment number
# set n to 1
n=1
# continue until $n equals 5
while [ $n -le 5 ]
do
echo "Welcome $n times."
n=$(( n+1 )) # increments $n
done
%%bash
#!/bin/bash
# increment number
# set n to 1
n=1
# continue until $n equals 5
while (( n <= 5 ))
do
echo "Welcome $n times."
(( n++ )) # increments $n
done
%%bash
#!/bin/bash
# while can read data
ls -al | while read perms links owner group size mon day time file
do
[[ "$perms" != "total" && $size -gt 100 ]] && echo "$file $size"
done
exit
Explanation: while loop:
End of explanation
%%bash
#!/bin/bash
# break, continue usage
LIMIT=19 # Upper limit
echo
echo "Printing Numbers 1 through 20 (but not 3 and 11)."
a=0
while (( a <= LIMIT))
do
((a++))
if [[ "$a" -eq 3 || "$a" -eq 11 ]] # Excludes 3 and 11.
then
continue # Skip rest of this particular loop iteration.
fi
echo -n "$a " # This will not execute for 3 and 11.
done
# Exercise:
# Why does the loop print up to 20?
echo; echo
echo Printing Numbers 1 through 20, but something happens after 2.
##################################################################
# Same loop, but substituting 'break' for 'continue'.
a=0
while [ "$a" -le "$LIMIT" ]
do
a=$(($a+1))
if [ "$a" -gt 2 ]
then
break # Skip entire rest of loop.
fi
echo -n "$a "
done
exit 0
%%bash
#!/bin/bash
# The "continue N" command, continuing at the Nth level loop.
for outer in I II III IV V # outer loop
do
echo; echo -n "Group $outer: "
# --------------------------------------------------------------------
for inner in 1 2 3 4 5 6 7 8 9 10 # inner loop
do
if [[ "$inner" -eq 7 && "$outer" = "III" ]]
then
continue 2 # Continue at loop on 2nd level, that is "outer loop".
# Replace above line with a simple "continue"
# to see normal loop behavior.
fi
echo -n "$inner " # 7 8 9 10 will not echo on "Group III."
done
# --------------------------------------------------------------------
done
echo; echo
exit 0
%%bash
#!/bin/bash
# The "continue N" command, continuing at the Nth level loop.
for outer in I II III IV V # outer loop
do
echo; echo -n "Group $outer: "
# --------------------------------------------------------------------
for inner in 1 2 3 4 5 6 7 8 9 10 # inner loop
do
if [[ "$inner" -eq 7 && "$outer" = "III" ]]
then
break 2 # Continue at loop on 2nd level, that is "outer loop".
# Replace above line with a simple "continue"
# to see normal loop behavior.
fi
echo -n "$inner " # 7 8 9 10 will not echo on "Group III."
done
# --------------------------------------------------------------------
done
echo; echo
exit 0
Explanation: break & continue:
End of explanation
%%bash
#!/bin/bash
# Exercising functions (simple).
JUST_A_SECOND=1
funky ()
{ # This is about as simple as functions get.
echo "This is a funky function."
echo "Now exiting funky function."
} # Function declaration must precede call.
fun ()
{ # A somewhat more complex function.
i=0
REPEATS=5
echo
echo "And now the fun really begins."
echo
sleep $JUST_A_SECOND # Hey, wait a second!
while [ $i -lt $REPEATS ]
do
echo "----------FUNCTIONS---------->"
echo "<------------ARE-------------"
echo "<------------FUN------------>"
echo
((i++))
done
}
# Now, call the functions.
funky
fun
exit $?
Explanation: function
End of explanation
%%bash
#!/bin/bash
# Global and local variables inside a function.
func ()
{
local loc_var=23 # Declared as local variable.
echo # Uses the 'local' builtin.
echo "\"loc_var\" in function = $loc_var"
global_var=999 # Not declared as local.
# Therefore, defaults to global.
echo "\"global_var\" in function = $global_var"
}
func
# Now, to see if local variable "loc_var" exists outside the function.
echo
echo "\"loc_var\" outside function = $loc_var"
# $loc_var outside function =
# No, $loc_var not visible globally.
echo "\"global_var\" outside function = $global_var"
# $global_var outside function = 999
# $global_var is visible globally.
echo
exit 0
# In contrast to C, a Bash variable declared inside a function
#+ is local ONLY if declared as such.
%%bash
#!/bin/bash
# passing data to function
# DECLARE FUNCTIONS
shifter() # function to demonstrate parameter
# list management in a function
{
echo "$# parameters passed to $0"
while (( $# > 0 ))
do
echo "$*"
shift
done
}
# MAIN
#read -p "Please type a list of five words (then press Return): " varlist
varlist="i my me mine myself"
set $varlist # this creates positional parameters in the parent
shifter $* # call the function and pass argument list
echo "$# parameters in the parent "
echo "Parameters: $*"
exit
%%bash
#!/bin/bash
# Functions and parameters
DEFAULT=default # Default param value.
func2 () {
if [ -z "$1" ] # Is parameter #1 zero length?
then
echo "-Parameter #1 is zero length.-" # Or no parameter passed.
else
echo "-Parameter #1 is \"$1\".-"
fi
variable=${1-$DEFAULT} # What does
echo "variable = $variable" #+ parameter substitution show?
# ---------------------------
# It distinguishes between
#+ no param and a null param.
if [ "$2" ]
then
echo "-Parameter #2 is \"$2\".-"
fi
return 0
}
echo
echo "Nothing passed."
func2 # Called with no params
echo
echo "Zero-length parameter passed."
func2 "" # Called with zero-length param
echo
echo "Null parameter passed."
func2 "$uninitialized_param" # Called with uninitialized param
echo
echo "One parameter passed."
func2 first # Called with one param
echo
echo "Two parameters passed."
func2 first second # Called with two params
echo
echo "\"\" \"second\" passed."
func2 "" second # Called with zero-length first parameter
echo # and ASCII string as a second one.
exit 0
%%bash
#!/bin/bash
# using stdout passing data
# Declare function
addup() # function to add the number to itself
{
echo "$((numvar + numvar))"
}
# MAIN
while : # start infinite loop
do
clear # clear the screen
declare -i numvar=0 # declare integer variable
# read user input into variable(s)
echo; echo
#read -p "Please enter a number (0 = quit the script): " numvar otherwords
numvar=100
if (( numvar == 0 )) # test the user input
then
exit $numvar
else
result=$(addup) # call the function addup
# and get data from function
echo "$numvar + $numvar = $result"
#read -p "Press any key to continue..."
fi
break # this is added by myself because I'd like to print the output in the notebook
done
Explanation: bash
chweng@chweng-VirtualBox:~$ env |grep "PWD"
PWD=/home/chweng
chweng@chweng-VirtualBox:~$
chweng@chweng-VirtualBox:~$ cd Desktop/
chweng@chweng-VirtualBox:~/Desktop$ env |grep "PWD"
PWD=/home/chweng/Desktop
OLDPWD=/home/chweng
End of explanation
!lsof / |grep "/home/chweng/.ipython"
!strace -c find /etc -name "python*"
Explanation: list open files:
End of explanation
!ltrace -c find /etc -name "python*"
Explanation: library trace: trace library calls demanded by the specified process.
End of explanation
%%bash
#Example 1: Printing the First Field of the /etc/hosts File to stdout
#cat /etc/hosts | awk '{print $1}'
#Pipes data to awk
awk '{print "field one:\t" $1}' /etc/hosts
#Uses /etc/hosts as an input file
#'{print}' #Prints the current record
#'{print $0}' #Prints the current record (more specific)
#'{print $1}' #Prints the first field in the current record
#'{print "field one:" $1}' #Prints some text before field 1
#'{print "field one:\t" $1}' #Prints some text, a tab, then field 1
#'{print "field three:" $3; print $1}' #Prints fields on two lines in
!awk ' { print "" ; print $0 }' ~/code/module08
Explanation: awk:
End of explanation
!ls -al /home/chweng
!ls -al /home/chweng/ | awk '/^d/ {print "dir: ",$9}'
!mount |grep "ext"
!mount | awk '/ ext[234] / {print "device: ",$1,"\tmount point: " $3}'
!mount | awk '/ ext[234] / {print "device: %5s \tmount point: %5s",$1, $3}'
!ip addr show
Explanation: ```bash
1. ls -al /etc | awk '$1 ~ /^d/ {print "dir: ",$9}'
ll /etc | awk '$1 ~ /^d/ && $9 !~ /^./ {print "dir: ",$9}'
ll /sbin |awk '/^-/ && $2 > 1 {print "file:",$9 "\t links: ",$2}'
cat /etc/services | awk '$1 == "ssh" {print "service: ",$1,$2}'
ss -an | awk '/^ESTAB/ && $4 ~ /:22$/ {print "ssh from:",$5}'
mount | awk '$5 ~ /ext[234]/ || $5 == "xfs" {print $3,"("$5")"}'
ps -ef | awk '$2 == 1 , $2 == 10 {print}'
```
End of explanation
!ip addr show | awk '/inet / {print $2}'
!cat /etc/passwd | awk -F : '/^chweng/ {print "id:",$1," \thome:",$6}'
Explanation: select ipv4's ip:
End of explanation
!cat /etc/group | awk -F : '/^sudo/ {print $1 ,"users are:", $4}'
import os
os.chdir("/home/chweng/code")
Explanation: bash
chweng@VirtualBox:~$ sudo fdisk -l | awk '/^Disk \/dev\/[hs]d/ {print $2,$3,$4}'
/dev/sda: 1 TiB,
End of explanation
%%bash
ls -l /home/chweng/code/exercises/ |awk '/^-/ && $2 == 1 {print "file:",$9 "\t links: ",$2}'
Explanation: bash
chweng@ubuntu221:~/code$ javac -d . TestDiceThrowEx1.java
chweng@ubuntu221:~/code$ ls
RoadLog exercises module03 module05 module07 tw
TestDiceThrowEx1.java module02 module04 module06 module08
chweng@ubuntu221:~/code$ java tw.loop.TestDiceThrowEx1
diceNumber=5
Try Again.
diceNumber=5
Try Again.
diceNumber=6
Try Again.
diceNumber=6
Try Again.
diceNumber=4
Try Again.
diceNumber=5
Try Again.
diceNumber=6
Try Again.
diceNumber=2
You Win.
End of explanation
!ps -ef | awk '$2 == 1 , $2 == 10 {print}'
Explanation: bash
strace -c -f java tw.loop.TestDiceThrowEx1
sort:
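A couple of basic sort examples to go with the heading above (a sketch; /etc/passwd is just a convenient input file):

```python
!sort -t: -k3 -n /etc/passwd | head -3                                       # numeric sort on the third ':'-separated field (UID)
!ps -ef | tr -s ' ' | cut -d ' ' -f 1 | sort | uniq -c | sort -rn | head -5  # process count per user, largest first
```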
End of explanation
%%bash
echo "This is a book" | awk '
{ print "length of the string : ",$0," is : ",length($0) }'
Explanation: ```bash
input data from nbafile file
awk '$3 == 82 {print $1," \t",$5}' nbafile
awk '$3 < 78' nbafile
awk '$2 ~ /c.*l/' nbafile
awk '$1 ~ /^s/ && $4 > 80 {print $1 "\t\t" $4}' nbafile
```
End of explanation |
4,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
graded = 8/10
Homework 6
Step1: Problem set #2
Step2: Problem set #3
Step3: Problem set #4
Step4: Problem set #5
Step5: Specifying a field other than name, area or elevation for the sort parameter should fail silently, defaulting to sorting alphabetically. Expected output
Step6: Paste your code
Please paste the code for your entire Flask application in the cell below, in case we want to take a look when grading or debugging your assignment. | Python Code:
import requests
data = requests.get('http://localhost:5000/lakes').json()
print(len(data), "lakes")
for item in data[:10]:
print(item['name'], "- elevation:", item['elevation'], "m / area:", item['area'], "km^2 / type:", item['type'])
Explanation: graded = 8/10
Homework 6: Web Applications
For this homework, you're going to write a web API for the lake data in the MONDIAL database. (Make sure you've imported the data as originally outlined in our week 1 tutorial.)
The API should perform the following tasks:
A request to /lakes should return a JSON list of dictionaries, with the information from the name, elevation, area and type fields from the lake table in MONDIAL.
The API should recognize the query string parameter sort. When left blank or set to name, the results should be sorted by the name of the lake (in alphabetical order). When set to area or elevation, the results should be sorted by the requested field, in descending order.
The API should recognize the query string parameter type. When specified, the results should only include rows that have the specified value in the type field.
You should be able to use both the sort and type parameters in any request.
This notebook contains only test requests to your API. Write the API as a standalone Python program, start the program and then run the code in the cells below to ensure that your API produces the expected output. When you're done, paste the source code in the final cell (so we can check your work, if needed).
Hints when writing your API code:
You'll need to construct the SQL query as a string, piece by piece. This will likely involve a somewhat messy tangle of if statements. Lean into the messy tangle.
Make sure to use parameter placeholders (%s) in the query.
If you're getting SQL errors, print out your SQL statement in the request handler function so you can debug it. (When you use print() in Flask, the results will display in your terminal window.)
When in doubt, return to the test code. Examine it carefully and make sure you know exactly what it's trying to do.
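One possible way to act on these hints (an illustrative sketch, not the graded submission pasted at the end) is to whitelist the sort column and keep the type value in a %s placeholder:

```python
ALLOWED_SORTS = {'name': 'name', 'area': 'area DESC', 'elevation': 'elevation DESC'}

def build_query(sort=None, lake_type=None):
    order_by = ALLOWED_SORTS.get(sort, 'name')    # unknown sort fields silently fall back to name
    sql = "SELECT name, elevation, area, type FROM lake"
    params = []
    if lake_type is not None:
        sql += " WHERE type = %s"                 # the value travels as a parameter, never spliced into the string
        params.append(lake_type)
    return sql + " ORDER BY " + order_by, params
```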
Problem set #1: A list of lakes
Your API should return a JSON list of dictionaries (objects). Use the code below to determine what the keys of the dictionaries should be. (For brevity, this example only prints out the first ten records, but of course your API should return all of them.)
Expected output:
143 lakes
Ammersee - elevation: 533 m / area: 46 km^2 / type: None
Arresoe - elevation: None m / area: 40 km^2 / type: None
Atlin Lake - elevation: 668 m / area: 798 km^2 / type: None
Balaton - elevation: 104 m / area: 594 km^2 / type: None
Barrage de Mbakaou - elevation: None m / area: None km^2 / type: dam
Bodensee - elevation: 395 m / area: 538 km^2 / type: None
Brienzersee - elevation: 564 m / area: 29 km^2 / type: None
Caspian Sea - elevation: -28 m / area: 386400 km^2 / type: salt
Chad Lake - elevation: 250 m / area: 23000 km^2 / type: salt
Chew Bahir - elevation: 520 m / area: 800 km^2 / type: salt
End of explanation
import requests
data = requests.get('http://localhost:5000/lakes?type=salt').json()
avg_area = sum([x['area'] for x in data if x['area'] is not None]) / len(data)
avg_elev = sum([x['elevation'] for x in data if x['elevation'] is not None]) / len(data)
print("average area:", int(avg_area))
print("average elevation:", int(avg_elev))
Explanation: Problem set #2: Lakes of a certain type
The following code fetches all lakes of type salt and finds their average area and elevation.
Expected output:
average area: 18880
average elevation: 970
End of explanation
import requests
data = requests.get('http://localhost:5000/lakes?sort=elevation').json()
for item in [x['name'] for x in data if x['elevation'] is not None][:15]:
print("*", item)
Explanation: Problem set #3: Lakes in order
The following code fetches lakes in reverse order by their elevation and prints out the name of the first fifteen, excluding lakes with an empty elevation field.
Expected output:
* Licancabur Crater Lake
* Nam Co
* Lago Junin
* Lake Titicaca
* Poopo
* Salar de Uyuni
* Koli Sarez
* Lake Irazu
* Qinghai Lake
* Segara Anak
* Lake Tahoe
* Crater Lake
* Lake Tana
* Lake Van
* Issyk-Kul
End of explanation
import requests
data = requests.get('http://localhost:5000/lakes?sort=area&type=caldera').json()
for item in data:
print("*", item['name'])
Explanation: Problem set #4: Order and type
The following code prints the names of the largest caldera lakes, ordered in reverse order by area.
Expected output:
* Lake Nyos
* Lake Toba
* Lago Trasimeno
* Lago di Bolsena
* Lago di Bracciano
* Crater Lake
* Segara Anak
* Laacher Maar
End of explanation
import requests
data = requests.get('http://localhost:5000/lakes', params={'type': "' OR true; --"}).json()
data
Explanation: Problem set #5: Error handling
Your API should work fine even when faced with potential error-causing inputs. For example, the expected output for this statement is an empty list ([]), not every row in the table.
End of explanation
import requests
data = requests.get('http://localhost:5000/lakes', params={'sort': "florb"}).json()
[x['name'] for x in data[:5]]
Explanation: Specifying a field other than name, area or elevation for the sort parameter should fail silently, defaulting to sorting alphabetically. Expected output: ['Ammersee', 'Arresoe', 'Atlin Lake', 'Balaton', 'Barrage de Mbakaou']
End of explanation
## THIS CODE RETURNS CORRECTLY ALL OF THE ABOVE EXCEPT THE FIRST PROBLEM OF PROBLEM SET 5, there are two lines commented out below that I used
# to try solve that but it didn't work out unfortunately.
from flask import Flask, request, jsonify
import pg8000
app = Flask (__name__)
conn = pg8000.connect(database="mondial", user="gcg")
@app.route("/lakes")
def get_lakes():
cursor = conn.cursor()
sort_default = "name"
sorting_option = request.args.get('sort', sort_default)
if 'area' in sorting_option:
sorting_option = sorting_option + " DESC"
if 'elevation' in sorting_option:
sorting_option = sorting_option + " DESC"
if 'florb' in sorting_option:
sorting_option = sort_default
cursor.execute("SELECT name, elevation, area, type FROM lake ORDER BY {}".format(sorting_option))
type_option = request.args.get('type', None)
# if "' " in type_option:
# return ([])
if type_option:
cursor.execute("SELECT name, elevation, area, type FROM lake WHERE type = '{}' ORDER BY {}".format(type_option, sorting_option))
else:
cursor.execute("SELECT name, elevation, area, type FROM lake ORDER BY {}".format(sorting_option))
output=[]
for item in cursor.fetchall():
get_name = str(item[0])
get_type = str(item[3])
if item[1] is not None:
get_elevation = float(item[1])
else:
get_elevation = None
if item[2] is not None:
get_area = float(item[2])
else:
get_area = None
output.append({'name': get_name,
'elevation': get_elevation,
'area': get_area,
'type': get_type})
return jsonify(output)
app.run()
Explanation: Paste your code
Please paste the code for your entire Flask application in the cell below, in case we want to take a look when grading or debugging your assignment.
End of explanation |
4,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression with Differential Privacy
We start by importing the required libraries and modules and collecting the data that we need from the Adult dataset.
Step1: Let's also collect the test data from Adult to test our models once they're trained.
Step2: Logistic Regression with no privacy
To begin, let's first train a regular (non-private) logistic regression classifier, and test its accuracy.
Step3: Differentially private logistic regression
Using the diffprivlib.models.LogisticRegression module of diffprivlib, we can train a logistic regression classifier while satisfying differential privacy.
If we don't specify any parameters, the model defaults to epsilon = 1 and data_norm = None. If the norm of the data is not specified at initialisation (as in this case), the norm will be calculated on the data when .fit() is first called and a warning will be thrown as it causes a privacy leak. To ensure no additional privacy leakage, we should specify the data norm explicitly as an argument, and choose the bounds independently of the data (i.e. using domain knowledge).
Additionally, the high data_norm that is read from the data in this instance gives poor results, with accuracy only slightly better than random. This is a result of the large amount of noise required to protect data spread over a large domain. By clipping the data to a smaller domain, accuracy improves markedly, as demonstrated below.
Step4: By setting epsilon = float("inf"), we can produce the same result as the non-private logistic regression classifier.
Step5: Tradeoff of accuracy and privacy
We can also visualise the tradeoff between accuracy and epsilon using matplotlib.
Step6: Let's save the results using pickle so we can reproduce the plot easily in the future.
Step7: Let's plot the results using matplotlib. The discontinuity observed near epsilon = 10 is an artifact of the model. Because of the norm-clipping applied to the dataset before training (data_norm=100), the accuracy plateaus without reaching the non-private baseline.
import diffprivlib.models as dp
import numpy as np
from sklearn.linear_model import LogisticRegression
X_train = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
usecols=(0, 4, 10, 11, 12), delimiter=", ")
y_train = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data",
usecols=14, dtype=str, delimiter=", ")
np.unique(y_train)
Explanation: Logistic Regression with Differential Privacy
We start by importing the required libraries and modules and collecting the data that we need from the Adult dataset.
End of explanation
X_test = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test",
usecols=(0, 4, 10, 11, 12), delimiter=", ", skiprows=1)
y_test = np.loadtxt("https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test",
usecols=14, dtype=str, delimiter=", ", skiprows=1)
# Must trim trailing period "." from label
y_test = np.array([a[:-1] for a in y_test])
np.unique(y_test)
Explanation: Let's also collect the test data from Adult to test our models once they're trained.
End of explanation
clf = LogisticRegression(solver="lbfgs")
clf.fit(X_train, y_train)
baseline = clf.score(X_test, y_test)
print("Non-private test accuracy: %.2f%%" % (baseline * 100))
Explanation: Logistic Regression with no privacy
To begin, let's first train a regular (non-private) logistic regression classifier, and test its accuracy.
End of explanation
dp_clf = dp.LogisticRegression()
dp_clf.fit(X_train, y_train)
print("Differentially private test accuracy (epsilon=%.2f): %.2f%%" %
(dp_clf.epsilon, dp_clf.score(X_test, y_test) * 100))
Explanation: Differentially private logistic regression
Using the diffprivlib.models.LogisticRegression module of diffprivlib, we can train a logistic regression classifier while satisfying differential privacy.
If we don't specify any parameters, the model defaults to epsilon = 1 and data_norm = None. If the norm of the data is not specified at initialisation (as in this case), the norm will be calculated on the data when .fit() is first called and a warning will be thrown as it causes a privacy leak. To ensure no additional privacy leakage, we should specify the data norm explicitly as an argument, and choose the bounds independently of the data (i.e. using domain knowledge).
Additionally, the high data_norm that is read from the data in this instance gives poor results, with accuracy only slightly better than random. This is a result of the large amount of noise required to protect data spread over a large domain. By clipping the data to a smaller domain, accuracy improves markedly, as demonstrated below.
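As a minimal sketch of this (reusing the training and test arrays loaded above, and borrowing the data_norm=100 clipping bound from the later experiments rather than a bound chosen from domain knowledge):
dp_clf_clipped = dp.LogisticRegression(epsilon=1.0, data_norm=100)
dp_clf_clipped.fit(X_train, y_train)
print("Clipped DP test accuracy: %.2f%%" % (dp_clf_clipped.score(X_test, y_test) * 100))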
End of explanation
dp_clf = dp.LogisticRegression(epsilon=float("inf"), data_norm=1e5)
dp_clf.fit(X_train, y_train)
print("Agreement between non-private and differentially private (epsilon=inf) classifiers: %.2f%%" %
(dp_clf.score(X_test, clf.predict(X_test)) * 100))
Explanation: By setting epsilon = float("inf"), we can produce the same result as the non-private logistic regression classifier.
End of explanation
accuracy = []
epsilons = np.logspace(-3, 1, 500)
for eps in epsilons:
dp_clf = dp.LogisticRegression(epsilon=eps, data_norm=100)
dp_clf.fit(X_train, y_train)
accuracy.append(dp_clf.score(X_test, y_test))
Explanation: Tradeoff of accuracy and privacy
We can also visualise the tradeoff between accuracy and epsilon using matplotlib.
End of explanation
import pickle
pickle.dump((epsilons, baseline, accuracy), open("lr_accuracy_500.p", "wb" ) )
Explanation: Let's save the results using pickle so we can reproduce the plot easily in the future.
End of explanation
import matplotlib.pyplot as plt
import pickle
epsilons, baseline, accuracy = pickle.load(open("lr_accuracy_500.p", "rb"))
plt.semilogx(epsilons, accuracy, label="Differentially private")
plt.plot(epsilons, np.ones_like(epsilons) * baseline, dashes=[2,2], label="Non-private")
plt.title("Differentially private logistic regression accuracy")
plt.xlabel("epsilon")
plt.ylabel("Accuracy")
plt.ylim(0, 1)
plt.xlim(epsilons[0], epsilons[-1])
plt.legend(loc=3)
plt.show()
Explanation: Let's plot the results using matplotlib. The discontinuity observed near epsilon = 10 is an artifact of the model. Because of the norm-clipping applied to the dataset before training (data_norm=100), the accuracy plateaus without reaching the non-private baseline.
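As a small follow-up sketch (assuming the epsilons, accuracy and baseline values loaded above), one way to read the curve is to find the smallest epsilon at which the private model reaches, say, 95% of the non-private accuracy:
threshold = 0.95 * baseline
idx = next((i for i, acc in enumerate(accuracy) if acc >= threshold), None)
if idx is not None:
    print("95%% of the non-private accuracy is first reached near epsilon = %.3f" % epsilons[idx])
else:
    print("The private model never reaches 95%% of the non-private accuracy in this sweep")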
End of explanation |
4,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing the trained weight matrices (not in an ensemble)
Step1: Load the weight matrices from the training
Step2: Visualize the digit from one hot representation through the activity weight matrix to the image representation
- Image is average digit from mnist dataset
Step3: Visualize the rotation of the image using the weight matrix from activity to activity
- does not use the weight matrix used on the recurrent connection | Python Code:
import nengo
import numpy as np
import cPickle
import matplotlib.pyplot as plt
from matplotlib import pylab
import matplotlib.animation as animation
from scipy import linalg
%matplotlib inline
import scipy.ndimage
Explanation: Testing the trained weight matrices (not in an ensemble)
End of explanation
#Weight matrices generated by the neural network after training
#Maps the label vectors to the neuron activity of the ensemble
label_weights = cPickle.load(open("label_weights5000.p", "rb"))
#Maps the activity of the neurons to the visual representation of the image
activity_to_img_weights = cPickle.load(open("activity_to_img_weights5000.p", "rb"))
#Maps the activity of the neurons of an image with the activity of the neurons of an image rotated 6 degrees
rotation_weights = cPickle.load(open("rotation_weights5000.p", "rb")) #needed by the rotation cells below
#Create the pointers for the numbers
temp = np.diag([1]*10)
ZERO = temp[0]
ONE = temp[1]
TWO = temp[2]
THREE= temp[3]
FOUR = temp[4]
FIVE = temp[5]
SIX = temp[6]
SEVEN =temp[7]
EIGHT= temp[8]
NINE = temp[9]
labels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE]
#Visualize the one hot representation
print(ZERO)
print(ONE)
Explanation: Load the weight matrices from the training
End of explanation
def intense(img):
newImg = img.copy()
#for i in range(len(newImg)):
# newImg[i] = np.log (newImg[i] + 1.25)
newImg[newImg < 0] = -1
newImg[newImg > 0] = 1
return newImg
#Change this to imagine different digits
imagine = FIVE
#Can also imagine combinations of numbers (ZERO + ONE)
#Label to activity
test_activity = np.dot(imagine,label_weights)
#Image decoded
test_output_img = np.dot(test_activity, activity_to_img_weights)
#noise = np.random.random([28,28])
#test_output_img = noise+np.reshape(test_output_img,(28,28))
#clean = intense(test_output_img)
#clean = scipy.ndimage.median_filter(test_output_img, 3)
#clean = intense(clean)
clean = scipy.ndimage.gaussian_filter(test_output_img, sigma=1)
#clean = intense(clean)
#clean = scipy.ndimage.binary_opening(test_output_img)
#Edge detection?
#clean = scipy.ndimage.sobel(test_output_img, axis=0, mode='constant')
#Sharpening
#filter_blurred_f = scipy.ndimage.gaussian_filter(test_output_img, 1)
#alpha = 30
#clean = test_output_img + alpha * (test_output_img - filter_blurred_f)
plt.subplot(131)
plt.imshow(test_output_img.reshape(28,28),cmap='gray')
plt.subplot(132)
plt.imshow(clean.reshape(28,28),cmap='gray')
clean = intense(clean)
plt.subplot(133)
plt.imshow(clean.reshape(28,28),cmap='gray')
plt.show()
for i in range(7):
imagine = labels[i]
#Label to activity
test_activity = np.dot(imagine,label_weights)
#Image decoded
test_output_img = np.dot(test_activity, activity_to_img_weights)
noise = np.random.random([28,28])
test_output_img = noise+np.reshape(test_output_img,(28,28))
plt.subplot(131)
plt.imshow(test_output_img.reshape(28,28),cmap='gray')
clean = scipy.ndimage.gaussian_filter(test_output_img, sigma=1)
plt.subplot(132)
plt.imshow(clean.reshape(28,28),cmap='gray')
clean = intense(clean)
plt.subplot(133)
plt.imshow(clean.reshape(28,28),cmap='gray')
plt.show()
Explanation: Visualize the digit from one hot representation through the activity weight matrix to the image representation
- Image is average digit from mnist dataset
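As a small convenience, the two matrix multiplies used above can be wrapped into one helper (a sketch assuming the weight matrices loaded earlier):
def label_to_image(label_vector):
    activity = np.dot(label_vector, label_weights)       #one hot label -> neuron activity
    return np.dot(activity, activity_to_img_weights)     #neuron activity -> 784-value image vector
plt.imshow(label_to_image(THREE).reshape(28,28),cmap='gray')
plt.show()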
End of explanation
#Change this to visualize different digits
imagine = FIVE
#How long the animation should go for
frames=60
#Make a list of the activation of rotated images and add first frame
rot_seq = []
rot_seq.append(np.dot(imagine,label_weights)) #Map the label vector to the activity vector
test_output_img = np.dot(rot_seq[0], activity_to_img_weights) #Map the activity to the visual representation
#add the rest of the frames, using the previous frame to calculate the current frame
for i in range(1,frames):
rot_seq.append(np.dot(rot_seq[i-1],rotation_weights)) #add the activity of the current image to the list
test_output_img = np.dot(rot_seq[i], activity_to_img_weights) #map the new activity to the visual image
#Animation of rotation
fig = plt.figure()
def updatefig(i):
image_vector = np.dot(rot_seq[i], activity_to_img_weights) #map the activity to the image representation
im = pylab.imshow(np.reshape(image_vector,(28,28), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True)
return im,
ani = animation.FuncAnimation(fig, updatefig, interval=50, blit=True)
plt.show()
imagine = FIVE
test_output_img = np.dot(imagine,label_weights) #Map the label vector to the activity vector
test_output_img = np.dot(test_output_img,rotation_weights)
test_output_img = np.dot(test_output_img,linalg.inv(rotation_weights))
test_output_img = np.dot(test_output_img, activity_to_img_weights) #Map the activity to the visual representation
pylab.imshow(np.reshape(test_output_img,(28,28), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.show()
Explanation: Visualize the rotation of the image using the weight matrix from activity to activity
- does not use the weight matrix used on the recurrent connection
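A related sketch (assuming the same weight matrices, including rotation_weights): because each small rotation is one multiplication by the activity-to-activity matrix, frame k can also be reached directly with a matrix power instead of looping:
k = 15  #hypothetical frame index chosen for illustration
rot_k = np.linalg.matrix_power(rotation_weights, k)
activity_k = np.dot(np.dot(FIVE, label_weights), rot_k)
pylab.imshow(np.reshape(np.dot(activity_k, activity_to_img_weights),(28,28), 'F').T, cmap=plt.get_cmap('Greys_r'))
plt.show()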
End of explanation |
4,237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sebastian Raschka
last updated
Step1: Using the code above, we created two $3\times20$ datasets - one dataset for each class $\omega_1$ and $\omega_2$ -
where each column can be pictured as a 3-dimensional vector $\pmb x = \begin{pmatrix} x_1 \ x_2 \ x_3 \end{pmatrix}$ so that our dataset will have the form
$\pmb X = \begin{pmatrix} x_{1_1}\; x_{1_2} \; ... \; x_{1_{20}}\ x_{2_1} \; x_{2_2} \; ... \; x_{2_{20}}\ x_{3_1} \; x_{3_2} \; ... \; x_{3_{20}}\end{pmatrix}$
Just to get a rough idea how the samples of our two classes $\omega_1$ and $\omega_2$ are distributed, let us plot them in a 3D scatter plot.
Step2: <br>
<br>
<a name='drop_labels'></a>
1. Taking the whole dataset ignoring the class labels
Because we don't need class labels for the PCA analysis, let us merge the samples for our 2 classes into one $3\times40$-dimensional array.
Step3: <br>
<br>
<a name='mean_vec'></a>
2. Computing the d-dimensional mean vector
Step4: <br>
<br>
<a name="comp_scatter"></a>
3. a) Computing the Scatter Matrix
The scatter matrix is computed by the following equation
Step5: <br>
<br>
<a name="comp_cov"></a>
3. b) Computing the Covariance Matrix (alternatively to the scatter matrix)
Alternatively, instead of calculating the scatter matrix, we could also calculate the covariance matrix using the in-built numpy.cov() function. The equations for the covariance matrix and scatter matrix are very similar, the only difference is, that we use the scaling factor $\frac{1}{N-1}$ (here
Step6: <br>
<br>
<a name="eig_vec"></a>
4. Computing eigenvectors and corresponding eigenvalues
To show that the eigenvectors are indeed identical whether we derived them from the scatter or the covariance matrix, let us put an assert statement into the code. Also, we will see that the eigenvalues were indeed scaled by the factor 39 when we derived it from the scatter matrix.
Step7: Checking the eigenvector-eigenvalue calculation
Let us quickly check that the eigenvector-eigenvalue calculation is correct and satisfies the equation
$\pmb\Sigma\pmb{v} = \lambda\pmb{v}$
<br>
where
$\pmb\Sigma = Covariance \; matrix\
\pmb{v} = \; Eigenvector\
\lambda = \; Eigenvalue$
Step8: Visualizing the eigenvectors
And before we move on to the next step, just to satisfy our own curiosity, we plot the eigenvectors centered at the sample mean.
Step9: <br>
<br>
<a name="sort_eig"></a>
<a name="sort_eig"></a>
<br>
<br>
5.1. Sorting the eigenvectors by decreasing eigenvalues
We started with the goal to reduce the dimensionality of our feature space, i.e., projecting the feature space via PCA onto a smaller subspace, where the eigenvectors will form the axes of this new feature subspace. However, the eigenvectors only define the directions of the new axis, since they have all the same unit length 1, which we can confirm by the following code
Step10: So, in order to decide which eigenvector(s) we want to drop for our lower-dimensional subspace, we have to take a look at the corresponding eigenvalues of the eigenvectors. Roughly speaking, the eigenvectors with the lowest eigenvalues bear the least information about the distribution of the data, and those are the ones we want to drop.
The common approach is to rank the eigenvectors from highest to lowest corresponding eigenvalue and choose the top $k$ eigenvectors.
Step11: <br>
<br>
5.2. Choosing k eigenvectors with the largest eigenvalues
For our simple example, where we are reducing a 3-dimensional feature space to a 2-dimensional feature subspace, we are combining the two eigenvectors with the highest eigenvalues to construct our $d \times k$-dimensional eigenvector matrix $\pmb W$.
Step12: <br>
<br>
<a name='transform'></a>
6. Transforming the samples onto the new subspace
In the last step, we use the $2 \times 3$-dimensional matrix $\pmb W$ that we just computed to transform our samples onto the new subspace via the equation $\pmb y = \pmb W^T \times \pmb x$.
Step13: <br>
<br>
<a name="mat_pca"></a>
Using the PCA() class from the matplotlib.mlab library
Now, that we have seen how a principal component analysis works, we can use the in-built PCA() class from the matplotlib library for our convenience in future applications.
Unfortunately, the original documentation (http
Step14: <br>
<br>
<a name="_diff_mat_pca"></a>
Differences between the step by step approach and matplotlib.mlab.PCA()
When we plot the transformed dataset onto the new 2-dimensional subspace, we observe that the scatter plots from our step by step approach and the matplotlib.mlab.PCA() class do not look identical. This is due to the fact that matplotlib.mlab.PCA() class scales the variables to unit variance prior to calculating the covariance matrices. This will/could eventually lead to different variances along the axes and affect the contribution of the variable to principal components.
One example where a scaling would make sense would be if one variable was measured in the unit inches where the other variable was measured in cm.
However, for our hypothetical example, we assume that both variables have the same (arbitrary) unit, so that we skipped the step of scaling the input data.
<br>
<br>
<a name="sklearn_pca"> </a>
Using the PCA() class from the sklearn.decomposition library to confirm our results
In order to make sure that we have not made a mistake in our step by step approach, we will use another library that doesn't rescale the input data by default.
Here, we will use the PCA class from the scikit-learn machine-learning library. The documentation can be found here
Step15: Depending on your computing environment, you may find that the plot above is the exact mirror image of the plot from our step by step approach. This is due to the fact that the signs of the eigenvectors can be either positive or negative; since the eigenvectors are scaled to the unit length 1, we can simply multiply the transformed data by $\times(-1)$ to revert the mirror image.
Please note that this is not an issue | Python Code:
import numpy as np
np.random.seed(0)
mu_vec1 = np.array([0, 0, 0])
cov_mat1 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
class1_sample = np.random.multivariate_normal(mu_vec1, cov_mat1, 20).T
assert class1_sample.shape == (3, 20), "The matrix has not the dimensions 3x20"
mu_vec2 = np.array([1, 1, 1])
cov_mat2 = np.array([[1, 0, 0],[0, 1, 0], [0, 0, 1]])
class2_sample = np.random.multivariate_normal(mu_vec2, cov_mat2, 20).T
assert class2_sample.shape == (3, 20), "The matrix has not the dimensions 3x20"
Explanation: Sebastian Raschka
last updated: 02/07/2017
Link to the containing GitHub Repository: https://github.com/rasbt/pattern_classification
Link to this IPython Notebook on GitHub: principal_component_analysis.ipynb
<hr>
Stepping through a Principal Component Analysis
- using Python's numpy and matplotlib
<hr>
Sections
<a href="#introduction">Introduction</a>
<a href="#sample_data">Generating 3-dimensional sample data</a>
<a href="#gen_data">The step by step approach</a>
1. <a href="#drop_labels">Taking the whole dataset ignoring the class labels</a>
2. <a href="#mean_vec">Compute the $d$-dimensional mean vector</a>
3. <a href="#comp_scatter">Computing the scatter matrix (alternatively, the covariance matrix)</a>
4. <a href="#eig_vec">Computing eigenvectors and corresponding eigenvalues</a>
5. <a href="#sort_eig">Ranking and choosing $k$ eigenvectors</a>
6. <a href="#transform">Transforming the samples onto the new subspace</a>
<a href="#mat_pca">Using the PCA() class from the matplotlib.mlab library</a>
<a href="#diff_mat_pca">Differences between the step by step approach and matplotlib.mlab.PCA()</a>
<a href="#sklearn_pca">Using the PCA() class from the sklearn.decomposition library to confirm our results</a>
<br>
<a name="introduction"></a>
Introduction
The main purposes of a principal component analysis are the analysis of data to identify patterns and finding patterns to reduce the dimensions of the dataset with minimal loss of information.
Here, our desired outcome of the principal component analysis is to project a feature space (our dataset consisting of $n$ $d$-dimensional samples) onto a smaller subspace that represents our data "well". A possible application would be a pattern classification task, where we want to reduce the computational costs and the error of parameter estimation by reducing the number of dimensions of our feature space by extracting a subspace that describes our data "best".
Principal Component Analysis (PCA) Vs. Multiple Discriminant Analysis (MDA)
Both Multiple Discriminant Analysis (MDA) and Principal Component Analysis (PCA) are linear transformation methods and closely related to each other. In PCA, we are interested in finding the directions (components) that maximize the variance in our dataset, whereas in MDA, we are additionally interested in finding the directions that maximize the separation (or discrimination) between different classes (for example, in pattern classification problems where our dataset consists of multiple classes; in contrast, PCA ignores the class labels).
In other words, via PCA, we are projecting the entire set of data (without class labels) onto a different subspace, and in MDA, we are trying to determine a suitable subspace to distinguish between patterns that belong to different classes. Or, roughly speaking in PCA we are trying to find the axes with maximum variances where the data is most spread (within a class, since PCA treats the whole data set as one class), and in MDA we are additionally maximizing the spread between classes.
In typical pattern recognition problems, a PCA is often followed by an MDA.
What is a "good" subspace?
Let's assume that our goal is to reduce the dimensions of a $d$-dimensional dataset by projecting it onto a $(k)$-dimensional subspace (where $k\;<\;d$).
So, how do we know what size we should choose for $k$, and how do we know if we have a feature space that represents our data "well"?
Later, we will compute eigenvectors (the components) from our data set and collect them in a so-called scatter-matrix (or alternatively calculate them from the covariance matrix). Each of those eigenvectors is associated with an eigenvalue, which tells us about the "length" or "magnitude" of the eigenvectors. If we observe that all the eigenvalues are of very similar magnitude, this is a good indicator that our data is already in a "good" subspace. Or if some of the eigenvalues are much, much higher than others, we might be interested in keeping only those eigenvectors with the much larger eigenvalues, since they contain more information about our data distribution. Vice versa, eigenvalues that are close to 0 are less informative and we might consider dropping those when we construct the new feature subspace.
Summarizing the PCA approach
Listed below are the 6 general steps for performing a principal component analysis, which we will investigate in the following sections.
<a href="#drop_labels"> Take the whole dataset consisting of $d$-dimensional samples ignoring the class labels</a>
<a href="#mean_vec"> Compute the $d$-dimensional mean vector</a> (i.e., the means for every dimension of the whole dataset)
<a href="#sc_matrix">Compute the scatter matrix (alternatively, the covariance matrix) of the whole data set</a>
<a href="#eig_vec">Compute eigenvectors ($\pmb e_1, \; \pmb e_2, \; ..., \; \pmb e_d $) and corresponding eigenvalues ($\pmb \lambda_1, \; \pmb \lambda_2, \; ..., \; \pmb \lambda_d$)</a>
<a href="#sort_eig">Sort the eigenvectors by decreasing eigenvalues and choose $k$ eigenvectors with the largest eigenvalues to form a $d \times k $ dimensional matrix $\pmb W\;$</a>(where every column represents an eigenvector)
<a href="#transform">Use this $d \times k $ eigenvector matrix to transform the samples onto the new subspace.</a> This can be summarized by the mathematical equation: $\pmb y = \pmb W^T \times \pmb x$ (where $\pmb x$ is a $d \times 1$-dimensional vector representing one sample, and $\pmb y$ is the transformed $k \times 1$-dimensional sample in the new subspace.)
<br>
<br>
<a name="sample_data"></a>
Generating some 3-dimensional sample data
For the following example, we will generate 40 3-dimensional samples randomly drawn from a multivariate Gaussian distribution.
Here, we will assume that the samples stem from two different classes, where one half (i.e., 20) samples of our data set are labeled $\omega_1$ (class 1) and the other half $\omega_2$ (class 2).
$\pmb{\mu_1} = $
$\begin{bmatrix}0\0\0\end{bmatrix}$
$\quad\pmb{\mu_2} = $
$\begin{bmatrix}1\1\1\end{bmatrix}\quad$(sample means)
$\pmb{\Sigma_1} = $
$\begin{bmatrix}1\quad 0\quad 0\0\quad 1\quad0\0\quad0\quad1\end{bmatrix}$
$\quad\pmb{\Sigma_2} = $
$\begin{bmatrix}1\quad 0\quad 0\0\quad 1\quad0\0\quad0\quad1\end{bmatrix}\quad$ (covariance matrices)
Why are we choosing a 3-dimensional sample?
The problem with multi-dimensional data is its visualization, which would make it quite tough to follow our example principal component analysis (at least visually). We could also choose a 2-dimensional sample data set for the following examples, but since the goal of the PCA in a "Dimensionality Reduction" application is to drop at least one of the dimensions, I find it more intuitive and visually appealing to start with a 3-dimensional dataset that we reduce to a 2-dimensional dataset by dropping 1 dimension.
End of explanation
%matplotlib inline
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d import proj3d
fig = plt.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
plt.rcParams['legend.fontsize'] = 10
ax.plot(class1_sample[0, :], class1_sample[1, :],
class1_sample[2, :], 'o', markersize=8,
color='blue', alpha=0.5, label='class1')
ax.plot(class2_sample[0, :], class2_sample[1, :],
class2_sample[2, :], '^', markersize=8,
alpha=0.5, color='red', label='class2')
plt.title('Samples for class 1 and class 2')
ax.legend(loc='upper right')
plt.show()
Explanation: Using the code above, we created two $3\times20$ datasets - one dataset for each class $\omega_1$ and $\omega_2$ -
where each column can be pictured as a 3-dimensional vector $\pmb x = \begin{pmatrix} x_1 \ x_2 \ x_3 \end{pmatrix}$ so that our dataset will have the form
$\pmb X = \begin{pmatrix} x_{1_1}\; x_{1_2} \; ... \; x_{1_{20}}\ x_{2_1} \; x_{2_2} \; ... \; x_{2_{20}}\ x_{3_1} \; x_{3_2} \; ... \; x_{3_{20}}\end{pmatrix}$
Just to get a rough idea how the samples of our two classes $\omega_1$ and $\omega_2$ are distributed, let us plot them in a 3D scatter plot.
End of explanation
all_samples = np.concatenate((class1_sample, class2_sample), axis=1)
assert all_samples.shape == (3, 40), "The matrix has not the dimensions 3x40"
Explanation: <br>
<br>
<a name='drop_labels'></a>
1. Taking the whole dataset ignoring the class labels
Because we don't need class labels for the PCA analysis, let us merge the samples for our 2 classes into one $3\times40$-dimensional array.
End of explanation
mean_x = np.mean(all_samples[0, :])
mean_y = np.mean(all_samples[1, :])
mean_z = np.mean(all_samples[2, :])
mean_vector = np.array([[mean_x],[mean_y],[mean_z]])
print('Mean Vector:\n', mean_vector)
Explanation: <br>
<br>
<a name='mean_vec'></a>
2. Computing the d-dimensional mean vector
End of explanation
scatter_matrix = np.zeros((3, 3))
for i in range(all_samples.shape[1]):
scatter_matrix += (all_samples[:, i].reshape(3, 1)\
- mean_vector).dot((all_samples[:, i].reshape(3, 1)
- mean_vector).T)
print('Scatter Matrix:\n', scatter_matrix)
Explanation: <br>
<br>
<a name="comp_scatter"></a>
3. a) Computing the Scatter Matrix
The scatter matrix is computed by the following equation:
$S = \sum\limits_{k=1}^n (\pmb x_k - \pmb m)\;(\pmb x_k - \pmb m)^T$
where $\pmb m$ is the mean vector
$\pmb m = \frac{1}{n} \sum\limits_{k=1}^n \; \pmb x_k$
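For reference, a vectorized sketch of the same computation (assuming the all_samples and mean_vector arrays from the previous cells):
centered = all_samples - mean_vector                  # broadcast the 3x1 mean over the 3x40 data
scatter_matrix_vectorized = centered.dot(centered.T)  # should match the loop-based scatter_matrix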
End of explanation
cov_mat = np.cov([all_samples[0, :],
all_samples[1, :],
all_samples[2, :]])
print('Covariance Matrix:\n', cov_mat)
Explanation: <br>
<br>
<a name="comp_cov"></a>
3. b) Computing the Covariance Matrix (alternatively to the scatter matrix)
Alternatively, instead of calculating the scatter matrix, we could also calculate the covariance matrix using the in-built numpy.cov() function. The equations for the covariance matrix and scatter matrix are very similar, the only difference is, that we use the scaling factor $\frac{1}{N-1}$ (here: $\frac{1}{40-1} = \frac{1}{39}$) for the covariance matrix. Thus, their eigenspaces will be identical (identical eigenvectors, only the eigenvalues are scaled differently by a constant factor).
$\Sigma_i = \Bigg[
\begin{array}{ccc}
\sigma_{11}^2 & \sigma_{12}^2 & \sigma_{13}^2\
\sigma_{21}^2 & \sigma_{22}^2 & \sigma_{23}^2\
\sigma_{31}^2 & \sigma_{32}^2 & \sigma_{33}^2\
\end{array} \Bigg]$
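A one-line sanity check of this scaling relationship (assuming the scatter_matrix and cov_mat computed in the code cells above):
np.testing.assert_array_almost_equal(scatter_matrix, 39 * cov_mat)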
End of explanation
# eigenvectors and eigenvalues for the from the scatter matrix
eig_val_sc, eig_vec_sc = np.linalg.eig(scatter_matrix)
# eigenvectors and eigenvalues for the from the covariance matrix
eig_val_cov, eig_vec_cov = np.linalg.eig(cov_mat)
for i in range(len(eig_val_sc)):
eigvec_sc = eig_vec_sc[:, i].reshape(1, 3).T
eigvec_cov = eig_vec_cov[:,i].reshape(1, 3).T
assert eigvec_sc.all() == eigvec_cov.all(), 'Eigenvectors are not identical'
print('Eigenvector {}: \n{}'.format(i+1, eigvec_sc))
print('Eigenvalue {} from scatter matrix: {}'.format(i+1, eig_val_sc[i]))
print('Eigenvalue {} from covariance matrix: {}'.format(i+1, eig_val_cov[i]))
print('Scaling factor: ', eig_val_sc[i]/eig_val_cov[i])
print(40 * '-')
Explanation: <br>
<br>
<a name="eig_vec"></a>
4. Computing eigenvectors and corresponding eigenvalues
To show that the eigenvectors are indeed identical whether we derived them from the scatter or the covariance matrix, let us put an assert statement into the code. Also, we will see that the eigenvalues were indeed scaled by the factor 39 when we derived it from the scatter matrix.
End of explanation
for i in range(len(eig_val_sc)):
eigv = eig_vec_sc[:, i].reshape(1, 3).T
np.testing.assert_array_almost_equal(scatter_matrix.dot(eigv), eig_val_sc[i] * eigv,
decimal=6, err_msg='', verbose=True)
Explanation: Checking the eigenvector-eigenvalue calculation
Let us quickly check that the eigenvector-eigenvalue calculation is correct and satisfies the equation
$\pmb\Sigma\pmb{v} = \lambda\pmb{v}$
<br>
where
$\pmb\Sigma = Covariance \; matrix\
\pmb{v} = \; Eigenvector\
\lambda = \; Eigenvalue$
End of explanation
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d import proj3d
from matplotlib.patches import FancyArrowPatch
class Arrow3D(FancyArrowPatch):
def __init__(self, xs, ys, zs, *args, **kwargs):
FancyArrowPatch.__init__(self, (0, 0), (0, 0), *args, **kwargs)
self._verts3d = xs, ys, zs
def draw(self, renderer):
xs3d, ys3d, zs3d = self._verts3d
xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, renderer.M)
self.set_positions((xs[0], ys[0]), (xs[1],ys[1]))
FancyArrowPatch.draw(self, renderer)
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.plot(all_samples[0, :], all_samples[1, :], all_samples[2, :],
'o', markersize=8, color='green', alpha=0.2)
ax.plot([mean_x], [mean_y], [mean_z],
'o', markersize=10, color='red', alpha=0.5)
for v in eig_vec_sc.T:
a = Arrow3D([mean_x, v[0]], [mean_y, v[1]],
[mean_z, v[2]], mutation_scale=20,
lw=3, arrowstyle="-|>", color="r")
ax.add_artist(a)
ax.set_xlabel('x_values')
ax.set_ylabel('y_values')
ax.set_zlabel('z_values')
plt.title('Eigenvectors')
plt.show()
Explanation: Visualizing the eigenvectors
And before we move on to the next step, just to satisfy our own curiosity, we plot the eigenvectors centered at the sample mean.
End of explanation
for ev in eig_vec_sc:
np.testing.assert_array_almost_equal(1.0, np.linalg.norm(ev))
# instead of 'assert' because of rounding errors
Explanation: <br>
<br>
<a name="sort_eig"></a>
<a name="sort_eig"></a>
<br>
<br>
5.1. Sorting the eigenvectors by decreasing eigenvalues
We started with the goal to reduce the dimensionality of our feature space, i.e., projecting the feature space via PCA onto a smaller subspace, where the eigenvectors will form the axes of this new feature subspace. However, the eigenvectors only define the directions of the new axis, since they have all the same unit length 1, which we can confirm by the following code:
End of explanation
# Make a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_val_sc[i]),
eig_vec_sc[:, i]) for i in range(len(eig_val_sc))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eig_pairs.sort(key=lambda x: x[0], reverse=True)
# Visually confirm that the list is correctly sorted by decreasing eigenvalues
for i in eig_pairs:
print(i[0])
Explanation: So, in order to decide which eigenvector(s) we want to drop for our lower-dimensional subspace, we have to take a look at the corresponding eigenvalues of the eigenvectors. Roughly speaking, the eigenvectors with the lowest eigenvalues bear the least information about the distribution of the data, and those are the ones we want to drop.
The common approach is to rank the eigenvectors from highest to lowest corresponding eigenvalue and choose the top $k$ eigenvectors.
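A short sketch of the same ranking expressed as explained variance (assuming eig_pairs from the cell above):
tot = sum(val for val, _ in eig_pairs)
for i, (val, _) in enumerate(eig_pairs):
    print('eigenvalue {} explains {:.1f}% of the total variance'.format(i + 1, 100 * val / tot))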
End of explanation
matrix_w = np.hstack((eig_pairs[0][1].reshape(3, 1),
eig_pairs[1][1].reshape(3, 1)))
print('Matrix W:\n', matrix_w)
Explanation: <br>
<br>
5.2. Choosing k eigenvectors with the largest eigenvalues
For our simple example, where we are reducing a 3-dimensional feature space to a 2-dimensional feature subspace, we are combining the two eigenvectors with the highest eigenvalues to construct our $d \times k$-dimensional eigenvector matrix $\pmb W$.
End of explanation
transformed = matrix_w.T.dot(all_samples)
assert transformed.shape == (2, 40), "The matrix is not 2x40 dimensional."
plt.plot(transformed[0, 0:20], transformed[1, 0:20],
'o', markersize=7, color='blue',
alpha=0.5, label='class1')
plt.plot(transformed[0, 20:40], transformed[1, 20:40], '^',
markersize=7, color='red', alpha=0.5, label='class2')
plt.xlabel('x_values')
plt.ylabel('y_values')
plt.legend()
plt.title('Transformed samples with class labels')
plt.show()
Explanation: <br>
<br>
<a name='transform'></a>
6. Transforming the samples onto the new subspace
In the last step, we use the $2 \times 3$-dimensional matrix $\pmb W$ that we just computed to transform our samples onto the new subspace via the equation $\pmb y = \pmb W^T \times \pmb x$.
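As a complementary sketch (assuming matrix_w, transformed, and all_samples from above), the 2-dimensional projection can be mapped back into the original 3-dimensional space to get a feel for how much information the dropped component carried:
reconstructed = matrix_w.dot(transformed)   # approximate reconstruction, since only 2 of 3 components were kept
print('mean squared reconstruction error: %.4f' % np.mean((all_samples - reconstructed) ** 2))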
End of explanation
from matplotlib.mlab import PCA as mlabPCA
mlab_pca = mlabPCA(all_samples.T)
print('PC axes in terms of the measurement axes scaled by the standard deviations:\n', mlab_pca.Wt)
plt.plot(mlab_pca.Y[0:20, 0],mlab_pca.Y[0:20, 1], 'o',
markersize=7, color='blue', alpha=0.5, label='class1')
plt.plot(mlab_pca.Y[20:40, 0], mlab_pca.Y[20:40, 1], '^',
markersize=7, color='red', alpha=0.5, label='class2')
plt.xlabel('x_values')
plt.ylabel('y_values')
plt.legend()
plt.title('Transformed samples with class labels from matplotlib.mlab.PCA()')
plt.show()
Explanation: <br>
<br>
<a name="mat_pca"></a>
Using the PCA() class from the matplotlib.mlab library
Now, that we have seen how a principal component analysis works, we can use the in-built PCA() class from the matplotlib library for our convenience in future applications.
Unfortunately, the original documentation (http://matplotlib.sourceforge.net/api/mlab_api.html#matplotlib.mlab.PCA) is very sparse;
a better documentation can be found here: https://www.clear.rice.edu/comp130/12spring/pca/pca_docs.shtml.
And the original code implementation of the PCA() class can be viewed at:
https://sourcegraph.com/github.com/matplotlib/matplotlib/symbols/python/lib/matplotlib/mlab/PCA
Class attributes of PCA()
Attrs:
a : a centered unit sigma version of input a
numrows, numcols: the dimensions of a
mu : a numdims array of means of a
sigma : a numdims array of standard deviation of a
fracs : the proportion of variance of each of the principal components
Wt : the weight vector for projecting a numdims point or array into PCA space
Y : a projected into PCA space
Also, it has to be mentioned that the PCA() class expects a np.array() as input (where 'we assume data in a is organized with numrows>numcols'), so that we have to transpose our dataset.
matplotlib.mlab.PCA() keeps all $d$-dimensions of the input dataset after the transformation (stored in the class attribute PCA.Y), and assuming that they are already ordered ("Since the PCA analysis orders the PC axes by descending importance in terms of describing the clustering, we see that fracs is a list of monotonically decreasing values.", https://www.clear.rice.edu/comp130/12spring/pca/pca_docs.shtml) we just need to plot the first 2 columns if we are interested in projecting our 3-dimensional input dataset onto a 2-dimensional subspace.
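A quick way to inspect the attributes listed above (a sketch assuming the mlab_pca object from the code cell):
print(mlab_pca.fracs)  # proportion of variance carried by each principal component
print(mlab_pca.mu)     # per-dimension means used for centering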
End of explanation
from sklearn.decomposition import PCA as sklearnPCA
sklearn_pca = sklearnPCA(n_components=2)
sklearn_transf = sklearn_pca.fit_transform(all_samples.T)
plt.plot(sklearn_transf[0:20, 0],sklearn_transf[0:20, 1],
'o', markersize=7, color='blue', alpha=0.5, label='class1')
plt.plot(sklearn_transf[20:40, 0], sklearn_transf[20:40, 1],
'^', markersize=7, color='red', alpha=0.5, label='class2')
plt.xlabel('x_values')
plt.ylabel('y_values')
plt.legend()
plt.title('Transformed samples via sklearn.decomposition.PCA')
plt.show()
Explanation: <br>
<br>
<a name="_diff_mat_pca"></a>
Differences between the step by step approach and matplotlib.mlab.PCA()
When we plot the transformed dataset onto the new 2-dimensional subspace, we observe that the scatter plots from our step by step approach and the matplotlib.mlab.PCA() class do not look identical. This is due to the fact that matplotlib.mlab.PCA() class scales the variables to unit variance prior to calculating the covariance matrices. This will/could eventually lead to different variances along the axes and affect the contribution of the variable to principal components.
One example where a scaling would make sense would be if one variable was measured in the unit inches where the other variable was measured in cm.
However, for our hypothetical example, we assume that both variables have the same (arbitrary) unit, so that we skipped the step of scaling the input data.
<br>
<br>
<a name="sklearn_pca"> </a>
Using the PCA() class from the sklearn.decomposition library to confirm our results
In order to make sure that we have not made a mistake in our step by step approach, we will use another library that doesn't rescale the input data by default.
Here, we will use the PCA class from the scikit-learn machine-learning library. The documentation can be found here:
http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html.
For our convenience, we can directly specify to how many components we want to reduce our input dataset via the n_components parameter.
n_components : int, None or string
Number of components to keep. if n_components is not set all components are kept:
n_components == min(n_samples, n_features)
if n_components == ‘mle’, Minka’s MLE is used to guess the dimension if 0 < n_components < 1,
select the number of components such that the amount of variance that needs to be explained
is greater than the percentage specified by n_components
Next, we just need to use the .fit_transform() in order to perform the dimensionality reduction.
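For comparison with the fracs attribute above, the scikit-learn estimator exposes an equivalent diagnostic (a sketch assuming the fitted sklearn_pca object from the code cell):
print(sklearn_pca.explained_variance_ratio_)  # fraction of variance per retained component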
End of explanation
# sklearn.decomposition.PCA
sklearn_transf *= (-1)
plt.plot(sklearn_transf[0:20, 0], sklearn_transf[0:20, 1] , 'o',
markersize=7, color='blue', alpha=0.5, label='class1')
plt.plot(sklearn_transf[20:40, 0], sklearn_transf[20:40, 1] , '^',
markersize=7, color='red', alpha=0.5, label='class2')
plt.xlabel('x_values')
plt.ylabel('y_values')
plt.legend()
plt.title('Transformed samples via sklearn.decomposition.PCA')
plt.show()
# step by step PCA
plt.plot(transformed[0, 0:20], transformed[1, 0:20],
'o', markersize=7, color='blue', alpha=0.5, label='class1')
plt.plot(transformed[0, 20:40], transformed[1, 20:40],
'^', markersize=7, color='red', alpha=0.5, label='class2')
plt.xlabel('x_values')
plt.ylabel('y_values')
plt.legend()
plt.title('Transformed samples step by step approach')
plt.show()
# sklearn.decomposition.PCA
sklearn_transf *= (-1)
plt.plot(sklearn_transf[0:20, 0], sklearn_transf[0:20, 1] , 'o',
markersize=7, color='blue', alpha=0.5, label='class1')
plt.plot(sklearn_transf[20:40, 0], sklearn_transf[20:40, 1] , '^',
markersize=7, color='red', alpha=0.5, label='class2')
plt.xlabel('x_values')
plt.ylabel('y_values')
plt.legend()
plt.title('Transformed samples via sklearn.decomposition.PCA')
plt.show()
# step by step PCA
transformed = matrix_w.T.dot(all_samples - mean_vector)
plt.plot(transformed[0, 0:20], transformed[1, 0:20],
'o', markersize=7, color='blue', alpha=0.5, label='class1')
plt.plot(transformed[0, 20:40], transformed[1, 20:40],
'^', markersize=7, color='red', alpha=0.5, label='class2')
plt.xlabel('x_values')
plt.ylabel('y_values')
plt.legend()
plt.title('Transformed samples step by step approach, subtracting mean vectors')
plt.show()
Explanation: Depending on your computing environment, you may find that the plot above is the exact mirror image of the plot from our step by step approach. This is due to the fact that the signs of the eigenvectors can be either positive or negative; since the eigenvectors are scaled to the unit length 1, we can simply multiply the transformed data by $\times(-1)$ to revert the mirror image.
Please note that this is not an issue: If $v$ is an eigenvector of a matrix $\Sigma$, we have,
$$\Sigma v = \lambda v,$$
where $\lambda$ is our eigenvalue. Then $-v$ is also an eigenvector that has the same eigenvalue, since
$$\Sigma(-v) = -\Sigma v = -\lambda v = \lambda(-v).$$
Also, see the note in the scikit-learn documentation:
Due to implementation subtleties of the Singular Value Decomposition (SVD), which is used in this implementation, running fit twice on the same matrix can lead to principal components with signs flipped (change in direction). For this reason, it is important to always use the same estimator object to transform data in a consistent fashion.
(http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html)
End of explanation |
4,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using third-party Native Libraries
Sometimes, the functionality you need is only available in third-party native libraries. There's still an opportunity to use them from within Pythran, using Pythran support for capsules.
Pythran Code
The pythran code requires function pointers to the third-party functions, passed as parameters to your pythran routine, as in the following
Step1: In that case libm_cbrt is expected to be a capsule containing the function pointer to libm's cbrt (cube root) function.
This capsule can be created using ctypes
Step2: The capsule is not usable from the Python context (it's some kind of opaque box), but Pythran knows how to use it. Beware, it does not try to do any kind of type verification. It trusts your #pythran export line.
Step3: With Pointers
Now, let's try to use the sincos function. Its C signature is void sincos(double, double*, double*). How do we pass that to Pythran?
Step4: There is some magic happening here
Step5: With Pythran
It is naturally also possible to use capsules generated by Pythran. In that case, no type shenanigans are required; we're in our small world.
One just needs to use the capsule keyword to indicate we want to generate a capsule.
Step6: It's not possible to call the capsule directly, it's an opaque structure.
Step7: It's possible to pass it to the according pythran function though.
Step8: With Cython
The capsule pythran uses may come from Cython-generated code. This uses a little-known feature from cython
Step9: The cythonized module has a special dictionary that holds the capsule we're looking for. | Python Code:
import pythran
%load_ext pythran.magic
%%pythran
#pythran export pythran_cbrt(float64(float64), float64)
def pythran_cbrt(libm_cbrt, val):
return libm_cbrt(val)
Explanation: Using third-party Native Libraries
Sometimes, the functionality you need is only available in third-party native libraries. There's still an opportunity to use them from within Pythran, using Pythran support for capsules.
Pythran Code
The pythran code requires function pointers to the third-party functions, passed as parameters to your pythran routine, as in the following:
End of explanation
import ctypes
# capsulefactory
PyCapsule_New = ctypes.pythonapi.PyCapsule_New
PyCapsule_New.restype = ctypes.py_object
PyCapsule_New.argtypes = ctypes.c_void_p, ctypes.c_char_p, ctypes.c_void_p
# load libm
libm = ctypes.CDLL('libm.so.6')
# extract the proper symbol
cbrt = libm.cbrt
# wrap it
cbrt_capsule = PyCapsule_New(cbrt, "double(double)".encode(), None)
Explanation: In that case libm_cbrt is expected to be a capsule containing the function pointer to libm's cbrt (cube root) function.
This capsule can be created using ctypes:
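A tiny helper (a sketch built on the same PyCapsule_New and libm objects) makes it easy to wrap other symbols the same way; sqrt is used purely as another illustrative double(double) function:
def make_capsule(func, signature):
    return PyCapsule_New(func, signature.encode(), None)
sqrt_capsule = make_capsule(libm.sqrt, "double(double)")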
End of explanation
pythran_cbrt(cbrt_capsule, 8.)
Explanation: The capsule is not usable from the Python context (it's some kind of opaque box), but Pythran knows how to use it. Beware, it does not try to do any kind of type verification. It trusts your #pythran export line.
End of explanation
%%pythran
#pythran export pythran_sincos(None(float64, float64*, float64*), float64)
def pythran_sincos(libm_sincos, val):
import numpy as np
val_sin, val_cos = np.empty(1), np.empty(1)
libm_sincos(val, val_sin, val_cos)
return val_sin[0], val_cos[0]
Explanation: With Pointers
Now, let's try to use the sincos function. Its C signature is void sincos(double, double*, double*). How do we pass that to Pythran?
End of explanation
sincos_capsule = PyCapsule_New(libm.sincos, "unchecked anyway".encode(), None)
pythran_sincos(sincos_capsule, 0.)
Explanation: There is some magic happening here:
None is used to state the function pointer does not return anything.
In order to create pointers, we actually create empty one-dimensional arrays and let pythran handle them as pointers. Beware that you're in charge of all the memory checking stuff!
Apart from that, we can now call our function with the proper capsule parameter.
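A small sanity check of the call (a sketch, assuming the sincos_capsule built above):
import math
s, c = pythran_sincos(sincos_capsule, 0.5)
assert abs(s - math.sin(0.5)) < 1e-12 and abs(c - math.cos(0.5)) < 1e-12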
End of explanation
%%pythran
## This is the capsule.
#pythran export capsule corp((int, str), str set)
def corp(param, lookup):
res, key = param
return res if key in lookup else -1
## This is some dummy callsite
#pythran export brief(int, int((int, str), str set)):
def brief(val, capsule):
return capsule((val, "doctor"), {"some"})
Explanation: With Pythran
It is naturally also possible to use capsules generated by Pythran. In that case, no type shenanigans are required; we're in our small world.
One just needs to use the capsule keyword to indicate we want to generate a capsule.
End of explanation
try:
corp((1,"some"),set())
except TypeError as e:
print(e)
Explanation: It's not possible to call the capsule directly, it's an opaque structure.
End of explanation
brief(1, corp)
Explanation: It's possible to pass it to the according pythran function though.
End of explanation
!find -name 'cube*' -delete
%%file cube.pyx
#cython: language_level=3
cdef api double cube(double x) nogil:
return x * x * x
from setuptools import setup
from Cython.Build import cythonize
_ = setup(
name='cube',
ext_modules=cythonize("cube.pyx"),
zip_safe=False,
# fake CLI call
script_name='setup.py',
script_args=['--quiet', 'build_ext', '--inplace']
)
Explanation: With Cython
The capsule pythran uses may come from Cython-generated code. This uses a little-known feature from cython: api and __pyx_capi__. nogil is of importance here: Pythran releases the GIL, so better not call a cythonized function that uses it.
End of explanation
import sys
sys.path.insert(0, '.')
import cube
print(type(cube.__pyx_capi__['cube']))
cython_cube = cube.__pyx_capi__['cube']
pythran_cbrt(cython_cube, 2.)
Explanation: The cythonized module has a special dictionary that holds the capsule we're looking for.
End of explanation |
4,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Captioning
To perform image captioning we are going to apply an approach similar to the work described in references [1], [2], and [3]. The approach applied here uses a recurrent neural network (RNN) to train a network to generate image captions. The input to the RNN is comprised of a high-level representation of an image and a caption describing it. The Microsoft Common Objects in Context (MSCOCO) data set is used for this because it has many images and five captions for each one in most cases. In the previous section, we learned how to create and train a simple RNN. For this part, we will learn how to concatenate a feature vector that represents an image with its corresponding sentence and feed this into an RNN.
Step1: MSCOCO Captions
We are going to build on our RNN example. First, we will look at the data and evaluate a single image, its captions, and feature vector.
Step2: How can you look at feature maps from the first convolutional layer? Look here if you need a hint.
Step3: How can you look at the response of different layers in your network?
Next, we are going to combine the feature maps with their respective captions. Many of the images have five captions. Run the code below to view the captions for one image.
Step4: A file with feature vectors from 2000 of the MSCOCO images has been created. Next, you will load these and train. Please note this step can take more than 5 minutes to run.
Step5: In the cell above we created three lists, one each for the image_id, feature map, and caption. To verify that the indices of each list are aligned, display the image id and caption for one image.
Step6: The next cell contains functions for queuing our data and the RNN model. What should the output for each function be? If you need a hint look here.
Step7: We can use the function below to estimate how well the network is able to predict the next word in the caption. You can evaluate a single image and its caption from the last batch using the index of the batch. If you need a hint look here.
Please note that depending on the status of the neural network at the time it was saved, incomplete, incoherent, and sometimes inappropriate captions could be generated.
Step8: Questions
[1] Can the show_next_predicted_word function be used for deployment?
Probably not. Can you think of any reason why? Each predicted word is based on the previous ground truth word. In a deployment scenario, we will only have the feature map from our input image.
[2] Can you load your saved network and use it to generate a caption from a validation image?
The validation images are stored in /data/mscoco/val2014. An npy file of the feature vectors is stored at /data/mscoco/val_vgg_16_fc7_100.npy. For a hint on how to add this, look here.
[3] Do you need to calculate the loss or cost when only performing inference?
[4] Do you use dropout when performing inference?
Step9: The cell below will load a feature vector from one of the images in the validation data set and use it with our pretrained network to generate a caption. Use the VALDATA variable to propagate an image through our RNN and generate a caption. You also need to load the network you just created during training. Look here if you need a hint.
Please note that depending on the status of the neural network at the time it was saved, incomplete, incoherent, and sometimes inappropriate captions could be generated. | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import inspect
import time
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
#import reader
import collections
import os
import re
import json
import matplotlib.pyplot as plt
from scipy import ndimage
from scipy import misc
import sys
sys.path.insert(0, '/data/models/slim')
slim=tf.contrib.slim
from nets import vgg
from preprocessing import vgg_preprocessing
%matplotlib inline
!nvidia-smi
Explanation: Image Captioning
To perform image captioning we are going to apply an approach similar to the work described in references [1], [2], and [3]. The approach applied here uses a recurrent neural network (RNN) to train a network to generate image captions. The input to the RNN is comprised of a high-level representation of an image and a caption describing it. The Microsoft Common Objects in Context (MSCOCO) data set is used for this because it has many images and five captions for each one in most cases. In the previous section, we learned how to create and train a simple RNN. For this part, we will learn how to concatenate a feature vector that represents an image with its corresponding sentence and feed this into an RNN.
End of explanation
TRAIN_IMAGE_PATH='/data/mscoco/train2014/'
## Read Training files
with open("/data/mscoco/captions_train2014.json") as data_file:
data=json.load(data_file)
image_feature_vectors={}
tf.reset_default_graph()
one_image=ndimage.imread(TRAIN_IMAGE_PATH+data["images"][0]['file_name'])
#resize for vgg network
resize_img=misc.imresize(one_image,[224,224])
if len(one_image.shape)!= 3: #Check to see if the image is grayscale if True mirror colorband
resize_img=np.asarray(np.dstack((resize_img, resize_img, resize_img)), dtype=np.uint8)
processed_image = vgg_preprocessing.preprocess_image(resize_img, 224, 224, is_training=False)
processed_images = tf.expand_dims(processed_image, 0)
network,endpts= vgg.vgg_16(processed_images, is_training=False)
init_fn = slim.assign_from_checkpoint_fn(os.path.join('/data/mscoco/vgg_16.ckpt'),slim.get_model_variables('vgg_16'))
sess = tf.Session()
init_fn(sess)
NETWORK,ENDPTS=sess.run([network,endpts])
sess.close()
print('fc7 array for a single image')
print(ENDPTS['vgg_16/fc7'][0][0][0])
plt.plot(ENDPTS['vgg_16/fc7'][0][0][0])
plt.xlabel('feature vector index')
plt.ylabel('amplitude')
plt.title('fc7 feature vector')
data["images"][0]['file_name']
Explanation: MSCOCO Captions
We are going to build on our RNN example. First, we will look at the data and evaluate a single image, its captions, and feature vector.
End of explanation
print(ENDPTS['vgg_16/conv1/conv1_1'][0].shape)
FEATUREMAPID=0
print('input image and feature map from conv1')
plt.subplot(1,2,1)
plt.imshow(resize_img)
plt.subplot(1,2,2)
plt.imshow(ENDPTS['vgg_16/conv1/conv1_1'][0][:,:,FEATUREMAPID])
Explanation: How can you look at feature maps from the first convolutional layer? Look here if you need a hint.
End of explanation
CaptionsForOneImage=[]
for k in range(len(data['annotations'])):
if data['annotations'][k]['image_id']==data["images"][0]['id']:
CaptionsForOneImage.append([data['annotations'][k]['caption'].lower()])
plt.imshow(resize_img)
print('MSCOCO captions for a single image')
CaptionsForOneImage
Explanation: How can you look at the response of different layers in your network?
Next, we are going to combine the feature maps with their respective captions. Many of the images have five captions. Run the code below to view the captions for one image.
End of explanation
example_load=np.load('/data/mscoco/train_vgg_16_fc7_2000.npy').tolist()
image_ids=example_load.keys()
#Create 3 lists image_id, feature maps, and captions.
image_id_key=[]
feature_maps_to_id=[]
caption_to_id=[]
for observed_image in image_ids:
for k in range(len(data['annotations'])):
if data['annotations'][k]['image_id']==observed_image:
image_id_key.append([observed_image])
feature_maps_to_id.append(example_load[observed_image])
caption_to_id.append(re.sub('[^A-Za-z0-9]+',' ',data['annotations'][k]['caption']).lower()) #remove punctuation
print('number of images ',len(image_ids))
print('number of captions ',len(caption_to_id))
Explanation: A file with feature vectors from 2000 of the MSCOCO images has been created. Next, you will load these and train. Please note this step can take more than 5 minutes to run.
End of explanation
STRING='%012d' % image_id_key[0][0]
exp_image=ndimage.imread(TRAIN_IMAGE_PATH+'COCO_train2014_'+STRING+'.jpg')
plt.imshow(exp_image)
print('image_id ',image_id_key[:5])
print('the captions for this image ')
print(caption_to_id[:5])
num_steps=20
######################################################################
##Create a list of all of the sentences.
DatasetWordList=[]
for dataset_caption in caption_to_id:
DatasetWordList+=str(dataset_caption).split()
#Determine number of distinct words
distintwords=collections.Counter(DatasetWordList)
#Order words
count_pairs = sorted(distintwords.items(), key=lambda x: (-x[1], x[0])) #most frequent words first
words, occurence = list(zip(*count_pairs))
#DictionaryLength=occurence.index(4) #index for words that occur 4 times or less
words=['PAD','UNK','EOS']+list(words)#[:DictionaryLength])
word_to_id=dict(zip(words, range(len(words))))
##################### Tokenize Sentence #######################
Tokenized=[]
for full_words in caption_to_id:
EmbeddedSentence=[word_to_id[word] for word in full_words.split() if word in word_to_id]+[word_to_id['EOS']]
#Pad sentences that are shorter than the number of steps
if len(EmbeddedSentence)<num_steps:
b=[word_to_id['PAD']]*num_steps
b[:len(EmbeddedSentence)]=EmbeddedSentence
if len(EmbeddedSentence)>num_steps:
b=EmbeddedSentence[:num_steps]
if len(EmbeddedSentence)==num_steps:
b=EmbeddedSentence
#b=[word_to_id['UNK'] if x>=DictionaryLength else x for x in b] #turn all words used 4 times or less to 'UNK'
#print(b)
Tokenized+=[b]
print("Number of words in this dictionary ", len(words))
#Tokenized Sentences
Tokenized[::2000]
Explanation: In the cell above we created three lists, one each for the image_id, feature map, and caption. To verify that the indices of each list are aligned, display the image id and caption for one image.
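A short sketch of that alignment check (assuming the three lists built in the cell above): collect every caption whose index maps to the same image id:
query_id = image_id_key[0][0]
matching = [i for i, entry in enumerate(image_id_key) if entry[0] == query_id]
print('captions stored for image', query_id)
for i in matching:
    print(' -', caption_to_id[i])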
End of explanation
def data_queue(caption_input,feature_vector,batch_size,):
train_input_queue = tf.train.slice_input_producer(
[caption_input, np.asarray(feature_vector)],num_epochs=10000,
shuffle=True) #False before
##Set our train data and label input shape for the queue
TrainingInputs=train_input_queue[0]
FeatureVectors=train_input_queue[1]
TrainingInputs.set_shape([num_steps])
FeatureVectors.set_shape([len(feature_vector[0])]) #fc7 is 4096
min_after_dequeue=1000000
capacity = min_after_dequeue + 3 * batch_size
#input_x, target_y
tokenized_caption, input_feature_map = tf.train.batch([TrainingInputs, FeatureVectors],
batch_size=batch_size,
capacity=capacity,
num_threads=6)
return tokenized_caption,input_feature_map
def rnn_model(Xconcat,input_keep_prob,output_keep_prob,num_layers,num_hidden):
#Create a multilayer RNN
#reuse=False for training but reuse=True for sharing
layer_cell=[]
for _ in range(num_layers):
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=num_hidden, state_is_tuple=True)
lstm_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell,
input_keep_prob=input_keep_prob,
output_keep_prob=output_keep_prob)
layer_cell.append(lstm_cell)
cell = tf.contrib.rnn.MultiRNNCell(layer_cell, state_is_tuple=True)
outputs, last_states = tf.contrib.rnn.static_rnn(
cell=cell,
dtype=tf.float32,
inputs=tf.unstack(Xconcat))
output_reshape=tf.reshape(outputs, [batch_size*(num_steps),num_hidden]) #[12==batch_size*num_steps,num_hidden==12]
pred=tf.matmul(output_reshape, variables_dict["weights_mscoco"]) +variables_dict["biases_mscoco"]
return pred
tf.reset_default_graph()
#######################################################################################################
# Parameters
num_hidden=2048
num_steps=num_steps
dict_length=len(words)
batch_size=4
num_layers=2
train_lr=0.00001
#######################################################################################################
TrainingInputs=Tokenized
FeatureVectors=feature_maps_to_id
## Variables ##
# Learning rate placeholder
lr = tf.placeholder(tf.float32, shape=[])
#tf.get_variable_scope().reuse_variables()
variables_dict = {
"weights_mscoco":tf.Variable(tf.truncated_normal([num_hidden,dict_length],
stddev=1.0,dtype=tf.float32),name="weights_mscoco"),
"biases_mscoco": tf.Variable(tf.truncated_normal([dict_length],
stddev=1.0,dtype=tf.float32), name="biases_mscoco")}
tokenized_caption, input_feature_map=data_queue(TrainingInputs,FeatureVectors,batch_size)
mscoco_dict=words
TrainInput=tf.constant(word_to_id['PAD'],shape=[batch_size,1],dtype=tf.int32)
#Pad the beginning of our caption. The first step now only has the image feature vector. Drop the last time step
#so that the number of time steps stays at num_steps (20)
TrainInput=tf.concat([tf.constant(word_to_id['PAD'],shape=[batch_size,1],dtype=tf.int32),
tokenized_caption],1)[:,:-1]
X_one_hot=tf.nn.embedding_lookup(np.identity(dict_length), TrainInput) #[batch,num_steps,dictionary_length][2,6,7]
#ImageFeatureTensor=input_feature_map
Xconcat=tf.concat([input_feature_map+tf.zeros([num_steps,batch_size,4096]),
tf.unstack(tf.to_float(X_one_hot),num_steps,1)],2)#[:num_steps,:,:]
pred=rnn_model(Xconcat,1.0,1.0,num_layers,num_hidden)
#the full caption is the target sentence
y_one_hot=tf.unstack(tf.nn.embedding_lookup(np.identity(dict_length), tokenized_caption),num_steps,1) #[batch,num_steps,dictionary_length][2,6,7]
y_target_reshape=tf.reshape(y_one_hot,[batch_size*num_steps,dict_length])
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y_target_reshape))
optimizer = tf.train.MomentumOptimizer(lr,0.9)
gvs = optimizer.compute_gradients(cost,aggregation_method = tf.AggregationMethod.EXPERIMENTAL_TREE)
capped_gvs = [(tf.clip_by_value(grad, -10., 10.), var) for grad, var in gvs]
train_op=optimizer.apply_gradients(capped_gvs)
saver = tf.train.Saver()
init_op = tf.group(tf.global_variables_initializer(),tf.local_variables_initializer())
with tf.Session() as sess:
sess.run(init_op)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
#Load a pretrained network
saver.restore(sess, '/data/mscoco/rnn_layermodel_iter40000')
print('Model restored from file')
for i in range(100):
loss,y_pred,target_caption,_=sess.run([cost,pred,tokenized_caption,train_op],feed_dict={lr:train_lr})
if i% 10==0:
print("iteration: ",i, "loss: ",loss)
MODEL_NAME='rnn_model_iter'+str(i)
saver.save(sess, MODEL_NAME)
print('saved trained network ',MODEL_NAME)
print("Done Training")
coord.request_stop()
coord.join(threads)
sess.close()
Explanation: The next cell contains functions for queuing our data and the RNN model. What should the output for each function be? If you need a hint look here.
End of explanation
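One way to answer the shape question above is to inspect the static shapes of the tensors that were just built (a quick check, not part of the original lab):
# Static shapes implied by the definitions above (batch_size=4, num_steps=20)
print(tokenized_caption.get_shape())    # data_queue output: (batch_size, num_steps)
print(input_feature_map.get_shape())    # data_queue output: (batch_size, 4096)
print(pred.get_shape())                 # rnn_model output:  (batch_size*num_steps, dict_length) logits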
def show_next_predicted_word(batch_id,batch_size,id_of_image,target_caption,predicted_caption,words,PATH):
Target=[words[ind] for ind in target_caption[batch_id]]
Prediction_Tokenized=np.argmax(predicted_caption[batch_id::batch_size],1)
Prediction=[words[ind] for ind in Prediction_Tokenized]
STRING2='%012d' % id_of_image
img=ndimage.imread(PATH+STRING2+'.jpg')
return Target,Prediction,img,STRING2
#You can change the batch id to a number between [0 , batch_size-1]
batch_id=0
image_id_for_predicted_caption=[x for x in range(len(Tokenized)) if target_caption[batch_id].tolist()== Tokenized[x]][0]
t,p,input_img,string_out=show_next_predicted_word(batch_id,batch_size,image_id_key[image_id_for_predicted_caption][0]
,target_caption,y_pred,words,TRAIN_IMAGE_PATH+'COCO_train2014_')
print('Caption')
print(t)
print('Predicted Words')
print(p)
plt.imshow(input_img)
Explanation: We can use the function below to estimate how well the network is able to predict the next word in the caption. You can evaluate a single image and its caption from the last batch using the index of the batch. If you need a hint look here.
Please note that depending on the status of the neural network at the time it was saved, incomplete, incoherent, and sometimes inappropriate captions could be generated.
End of explanation
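As a rough quantitative complement to the visual check (a sketch: it reuses the slicing convention of show_next_predicted_word and the y_pred/target_caption arrays produced by the last training run):
import numpy as np
# Token-level accuracy for one batch item (PAD positions are counted too)
pred_ids = np.argmax(y_pred[batch_id::batch_size], 1)
acc = np.mean(pred_ids == np.asarray(target_caption[batch_id]))
print("next-word accuracy for batch item %d: %.2f" % (batch_id, acc))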
##Load and test our test set
val_load=np.load('/data/mscoco/val_vgg_16_fc7_100.npy').tolist()
val_ids=val_load.keys()
#Create 3 lists image_id, feature maps, and captions.
val_id_key=[]
val_map_to_id=[]
val_caption_to_id=[]
for observed_image in val_ids:
val_id_key.append([observed_image])
val_map_to_id.append(val_load[observed_image])
print('number of images ',len(val_ids))
print('number of captions ',len(val_map_to_id))
Explanation: Questions
[1] Can the show_next_predicted_word function be used for deployment?
Probably not. Can you think of any reason why? Each predicted word is based on the previous ground truth word. In a deployment scenario, we will only have the feature map from our input image.
[2] Can you load your saved network and use it to generate a caption from a validation image?
The validation images are stored in /data/mscoco/val2014. A npy file of the feature vectors is stored /data/mscoco/val_vgg_16_fc7_100.npy. For a hint on how to add this look here.
[3] Do you need to calculate the loss or cost when only performing inference?
[4] Do you use dropout when performing inference?
End of explanation
tf.reset_default_graph()
batch_size=1
num_steps=20
print_topn=0 #0for do not display
printnum0f=3
#Choose a image to caption
VALDATA=54 #ValImage fc7 feature vector
variables_dict = {
"weights_mscoco":tf.Variable(tf.truncated_normal([num_hidden,dict_length],
stddev=1.0,dtype=tf.float32),name="weights_mscoco"),
"biases_mscoco": tf.Variable(tf.truncated_normal([dict_length],
stddev=1.0,dtype=tf.float32), name="biases_mscoco")}
StartCaption=np.zeros([batch_size,num_steps],dtype=np.int32).tolist()
CaptionPlaceHolder = tf.placeholder(dtype=tf.int32, shape=(batch_size , num_steps))
ValFeatureMap=val_map_to_id[VALDATA]
X_one_hot=tf.nn.embedding_lookup(np.identity(dict_length), CaptionPlaceHolder) #[batch,num_steps,dictionary_length][2,6,7]
#ImageFeatureTensor=input_feature_map
Xconcat=tf.concat([ValFeatureMap+tf.zeros([num_steps,batch_size,4096]),
tf.unstack(tf.to_float(X_one_hot),num_steps,1)],2)#[:num_steps,:,:]
pred=rnn_model(Xconcat,1.0,1.0,num_layers,num_hidden)
pred=tf.nn.softmax(pred)
saver = tf.train.Saver()
init_op = tf.group(tf.global_variables_initializer(),tf.local_variables_initializer())
with tf.Session() as sess:
sess.run(init_op)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
#Load a pretrained network
saver.restore(sess, 'rnn_model_iter99')
print('Model restored from file')
for i in range(num_steps-1):
predict_next_word=sess.run([pred],feed_dict={CaptionPlaceHolder:StartCaption})
INDEX=np.argmax(predict_next_word[0][i])
StartCaption[0][i+1]=INDEX
##Post N most probable next words at each step
if print_topn !=0:
print("Top ",str(printnum0f), "predictions for the", str(i+1), "word in the predicted caption" )
result_args = np.argsort(predict_next_word[0][i])[-printnum0f:][::-1]
NextWord=[words[x] for x in result_args]
print(NextWord)
coord.request_stop()
coord.join(threads)
sess.close()
STRING2='%012d' % val_id_key[VALDATA][0]
img=ndimage.imread('/data/mscoco/val2014/COCO_val2014_'+STRING2+'.jpg')
plt.imshow(img)
plt.title('COCO_val2014_'+STRING2+'.jpg')
PredictedCaption=[words[x] for x in StartCaption[0]]
print("predicted sentence: ",PredictedCaption[1:])
#Free our GPU memory before proceeding to the next part of the lab
import os
os._exit(00)
Explanation: The cell below will load a feature vector from one of the images in the validation data set and use it with our pretrained network to generate a caption. Use the VALDATA variable to propagate an image through our RNN and generate a caption. You also need to load the network you just created during training. Look here if you need a hint.
Please note that depending on the status of the neural network at the time it was saved, incomplete, incoherent, and sometimes inappropriate captions could be generated.
End of explanation |
4,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
REPL Basics
<a href="http
Step1: Persistent Storage
NOTE
Step2: Help
To get help for the various classes and their respective methods, run
Step3: To get help on a specific method in that class, you can pass that in as an argument
Step4: Log Levels
By default on boot, the log level is set to logging.ERROR. If you would like to see more debug messages, you can set the logging level by doing so
Step5: To set it back
Step6: Pretty Printing
The Matter REPL leverages the rich Python package heavily to do pretty printing of various data structures with appropriate colored formatting.
This pretty printer is installed by default into the REPL environment | Python Code:
import chip.native
import pkgutil
module = pkgutil.get_loader('chip.ChipReplStartup')
%run {module.path}
Explanation: REPL Basics
<a href="http://35.236.121.59/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fproject-chip%2Fconnectedhomeip&urlpath=lab%2Ftree%2Fconnectedhomeip%2Fdocs%2Fguides%2Frepl%2FMatter%2520-%2520REPL%2520Intro.ipynb&branch=master">
<img src="https://i.ibb.co/hR3yWsC/launch-playground.png" alt="drawing" width="130"/>
</a>
<br></br>
This goes over the basics of interacting with the REPL.
Initialization
Let's first begin by setting up by importing some key modules that are needed to make it easier for us to interact with the Matter stack.
ChipReplStartup.py is run within the global namespace. This results in all of its imports being made available here.
NOTE: This is not needed if you launch the REPL from the command-line.
End of explanation
import chip.native
import pkgutil
module = pkgutil.get_loader('chip.ChipReplStartup')
%run {module.path} --storagepath /tmp/repl.json
Explanation: Persistent Storage
NOTE: By default, the REPL points to /tmp/repl-storage.json for its persistent storage. To change that location, you can pass that in directly as follows:
End of explanation
matterhelp()
Explanation: Help
To get help for the various classes and their respective methods, run:
End of explanation
matterhelp(devCtrl.SendCommand)
Explanation: To get help on a specific method in that class, you can pass that in as an argument:
End of explanation
mattersetlog(logging.DEBUG)
Explanation: Log Levels
By default on boot, the log level is set to logging.ERROR. If you would like to see more debug messages, you can set the logging level by doing so:
End of explanation
mattersetlog(logging.WARNING)
Explanation: To set it back:
End of explanation
a = {'value': [1, 2, 3, 4, [1, 2]]}
a
Explanation: Pretty Printing
The Matter REPL leverages the rich Python package heavily to do pretty printing of various data structures with appropriate colored formatting.
This pretty printer is installed by default into the REPL environment:
End of explanation |
4,241 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q1
Step1: astropy convolution
How do you convolve fast?
see, e.g., http
Step2: Speed of DFT
Step3: faster fftw
Step4: Q3
Install a module, then keep editing it.
python setup.py develop
Use https | Python Code:
import numpy as np
import StringIO     # Python 2 module; on Python 3, io.StringIO plays the same role
x = StringIO.StringIO()
arr = np.arange(10)
np.savetxt(x,arr, header='test', comments="")
x.seek(0)
print(x.read())
with open('file.txt','w') as f:
f.write(x.getvalue())
%%bash
cat file.txt
Explanation: Q1:
Saving a table to text with a header with no preceding "#"
Also, demo StringIO
End of explanation
from astropy.convolution import convolve, convolve_fft
Explanation: astropy convolution
How do you convolve fast?
see, e.g., http://keflavich.github.io/blog/fft-comparisons-in-python.html
End of explanation
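A minimal comparison of the two call signatures (an illustrative sketch; the image size, kernel width and tolerance are arbitrary choices, not taken from the linked benchmark):
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve, convolve_fft

img = np.random.randn(256, 256)
kernel = Gaussian2DKernel(3)            # Gaussian kernel with a 3-pixel standard deviation
direct = convolve(img, kernel)          # direct convolution
fourier = convolve_fft(img, kernel)     # FFT-based convolution, faster for large kernels
print(np.abs(direct - fourier).max())   # the two agree up to edge treatment / round-off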
import scipy.fftpack, scipy.ndimage, scipy.signal
scipy.ndimage.convolve
scipy.signal.fftconvolve??
%%bash
factor 9216
Explanation: Speed of DFT: $O(n^2)$
Speed of FFT: $O(n \log n)$
End of explanation
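A quick back-of-the-envelope comparison for n = 9216 (the length factored above) makes the difference concrete:
import math
n = 9216
dft_ops = n**2                      # O(n^2) direct DFT
fft_ops = n * math.log(n, 2)        # O(n log n) FFT
print("DFT ~ %.2e operations" % dft_ops)
print("FFT ~ %.2e operations" % fft_ops)
print("ratio ~ %.0f x" % (dft_ops / fft_ops))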
x = np.random.randn(64) + 5 + np.sin(np.arange(64))*3
f = np.fft.fft(x)
%matplotlib inline
import pylab as pl
pl.plot(x)
pl.plot(np.abs(f))
f[0] = 0
f[10] = 0
f[64-10] = 0
pl.plot(np.abs(f))
xi = np.fft.ifft(f)
pl.plot(xi.real)
pl.plot(x)
Explanation: faster fftw: --enable-avx for "advanced vector instructions". 8x FLOPs at a time!
What does it mean to "remove" fft modes?
End of explanation
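A minimal, self-contained illustration of what zeroing ("removing") a mode does — here the 5-cycle component is knocked out while the 10-cycle component survives:
import numpy as np
N = 64
t = np.arange(N)
sig = np.sin(2*np.pi*5*t/N) + 0.5*np.sin(2*np.pi*10*t/N)
F = np.fft.fft(sig)
F[5] = F[N-5] = 0                  # zero the bin and its complex conjugate
filtered = np.fft.ifft(F).real     # only the 10-cycle sinusoid remains
print(np.allclose(filtered, 0.5*np.sin(2*np.pi*10*t/N)))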
%%bash
cd ~/repos/astropy
ls
ls build/
Explanation: Q3
Install a module, then keep editing it.
python setup.py develop
Use https://github.com/astropy/package-template to get everything set up in a cool way. develop doesn't do much good for C code.
Within an interactive session, use reload(package) (python2) or import importlib; importlib.reload(package) (python3) to reload the package. This is finicky.
Other option, which works with C extensions: python setup.py build_ext --inplace. Or you can use python setup.py build to build into the build/ directory, which will then be accessible using import if you've used python setup.py develop
End of explanation |
4,242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
如何爬取Facebook粉絲頁資料 (comments) ?
基本上是透過 Facebook Graph API 去取得粉絲頁的資料,但是使用 Facebook Graph API 還需要取得權限,有兩種方法
Step1: 第一步 - 要先取得應用程式的帳號,密碼 (app_id, app_secret)
第二步 - 輸入要分析的粉絲團的 id
[教學]如何申請建立 Facebook APP ID 應用程式ID
Step2: 這篇是承襲上一篇 fb粉絲團分析and輸出報告-抓取篇(posts)
上一篇提到說,會爬取每一則粉絲頁po文,而本文的目標就是在把該post裡面的留言
都給爬取出來,所以會需要用到每一個post的id,也就是說需要上一篇的檔案才有辦法爬取
基本上由4個function來完成:
request_until_succeed
來確保完成爬取
getFacebookCommentFeedData
來產生comment的各種資料(message,like_count,created_time,comments,from...)
processFacebookComment
是處理getFacebookPageFeedData得到的各種資料,把它們結構化
scrapeFacebookPageFeedComments
主程式
Step3: url = base + node + fields + parameters
base
Step4: 主程式概念是這樣的,每一個posts會用while迴圈來把所有的留言都爬出來
而每一個留言又會有另一個while迴圈把回覆留言的留言再爬出來,所以總共有兩個while迴圈
Step5: 總共跑完有690628筆,106Mb,要花十幾個小時
all_statuses[0] 為 column name
all_statuses[1 | Python Code:
# 載入python 套件
import requests
import datetime
import time
import pandas as pd
Explanation: 如何爬取Facebook粉絲頁資料 (comments) ?
基本上是透過 Facebook Graph API 去取得粉絲頁的資料,但是使用 Facebook Graph API 還需要取得權限,有兩種方法 :
第一種是取得 Access Token
第二種是建立 Facebook App的應用程式,用該應用程式的帳號,密碼當作權限
兩者的差別在於第一種會有時效限制,必須每隔一段時間去更新Access Token,才能使用
Access Token
本文是採用第二種方法
要先取得應用程式的帳號,密碼 app_id, app_secret
End of explanation
# 粉絲頁的id
page_id = "appledaily.tw"
# 應用程式的帳號,密碼
app_id = ""
app_secret = ""
# 上一篇爬取的post的csv檔案
post_path = 'post/'+page_id+'_post.csv'
access_token = app_id + "|" + app_secret
Explanation: 第一步 - 要先取得應用程式的帳號,密碼 (app_id, app_secret)
第二步 - 輸入要分析的粉絲團的 id
[教學]如何申請建立 Facebook APP ID 應用程式ID
End of explanation
# 判斷response有無正常 正常 200,若無隔五秒鐘之後再試
def request_until_succeed(url):
success = False
while success is False:
try:
req = requests.get(url)
if req.status_code == 200:
success = True
if req.status_code == 400:
return None
except Exception as e:
print(e)
time.sleep(5)
print("Error for URL %s: %s" % (url, datetime.datetime.now()))
print("Retrying.")
return req
Explanation: 這篇是承襲上一篇 fb粉絲團分析and輸出報告-抓取篇(posts)
上一篇提到說,會爬取每一則粉絲頁po文,而本文的目標就是在把該post裡面的留言
都給爬取出來,所以會需要用到每一個post的id,也就是說需要上一篇的檔案才有辦法爬取
基本上由4個function來完成:
request_until_succeed
來確保完成爬取
getFacebookCommentFeedData
來產生comment的各種資料(message,like_count,created_time,comments,from...)
processFacebookComment
是處理getFacebookPageFeedData得到的各種資料,把它們結構化
scrapeFacebookPageFeedComments
主程式
End of explanation
def getFacebookCommentFeedData(status_id, access_token, num_comments):
base = "https://graph.facebook.com/v2.6"
node = "/%s/comments" % status_id
fields = "?fields=id,message,like_count,created_time,comments,from,attachment"
parameters = "&order=chronological&limit=%s&access_token=%s" % \
(num_comments, access_token)
url = base + node + fields + parameters
# 取得data
data = request_until_succeed(url)
if data is None:
return None
else:
return data.json()
def processFacebookComment(comment, status_id, parent_id = ''):
# 確認資料欄位是否有值,並做處理
comment_id = comment['id']
comment_author = comment['from']['name']
if 'message' not in comment:
comment_message = ''
else:
comment_message = comment['message']
if 'like_count' not in comment:
comment_likes = 0
else:
comment_likes = comment['like_count']
if 'attachment' in comment:
attach_tag = "[[%s]]" % comment['attachment']['type'].upper()
if comment_message is '':
comment_message = attach_tag
else:
comment_message = (comment_message+ " " +attach_tag)
comment_published = datetime.datetime.strptime(comment['created_time'],'%Y-%m-%dT%H:%M:%S+0000')
# 根據所在時區 TW +8
comment_published = comment_published + datetime.timedelta(hours=8)
comment_published = comment_published.strftime('%Y-%m-%d %H:%M:%S')
# 回傳tuple形式的資料
return (comment_id, status_id, parent_id, comment_message, comment_author,
comment_published, comment_likes)
Explanation: url = base + node + fields + parameters
base : 可以設定Facebook Graph API的版本,這邊設定v2.6
node : 分析哪個粉絲頁的post 由page_id去設定
fields : 你要取得資料的種類
parameters : 權限設定和每次取多少筆(num_statuses)
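For illustration, the assembled request URL looks like this (the status id and token below are placeholders, not real values):
# Example of the assembled Graph API request URL (placeholder values only)
example_base = "https://graph.facebook.com/v2.6"
example_node = "/%s/comments" % "1234567890_9876543210"
example_fields = "?fields=id,message,like_count,created_time,comments,from,attachment"
example_parameters = "&order=chronological&limit=%s&access_token=%s" % (100, "APP_ID|APP_SECRET")
print(example_base + example_node + example_fields + example_parameters)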
End of explanation
def scrapeFacebookPageFeedComments(page_id, access_token, post_path):
# all_statuses 用來儲存的list,先放入欄位名稱
all_comments = [("comment_id", "status_id", "parent_id", "comment_message",
"comment_author", "comment_published", "comment_likes")]
num_processed = 0 # 計算處理多少post
scrape_starttime = datetime.datetime.now()
print("Scraping %s Comments From Posts: %s\n" % (page_id, scrape_starttime))
post_df = pd.read_csv(post_path)
for status_id in post_df['status_id']:
has_next_page = True
comments = getFacebookCommentFeedData(status_id, access_token, 100)
while has_next_page and comments is not None:
for comment in comments['data']:
all_comments.append(processFacebookComment(comment, status_id))
if 'comments' in comment:
has_next_subpage = True
subcomments = getFacebookCommentFeedData(comment['id'], access_token, 100)
while has_next_subpage:
for subcomment in subcomments['data']:
all_comments.append(processFacebookComment(
subcomment,
status_id,
comment['id']))
num_processed += 1
if num_processed % 1000 == 0:
print("%s Comments Processed: %s" %
(num_processed,
datetime.datetime.now()))
if 'paging' in subcomments:
if 'next' in subcomments['paging']:
data = request_until_succeed(subcomments['paging']['next'])
if data != None:
subcomments = data.json()
else:
subcomments = None
else:
has_next_subpage = False
else:
has_next_subpage = False
num_processed += 1
if num_processed % 1000 == 0:
print("%s Comments Processed: %s" %
(num_processed, datetime.datetime.now()))
if 'paging' in comments:
if 'next' in comments['paging']:
data = request_until_succeed(comments['paging']['next'])
if data != None:
comments = data.json()
else:
comments = None
else:
has_next_page = False
else:
has_next_page = False
print("\nDone!\n%s Comments Processed in %s" %
(num_processed, datetime.datetime.now() - scrape_starttime))
return all_comments
Explanation: 主程式概念是這樣的,每一個posts會用while迴圈來把所有的留言都爬出來
而每一個留言又會有另一個while迴圈把回覆留言的留言再爬出來,所以總共有兩個while迴圈
End of explanation
all_comments = scrapeFacebookPageFeedComments(page_id, access_token, post_path)
df = pd.DataFrame(all_comments[1:], columns=all_comments[0])
path = 'comment/'+page_id+'_comment.csv'
df.to_csv(path,index=False,encoding='utf8')
Explanation: 總共跑完有690628筆,106Mb,要花十幾個小時
all_statuses[0] 為 column name
all_statuses[1:] 為處理後結構化的資料
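A quick sanity check on the saved file once the (long) scrape above has finished — a sketch that only assumes the CSV written by the cell above:
df_check = pd.read_csv('comment/' + page_id + '_comment.csv')
print(df_check.shape)                                   # total number of comments scraped
print(df_check.groupby('status_id').size().describe())  # comments per post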
End of explanation |
4,243 | Given the following text description, write Python code to implement the functionality described.
Description:
Area of the biggest possible rhombus that can be inscribed in a rectangle
Function to find the area of the biggest rhombus ; the length and breadth cannot be negative ; area of the rhombus ; Driver code
| Python Code:
def rhombusarea(l, b):
    # the length and breadth cannot be negative
    if l < 0 or b < 0:
        return -1
    # the largest inscribed rhombus joins the midpoints of the rectangle's sides,
    # so its diagonals are l and b and its area is (l * b) / 2
    return (l * b) / 2

# Driver code
if __name__ == '__main__':
    l = 16
    b = 6
    print(rhombusarea(l, b))
|
4,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercício 01
Step1: Exercício 02
Step2: Exercício 03
Step3: Exercício 04 | Python Code:
G2 = nx.barabasi_albert_graph(6,3)
nx.draw_shell(G2)
pos = nx.shell_layout(G2)
labels = dict( enumerate(G2.nodes()) )
nx.draw_networkx_labels(G2,pos,labels,font_size=16);
print "Dist. media: ", nx.average_shortest_path_length(G2)
print "Diametro: ", nx.diameter(G2)
print "Coef. Agrupamento médio: ", nx.average_clustering(G2)
Explanation: Exercise 01: Compute the average distance, the diameter and the average clustering coefficient of the networks below.
End of explanation
print "Centralidades de grau:"
for ni,dc in nx.degree_centrality(G2).items():
print ni, dc
print "Centralidades de proximidade:"
for ni,dc in nx.closeness_centrality(G2).items():
print ni, dc
print "Centralidades de betweenness:"
for ni,dc in nx.betweenness_centrality(G2).items():
print ni, dc
Explanation: Exercise 02: Compute the degree, betweenness and closeness centralities of the nodes of the networks below:
End of explanation
from community import *
partition = dict( [(2,0),(3,0),(4,0), (1,1),(5,1),(0,1)] )
print "Modularidade: ", modularity(partition,G2)
Explanation: Exercise 03: Compute the modularity for the following partition
Partition 1: nodes 2, 3 and 4
Partition 2: nodes 0, 1 and 5
End of explanation
print "Assortatividade: ", nx.degree_assortativity_coefficient(G2)
Explanation: Exercise 04: Compute the degree assortativity of the network
End of explanation |
4,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Definition(s)
The Karatsuba algorithm is a fast multiplication algorithm.
It reduces the multiplication of two n-digit numbers to at most ${\displaystyle n^{\log _{2}3}\approx n^{1.585}}$ single-digit multiplications in general.
Step1: Conversion utility functions
Step2: Multiplication utility functions
Step3: Karatsuba's algorithm
Step4: Multiplication and testing
Step5: Generate big integers
Step6: Run(s)
Step7: Karatsuba multiplication using Baruchel's implementation
Karatsuba's algorithm is already implemented in Python. Check this package. | Python Code:
import numpy as np # used for generating random numbers
Explanation: Definition(s)
The Karatsuba algorithm is a fast multiplication algorithm.
It reduces the multiplication of two n-digit numbers to at most ${\displaystyle n^{\log _{2}3}\approx n^{1.585}}$ single-digit multiplications in general.
End of explanation
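The algorithm rests on a simple identity — three sub-products instead of four. A quick numeric check of it (illustrative values), using the same z0/z1/z2 naming as the implementation below:
# x*y == z1*B**2 + z2*B + z0, with z2 obtained from a single extra multiplication
B = 10**4
x, y = 12345678, 87654321
x1, x0 = divmod(x, B)
y1, y0 = divmod(y, B)
z0 = x0 * y0
z1 = x1 * y1
z2 = (x0 + x1) * (y0 + y1) - z0 - z1
assert x * y == z1 * B**2 + z2 * B + z0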
def int_to_big(x):
if x == 0:
return [0]
z = []
while x > 0:
t = x % 10
z.append(t)
x //= 10
trim(z)
return z
def big_to_int(x):
z, p = 0, 1
for d in x:
z += p * d
p *= 10
return z
Explanation: Conversion utility functions
End of explanation
from itertools import zip_longest
def trim(z):
while len(z) > 1 and z[-1] == 0:
z.pop(-1)
def add(x, y):
z, carry = [], 0
for r, s in zip_longest(x, y, fillvalue=0):
carry += r + s
z.append(carry % 10)
carry //= 10
if carry:
z.append(carry)
return z
def subtract(x, y):
z, carry = [], 0
for r, s in zip_longest(x, y, fillvalue=0):
carry += r - s
z.append(carry % 10)
carry //= 10
trim(z)
return z
Explanation: Multiplication utility functions
End of explanation
def karatsuba(x, y):
# ensure same length
while len(x) < len(y):
x.append(0)
while len(x) > len(y):
y.append(0)
# length
n = len(x)
half = n // 2
if n == 1:
return add([x[0] * y[0]], [])
# cut-off for improved efficiency
if n <= 50:
a = big_to_int(x)
b = big_to_int(y)
z = a * b
return int_to_big(z)
x0, x1 = x[:half], x[half:]
y0, y1 = y[:half], y[half:]
# x = x0x1
# y = y0y1
# z0 = x0 * y0
# z1 = x1 * y1
# z2 = (x0 + x1) * (y0 + y1)
# z2 = z2 - (z0 + z1)
z0 = karatsuba(x0, y0)
z1 = karatsuba(x1, y1)
z2 = karatsuba(add(x0, x1), add(y0, y1))
z2 = subtract(z2, add(z0, z1))
z = add(z0, [0] * (half << 1) + z1)
z = add(z, [0] * half + z2)
return z
Explanation: Karatsuba's algorithm
End of explanation
def multiply(x, y):
xb = int_to_big(x)
yb = int_to_big(y)
zb = karatsuba(xb, yb)
return big_to_int(zb)
def test(x, y):
z = multiply(x, y)
assert x * y == z
print("{} x {} = {}".format(x, y, z))
Explanation: Multiplication and testing
End of explanation
def gen_long(n):
x = ''.join(map(str, np.random.randint(0, 10, n)))
return int(x)
Explanation: Generate big integers
End of explanation
test(1432423423420, 12321312332131233)
test(8931283129323420, 1233123602345430533)
tests = 30
for _ in range(tests):
n = np.random.randint(1, 15)
x, y = gen_long(n), gen_long(n)
test(int(x), int(y))
%%time
a, b = gen_long(1000), gen_long(1000)
z = multiply(a, b)
assert z == a * b
%%time
a, b = gen_long(20000), gen_long(20000)
z = multiply(a, b)
assert z == a * b
Explanation: Run(s)
End of explanation
from karatsuba import *
def power_of_two(x):
p = 1
while p < x:
p <<= 1
return p
def reverse(num):
return int(str(num)[::-1])
def kat_multiply(x, y):
if x == 0 or y == 0:
return 0
xs = list(map(int, str(x)))
ys = list(map(int, str(y)))
n = power_of_two(max(len(xs), len(ys)))
plan = make_plan(range(n), range(n))
xs = [0] * (n - len(xs)) + xs
ys = [0] * (n - len(ys)) + ys
zs = plan(xs, ys)
zs.pop(-1)
zs = list(reversed(zs))
while zs[-1] == 0:
zs.pop(-1)
ans = 0
for p, d in enumerate(zs):
ans += d * 10 ** p
return ans
tests = 30
for _ in range(tests):
n = np.random.randint(1, 15)
x, y = gen_long(n), gen_long(n)
z = kat_multiply(x, y)
assert z == x * y
print("{} x {} = {}".format(x, y, z))
%%time
a, b = gen_long(100), gen_long(100)
z = kat_multiply(a, b)
assert z == a * b
Explanation: Karatsuba multiplication using Baruchel's implementation
Karatsuba's algorithm is already implemented in Python. Check this package.
End of explanation |
4,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Two Topics Coupled example
Import the Python packages we need to run and plot the game
Step1: Set up inline matplotlib
Step2: Import Game Modules From a Given Path
User have to edit the path and put the correct one on his/her machine.
Step3: Setting Up Game Parameters
Step4: seed PRNG
Step5: Set up the state of the system
State of the system includes
Step6: User Defined States and parameters Can go in the following cell
Step7: Plot the experiment done above
Step8: Skew Uniqueness Tendency Driver
Step10: Initiate State | Python Code:
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import matplotlib.image as mpimg
from matplotlib import rcParams
import seaborn as sb
Explanation: Two Topics Coupled example
Import the Python packages we need to run and plot the game
End of explanation
%matplotlib inline
rcParams['figure.figsize'] = 5, 4
sb.set_style('whitegrid')
Explanation: Set up inline matplotlib
End of explanation
import sys
# search path for modules
sys.path.append('/Users/hn/Documents/GitHub/PyOpinionGame/')
import opiniongame.config as og_cfg
import opiniongame.IO as og_io
import opiniongame.coupling as og_coupling
import opiniongame.state as og_state
import opiniongame.adjacency as og_adj
import opiniongame.selection as og_select
import opiniongame.potentials as og_pot
import opiniongame.core as og_core
import opiniongame.stopping as og_stop
import opiniongame.opinions as og_opinions
Explanation: Import Game Modules From a Given Path
Users have to edit the path and put the correct one for their machine.
End of explanation
config = og_cfg.staticParameters()
path = '/Users/hn/Documents/GitHub/PyOpinionGame/' # path to the 'staticParameters.cfg'
staticParameters = path + 'staticParameters.cfg'
config.readFromFile(staticParameters) # Read static parameters
config.threshold = 0.0001
config.Kthreshold = 0.00001
config.startingseed = 10
config.learning_rate = 0.1
tau = 0.62 #tip of the tent potential function
config.printOut()
Explanation: Setting Up Game Parameters
End of explanation
print("SEEDING PRNG: "+str(config.startingseed))
np.random.seed(config.startingseed)
Explanation: seed PRNG: must do this before any random numbers are ever sampled during default generation
End of explanation
# These are the default matrices for the state of the system:
# If you want to change them, you can generate a new one in the following cell
default_weights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)
default_initialOpinions = og_opinions.initialize_opinions(config.popSize, config.ntopics)
default_adj = og_adj.make_adj(config.popSize, 'full')
state = og_state.WorldState(adj=default_adj,
couplingWeights=default_weights,
initialOpinions=default_initialOpinions,
initialHistorySize=100,
historyGrowthScale=2)
state.validate()
Explanation: Set up the state of the system
State of the system includes:
Weight Matrix (matrix of the coupling weights between topics)
Initial Opinions of agents
Adjacency matrix of the network
This is just initialization of the state, later we update some elements of it.
End of explanation
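A quick look at the pieces that make up the world state can help here (a sketch; it assumes, as the indexing later in this notebook suggests, that these attributes are NumPy arrays):
print("adjacency matrix :", state.adj.shape)
print("coupling weights :", state.couplingWeights.shape)
print("initial opinions :", state.initialOpinions.shape)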
numberOfCommunities = 3
communityPopSize = 25
config.popSize = numberOfCommunities * communityPopSize
# List of upper bound probability of interaction between communities
uppBound_list = [0.0]
# List of uniqueness Strength parameter
individStrength = [0.0]
config.learning_rate = 0.1
config.iterationMax = 10000
tau = 0.62
config.printOut()
#
# functions for use by the simulation engine
#
ufuncs = og_cfg.UserFunctions(og_select.PickTwoWeighted,
og_stop.iterationStop,
og_pot.createTent(tau))
# Number of different initial opinions,
# i.e. number of different games with different initials.
noInitials = np.arange(1)
noGames = np.arange(1) # Number of different game orders.
# Run experiments with different adjacencies, different initials, and different order of games.
for uniqForce in individStrength:
config.uniqstrength = uniqForce
for upperBound in uppBound_list:
# Generate different adjacency matrix with different prob. of interaction
# between different communities
state.adj = og_adj.CommunitiesMatrix(communityPopSize, numberOfCommunities, upperBound)
for countInitials in noInitials:
# Pick three communities with similar opinions to begin with!
state.initialOpinions = np.zeros((config.popSize, 1))
state.initialOpinions[0:25] = np.random.uniform(low=0.0, high=.25, size=(25,1))
state.initialOpinions[25:50] = np.random.uniform(low=0.41, high=.58, size=(25,1))
state.initialOpinions[50:75] = np.random.uniform(low=0.74, high= 1, size=(25,1))
state.couplingWeights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)
all_experiments_history = {}
print "(uniqForce, upperBound) = ({}, {})".format(uniqForce, upperBound)
print "countInitials = {}".format(countInitials + 1)
for gameOrders in noGames:
#cProfile.run('og_core.run_until_convergence(config, state, ufuncs)')
state = og_core.run_until_convergence(config, state, ufuncs)
print("One Experiment Done" , "gameOrders = " , gameOrders+1)
all_experiments_history[ 'experiment' + str(gameOrders+1)] = state.history[0:state.nextHistoryIndex,:,:]
og_io.saveMatrix('uB' + str(upperBound) + '*uS' + str(config.uniqstrength) +
'*initCount' + str(countInitials+21) + '.mat', all_experiments_history)
print all_experiments_history.keys()
print all_experiments_history['experiment1'].shape
Explanation: User Defined States and parameters Can go in the following cell:
End of explanation
time, population_size, no_of_topics = evolution = all_experiments_history['experiment1'].shape
evolution = all_experiments_history['experiment1'].reshape(time, population_size)
fig = plt.figure()
plt.plot(evolution)
plt.xlabel('Time')
plt.ylabel('Opinionds')
plt.title('Evolution of Opinions')
fig.set_size_inches(10,5)
plt.show()
Explanation: Plot the experiment done above:
End of explanation
state = og_state.WorldState(adj=default_adj,
couplingWeights=default_weights,
initialOpinions=default_initialOpinions,
initialHistorySize=100,
historyGrowthScale=2)
state.validate()
#
# load configuration
#
config = og_cfg.staticParameters()
config.readFromFile('staticParameters.cfg')
config.threshold = 0.01
config.printOut()
#
# seed PRNG: must do this before any random numbers are
# ever sampled during default generation
#
print(("SEEDING PRNG: "+str(config.startingseed)))
np.random.seed(config.startingseed)
Explanation: Skew Uniqueness Tendency Driver:
I observed that when the tendency for uniqueness is drawn from a normal distribution, we do not get an interesting result. For example, the initial intuition was that a tendency for uniqueness would delay stabilization of the network; however, it did not. So, here we draw uniqueness tendencies from a skew-normal distribution.
When most neighbors tend to move in one direction, the probability that an individual moves in the opposite direction should be larger than the noise in the same direction:
End of explanation
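For intuition only, skewed noise of this kind can be sampled with scipy's skewnorm (a sketch — the actual driver inside the opiniongame package may be implemented differently):
from scipy.stats import skewnorm
# A positive shape parameter biases the perturbations toward one side of zero
sample_noise = skewnorm.rvs(a=2.0, scale=0.01, size=5)
print(sample_noise)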
# These are the default matrices for the state of the system:
# If you want to change them, you can generate a new one in the following cell
default_weights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)
default_initialOpinions = og_opinions.initialize_opinions(config.popSize, config.ntopics)
default_adj = og_adj.make_adj(config.popSize, 'full')
state = og_state.WorldState(adj=default_adj,
couplingWeights=default_weights,
initialOpinions=default_initialOpinions,
initialHistorySize=100,
historyGrowthScale=2)
state.validate()
#
# run
#
numberOfCommunities = 3
communityPopSize = 25
config.popSize = numberOfCommunities * communityPopSize
# List of upper bound probability of interaction between communities
uppBound_list = np.array([.001, 0.004, 0.007, 0.01, 0.013, 0.016, 0.019])
#
# List of uniqueness Strength parameter
#
individStrength = np.arange(0.00001, 0.000251, 0.00006)
individStrength = np.append(0, individStrength)
individStrength = np.array([0.0])
skewstrength = 2.0
tau = 0.62
config.iterationMax = 30000
config.printOut()
#
# functions for use by the simulation engine
#
ufuncs = og_cfg.UserFunctions(og_select.PickTwoWeighted,
og_stop.iterationStop,
og_pot.createTent(tau))
noInitials = np.arange(1) # Number of different initial opinions.
noGames = np.arange(1) # Number of different game orders.
# Run experiments with different adjacencies, different initials, and different order of games.
for uniqForce in individStrength:
config.uniqstrength = uniqForce
for upperBound in uppBound_list:
        # Generate a different adjacency matrix with a different probability of
        # interaction between communities
state.adj = og_adj.CommunitiesMatrix(communityPopSize, numberOfCommunities, upperBound)
print"(upperBound, uniqForce) = (", upperBound, "," , uniqForce , ")"
for countInitials in noInitials:
# Pick three communities with similar opinions (stable state) to begin with!
state.initialOpinions = np.zeros((config.popSize, 1))
state.initialOpinions[0:25] = np.random.uniform(low=0.08, high=.1, size=(25,1))
state.initialOpinions[25:50] = np.random.uniform(low=0.49, high=.51, size=(25,1))
state.initialOpinions[50:75] = np.random.uniform(low=0.9, high= .92, size=(25,1))
state.couplingWeights = og_coupling.weights_no_coupling(config.popSize, config.ntopics)
all_experiments_history = {}
print "countInitials=", countInitials + 1
for gameOrders in noGames:
#cProfile.run('og_core.run_until_convergence(config, state, ufuncs)')
state = og_core.run_until_convergence(config, state, ufuncs)
state.history = state.history[0:state.nextHistoryIndex,:,:]
idx_IN_columns = [i for i in xrange(np.shape(state.history)[0]) if (i % (config.popSize)) == 0]
state.history = state.history[idx_IN_columns,:,:]
all_experiments_history[ 'experiment' + str(gameOrders+1)] = state.history
og_io.saveMatrix('uB' + str(upperBound) + '*uS' + str(config.uniqstrength) +
'*initCount' + str(countInitials+1) + '.mat', all_experiments_history)
all_experiments_history.keys()
time, population_size, no_of_topics = all_experiments_history['experiment1'].shape
evolution = all_experiments_history['experiment1'].reshape(time, population_size)
fig = plt.figure()
plt.plot(evolution)
plt.xlabel('Time')
plt.ylabel('Opinions')
plt.title('Evolution of Opinions of 3 communities')
fig.set_size_inches(10, 5)
plt.show()
Explanation: Initiate State
End of explanation |
4,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
机器学习纳米学位
非监督学习
项目 3
Step1: 分析数据
在这部分,你将开始分析数据,通过可视化和代码来理解每一个特征和其他特征的联系。你会看到关于数据集的统计描述,考虑每一个属性的相关性,然后从数据集中选择若干个样本数据点,你将在整个项目中一直跟踪研究这几个数据点。
运行下面的代码单元给出数据集的一个统计描述。注意这个数据集包含了6个重要的产品类型:'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper'和 'Delicatessen'。想一下这里每一个类型代表你会购买什么样的产品。
Step2: 练习
Step3: 问题 1
在你看来你选择的这三个样本点分别代表什么类型的企业(客户)?对每一个你选择的样本客户,通过它在每一种产品类型上的花费与数据集的统计描述进行比较,给出你做上述判断的理由。
提示: 企业的类型包括超市、咖啡馆、零售商以及其他。注意不要使用具体企业的名字,比如说在描述一个餐饮业客户时,你不能使用麦当劳。
回答
Step4: 问题 2
你尝试预测哪一个特征?预测的得分是多少?这个特征对于区分用户的消费习惯来说必要吗?为什么?
提示: 决定系数(coefficient of determination),$R^2$ 结果在0到1之间,1表示完美拟合,一个负的 $R^2$ 表示模型不能够拟合数据。
回答
Step5: 问题 3
这里是否存在一些特征他们彼此之间存在一定程度相关性?如果有请列出。这个结果是验证了还是否认了你尝试预测的那个特征的相关性?这些特征的数据是怎么分布的?
提示: 这些数据是正态分布(normally distributed)的吗?大多数的数据点分布在哪?
回答
Step6: 观察
在使用了一个自然对数的缩放之后,数据的各个特征会显得更加的正态分布。对于任意的你以前发现有相关关系的特征对,观察他们的相关关系是否还是存在的(并且尝试观察,他们的相关关系相比原来是变强了还是变弱了)。
运行下面的代码以观察样本数据在进行了自然对数转换之后如何改变了。
Step7: 练习
Step8: 问题 4
请列出所有在多于一个特征下被看作是异常的数据点。这些点应该被从数据集中移除吗?为什么?把你认为需要移除的数据点全部加入到到 outliers 变量中。
回答
Step9: 问题 5
数据的第一个和第二个主成分总共表示了多少的方差? 前四个主成分呢?使用上面提供的可视化图像,从用户花费的角度来讨论前四个主要成分中每个主成分代表的消费行为并给出你做出判断的理由。
提示:
* 对每个主成分中的特征分析权重的正负和大小。
* 结合每个主成分权重的正负讨论消费行为。
* 某一特定维度上的正向增长对应正权特征的增长和负权特征的减少。增长和减少的速率和每个特征的权重相关。参考资料:Interpretation of the Principal Components
回答
Step10: 练习:降维
当使用主成分分析的时候,一个主要的目的是减少数据的维度,这实际上降低了问题的复杂度。当然降维也是需要一定代价的:更少的维度能够表示的数据中的总方差更少。因为这个,累计解释方差比(cumulative explained variance ratio)对于我们确定这个问题需要多少维度非常重要。另外,如果大部分的方差都能够通过两个或者是三个维度进行表示的话,降维之后的数据能够被可视化。
在下面的代码单元中,你将实现下面的功能:
- 将 good_data 用两个维度的PCA进行拟合,并将结果存储到 pca 中去。
- 使用 pca.transform 将 good_data 进行转换,并将结果存储在 reduced_data 中。
- 使用 pca.transform 将 log_samples 进行转换,并将结果存储在 pca_samples 中。
Step11: 观察
运行以下代码观察当仅仅使用两个维度进行 PCA 转换后,这个对数样本数据将怎样变化。观察这里的结果与一个使用六个维度的 PCA 转换相比较时,前两维的数值是保持不变的。
Step12: 可视化一个双标图(Biplot)
双标图是一个散点图,每个数据点的位置由它所在主成分的分数确定。坐标系是主成分(这里是 Dimension 1 和 Dimension 2)。此外,双标图还展示出初始特征在主成分上的投影。一个双标图可以帮助我们理解降维后的数据,发现主成分和初始特征之间的关系。
运行下面的代码来创建一个降维后数据的双标图。
Step13: 观察
一旦我们有了原始特征的投影(红色箭头),就能更加容易的理解散点图每个数据点的相对位置。
在这个双标图中,哪些初始特征与第一个主成分有强关联?哪些初始特征与第二个主成分相关联?你观察到的是否与之前得到的 pca_results 图相符?
聚类
在这个部分,你讲选择使用 K-Means 聚类算法或者是高斯混合模型聚类算法以发现数据中隐藏的客户分类。然后,你将从簇中恢复一些特定的关键数据点,通过将它们转换回原始的维度和规模,从而理解他们的含义。
问题 6
使用 K-Means 聚类算法的优点是什么?使用高斯混合模型聚类算法的优点是什么?基于你现在对客户数据的观察结果,你选用了这两个算法中的哪一个,为什么?
回答
Step14: 问题 7
汇报你尝试的不同的聚类数对应的轮廓系数。在这些当中哪一个聚类的数目能够得到最佳的轮廓系数?
回答
Step15: 练习
Step16: 问题 8
考虑上面的代表性数据点在每一个产品类型的花费总数,你认为这些客户分类代表了哪类客户?为什么?需要参考在项目最开始得到的统计值来给出理由。
提示: 一个被分到'Cluster X'的客户最好被用 'Segment X'中的特征集来标识的企业类型表示。
回答
Step17: 回答
Step18: 回答:
可视化内在的分布
在这个项目的开始,我们讨论了从数据集中移除 'Channel' 和 'Region' 特征,这样在分析过程中我们就会着重分析用户产品类别。通过重新引入 Channel 这个特征到数据集中,并施加和原来数据集同样的 PCA 变换的时候我们将能够发现数据集产生一个有趣的结构。
运行下面的代码单元以查看哪一个数据点在降维的空间中被标记为 'HoReCa' (旅馆/餐馆/咖啡厅)或者 'Retail'。另外,你将发现样本点在图中被圈了出来,用以显示他们的标签。 | Python Code:
# 检查你的Python版本
from sys import version_info
if version_info.major != 3:
raise Exception('请使用Python 3.x 来完成此项目')
# 引入这个项目需要的库
import numpy as np
import pandas as pd
import visuals as vs
from IPython.display import display # 使得我们可以对DataFrame使用display()函数
# 设置以内联的形式显示matplotlib绘制的图片(在notebook中显示更美观)
%matplotlib inline
# 高分辨率显示
# %config InlineBackend.figure_format='retina'
# 载入整个客户数据集
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print("Wholesale customers dataset has {} samples with {} features each.".format(*data.shape))
except:
print("Dataset could not be loaded. Is the dataset missing?")
Explanation: 机器学习纳米学位
非监督学习
项目 3: 创建用户分类
欢迎来到机器学习工程师纳米学位的第三个项目!在这个 notebook 文件中,有些模板代码已经提供给你,但你还需要实现更多的功能来完成这个项目。除非有明确要求,你无须修改任何已给出的代码。以'练习'开始的标题表示接下来的代码部分中有你必须要实现的功能。每一部分都会有详细的指导,需要实现的部分也会在注释中以 'TODO' 标出。请仔细阅读所有的提示!
除了实现代码外,你还必须回答一些与项目和你的实现有关的问题。每一个需要你回答的问题都会以'问题 X'为标题。请仔细阅读每个问题,并且在问题后的'回答'文字框中写出完整的答案。我们将根据你对问题的回答和撰写代码所实现的功能来对你提交的项目进行评分。
提示:Code 和 Markdown 区域可通过 Shift + Enter 快捷键运行。此外,Markdown 可以通过双击进入编辑模式。
开始
在这个项目中,你将分析一个数据集的内在结构,这个数据集包含很多客户真对不同类型产品的年度采购额(用金额表示)。这个项目的任务之一是如何最好地描述一个批发商不同种类顾客之间的差异。这样做将能够使得批发商能够更好的组织他们的物流服务以满足每个客户的需求。
这个项目的数据集能够在UCI机器学习信息库中找到.因为这个项目的目的,分析将不会包括 'Channel' 和 'Region' 这两个特征——重点集中在6个记录的客户购买的产品类别上。
运行下面的的代码单元以载入整个客户数据集和一些这个项目需要的 Python 库。如果你的数据集载入成功,你将看到后面输出数据集的大小。
End of explanation
# 显示数据集的一个描述
display(data.describe())
Explanation: 分析数据
在这部分,你将开始分析数据,通过可视化和代码来理解每一个特征和其他特征的联系。你会看到关于数据集的统计描述,考虑每一个属性的相关性,然后从数据集中选择若干个样本数据点,你将在整个项目中一直跟踪研究这几个数据点。
运行下面的代码单元给出数据集的一个统计描述。注意这个数据集包含了6个重要的产品类型:'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper'和 'Delicatessen'。想一下这里每一个类型代表你会购买什么样的产品。
End of explanation
# TODO:从数据集中选择三个你希望抽样的数据点的索引
indices = []
# 为选择的样本建立一个DataFrame
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print("Chosen samples of wholesale customers dataset:")
display(samples)
Explanation: 练习: 选择样本
为了对客户有一个更好的了解,并且了解代表他们的数据将会在这个分析过程中如何变换。最好是选择几个样本数据点,并且更为详细地分析它们。在下面的代码单元中,选择三个索引加入到索引列表indices中,这三个索引代表你要追踪的客户。我们建议你不断尝试,直到找到三个明显不同的客户。
End of explanation
# TODO:为DataFrame创建一个副本,用'drop'函数丢弃一个特征# TODO:
new_data = None
# TODO:使用给定的特征作为目标,将数据分割成训练集和测试集
X_train, X_test, y_train, y_test = (None, None, None, None)
# TODO:创建一个DecisionTreeRegressor(决策树回归器)并在训练集上训练它
regressor = None
# TODO:输出在测试集上的预测得分
score = None
Explanation: 问题 1
在你看来你选择的这三个样本点分别代表什么类型的企业(客户)?对每一个你选择的样本客户,通过它在每一种产品类型上的花费与数据集的统计描述进行比较,给出你做上述判断的理由。
提示: 企业的类型包括超市、咖啡馆、零售商以及其他。注意不要使用具体企业的名字,比如说在描述一个餐饮业客户时,你不能使用麦当劳。
回答:
练习: 特征相关性
一个有趣的想法是,考虑这六个类别中的一个(或者多个)产品类别,是否对于理解客户的购买行为具有实际的相关性。也就是说,当用户购买了一定数量的某一类产品,我们是否能够确定他们必然会成比例地购买另一种类的产品。有一个简单的方法可以检测相关性:我们用移除了某一个特征之后的数据集来构建一个监督学习(回归)模型,然后用这个模型去预测那个被移除的特征,再对这个预测结果进行评分,看看预测结果如何。
在下面的代码单元中,你需要实现以下的功能:
- 使用 DataFrame.drop 函数移除数据集中你选择的不需要的特征,并将移除后的结果赋值给 new_data 。
- 使用 sklearn.model_selection.train_test_split 将数据集分割成训练集和测试集。
- 使用移除的特征作为你的目标标签。设置 test_size 为 0.25 并设置一个 random_state 。
导入一个 DecisionTreeRegressor (决策树回归器),设置一个 random_state,然后用训练集训练它。
使用回归器的 score 函数输出模型在测试集上的预测得分。
End of explanation
# 对于数据中的每一对特征构造一个散布矩阵
pd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: 问题 2
你尝试预测哪一个特征?预测的得分是多少?这个特征对于区分用户的消费习惯来说必要吗?为什么?
提示: 决定系数(coefficient of determination),$R^2$ 结果在0到1之间,1表示完美拟合,一个负的 $R^2$ 表示模型不能够拟合数据。
回答:
可视化特征分布
为了能够对这个数据集有一个更好的理解,我们可以对数据集中的每一个产品特征构建一个散布矩阵(scatter matrix)。如果你发现你在上面尝试预测的特征对于区分一个特定的用户来说是必须的,那么这个特征和其它的特征可能不会在下面的散射矩阵中显示任何关系。相反的,如果你认为这个特征对于识别一个特定的客户是没有作用的,那么通过散布矩阵可以看出在这个数据特征和其它特征中有关联性。运行下面的代码以创建一个散布矩阵。
End of explanation
# TODO:使用自然对数缩放数据
log_data = None
# TODO:使用自然对数缩放样本数据
log_samples = None
# 为每一对新产生的特征制作一个散射矩阵
pd.plotting.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: 问题 3
这里是否存在一些特征他们彼此之间存在一定程度相关性?如果有请列出。这个结果是验证了还是否认了你尝试预测的那个特征的相关性?这些特征的数据是怎么分布的?
提示: 这些数据是正态分布(normally distributed)的吗?大多数的数据点分布在哪?
回答:
数据预处理
在这个部分,你将通过在数据上做一个合适的缩放,并检测异常点(你可以选择性移除)将数据预处理成一个更好的代表客户的形式。预处理数据是保证你在分析中能够得到显著且有意义的结果的重要环节。
练习: 特征缩放
如果数据不是正态分布的,尤其是数据的平均数和中位数相差很大的时候(表示数据非常歪斜)。这时候通常用一个非线性的缩放是很合适的,(英文原文) — 尤其是对于金融数据。一种实现这个缩放的方法是使用 Box-Cox 变换,这个方法能够计算出能够最佳减小数据倾斜的指数变换方法。一个比较简单的并且在大多数情况下都适用的方法是使用自然对数。
在下面的代码单元中,你将需要实现以下功能:
- 使用 np.log 函数在数据 data 上做一个对数缩放,然后将它的副本(不改变原始data的值)赋值给 log_data。
- 使用 np.log 函数在样本数据 samples 上做一个对数缩放,然后将它的副本赋值给 log_samples。
End of explanation
# 展示经过对数变换后的样本数据
display(log_samples)
Explanation: 观察
在使用了一个自然对数的缩放之后,数据的各个特征会显得更加的正态分布。对于任意的你以前发现有相关关系的特征对,观察他们的相关关系是否还是存在的(并且尝试观察,他们的相关关系相比原来是变强了还是变弱了)。
运行下面的代码以观察样本数据在进行了自然对数转换之后如何改变了。
End of explanation
# 对于每一个特征,找到值异常高或者是异常低的数据点
for feature in log_data.keys():
# TODO: 计算给定特征的Q1(数据的25th分位点)
Q1 = None
# TODO: 计算给定特征的Q3(数据的75th分位点)
Q3 = None
# TODO: 使用四分位范围计算异常阶(1.5倍的四分位距)
step = None
# 显示异常点
print("Data points considered outliers for the feature '{}':".format(feature))
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# TODO(可选): 选择你希望移除的数据点的索引
outliers = []
# 以下代码会移除outliers中索引的数据点, 并储存在good_data中
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
Explanation: 练习: 异常值检测
对于任何的分析,在数据预处理的过程中检测数据中的异常值都是非常重要的一步。异常值的出现会使得把这些值考虑进去后结果出现倾斜。这里有很多关于怎样定义什么是数据集中的异常值的经验法则。这里我们将使用 Tukey 的定义异常值的方法:一个异常阶(outlier step)被定义成1.5倍的四分位距(interquartile range,IQR)。一个数据点如果某个特征包含在该特征的 IQR 之外的特征,那么该数据点被认定为异常点。
在下面的代码单元中,你需要完成下面的功能:
- 将指定特征的 25th 分位点的值分配给 Q1 。使用 np.percentile 来完成这个功能。
- 将指定特征的 75th 分位点的值分配给 Q3 。同样的,使用 np.percentile 来完成这个功能。
- 将指定特征的异常阶的计算结果赋值给 step。
- 选择性地通过将索引添加到 outliers 列表中,以移除异常值。
注意: 如果你选择移除异常值,请保证你选择的样本点不在这些移除的点当中!
一旦你完成了这些功能,数据集将存储在 good_data 中。
End of explanation
# TODO:通过在good data上进行PCA,将其转换成6个维度
pca = None
# TODO:使用上面的PCA拟合将变换施加在log_samples上
pca_samples = None
# 生成PCA的结果图
pca_results = vs.pca_results(good_data, pca)
Explanation: 问题 4
请列出所有在多于一个特征下被看作是异常的数据点。这些点应该被从数据集中移除吗?为什么?把你认为需要移除的数据点全部加入到到 outliers 变量中。
回答:
特征转换
在这个部分中你将使用主成分分析(PCA)来分析批发商客户数据的内在结构。由于使用PCA在一个数据集上会计算出最大化方差的维度,我们将找出哪一个特征组合能够最好的描绘客户。
练习: 主成分分析(PCA)
既然数据被缩放到一个更加正态分布的范围中并且我们也移除了需要移除的异常点,我们现在就能够在 good_data 上使用PCA算法以发现数据的哪一个维度能够最大化特征的方差。除了找到这些维度,PCA 也将报告每一个维度的解释方差比(explained variance ratio)--这个数据有多少方差能够用这个单独的维度来解释。注意 PCA 的一个组成部分(维度)能够被看做这个空间中的一个新的“特征”,但是它是原来数据中的特征构成的。
在下面的代码单元中,你将要实现下面的功能:
- 导入 sklearn.decomposition.PCA 并且将 good_data 用 PCA 并且使用6个维度进行拟合后的结果保存到 pca 中。
- 使用 pca.transform 将 log_samples 进行转换,并将结果存储到 pca_samples 中。
End of explanation
# 展示经过PCA转换的sample log-data
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
Explanation: 问题 5
数据的第一个和第二个主成分总共表示了多少的方差? 前四个主成分呢?使用上面提供的可视化图像,从用户花费的角度来讨论前四个主要成分中每个主成分代表的消费行为并给出你做出判断的理由。
提示:
* 对每个主成分中的特征分析权重的正负和大小。
* 结合每个主成分权重的正负讨论消费行为。
* 某一特定维度上的正向增长对应正权特征的增长和负权特征的减少。增长和减少的速率和每个特征的权重相关。参考资料:Interpretation of the Principal Components
回答:
观察
运行下面的代码,查看经过对数转换的样本数据在进行一个6个维度的主成分分析(PCA)之后会如何改变。观察样本数据的前四个维度的数值。考虑这和你初始对样本点的解释是否一致。
End of explanation
# TODO:通过在good data上进行PCA,将其转换成两个维度
pca = None
# TODO:使用上面训练的PCA将good data进行转换
reduced_data = None
# TODO:使用上面训练的PCA将log_samples进行转换
pca_samples = None
# 为降维后的数据创建一个DataFrame
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
Explanation: 练习:降维
当使用主成分分析的时候,一个主要的目的是减少数据的维度,这实际上降低了问题的复杂度。当然降维也是需要一定代价的:更少的维度能够表示的数据中的总方差更少。因为这个,累计解释方差比(cumulative explained variance ratio)对于我们确定这个问题需要多少维度非常重要。另外,如果大部分的方差都能够通过两个或者是三个维度进行表示的话,降维之后的数据能够被可视化。
在下面的代码单元中,你将实现下面的功能:
- 将 good_data 用两个维度的PCA进行拟合,并将结果存储到 pca 中去。
- 使用 pca.transform 将 good_data 进行转换,并将结果存储在 reduced_data 中。
- 使用 pca.transform 将 log_samples 进行转换,并将结果存储在 pca_samples 中。
End of explanation
# 展示经过两个维度的PCA转换之后的样本log-data
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
Explanation: 观察
运行以下代码观察当仅仅使用两个维度进行 PCA 转换后,这个对数样本数据将怎样变化。观察这里的结果与一个使用六个维度的 PCA 转换相比较时,前两维的数值是保持不变的。
End of explanation
# 可视化双标图
vs.biplot(good_data, reduced_data, pca)
Explanation: 可视化一个双标图(Biplot)
双标图是一个散点图,每个数据点的位置由它所在主成分的分数确定。坐标系是主成分(这里是 Dimension 1 和 Dimension 2)。此外,双标图还展示出初始特征在主成分上的投影。一个双标图可以帮助我们理解降维后的数据,发现主成分和初始特征之间的关系。
运行下面的代码来创建一个降维后数据的双标图。
End of explanation
# TODO:在降维后的数据上使用你选择的聚类算法
clusterer = None
# TODO:预测每一个点的簇
preds = None
# TODO:找到聚类中心
centers = None
# TODO:预测在每一个转换后的样本点的类
sample_preds = None
# TODO:计算选择的类别的平均轮廓系数(mean silhouette coefficient)
score = None
Explanation: 观察
一旦我们有了原始特征的投影(红色箭头),就能更加容易的理解散点图每个数据点的相对位置。
在这个双标图中,哪些初始特征与第一个主成分有强关联?哪些初始特征与第二个主成分相关联?你观察到的是否与之前得到的 pca_results 图相符?
聚类
在这个部分,你讲选择使用 K-Means 聚类算法或者是高斯混合模型聚类算法以发现数据中隐藏的客户分类。然后,你将从簇中恢复一些特定的关键数据点,通过将它们转换回原始的维度和规模,从而理解他们的含义。
问题 6
使用 K-Means 聚类算法的优点是什么?使用高斯混合模型聚类算法的优点是什么?基于你现在对客户数据的观察结果,你选用了这两个算法中的哪一个,为什么?
回答:
练习: 创建聚类
针对不同情况,有些问题你需要的聚类数目可能是已知的。但是在聚类数目不作为一个先验知道的情况下,我们并不能够保证某个聚类的数目对这个数据是最优的,因为我们对于数据的结构(如果存在的话)是不清楚的。但是,我们可以通过计算每一个簇中点的轮廓系数来衡量聚类的质量。数据点的轮廓系数衡量了它与分配给他的簇的相似度,这个值范围在-1(不相似)到1(相似)。平均轮廓系数为我们提供了一种简单地度量聚类质量的方法。
在接下来的代码单元中,你将实现下列功能:
- 在 reduced_data 上使用一个聚类算法,并将结果赋值到 clusterer,需要设置 random_state 使得结果可以复现。
- 使用 clusterer.predict 预测 reduced_data 中的每一个点的簇,并将结果赋值到 preds。
- 使用算法的某个属性值找到聚类中心,并将它们赋值到 centers。
- 预测 pca_samples 中的每一个样本点的类别并将结果赋值到 sample_preds。
- 导入 sklearn.metrics.silhouette_score 包并计算 reduced_data 相对于 preds 的轮廓系数。
- 将轮廓系数赋值给 score 并输出结果。
End of explanation
# 从已有的实现中展示聚类的结果
vs.cluster_results(reduced_data, preds, centers, pca_samples)
Explanation: 问题 7
汇报你尝试的不同的聚类数对应的轮廓系数。在这些当中哪一个聚类的数目能够得到最佳的轮廓系数?
回答:
聚类可视化
一旦你选好了通过上面的评价函数得到的算法的最佳聚类数目,你就能够通过使用下面的代码块可视化来得到的结果。作为实验,你可以试着调整你的聚类算法的聚类的数量来看一下不同的可视化结果。但是你提供的最终的可视化图像必须和你选择的最优聚类数目一致。
End of explanation
# TODO:反向转换中心点
log_centers = None
# TODO:对中心点做指数转换
true_centers = None
# 显示真实的中心点
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
Explanation: 练习: 数据恢复
上面的可视化图像中提供的每一个聚类都有一个中心点。这些中心(或者叫平均点)并不是数据中真实存在的点,但是是所有预测在这个簇中的数据点的平均。对于创建客户分类的问题,一个簇的中心对应于那个分类的平均用户。因为这个数据现在进行了降维并缩放到一定的范围,我们可以通过施加一个反向的转换恢复这个点所代表的用户的花费。
在下面的代码单元中,你将实现下列的功能:
- 使用 pca.inverse_transform 将 centers 反向转换,并将结果存储在 log_centers 中。
- 使用 np.log 的反函数 np.exp 反向转换 log_centers 并将结果存储到 true_centers 中。
End of explanation
# 显示预测结果
for i, pred in enumerate(sample_preds):
print("Sample point", i, "predicted to be in Cluster", pred)
Explanation: 问题 8
考虑上面的代表性数据点在每一个产品类型的花费总数,你认为这些客户分类代表了哪类客户?为什么?需要参考在项目最开始得到的统计值来给出理由。
提示: 一个被分到'Cluster X'的客户最好被用 'Segment X'中的特征集来标识的企业类型表示。
回答:
问题 9
对于每一个样本点问题 8 中的哪一个分类能够最好的表示它?你之前对样本的预测和现在的结果相符吗?
运行下面的代码单元以找到每一个样本点被预测到哪一个簇中去。
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
# 读取包含聚类结果的数据
cluster_data = pd.read_csv("cluster.csv")
y = cluster_data['Region']
X = cluster_data.drop(['Region'], axis = 1)
# 划分训练集测试集
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=24)
clf = RandomForestClassifier(random_state=24)
clf.fit(X_train, y_train)
score_with_cluster = clf.score(X_test, y_test)
# 移除cluster特征
X_train = X_train.copy()
X_train.drop(['cluster'], axis=1, inplace=True)
X_test = X_test.copy()
X_test.drop(['cluster'], axis=1, inplace=True)
clf.fit(X_train, y_train)
score_no_cluster = clf.score(X_test, y_test)
print("不使用cluster特征的得分: %.4f"%score_no_cluster)
print("使用cluster特征的得分: %.4f"%score_with_cluster)
Explanation: 回答:
结论
在最后一部分中,你要学习如何使用已经被分类的数据。首先,你要考虑不同组的客户客户分类,针对不同的派送策略受到的影响会有什么不同。其次,你要考虑到,每一个客户都被打上了标签(客户属于哪一个分类)可以给客户数据提供一个多一个特征。最后,你会把客户分类与一个数据中的隐藏变量做比较,看一下这个分类是否辨识了特定的关系。
问题 10
在对他们的服务或者是产品做细微的改变的时候,公司经常会使用 A/B tests 以确定这些改变会对客户产生积极作用还是消极作用。这个批发商希望考虑将他的派送服务从每周5天变为每周3天,但是他只会对他客户当中对此有积极反馈的客户采用。这个批发商应该如何利用客户分类来知道哪些客户对它的这个派送策略的改变有积极的反馈,如果有的话?你需要给出在这个情形下A/B 测试具体的实现方法,以及最终得出结论的依据是什么?
提示: 我们能假设这个改变对所有的客户影响都一致吗?我们怎样才能够确定它对于哪个类型的客户影响最大?
回答:
问题 11
通过聚类技术,我们能够将原有的没有标记的数据集中的附加结构分析出来。因为每一个客户都有一个最佳的划分(取决于你选择使用的聚类算法),我们可以把用户分类作为数据的一个工程特征。假设批发商最近迎来十位新顾客,并且他已经为每位顾客每个产品类别年度采购额进行了预估。进行了这些估算之后,批发商该如何运用它的预估和非监督学习的结果来对这十个新的客户进行更好的预测?
提示:在下面的代码单元中,我们提供了一个已经做好聚类的数据(聚类结果为数据中的cluster属性),我们将在这个数据集上做一个小实验。尝试运行下面的代码看看我们尝试预测‘Region’的时候,如果存在聚类特征'cluster'与不存在相比对最终的得分会有什么影响?这对你有什么启发?
End of explanation
# 根据‘Channel‘数据显示聚类的结果
vs.channel_results(reduced_data, outliers, pca_samples)
Explanation: 回答:
可视化内在的分布
在这个项目的开始,我们讨论了从数据集中移除 'Channel' 和 'Region' 特征,这样在分析过程中我们就会着重分析用户产品类别。通过重新引入 Channel 这个特征到数据集中,并施加和原来数据集同样的 PCA 变换的时候我们将能够发现数据集产生一个有趣的结构。
运行下面的代码单元以查看哪一个数据点在降维的空间中被标记为 'HoReCa' (旅馆/餐馆/咖啡厅)或者 'Retail'。另外,你将发现样本点在图中被圈了出来,用以显示他们的标签。
End of explanation |
4,248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to solve H(div) PDEs in practice?
This document explores the current, easily accessible, state of the art for solving an $H(\rm div) \times L^2$ formulation of Poisson's problem or equivalently Darcy flow using FEniCS (www.fenicsproject.org) and underlying linear algebra libraries e.g. PETSc.
Mathematical background
Poisson's equation
The underlying PDE in question is
Step3: Some code snippets for running solves, timing them and running repeated solves and reporting timing averages and standard deviations
Step5: Implementation and study of primal formulation
For comparison purposes, it is useful to look at run times for a straightforward implementation of the primal formulation. So, code is included for this first
Step7: Solving the linear system using LU
The very basic method for solving the resulting system of equations is by direct LU factorization. Let's just look at that first. The below code snippet just computes the linear system corresponding to the primal formulation, solves it using the default LU solver and times it
Step8: Ok, so let's just run some timings with this formulation to see how long it takes and how the run time scales with mesh size and/or problem size i.e. number of degrees of freedom.
Step9: Ok, let's plot the run times versus mesh size
Step10: Or, we can look at run times versus problem size (number of degrees of freedom)
Step11: Using AMG instead of LU for the primal formulation
Ok, so we can very easily do much better than using LU for this formulation. Since this is a symmetric and positive definite problem, an obvious choice is to use built-in CG (symmetric problem) with AMG preconditioning out of the box. Here is a short code snippet for doing so and timing the result
Step12: Let's do the same as we did for the LU, run some timings, and plot the results versus mesh size and problem size.Let's also try with N=64 since that should be completely feasible with an iterative solver.
Step13: A quick look at these results indicates that
* AMG beats LU run times around #dofs = 5000 (16 x 16 x 16 mesh) and onwards (increasing problem size);
* AMG run times increase with roughly a factor 20 when the mesh increases with a factor 10.
* AMG for 32 x 32 x 32 takes about 2 seconds
* Number of iterations seems pretty constant with increasing mesh size (3-4)
Dependency on initial state
Does the initial state (value of p) make much of a difference for the iterative solver? Let's have a quick look at that.
Step15: Nope, that does not seem to make much of a difference in this case.
Implementation and exploration of the mixed formulation
Step16: With this mixed variational formulation of the problem and the basic LU solver, let's run some similar experiments
Step17: Some observations
Step18: One can experiment with this a bit, the results look reasonable, but the number of iterations increase with the system size
Step20: Some observations regarding GMRES + iLU | Python Code:
# Import useful libraries
from dolfin import *
import numpy
import pylab
# Plot inline in this notebook
%matplotlib inline
# Set basic optimization parameters for FEniCS
parameters["form_compiler"]["representation"] = "uflacs"
parameters["form_compiler"]["cpp_optimize"] = True
#parameters["plotting_backend"] = "matplotlib"
Explanation: How to solve H(div) PDEs in practice?
This document explores the current, easily accessible, state of the art for solving an $H(\rm div) \times L^2$ formulation of Poisson's problem or equivalently Darcy flow using FEniCS (www.fenicsproject.org) and underlying linear algebra libraries e.g. PETSc.
Mathematical background
Poisson's equation
The underlying PDE in question is: given $f$, find $p$ satisfying
$- \Delta p = f$
over a computational domain $\Omega$ with homogeneous Dirichlet boundary conditions where $\Delta$ is the standard Laplace operator.
Primal formulation
The standard (primal) $H^1$ formulation of this problem reads as: find $p \in H^1_0$ such that
$\int_{\Omega} \nabla p \cdot \nabla q \, \textrm{d} x = \int_{\Omega} f q \, \textrm{d} x$
for all $q \in H^1_0$. Usual finite element spaces for this formulation are, for instance, Lagrange elements of order $k \geq 1$, i.e. continuous piecewise polynomials of polynomial order $k$ defined relative to a mesh of the domain.
Mixed formulation
The standard (mixed) $H(\rm div) \times L^2$ formulation of this problem reads as: find $u \in H(\rm div)$ and $p \in L^2$ such that
$\int_{\Omega} u \cdot v + \nabla \cdot u q + \nabla \cdot v p \, \textrm{d} x = \int_{\Omega} - f q \, \textrm{d} x$
for all $v \in H(\rm div)$ and $q \in L^2$. Stable finite element pairs for this formulation are, for instance, Raviart-Thomas elements of order $k+1$ combined with discontinuous elements of order $k$ for $k \geq 0$.
Implementation basics
Premises for this notebook:
* The implementation should be accessible through the Python interface to the FEniCS software
End of explanation
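For reference, a sketch of how the mixed $H(\rm div) \times L^2$ form above can be assembled in legacy FEniCS/DOLFIN syntax (the notebook's own implementation of the mixed problem, in a later section, may differ in element degrees and boundary handling):
def mixed_sketch(mesh):
    """Assemble the lowest-order RT_1 x DG_0 mixed system (k = 0); a sketch only."""
    RT = FiniteElement("RT", mesh.ufl_cell(), 1)
    DG = FiniteElement("DG", mesh.ufl_cell(), 0)
    W = FunctionSpace(mesh, RT * DG)
    (u, p) = TrialFunctions(W)
    (v, q) = TestFunctions(W)
    a = (dot(u, v) + div(u)*q + div(v)*p)*dx
    L = -Constant(1.0)*q*dx
    A, b = assemble_system(a, L)
    w = Function(W)
    return (A, b, w)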
def time_solve(mesh, algorithm):
    """Run a given algorithm over a given mesh,
    time it and return the time and the dimension of the linear system."""
solution, tag = algorithm(mesh)
times = timings(TimingClear_clear, [TimingType_wall])
dim = solution.function_space().dim()
t = times.get_value(tag, "wall tot")
return (t, dim)
def time_solves(mesh, algorithm, R=1):
    """Run R solves of a given algorithm over a given mesh,
    time each and return average time and standard deviation."""
times = numpy.empty(R)
h = mesh.hmax()
# Run a set of R solves and time each
for i in range(R):
t, dim = time_solve(mesh, algorithm)
print "%s (s) with N=%d and h=%.2g: %.3g" % (algorithm, dim, h, t)
times[i] = t
# Return average timing and standard deviation
avg_t = numpy.mean(times)
std_t = numpy.std(times)
return (avg_t, std_t)
Explanation: Some code snippets for running solves, timing them and running repeated solves and reporting timing averages and standard deviations
End of explanation
def primal(mesh):
Compute linear system corresponding to H^1 formulation,
return matrix A, right-hand side b and Function for the solution.
Q = FunctionSpace(mesh, "CG", 1)
p = TrialFunction(Q)
q = TestFunction(Q)
a = inner(grad(p), grad(q))*dx
f = Constant(1.0)
L = f*q*dx
bc = DirichletBC(Q, 0.0, "on_boundary")
A, b = assemble_system(a, L, bc)
p = Function(Q)
return (A, b, p)
Explanation: Implementation and study of primal formulation
For comparison purposes, it is useful to look at run times for a straightforward implementation of the primal formulation. So, code is included for this first:
End of explanation
def primal_lu(mesh):
On given mesh, solve H^1 formulation using plain LU.
A, b, p = primal(mesh)
tag = "Primal LU"
timer = Timer(tag)
solver = LUSolver(A)
solver.solve(p.vector(), b)
timer.stop()
return (p, tag)
Explanation: Solving the linear system using LU
The very basic method for solving the resulting system of equations is by direct LU factorization. Let's just look at that first. The below code snippet just computes the linear system corresponding to the primal formulation, solves it using the default LU solver and times it:
End of explanation
sizes = [8, 16, 32]
hs = []
primal_lu_times = []
stds = []
Ns = []
R = 3
for n in sizes:
mesh = UnitCubeMesh(n, n, n)
Ns += [mesh.num_vertices()] # NB CG1 specific
hs += [mesh.hmax()]
avg_t, std_t = time_solves(mesh, primal_lu, R=R)
print "%s took %0.3g (+- %0.3g)" % ("Primal LU", avg_t, std_t)
primal_lu_times += [avg_t]
stds += [std_t]
Explanation: Ok, so let's just run some timings with this formulation to see how long it takes and how the run time scales with mesh size and/or problem size i.e. number of degrees of freedom.
End of explanation
pylab.figure()
pylab.errorbar(hs, primal_lu_times, stds)
pylab.grid(True)
pylab.xlabel("h")
pylab.ylabel("Run time (s)")
pylab.show()
Explanation: Ok, let's plot the run times versus mesh size
End of explanation
pylab.figure()
pylab.errorbar(Ns, primal_lu_times, stds)
pylab.grid(True)
pylab.xlabel("#dofs")
pylab.ylabel("Run time (s)")
pylab.show()
Explanation: Or, we can look at run times versus problem size (number of degrees of freedom)
End of explanation
def primal_amg(mesh):
"Solve primal H^1 formulation using CG with AMG."
A, b, p = primal(mesh)
tag = "Primal AMG"
timer = Timer(tag)
solver = PETScKrylovSolver("cg", "amg")
solver.set_operator(A)
num_it = solver.solve(p.vector(), b)
timer.stop()
print "%s: num_it = " % tag, num_it
return (p, tag)
Explanation: Using AMG instead of LU for the primal formulation
Ok, so we can very easily do much better than using LU for this formulation. Since this is a symmetric and positive definite problem, an obvious choice is to use built-in CG (symmetric problem) with AMG preconditioning out of the box. Here is a short code snippet for doing so and timing the result:
End of explanation
sizes = [8, 16, 32, 64]
hs = []
primal_amg_times = []
stds = []
Ns = []
R = 3
for n in sizes:
mesh = UnitCubeMesh(n, n, n)
Ns += [mesh.num_vertices()] # NB CG1 specific
hs += [mesh.hmax()]
avg_t, std_t = time_solves(mesh, primal_amg, R=R)
print "%s took %0.3g (+- %0.3g)" % ("Primal AMG", avg_t, std_t)
primal_amg_times += [avg_t]
stds += [std_t]
pylab.figure()
pylab.errorbar(hs, primal_amg_times, stds)
pylab.grid(True)
pylab.xlabel("h")
pylab.ylabel("Run time (s)")
pylab.show()
pylab.figure()
pylab.errorbar(Ns, primal_amg_times, stds)
pylab.grid(True)
pylab.xlabel("#dofs")
pylab.ylabel("Run time (s)")
pylab.show()
Explanation: Let's do the same as we did for the LU, run some timings, and plot the results versus mesh size and problem size. Let's also try with N=64 since that should be completely feasible with an iterative solver.
End of explanation
n = 32
mesh = UnitCubeMesh(n, n, n)
A, b, p = primal(mesh)
tag = "Primal AMG with zero initial state"
timer = Timer(tag)
solver = PETScKrylovSolver("cg", "amg")
solver.set_operator(A)
num_it = solver.solve(p.vector(), b)
value = timer.stop()
print "%s (s) = " % tag, value
tag = "Primal AMG with random initial state"
dim = p.vector().size()
p.vector()[:] = numpy.random.rand(dim)
timer = Timer(tag)
solver = PETScKrylovSolver("cg", "amg")
solver.set_operator(A)
solver.parameters["nonzero_initial_guess"] = True
num_it = solver.solve(p.vector(), b)
value = timer.stop()
print "%s (s) = " % tag, value
Explanation: A quick look at these results indicates that
* AMG beats LU run times around #dofs = 5000 (16 x 16 x 16 mesh) and onwards (increasing problem size);
* AMG run times increase by roughly a factor of 20 when the mesh size increases by roughly a factor of 10.
* AMG for 32 x 32 x 32 takes about 2 seconds
* Number of iterations seems pretty constant with increasing mesh size (3-4)
Dependency on initial state
Does the initial state (value of p) make much of a difference for the iterative solver? Let's have a quick look at that.
End of explanation
def darcy(mesh):
Compute and return linear system and solution function
for mixed H(div) x L^2 formulation of Poisson/Darcy.
V = FiniteElement("RT", mesh.ufl_cell(), 1)
Q = FiniteElement("DG", mesh.ufl_cell(), 0)
W = FunctionSpace(mesh, V*Q)
(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)
a = (dot(u, v) + div(u)*q + div(v)*p)*dx
f = Constant(-1.0)
L = f*q*dx
A = assemble(a)
b = assemble(L)
w = Function(W)
return (A, b, w)
def darcy_lu(mesh):
"Solve mixed H(div) x L^2 formulation using LU"
tag = "Darcy LU"
(A, b, w) = darcy(mesh)
timer = Timer(tag)
solver = LUSolver(A)
solver.solve(w.vector(), b)
timer.stop()
#(u, p) = w.split(deepcopy=True)
#plot(p)
return (w, tag)
Explanation: Nope, that does not seem to make much of a difference in this case.
Implementation and exploration of the mixed formulation
End of explanation
sizes = [8, 16]
hs = []
times = []
stds = []
Ns = []
R = 3
for n in sizes:
mesh = UnitCubeMesh(n, n, n)
mesh.init()
    Ns += [mesh.num_facets() + mesh.num_cells()] # NB: RT0 x DG0 specific (one dof per facet plus one per cell)
hs += [mesh.hmax()]
avg_t, std_t = time_solves(mesh, darcy_lu, R=R)
print "%s took %0.3g (+- %0.3g)" % ("Darcy LU", avg_t, std_t)
times += [avg_t]
stds += [std_t]
Explanation: With this mixed variational formulation of the problem and the basic LU solver, let's run some similar experiments
End of explanation
def darcy_ilu(mesh):
"Solve mixed H(div) x L^2 formulation using GMRES and ilu"
tag = "Darcy iLU"
(A, b, w) = darcy(mesh)
timer = Timer(tag)
solver = PETScKrylovSolver("gmres", "ilu")
solver.set_operator(A)
num_iter = solver.solve(w.vector(), b)
timer.stop()
print "#iterations (%s) = " % tag, num_iter
return (w, tag)
Explanation: Some observations:
* While LU for the primal formulation on the 8 x 8 x 8 mesh took 0.00537 s, LU for the mixed formulation on the same mesh takes about 0.141 s, which is a factor-of-20 increase. One can note, however, that N = 729 for the former and N = 9600 for the latter, which is a factor-of-10 increase in system size.
* Running with 32 x 32 x 32 runs out of memory.
This formulation is symmetric, but not positive definite (or negative definite), so it is not entirely clear how to proceed. One possibility is to try some out of the box iterative solvers anyway. So, let's start with that.
Let's just start with "gmres" ("cg" gives garbage) and "ilu":
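As a hedged aside not explored in the runs below: since the system is symmetric but indefinite, MINRES is another natural Krylov candidate available through the same interface, e.g.
python
# sketch only; convergence still hinges on a suitable preconditioner
(A, b, w) = darcy(mesh)
solver = PETScKrylovSolver("minres", "ilu")
solver.set_operator(A)
num_iter = solver.solve(w.vector(), b)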
End of explanation
sizes = [8, 16, 32]#, 64]
hs = []
times = []
stds = []
Ns = []
R = 2
for n in sizes:
mesh = UnitCubeMesh(n, n, n)
mesh.init()
    Ns += [mesh.num_facets() + mesh.num_cells()] # NB: RT0 x DG0 specific (one dof per facet plus one per cell)
hs += [mesh.hmax()]
avg_t, std_t = time_solves(mesh, darcy_ilu, R=R)
print "%s took %0.3g (+- %0.3g)" % ("Darcy GMRES + iLU", avg_t, std_t)
print
times += [avg_t]
stds += [std_t]
Explanation: One can experiment with this a bit, the results look reasonable, but the number of iterations increases with the system size:
End of explanation
def darcy_prec1(W):
(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)
prec = (inner(u, v) + div(u)*div(v) + p*q)*dx
B = assemble(prec)
return B
def darcy_amg(mesh):
Solve mixed H(div) x L^2 formulation using GMRES and AMG,
with an additionally defined preconditioning matrix
tag = "Darcy AMG"
(A, b, w) = darcy(mesh)
B = darcy_prec1(w.function_space())
timer = Timer(tag)
solver = PETScKrylovSolver("gmres", "amg") # or hypre_amg or petsc_amg
solver.set_operators(A, B)
solver.parameters["relative_tolerance"] = 1.e-10 # To get correct results in eye-norm
num_iter = solver.solve(w.vector(), b)
timer.stop()
print "#iterations (%s) = " % tag, num_iter
return (w, tag)
sizes = [8, 16]
hs = []
times = []
stds = []
Ns = []
R = 2
for n in sizes:
mesh = UnitCubeMesh(n, n, n)
mesh.init()
    Ns += [mesh.num_facets() + mesh.num_cells()] # NB: RT0 x DG0 specific (one dof per facet plus one per cell)
hs += [mesh.hmax()]
avg_t, std_t = time_solves(mesh, darcy_amg, R=R)
print "%s took %0.3g (+- %0.3g)" % ("Darcy GMRES + AMG", avg_t, std_t)
print
times += [avg_t]
stds += [std_t]
Explanation: Some observations regarding GMRES + iLU:
* Feasible solution range (in terms of number of degrees of freedom) greatly increased compared to LU, 64^3 is possible (but takes around 400 s with 1577 iterations).
* Cost per mesh compared to the primal formulation is roughly 6x and growing with problem size (about 8x for N=32).
* The number of iterations increases significantly with mesh size.
* More testing is required to examine the correctness of the solution for more complicated test cases.
* No tolerances are set here, nor is the residual monitored; we should do that.
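On the last point, a minimal sketch of how tolerances and residual monitoring could be switched on with the same solver interface (parameter names as in legacy DOLFIN; an illustrative aside, not part of the original experiments):
python
(A, b, w) = darcy(mesh)
solver = PETScKrylovSolver("gmres", "ilu")
solver.set_operator(A)
solver.parameters["relative_tolerance"] = 1.0e-10
solver.parameters["absolute_tolerance"] = 1.0e-12
solver.parameters["maximum_iterations"] = 5000
solver.parameters["monitor_convergence"] = True   # print the residual each iteration
num_iter = solver.solve(w.vector(), b)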
Ok, let's try if adding a preconditioner matrix helps with the AMG:
End of explanation |
4,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NoSQL (Neo4j) (session 7)
This notebook shows how to access Neo4j databases and how to connect the output to Jupyter.
The Neo4j web interface can also be used at the address http
Step1: We will load the ipython-cypher extension so that Cypher queries can be issued directly from the notebook.
Every cell that starts with %%cypher, and every Python statement that starts with %cypher, will be sent to Neo4j for interpretation.
Step2: The next cell issues a Cypher query that returns the first 10 nodes. The database is empty at the start, but it can be run again later to see the output. There are plugins to display the output graphically as a graph, but for that we will use Neo4j's own graphical interface.
Step3: The CSV data could not be loaded into the notebook directly from the CSV files, because the CSV dialect accepted by Neo4j is not standard. I filed an issue to get it fixed, and as of version 3.3 it seems to work if a configuration parameter is added
Step4: The following code loads the CSV of questions and answers. The code first creates all nodes with the Post label, and then adds the Question or Answer label depending on the value of the PostTypeId attribute.
Step5: All questions are labeled with Question.
Step6: All answers are labeled with Answer.
Step7: A user node is created (or an existing one is reused) from the OwnerUserId field, provided it is not empty. Note that CREATE can be used because this particular user-post relationship does not exist yet. Careful: if this is run twice it will create twice as many relationships.
Step8: The Cypher language
The Cypher language has a Query By Example flavour of syntax. It supports functions and allows both creation and search of nodes and relationships. It has some peculiarities that we will see below. For now, a summary of its features can be found in the Cypher Reference Card.
The previous query uses the LOAD CSV construct to read CSV data into nodes. The CREATE clause creates new nodes. SET assigns values to node properties.
In the query above, every node that is read gets a copy of the data of its line (first SET). Then, depending on the value of PostTypeId, nodes are labeled as
Step9: We create an index on Id to speed up the following lookups
Step10: We add a relationship between the questions and the answers
Step11: The %cypher constructs return results from which a pandas dataframe can be obtained
Step12: Query RQ4 can be solved very easily. This first query returns the nodes
Step13: Or we can return the Id of each user
Step14: And finally, the creation of relationships
Step15: We can also search for the shortest path between any two users. If a path exists through some question or answer, it will be found. An example where there is direct communication
Step16: While with another user the chain is longer
Step17: Finally, all shortest paths can be found, showing that there must be at least one question/answer pair between users that are reciprocal
Step18: EXERCISE
Step19: The following query shows the users who ask questions for each Tag
Step20: The same MATCH can be used to find which set of tags each user has used, by changing what we return | Python Code:
from pprint import pprint as pp
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
Explanation: NoSQL (Neo4j) (session 7)
This notebook shows how to access Neo4j databases and how to connect the output to Jupyter.
The Neo4j web interface can also be used at the address http://127.0.0.1:7474.
End of explanation
!pip install ipython-cypher
%load_ext cypher
%config CypherMagic.uri='http://neo4j:7474/db/data'
%config CypherMagic.auto_html=False
Explanation: We will load the ipython-cypher extension so that Cypher queries can be issued directly from the notebook.
Every cell that starts with %%cypher, and every Python statement that starts with %cypher, will be sent to Neo4j for interpretation.
End of explanation
%%cypher
match (n) return n limit 10;
Explanation: The next cell issues a Cypher query that returns the first 10 nodes. The database is empty at the start, but the query can be run again later to see the output. There are plugins to display the output graphically as a graph, but for that we will use Neo4j's own graphical interface.
End of explanation
%%cypher
CREATE INDEX ON :User(Id);
Explanation: The CSV data could not be loaded into the notebook directly from the CSV files, because the CSV dialect accepted by Neo4j is not standard. I filed an issue to get it fixed, and as of version 3.3 it seems to work if a configuration parameter is added: https://github.com/neo4j/neo4j/issues/8472
bash
dbms.import.csv.legacy_quote_escaping = false
I have added this option to the Neo4j startup in the course container. Keep in mind that if you use a different setup you have to add it yourself.
First we create an index on the Id attribute of User, which will be used later to create users and relate them to the question or answer that has been read. Without this, the CSV load is very slow.
End of explanation
%%cypher
USING PERIODIC COMMIT 10000
LOAD CSV WITH HEADERS FROM "http://neuromancer.inf.um.es:8080/es.stackoverflow/Posts.csv" AS row
CREATE (n)
SET n=row
SET n :Post
;
Explanation: The following code loads the CSV of questions and answers. The code first creates all nodes with the Post label, and then adds the Question or Answer label depending on the value of the PostTypeId attribute.
End of explanation
%%cypher
MATCH (n:Post {PostTypeId : "1"})
SET n:Question;
Explanation: All questions are labeled with Question.
End of explanation
%%cypher
MATCH (n:Post {PostTypeId : "2"})
SET n:Answer;
Explanation: All answers are labeled with Answer.
End of explanation
%%cypher
MATCH (n:Post)
WHERE n.OwnerUserId <> ""
MERGE (u:User {Id: n.OwnerUserId})
CREATE (u)-[:WROTE]->(n);
Explanation: A user node is created (or an existing one is reused) from the OwnerUserId field, provided it is not empty. Note that CREATE can be used here because this particular user-post relationship does not exist yet. Careful: if this is run twice it will create twice as many relationships.
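A hedged aside: if the load may be re-run, using MERGE for the relationship as well makes the statement idempotent, for example:
%%cypher
MATCH (n:Post)
WHERE n.OwnerUserId <> ""
MERGE (u:User {Id: n.OwnerUserId})
MERGE (u)-[:WROTE]->(n);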
End of explanation
%%cypher
match (n:Post) WHERE size(labels(n)) = 1 RETURN n;
Explanation: The Cypher language
The Cypher language has a Query By Example flavour of syntax. It supports functions and allows both creation and search of nodes and relationships. It has some peculiarities that we will see below. For now, a summary of its features can be found in the Cypher Reference Card.
The previous query uses the LOAD CSV construct to read CSV data into nodes. The CREATE clause creates new nodes. SET assigns values to node properties.
In the query above, every node that is read gets a copy of the data of its CSV line (first SET). Then, depending on the value of PostTypeId, nodes are labeled as :Question or :Answer. If they have a user assigned through OwnerUserId, a user is added if it does not exist and the :WROTE relationship is created.
There are also some other special posts that were neither questions nor answers. These are not given a second label:
End of explanation
%%cypher
CREATE INDEX ON :Post(Id);
Explanation: We create an index on Id to speed up the following lookups:
End of explanation
%%cypher
MATCH (a:Answer), (q:Question {Id: a.ParentId})
CREATE (a)-[:ANSWERS]->(q)
;
Explanation: We add a relationship between the questions and the answers:
End of explanation
#%%cypher
res = %cypher MATCH q=(r)-[:ANSWERS]->(p) RETURN p.Id,r.Id;
df = res.get_dataframe()
df['r.Id'] = pd.to_numeric(df['r.Id'],downcast='unsigned')
df['p.Id'] = pd.to_numeric(df['p.Id'],downcast='unsigned')
df.plot(kind='scatter',x='p.Id',y='r.Id',figsize=(15,15))
Explanation: The %cypher constructs return results from which a pandas dataframe can be obtained:
End of explanation
%%cypher
// RQ4
MATCH
(u1:User)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u2:User),
(u2)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u1)
WHERE u1 <> u2 AND u1.Id < u2.Id
RETURN DISTINCT u1,u2
;
Explanation: Query RQ4 can be solved very easily. This first query returns the nodes:
End of explanation
%%cypher
MATCH
(u1:User)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u2:User),
(u2)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u1)
WHERE u1 <> u2 AND toInt(u1.Id) < toInt(u2.Id)
RETURN DISTINCT u1.Id,u2.Id
ORDER BY toInt(u1.Id)
;
Explanation: Alternatively, we can return the Id of each user:
End of explanation
%%cypher
// RQ4 creando relaciones de reciprocidad
MATCH
(u1:User)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u2:User),
(u2)-[:WROTE]->()-[:ANSWERS]->()<-[:WROTE]-(u1)
WHERE u1 <> u2 AND u1.Id < u2.Id
WITH u1 AS user1,u2 AS user2
MERGE (user1)-[:RECIPROCATE]->(user2)
MERGE (user2)-[:RECIPROCATE]->(user1)
;
Explanation: And finally, the creation of :RECIPROCATE relationships between the users. The WITH construct is also introduced here.
WITH is used to introduce "namespaces": it lets you carry names over from previous rows, create aliases with AS, and introduce new values computed with Cypher functions. The following query is the same RQ4 query as above, but it creates :RECIPROCATE relationships between every two users who help each other reciprocally.
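As a small illustrative sketch of WITH on its own (not part of the original notebook), it can carry an aggregate forward under an alias and filter on it, e.g. the five most active users:
%%cypher
MATCH (u:User)-[:WROTE]->(p:Post)
WITH u, count(p) AS posts
WHERE posts > 10
RETURN u.Id, posts
ORDER BY posts DESC
LIMIT 5;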
End of explanation
%%cypher
MATCH p=shortestPath( (u1:User {Id: '24'})-[*]-(u2:User {Id:'25'}) ) RETURN p
Explanation: We can also search for the shortest path between any two users. If a path exists through some question or answer, it will be found. An example where there is direct communication:
End of explanation
%%cypher
MATCH p=shortestPath( (u1:User {Id: '324'})-[*]-(u2:User {Id:'25'}) ) RETURN p
Explanation: While with another user the chain is longer:
End of explanation
%%cypher
MATCH p=allShortestPaths( (u1:User {Id: '24'})-[*]-(u2:User {Id:'25'}) ) RETURN p
Explanation: Finally, all shortest paths can be found; they show that there must be at least one question/answer pair between users that are reciprocal:
End of explanation
%%cypher
MATCH p=(t:Tag)-[:TAGS]->(:Question) WHERE t.name =~ "^java$|^c\\+\\+$" RETURN count(p);
Explanation: EXERCISE: Build the :Tag nodes for each of the tags that appear in the questions. Build the relationships post-[:TAGGED_BY]->tag for each tag, and also tag-[:TAGS]->post.
To do this, look up the WITH and UNWIND constructs and the replace() and split() functions in the Cypher documentation. The following query should return 5703 results:
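One possible sketch of a solution (spoiler; it assumes the Tags property keeps the usual StackExchange "<tag1><tag2>" string format, and the exact splitting logic is an assumption):
%%cypher
MATCH (q:Question)
WHERE q.Tags IS NOT NULL AND q.Tags <> ""
UNWIND split(replace(replace(q.Tags, "<", ""), ">", ","), ",") AS tagname
WITH q, tagname
WHERE tagname <> ""
MERGE (t:Tag {name: tagname})
MERGE (q)-[:TAGGED_BY]->(t)
MERGE (t)-[:TAGS]->(q);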
End of explanation
%%cypher
MATCH (t:Tag)-->(:Question)<--(u:User) RETURN t.name,collect(distinct u.Id) ORDER BY t.name;
Explanation: The following query shows the users who ask questions for each Tag:
End of explanation
%%cypher
MATCH (t:Tag)-->(:Question)<--(u:User) RETURN u.Id, collect(distinct t.name) ORDER BY toInt(u.Id);
Explanation: The same MATCH can be used to find which set of tags each user has used, by changing what we return:
End of explanation |
4,250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2
Step1: 4
Step2: 5
Step3: 6 | Python Code:
f = open("dq_unisex_names.csv", "r")
data = f.read()
print(data)
Explanation: 2: Unisex names
3: Read the file into string
Instructions
Use the open() function to return a File object with the parameters:
r for read mode
dq_unisex_names.csv for the file name
Then use the read() method of the File object to read the file into a string. Assign that string to a variable named data.
End of explanation
f = open('dq_unisex_names.csv', 'r')
data = f.read()
data_list = data.split("\n")
print(data_list[:5])
Explanation: 4: Convert the string to a list
Instructions
Use the split() method that strings have to split on the new-line delimiter ("\n") and assign the resulting list to data_list. Then use the print() function to display the first 5 elements in data_list.
Answer
End of explanation
f = open('dq_unisex_names.csv', 'r')
data = f.read()
data_list = data.split('\n')
string_data = []
for data_elm in data_list:
comma_list = data_elm.split(",")
string_data.append(comma_list)
print(string_data[:5])
Explanation: 5: Convert the list of strings to a list of lists
Instructions
Split each element in data_list on the comma delimiter (,) and append the resulting list to string_data.
To accomplish this:
create an empty list and assign it to string_data
write a for loop that iterates over data_list
within the loop body, run the split() method on each element to return a list (you call that list comma_list)
within the loop body, run the append() method to add each list (comma_list) to string_data.
Finally, use the print() function to display the first 5 elements in string_data.
Answer
End of explanation
numerical_data = []
for str_elm in string_data:
    # skip ragged rows (e.g. the trailing empty line) that don't have exactly 2 fields
    if len(str_elm) != 2:
        continue
    name = str_elm[0]
    num = float(str_elm[1])  # convert the count to a float, as the instructions require
    lst = [name, num]
    numerical_data.append(lst)
print(numerical_data[:5])
Explanation: 6: Convert numerical values
Instructions
Create a new list of lists called numerical_data where:
the value at index 0 for each list is the unisex name (as a string)
the value at index 1 for each list is the number of people who share that name (as a float)
To accomplish this:
create an empty list and assign to numerical_data
write a for loop that iterates over string_data
in the loop body
retrieve the value at index 0 and assign to a variable
retrieve the value at index 1, convert it to a float, and assign to a variable
create a new list containing these 2 values (in the same order)
use the append() method to add this new list to numerical_data.
Finally, display the first 5 elements in numerical_data.
Answer
End of explanation |
4,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<img align="left" style="padding-right
Step1: As we have seen several times throughout this section, the simplest colorbar can be created with the plt.colorbar function
Step2: We'll now discuss a few ideas for customizing these colorbars and using them effectively in various situations.
Customizing Colorbars
The colormap can be specified using the cmap argument to the plotting function that is creating the visualization
Step5: All the available colormaps are in the plt.cm namespace; using IPython's tab-completion will give you a full list of built-in possibilities
Step6: Notice the bright stripes in the grayscale image.
Even in full color, this uneven brightness means that the eye will be drawn to certain portions of the color range, which will potentially emphasize unimportant parts of the dataset.
It's better to use a colormap such as viridis (the default as of Matplotlib 2.0), which is specifically constructed to have an even brightness variation across the range.
Thus it not only plays well with our color perception, but also will translate well to grayscale printing
Step7: If you favor rainbow schemes, another good option for continuous data is the cubehelix colormap
Step8: For other situations, such as showing positive and negative deviations from some mean, dual-color colorbars such as RdBu (Red-Blue) can be useful. However, as you can see in the following figure, it's important to note that the positive-negative information will be lost upon translation to grayscale!
Step9: We'll see examples of using some of these color maps as we continue.
There are a large number of colormaps available in Matplotlib; to see a list of them, you can use IPython to explore the plt.cm submodule. For a more principled approach to colors in Python, you can refer to the tools and documentation within the Seaborn library (see Visualization With Seaborn).
Color limits and extensions
Matplotlib allows for a large range of colorbar customization.
The colorbar itself is simply an instance of plt.Axes, so all of the axes and tick formatting tricks we've learned are applicable.
The colorbar has some interesting flexibility
Step10: Notice that in the left panel, the default color limits respond to the noisy pixels, and the range of the noise completely washes-out the pattern we are interested in.
In the right panel, we manually set the color limits, and add extensions to indicate values which are above or below those limits.
The result is a much more useful visualization of our data.
Discrete Color Bars
Colormaps are by default continuous, but sometimes you'd like to represent discrete values.
The easiest way to do this is to use the plt.cm.get_cmap() function, and pass the name of a suitable colormap along with the number of desired bins
Step11: The discrete version of a colormap can be used just like any other colormap.
Example
Step12: Because each digit is defined by the hue of its 64 pixels, we can consider each digit to be a point lying in 64-dimensional space
Step13: We'll use our discrete colormap to view the results, setting the ticks and clim to improve the aesthetics of the resulting colorbar | Python Code:
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
import numpy as np
Explanation: <!--BOOK_INFORMATION-->
<img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
No changes were made to the contents of this notebook from the original.
<!--NAVIGATION-->
< Customizing Plot Legends | Contents | Multiple Subplots >
Customizing Colorbars
Plot legends identify discrete labels of discrete points.
For continuous labels based on the color of points, lines, or regions, a labeled colorbar can be a great tool.
In Matplotlib, a colorbar is a separate axes that can provide a key for the meaning of colors in a plot.
Because the book is printed in black-and-white, this section has an accompanying online supplement where you can view the figures in full color (https://github.com/jakevdp/PythonDataScienceHandbook).
We'll start by setting up the notebook for plotting and importing the functions we will use:
End of explanation
x = np.linspace(0, 10, 1000)
I = np.sin(x) * np.cos(x[:, np.newaxis])
plt.imshow(I)
plt.colorbar();
Explanation: As we have seen several times throughout this section, the simplest colorbar can be created with the plt.colorbar function:
End of explanation
plt.imshow(I, cmap='gray');
Explanation: We'll now discuss a few ideas for customizing these colorbars and using them effectively in various situations.
Customizing Colorbars
The colormap can be specified using the cmap argument to the plotting function that is creating the visualization:
End of explanation
from matplotlib.colors import LinearSegmentedColormap
def grayscale_cmap(cmap):
Return a grayscale version of the given colormap
cmap = plt.cm.get_cmap(cmap)
colors = cmap(np.arange(cmap.N))
# convert RGBA to perceived grayscale luminance
# cf. http://alienryderflex.com/hsp.html
RGB_weight = [0.299, 0.587, 0.114]
luminance = np.sqrt(np.dot(colors[:, :3] ** 2, RGB_weight))
colors[:, :3] = luminance[:, np.newaxis]
return LinearSegmentedColormap.from_list(cmap.name + "_gray", colors, cmap.N)
def view_colormap(cmap):
Plot a colormap with its grayscale equivalent
cmap = plt.cm.get_cmap(cmap)
colors = cmap(np.arange(cmap.N))
cmap = grayscale_cmap(cmap)
grayscale = cmap(np.arange(cmap.N))
fig, ax = plt.subplots(2, figsize=(6, 2),
subplot_kw=dict(xticks=[], yticks=[]))
ax[0].imshow([colors], extent=[0, 10, 0, 1])
ax[1].imshow([grayscale], extent=[0, 10, 0, 1])
view_colormap('jet')
Explanation: All the available colormaps are in the plt.cm namespace; using IPython's tab-completion will give you a full list of built-in possibilities:
plt.cm.<TAB>
But being able to choose a colormap is just the first step: more important is how to decide among the possibilities!
The choice turns out to be much more subtle than you might initially expect.
Choosing the Colormap
A full treatment of color choice within visualization is beyond the scope of this book, but for entertaining reading on this subject and others, see the article "Ten Simple Rules for Better Figures".
Matplotlib's online documentation also has an interesting discussion of colormap choice.
Broadly, you should be aware of three different categories of colormaps:
Sequential colormaps: These are made up of one continuous sequence of colors (e.g., binary or viridis).
Divergent colormaps: These usually contain two distinct colors, which show positive and negative deviations from a mean (e.g., RdBu or PuOr).
Qualitative colormaps: these mix colors with no particular sequence (e.g., rainbow or jet).
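For a quick side-by-side impression (a small aside using the view_colormap helper defined above), one representative of each category can be rendered like this:
python
for name in ['viridis', 'RdBu', 'rainbow']:   # sequential, divergent, qualitative
    view_colormap(name)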
The jet colormap, which was the default in Matplotlib prior to version 2.0, is an example of a qualitative colormap.
Its status as the default was quite unfortunate, because qualitative maps are often a poor choice for representing quantitative data.
Among the problems is the fact that qualitative maps usually do not display any uniform progression in brightness as the scale increases.
We can see this by converting the jet colorbar into black and white:
End of explanation
view_colormap('viridis')
Explanation: Notice the bright stripes in the grayscale image.
Even in full color, this uneven brightness means that the eye will be drawn to certain portions of the color range, which will potentially emphasize unimportant parts of the dataset.
It's better to use a colormap such as viridis (the default as of Matplotlib 2.0), which is specifically constructed to have an even brightness variation across the range.
Thus it not only plays well with our color perception, but also will translate well to grayscale printing:
End of explanation
view_colormap('cubehelix')
Explanation: If you favor rainbow schemes, another good option for continuous data is the cubehelix colormap:
End of explanation
view_colormap('RdBu')
Explanation: For other situations, such as showing positive and negative deviations from some mean, dual-color colorbars such as RdBu (Red-Blue) can be useful. However, as you can see in the following figure, it's important to note that the positive-negative information will be lost upon translation to grayscale!
End of explanation
# make noise in 1% of the image pixels
speckles = (np.random.random(I.shape) < 0.01)
I[speckles] = np.random.normal(0, 3, np.count_nonzero(speckles))
plt.figure(figsize=(10, 3.5))
plt.subplot(1, 2, 1)
plt.imshow(I, cmap='RdBu')
plt.colorbar()
plt.subplot(1, 2, 2)
plt.imshow(I, cmap='RdBu')
plt.colorbar(extend='both')
plt.clim(-1, 1);
Explanation: We'll see examples of using some of these color maps as we continue.
There are a large number of colormaps available in Matplotlib; to see a list of them, you can use IPython to explore the plt.cm submodule. For a more principled approach to colors in Python, you can refer to the tools and documentation within the Seaborn library (see Visualization With Seaborn).
Color limits and extensions
Matplotlib allows for a large range of colorbar customization.
The colorbar itself is simply an instance of plt.Axes, so all of the axes and tick formatting tricks we've learned are applicable.
The colorbar has some interesting flexibility: for example, we can narrow the color limits and indicate the out-of-bounds values with a triangular arrow at the top and bottom by setting the extend property.
This might come in handy, for example, if displaying an image that is subject to noise:
End of explanation
plt.imshow(I, cmap=plt.cm.get_cmap('Blues', 6))
plt.colorbar()
plt.clim(-1, 1);
Explanation: Notice that in the left panel, the default color limits respond to the noisy pixels, and the range of the noise completely washes-out the pattern we are interested in.
In the right panel, we manually set the color limits, and add extensions to indicate values which are above or below those limits.
The result is a much more useful visualization of our data.
Discrete Color Bars
Colormaps are by default continuous, but sometimes you'd like to represent discrete values.
The easiest way to do this is to use the plt.cm.get_cmap() function, and pass the name of a suitable colormap along with the number of desired bins:
End of explanation
# load images of the digits 0 through 5 and visualize several of them
from sklearn.datasets import load_digits
digits = load_digits(n_class=6)
fig, ax = plt.subplots(8, 8, figsize=(6, 6))
for i, axi in enumerate(ax.flat):
axi.imshow(digits.images[i], cmap='binary')
axi.set(xticks=[], yticks=[])
Explanation: The discrete version of a colormap can be used just like any other colormap.
Example: Handwritten Digits
For an example of where this might be useful, let's look at an interesting visualization of some hand written digits data.
This data is included in Scikit-Learn, and consists of nearly 2,000 $8 \times 8$ thumbnails showing various hand-written digits.
For now, let's start by downloading the digits data and visualizing several of the example images with plt.imshow():
End of explanation
# project the digits into 2 dimensions using IsoMap
from sklearn.manifold import Isomap
iso = Isomap(n_components=2)
projection = iso.fit_transform(digits.data)
Explanation: Because each digit is defined by the hue of its 64 pixels, we can consider each digit to be a point lying in 64-dimensional space: each dimension represents the brightness of one pixel.
But visualizing relationships in such high-dimensional spaces can be extremely difficult.
One way to approach this is to use a dimensionality reduction technique such as manifold learning to reduce the dimensionality of the data while maintaining the relationships of interest.
Dimensionality reduction is an example of unsupervised machine learning, and we will discuss it in more detail in What Is Machine Learning?.
Deferring the discussion of these details, let's take a look at a two-dimensional manifold learning projection of this digits data (see In-Depth: Manifold Learning for details):
End of explanation
# plot the results
plt.scatter(projection[:, 0], projection[:, 1], lw=0.1,
c=digits.target, cmap=plt.cm.get_cmap('cubehelix', 6))
plt.colorbar(ticks=range(6), label='digit value')
plt.clim(-0.5, 5.5)
Explanation: We'll use our discrete colormap to view the results, setting the ticks and clim to improve the aesthetics of the resulting colorbar:
End of explanation |
4,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 2
Step1: In this case, instead of loading a geo_data object directly, we will create one. The main attributes we need to pass are
Step2: You can visualize the points in 3D (work in progress)
Step3: Or a projection in 2D
Step4: This model consists of 3 different depositional series. This means that only data in the same depositional series affect the interpolation. To select which formations belong to which series we will use the set_data_series function, which takes a Python dictionary as input.
We can see the unique formations with
Step5: By setting the series we also give the specific order of the series. In Python 3.6 and above, dictionaries preserve the key order, so it is not necessary to give the order of the series explicitly.
Notice as well that the order of the formations within each series is not relevant for the result, but if it is wrong it can lead to confusing color coding (work in progress).
In the representation given by get_series the elements appear repeated, but that is only how Pandas prints tables.
Step6: Computing the model
Now as in the previous chapter we just need to create the interpolator object and compute the model.
Step7: Now if we analyse the results we have a 3D array where axis 0 represents the superposition of the series (potential fields). The color coding is still a work in progress.
Step8: The axis 1 keeps the potential field
Step9: And axis 2 keeps the fault network, which in this model, since there are no faults, does not represent anything.
Additionally, we can export the blocks to vtk in order to visualize them in Paraview. We are also working on in-place visualization. | Python Code:
# These two lines are necessary only if gempy is not installed
import sys, os
sys.path.append("../")
# Importing gempy
import gempy as gp
# Embedding matplotlib figures into the notebooks
%matplotlib inline
# Aux imports
import numpy as np
Explanation: Chapter 2: A real example. Importing data and setting series
Data Management
In this example we will show how we can import data from a csv and generate a model with several depositional series.
End of explanation
# Importing the data from csv files and settign extent and resolution
geo_data = gp.create_data([696000,747000,6863000,6950000,-20000, 200],[50, 50, 50],
path_f = os.pardir+"/input_data/a_Foliations.csv",
path_i = os.pardir+"/input_data/a_Points.csv")
gp.get_raw_data(geo_data, 'interfaces').head()
Explanation: In this case, instead of loading a geo_data object directly, we will create one. The main attributes we need to pass are:
- Extent: X min, X max, Y min, Y max, Z min, Z max
- Resolution: X,Y,Z
Additionally, we can pass the paths to csv files (GeoModeller3D format) containing the data.
End of explanation
gp.visualize(geo_data)
Explanation: You can visualize the points in 3D (work in progress)
End of explanation
gp.plot_data(geo_data, direction='z')
Explanation: Or a projection in 2D:
End of explanation
gp.get_series(geo_data)
Explanation: This model consists of 3 different depositional series. This means that only data in the same depositional series affect the interpolation. To select which formations belong to which series we will use the set_data_series function, which takes a Python dictionary as input.
We can see the unique formations with:
End of explanation
# Assigning series to formations as well as their order (timewise)
gp.set_data_series(geo_data, {"EarlyGranite_Series": 'EarlyGranite',
"BIF_Series":('SimpleMafic2', 'SimpleBIF'),
"SimpleMafic_Series":'SimpleMafic1'},
order_series = ["EarlyGranite_Series",
"BIF_Series",
"SimpleMafic_Series"], verbose=1)
Explanation: By setting the series we also give the specific order of the series. In Python 3.6 and above, dictionaries preserve the key order, so it is not necessary to give the order of the series explicitly.
Notice as well that the order of the formations within each series is not relevant for the result, but if it is wrong it can lead to confusing color coding (work in progress).
In the representation given by get_series the elements appear repeated, but that is only how Pandas prints tables.
End of explanation
interp_data = gp.InterpolatorInput(geo_data)
sol = gp.compute_model(interp_data)
Explanation: Computing the model
Now as in the previous chapter we just need to create the interpolator object and compute the model.
End of explanation
import matplotlib.pyplot as plt
gp.plot_section(geo_data, sol[0,0,:], 11)
plt.show()
gp.plot_section(geo_data, sol[1,0,:], 11)
plt.show()
gp.plot_section(geo_data, sol[2,0,:], 11)
plt.show()
Explanation: Now if we analyse the results we have a 3D array where axis 0 represents the superposition of the series (potential fields). The color coding is still a work in progress.
End of explanation
gp.plot_potential_field(geo_data, sol[0,1,:], 11, cmap='inferno_r')
plt.show()
gp.plot_potential_field(geo_data, sol[1,1,:], 11, cmap='inferno_r')
plt.show()
gp.plot_potential_field(geo_data, sol[2,1,:], 11, cmap='inferno_r')
plt.show()
Explanation: The axis 1 keeps the potential field:
End of explanation
gp.export_vtk_rectilinear(geo_data, sol[-1, 0, :], path=None)
Explanation: And axis 2 keeps the fault network, which in this model, since there are no faults, does not represent anything.
Additionally, we can export the blocks to vtk in order to visualize them in Paraview. We are also working on in-place visualization.
End of explanation |
4,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Examples
One more time, I'll load the data from the NSFG.
Step2: And compute the distribution of birth weight for first babies and others.
Step3: We can plot the PMFs on the same scale, but it is hard to see if there is a difference.
Step4: PercentileRank computes the fraction of scores less than or equal to your_score.
Step5: If this is the list of scores.
Step6: And you got the 88, your percentile rank is 80.
Step7: Percentile takes a percentile rank and computes the corresponding percentile.
Step8: The median is the 50th percentile, which is 77.
Step9: Here's a more efficient way to compute percentiles.
Step10: Let's hope we get the same answer.
Step11: The Cumulative Distribution Function (CDF) is almost the same as PercentileRank. The only difference is that the result is 0-1 instead of 0-100.
Step12: In this list
Step13: We can evaluate the CDF for various values
Step14: Here's an example using real data, the distribution of pregnancy length for live births.
Step15: Cdf provides Prob, which evaluates the CDF; that is, it computes the fraction of values less than or equal to the given value. For example, 94% of pregnancy lengths are less than or equal to 41.
Step16: Value evaluates the inverse CDF; given a fraction, it computes the corresponding value. For example, the median is the value that corresponds to 0.5.
Step17: In general, CDFs are a good way to visualize distributions. They are not as noisy as PMFs, and if you plot several CDFs on the same axes, any differences between them are apparent.
Step18: In this example, we can see that first babies are slightly, but consistently, lighter than others.
We can use the CDF of birth weight to compute percentile-based statistics.
Step19: Again, the median is the 50th percentile.
Step20: The interquartile range is the interval from the 25th to 75th percentile.
Step21: We can use the CDF to look up the percentile rank of a particular value. For example, my second daughter was 10.2 pounds at birth, which is near the 99th percentile.
Step22: If we draw a random sample from the observed weights and map each weigh to its percentile rank.
Step23: The resulting list of ranks should be approximately uniform from 0-1.
Step24: That observation is the basis of Cdf.Sample, which generates a random sample from a Cdf. Here's an example.
Step25: This confirms that the random sample has the same distribution as the original data.
Exercises
Exercise
Step26: Exercise | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import nsfg
import first
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
live, firsts, others = first.MakeFrames()
Explanation: Examples
One more time, I'll load the data from the NSFG.
End of explanation
first_wgt = firsts.totalwgt_lb
first_wgt_dropna = first_wgt.dropna()
print('Firsts', len(first_wgt), len(first_wgt_dropna))
other_wgt = others.totalwgt_lb
other_wgt_dropna = other_wgt.dropna()
print('Others', len(other_wgt), len(other_wgt_dropna))
first_pmf = thinkstats2.Pmf(first_wgt_dropna, label='first')
other_pmf = thinkstats2.Pmf(other_wgt_dropna, label='other')
Explanation: And compute the distribution of birth weight for first babies and others.
End of explanation
width = 0.4 / 16
# plot PMFs of birth weights for first babies and others
thinkplot.PrePlot(2)
thinkplot.Hist(first_pmf, align='right', width=width)
thinkplot.Hist(other_pmf, align='left', width=width)
thinkplot.Config(xlabel='Weight (pounds)', ylabel='PMF')
Explanation: We can plot the PMFs on the same scale, but it is hard to see if there is a difference.
End of explanation
def PercentileRank(scores, your_score):
count = 0
for score in scores:
if score <= your_score:
count += 1
percentile_rank = 100.0 * count / len(scores)
return percentile_rank
Explanation: PercentileRank computes the fraction of scores less than or equal to your_score.
End of explanation
t = [55, 66, 77, 88, 99]
Explanation: If this is the list of scores.
End of explanation
PercentileRank(t, 88)
Explanation: And you got the 88, your percentile rank is 80.
End of explanation
def Percentile(scores, percentile_rank):
scores.sort()
for score in scores:
if PercentileRank(scores, score) >= percentile_rank:
return score
Explanation: Percentile takes a percentile rank and computes the corresponding percentile.
End of explanation
Percentile(t, 50)
Explanation: The median is the 50th percentile, which is 77.
End of explanation
def Percentile2(scores, percentile_rank):
scores.sort()
index = percentile_rank * (len(scores)-1) // 100
return scores[index]
Explanation: Here's a more efficient way to compute percentiles.
End of explanation
Percentile2(t, 50)
Explanation: Let's hope we get the same answer.
End of explanation
def EvalCdf(sample, x):
count = 0.0
for value in sample:
if value <= x:
count += 1
prob = count / len(sample)
return prob
Explanation: The Cumulative Distribution Function (CDF) is almost the same as PercentileRank. The only difference is that the result is 0-1 instead of 0-100.
End of explanation
t = [1, 2, 2, 3, 5]
Explanation: In this list
End of explanation
EvalCdf(t, 0), EvalCdf(t, 1), EvalCdf(t, 2), EvalCdf(t, 3), EvalCdf(t, 4), EvalCdf(t, 5)
Explanation: We can evaluate the CDF for various values:
End of explanation
cdf = thinkstats2.Cdf(live.prglngth, label='prglngth')
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Pregnancy length (weeks)', ylabel='CDF', loc='upper left')
Explanation: Here's an example using real data, the distribution of pregnancy length for live births.
End of explanation
cdf.Prob(41)
Explanation: Cdf provides Prob, which evaluates the CDF; that is, it computes the fraction of values less than or equal to the given value. For example, 94% of pregnancy lengths are less than or equal to 41.
End of explanation
cdf.Value(0.5)
Explanation: Value evaluates the inverse CDF; given a fraction, it computes the corresponding value. For example, the median is the value that corresponds to 0.5.
End of explanation
first_cdf = thinkstats2.Cdf(firsts.totalwgt_lb, label='first')
other_cdf = thinkstats2.Cdf(others.totalwgt_lb, label='other')
thinkplot.PrePlot(2)
thinkplot.Cdfs([first_cdf, other_cdf])
thinkplot.Config(xlabel='Weight (pounds)', ylabel='CDF')
Explanation: In general, CDFs are a good way to visualize distributions. They are not as noisy as PMFs, and if you plot several CDFs on the same axes, any differences between them are apparent.
End of explanation
weights = live.totalwgt_lb
live_cdf = thinkstats2.Cdf(weights, label='live')
Explanation: In this example, we can see that first babies are slightly, but consistently, lighter than others.
We can use the CDF of birth weight to compute percentile-based statistics.
End of explanation
median = live_cdf.Percentile(50)
median
Explanation: Again, the median is the 50th percentile.
End of explanation
iqr = (live_cdf.Percentile(25), live_cdf.Percentile(75))
iqr
Explanation: The interquartile range is the interval from the 25th to 75th percentile.
End of explanation
live_cdf.PercentileRank(10.2)
Explanation: We can use the CDF to look up the percentile rank of a particular value. For example, my second daughter was 10.2 pounds at birth, which is near the 99th percentile.
End of explanation
sample = np.random.choice(weights, 100, replace=True)
ranks = [live_cdf.PercentileRank(x) for x in sample]
Explanation: If we draw a random sample from the observed weights and map each weigh to its percentile rank.
End of explanation
rank_cdf = thinkstats2.Cdf(ranks)
thinkplot.Cdf(rank_cdf)
thinkplot.Config(xlabel='Percentile rank', ylabel='CDF')
Explanation: The resulting list of ranks should be approximately uniform from 0-1.
End of explanation
resample = live_cdf.Sample(1000)
thinkplot.Cdf(live_cdf)
thinkplot.Cdf(thinkstats2.Cdf(resample, label='resample'))
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='CDF')
Explanation: That observation is the basis of Cdf.Sample, which generates a random sample from a Cdf. Here's an example.
End of explanation
# Solution goes here
# Percentile rank of a 7.5 lb birth weight among all live births
live_cdf.PercentileRank(7.5)
# Solution goes here
# Percentile rank of the same weight among non-first babies
other_cdf.PercentileRank(7.5)
Explanation: This confirms that the random sample has the same distribution as the original data.
Exercises
Exercise: How much did you weigh at birth? If you don’t know, call your mother or someone else who knows. Using the NSFG data (all live births), compute the distribution of birth weights and use it to find your percentile rank. If you were a first baby, find your percentile rank in the distribution for first babies. Otherwise use the distribution for others. If you are in the 90th percentile or higher, call your mother back and apologize.
End of explanation
# Solution goes here
# Generate 1000 random values; every value is distinct, so the PMF below
# consists of 1000 spikes of probability 1/1000 -- that's what "goes wrong".
paul = np.random.random(1000)
pk2 = thinkstats2.Pmf(paul)
thinkplot.Pmf(pk2)
# Solution goes here
# The CDF, by contrast, is approximately a straight diagonal line,
# which shows the distribution is (close to) uniform.
pk3 = thinkstats2.Cdf(paul)
thinkplot.Cdf(pk3)
# Solution goes here
Explanation: Exercise: The numbers generated by numpy.random.random are supposed to be uniform between 0 and 1; that is, every value in the range should have the same probability.
Generate 1000 numbers from numpy.random.random and plot their PMF. What goes wrong?
Now plot the CDF. Is the distribution uniform?
End of explanation |
4,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 8 | Anomaly Detection
Step1: Part 1
Step2: Part 2
Step3: Visualize the fit.
Step4: Part 3
Step5: Best epsilon and F1 found using cross-validation (epsilon should be about 8.99e-05 and F1 about 0.875)
Step6: Part 4
Step7: best epsilon found
Step8: best F1 score
Step9: Number of outliers found | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
from scipy.stats import multivariate_normal
%matplotlib inline
#%qtconsole
Explanation: Exercise 8 | Anomaly Detection
End of explanation
ex7data1 = scipy.io.loadmat('ex8data1.mat')
X = ex7data1['X']
Xval = ex7data1['Xval']
yval = ex7data1['yval'][:,0]
def plot_data(X, ax):
ax.set_xlabel('Latency')
ax.set_ylabel('Throughput')
ax.plot(X[:,0], X[:,1], 'bx')
fig, ax = plt.subplots()
plot_data(X, ax)
def multivariate_gaussian(X, mu, sigma2):
if len(sigma2) == 1:
sigma2 = np.diag(sigma2)
return multivariate_normal(mean=mu, cov=sigma2).pdf(X)
Explanation: Part 1: Load Example Dataset
We start this exercise by using a small dataset that is easy to
visualize.
Our example case consists of 2 network server statistics across
several machines: the latency and throughput of each machine.
This exercise will help us find possibly faulty (or very fast) machines.
End of explanation
def estimate_gaussian(X):
#ESTIMATEGAUSSIAN This function estimates the parameters of a
#Gaussian distribution using the data in X
# [mu sigma2] = estimateGaussian(X),
# The input X is the dataset with each n-dimensional data point in one row
# The output is an n-dimensional vector mu, the mean of the data set
# and the variances sigma^2, an n x 1 vector
#
m, n = X.shape
# You should return these values correctly
mu = np.zeros(n)
sigma2 = np.ones(n)
# ====================== YOUR CODE HERE ======================
# Instructions: Compute the mean of the data and the variances
# In particular, mu(i) should contain the mean of
# the data for the i-th feature and sigma2(i)
# should contain variance of the i-th feature.
#
# =============================================================
return mu, sigma2
mu, sigma2 = estimate_gaussian(X)
p = multivariate_gaussian(X, mu, sigma2)
Explanation: Part 2: Estimate the dataset statistics
For this exercise, we assume a Gaussian distribution for the dataset.
We first estimate the parameters of our assumed Gaussian distribution,
then compute the probabilities for each of the points and then visualize
both the overall distribution and where each of the points falls in
terms of that distribution.
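For reference, a hedged sketch of what the missing student code in estimate_gaussian could look like (vectorized maximum-likelihood estimates; the exercise uses the biased variance, i.e. ddof=0):
python
mu = X.mean(axis=0)      # per-feature mean
sigma2 = X.var(axis=0)   # per-feature variance (ddof=0)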
End of explanation
x1, x2 = np.meshgrid(np.linspace(0, 35), np.linspace(0, 35))
Z = multivariate_gaussian(np.c_[x1.reshape(-1), x2.reshape(-1)], mu, sigma2).reshape(x1.shape)
fig, ax = plt.subplots(figsize=(5,5))
plot_data(X, ax)
ax.contour(x1, x2, Z, levels=np.logspace(-20, 1, 7))
Explanation: Visualize the fit.
End of explanation
def select_threshold(yval, pval):
#SELECTTHRESHOLD Find the best threshold (epsilon) to use for selecting
#outliers
# [bestEpsilon bestF1] = SELECTTHRESHOLD(yval, pval) finds the best
# threshold to use for selecting outliers based on the results from a
# validation set (pval) and the ground truth (yval).
#
best_epsilon = 0
best_f1 = 0
f1 = 0
for epsilon in np.linspace(min(pval), max(pval), 1000):
# ====================== YOUR CODE HERE ======================
# Instructions: Compute the F1 score of choosing epsilon as the
# threshold and place the value in F1. The code at the
# end of the loop will compare the F1 score for this
# choice of epsilon and set it to be the best epsilon if
# it is better than the current choice of epsilon.
#
# Note: You can use predictions = pval < epsilon to get a binary vector
# of 0's and 1's of the outlier predictions
# =============================================================
if f1 > best_f1:
best_epsilon = epsilon
best_f1 = f1
return best_epsilon, best_f1
Explanation: Part 3: Find Outliers
Now you will find a good epsilon threshold using a cross-validation set and the probabilities given by the estimated Gaussian distribution.
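A hedged sketch of the code that belongs inside the loop of select_threshold, computing precision, recall and F1 for the current epsilon:
python
predictions = (pval < epsilon)
tp = np.sum((predictions == 1) & (yval == 1))
fp = np.sum((predictions == 1) & (yval == 0))
fn = np.sum((predictions == 0) & (yval == 1))
prec = tp / (tp + fp) if (tp + fp) > 0 else 0
rec = tp / (tp + fn) if (tp + fn) > 0 else 0
f1 = 2 * prec * rec / (prec + rec) if (prec + rec) > 0 else 0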
End of explanation
pval = multivariate_gaussian(Xval, mu, sigma2)
epsilon, F1 = select_threshold(yval, pval)
print(epsilon, F1)
outliers = p < epsilon
fig, ax = plt.subplots(figsize=(5,5))
plot_data(X, ax)
ax.scatter(X[outliers, 0], X[outliers, 1], marker='o', facecolors='none', edgecolors='r', s=100)
Explanation: Best epsilon and F1 found using cross-validation (epsilon should be about 8.99e-05 and F1 about 0.875):
End of explanation
ex7data2 = scipy.io.loadmat('ex8data2.mat')
X = ex7data2['X']
Xval = ex7data2['Xval']
yval = ex7data2['yval'][:,0]
mu, sigma2 = estimate_gaussian(X)
p = multivariate_gaussian(X, mu, sigma2)
pval = multivariate_gaussian(Xval, mu, sigma2)
epsilon, F1 = select_threshold(yval, pval)
Explanation: Part 4: Multidimensional Outliers
We will now use the code from the previous part and apply it to a
harder problem in which more features describe each datapoint and only
some features indicate whether a point is an outlier.
End of explanation
epsilon
Explanation: best epsilon found: (should be about 1.38e-18)
End of explanation
F1
Explanation: best F1 score:
End of explanation
sum(p < epsilon)
Explanation: Number of outliers found:
End of explanation |
4,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Benchmarking Performance and Scaling of Python Clustering Algorithms
There are a host of different clustering algorithms and implementations thereof for Python. The performance and scaling can depend as much on the implementation as the underlying algorithm. Obviously a well written implementation in C or C++ will beat a naive implementation in pure Python, but there is more to it than just that. The internals and data structures used can have a large impact on performance, and can even significantly change asymptotic performance. All of this means that, given some amount of data that you want to cluster, your options as to algorithm and implementation may be significantly constrained. I'm both lazy, and prefer empirical results for this sort of thing, so rather than analyzing the implementations and deriving asymptotic performance numbers for various implementations I'm just going to run everything and see what happens.
To begin with we need to get together all the clustering implementations, along with some plotting libraries so we can see what is going on once we've got data. Obviously this is not an exhaustive collection of clustering implementations, so if I've left off your favourite I apologise, but one has to draw a line somewhere.
The implementations being tested are
Step1: Now we need some benchmarking code at various dataset sizes. Because some clustering algorithms have performance that can vary quite a lot depending on the exact nature of the dataset we'll also need to run several times on randomly generated datasets of each size so as to get a better idea of the average case performance.
We also need to generalise over algorithms which don't necessarily all have the same API. We can resolve that by taking a clustering function, argument tuple and keywords dictionary to let us do semi-arbitrary calls (fortunately all the algorithms do at least take the dataset to cluster as the first parameter).
Finally some algorithms scale poorly, and I don't want to spend forever doing clustering of random datasets so we'll cap the maximum time an algorithm can use; once it has taken longer than max time we'll just abort there and leave the remaining entries in our datasize by samples matrix unfilled.
In the end this all amounts to a fairly straightforward set of nested loops (over datasizes and number of samples) with calls to sklearn to generate mock data and the clustering function inside a timer. Add in some early abort and we're done.
Step2: Comparison of all ten implementations
Now we need a range of dataset sizes to test out our algorithm. Since the scaling performance is wildly different over the ten implementations we're going to look at it will be beneficial to have a number of very small dataset sizes, and increasing spacing as we get larger, spanning out to 32000 datapoints to cluster (to begin with). Numpy provides convenient ways to get this done via arange and vector multiplication. We'll start with step sizes of 500, then shift to steps of 1000 past 3000 datapoints, and finally steps of 2000 past 6000 datapoints.
Step3: Now it is just a matter of running all the clustering algorithms via our benchmark function to collect up all the requisite data. This could be prettier, rolled up into functions appropriately, but sometimes brute force is good enough. More importantly (for me) since this can take a significant amount of compute time, I wanted to be able to comment out algorithms that were slow or I was uninterested in easily. Which brings me to a warning for you the reader and potential user of the notebook
Step4: Now we need to plot the results so we can see what is going on. The catch is that we have several datapoints for each dataset size and ultimately we would like to try and fit a curve through all of it to get the general scaling trend. Fortunately seaborn comes to the rescue here by providing regplot which plots a regression through a dataset, supports higher order regression (we should probably use order two as most algorithms are effectively quadratic) and handles multiple datapoints for each x-value cleanly (using the x_estimator keyword to put a point at the mean and draw an error bar to cover the range of data).
Step5: A few features stand out. First of all there appear to be essentially two classes of implementation, with DeBaCl being an odd case that falls in the middle. The fast implementations tend to be implementations of single linkage agglomerative clustering, K-means, and DBSCAN. The slow cases are largely from sklearn and include agglomerative clustering (in this case using Ward instead of single linkage).
For practical purposes this means that if you have much more than 10000 datapoints your clustering options are significantly constrained
Step6: Again we can use seaborn to do curve fitting and plotting, exactly as before.
Step7: Clearly something has gone woefully wrong with the curve fitting for the scipy single linkage implementation, but what exactly? If we look at the raw data we can see.
Step8: It seems that at around 44000 points we hit a wall and the runtimes spiked. A hint is that I'm running this on a laptop with 8GB of RAM. Both single linkage algorithms use scipy.spatial.pdist to compute pairwise distances between points, which returns an array of shape (n(n-1)/2, 1) of doubles. A quick computation shows that that array of distances is quite large once we have 44000 points
Step9: If we assume that my laptop is keeping much other than that distance array in RAM then clearly we are going to spend time paging out the distance array to disk and back and hence we will see the runtimes increase dramatically as we become disk IO bound. If we just leave off the last element we can get a better idea of the curve, but keep in mind that the scipy single linkage implementation does not scale past a limit set by your available RAM.
Step10: If we're looking for scaling we can write off the scipy single linkage implementation -- even if we didn't hit the RAM limit the $O(n^2)$ scaling is going to quickly catch up with us. Fastcluster has the same asymptotic scaling, but is heavily optimized to bring the constant down much lower -- at this point it is still keeping close to the faster algorithms. Its asymptotics will still catch up with it eventually, however.
In practice this is going to mean that for larger datasets you are going to be very constrained in what algorithms you can apply
Step11: Now some differences become clear. The asymptotic complexity starts to kick in with fastcluster failing to keep up. In turn HDBSCAN and DBSCAN, while having sub-$O(n^2)$ complexity, can't achieve $O(n \log(n))$ at this dataset dimension, and start to curve upward precipitously. Finally it demonstrates again how much of a difference implementation can make
Step12: Now we run that for each of our pre-existing datasets to extrapolate out predicted performance on the relevant dataset sizes. A little pandas wrangling later and we've produced a table of roughly how large a dataset you can tackle in each time frame with each implementation. I had to leave out the scipy KMeans timings because the noise in timing results caused the model to be unrealistic at larger data sizes. Note how the $O(n\log n)$ algorithms utterly dominate here. In the meantime, for medium sized data sets you can still get quite a lot done with HDBSCAN.
import hdbscan
import debacl
import fastcluster
import sklearn.cluster
import scipy.cluster
import sklearn.datasets
import numpy as np
import pandas as pd
import time
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_context('poster')
sns.set_palette('Paired', 10)
sns.set_color_codes()
Explanation: Benchmarking Performance and Scaling of Python Clustering Algorithms
There are a host of different clustering algorithms and implementations thereof for Python. The performance and scaling can depend as much on the implementation as the underlying algorithm. Obviously a well written implementation in C or C++ will beat a naive implementation in pure Python, but there is more to it than just that. The internals and data structures used can have a large impact on performance, and can even significantly change asymptotic performance. All of this means that, given some amount of data that you want to cluster, your options as to algorithm and implementation may be significantly constrained. I'm both lazy, and prefer empirical results for this sort of thing, so rather than analyzing the implementations and deriving asymptotic performance numbers for various implementations I'm just going to run everything and see what happens.
To begin with we need to get together all the clustering implementations, along with some plotting libraries so we can see what is going on once we've got data. Obviously this is not an exhaustive collection of clustering implementations, so if I've left off your favourite I apologise, but one has to draw a line somewhere.
The implementations being tested are:
Sklearn (which implements several algorithms):
K-Means clustering
DBSCAN clustering
Agglomerative clustering
Spectral clustering
Affinity Propagation
Scipy (which provides basic algorithms):
K-Means clustering
Agglomerative clustering
Fastcluster (which provides very fast agglomerative clustering in C++)
DeBaCl (Density Based Clustering; similar to a mix of DBSCAN and Agglomerative)
HDBSCAN (A robust hierarchical version of DBSCAN)
Obviously a major factor in performance will be the algorithm itself. Some algorithms are simply slower -- often, but not always, because they are doing more work to provide a better clustering.
End of explanation
def benchmark_algorithm(dataset_sizes, cluster_function, function_args, function_kwds,
dataset_dimension=10, dataset_n_clusters=10, max_time=45, sample_size=2):
# Initialize the result with NaNs so that any unfilled entries
# will be considered NULL when we convert to a pandas dataframe at the end
result = np.nan * np.ones((len(dataset_sizes), sample_size))
for index, size in enumerate(dataset_sizes):
for s in range(sample_size):
# Use sklearns make_blobs to generate a random dataset with specified size
# dimension and number of clusters
data, labels = sklearn.datasets.make_blobs(n_samples=size,
n_features=dataset_dimension,
centers=dataset_n_clusters)
# Start the clustering with a timer
start_time = time.time()
cluster_function(data, *function_args, **function_kwds)
time_taken = time.time() - start_time
# If we are taking more than max_time then abort -- we don't
# want to spend excessive time on slow algorithms
if time_taken > max_time:
result[index, s] = time_taken
return pd.DataFrame(np.vstack([dataset_sizes.repeat(sample_size),
result.flatten()]).T, columns=['x','y'])
else:
result[index, s] = time_taken
# Return the result as a dataframe for easier handling with seaborn afterwards
return pd.DataFrame(np.vstack([dataset_sizes.repeat(sample_size),
result.flatten()]).T, columns=['x','y'])
Explanation: Now we need some benchmarking code at various dataset sizes. Because some clustering algorithms have performance that can vary quite a lot depending on the exact nature of the dataset we'll also need to run several times on randomly generated datasets of each size so as to get a better idea of the average case performance.
We also need to generalise over algorithms which don't necessarily all have the same API. We can resolve that by taking a clustering function, argument tuple and keywords dictionary to let us do semi-arbitrary calls (fortunately all the algorithms do at least take the dataset to cluster as the first parameter).
Finally some algorithms scale poorly, and I don't want to spend forever doing clustering of random datasets so we'll cap the maximum time an algorithm can use; once it has taken longer than max time we'll just abort there and leave the remaining entries in our datasize by samples matrix unfilled.
In the end this all amounts to a fairly straightforward set of nested loops (over datasizes and number of samples) with calls to sklearn to generate mock data and the clustering function inside a timer. Add in some early abort and we're done.
End of explanation
dataset_sizes = np.hstack([np.arange(1, 6) * 500, np.arange(3,7) * 1000, np.arange(4,17) * 2000])
Explanation: Comparison of all ten implementations
Now we need a range of dataset sizes to test out our algorithm. Since the scaling performance is wildly different over the ten implementations we're going to look at it will be beneficial to have a number of very small dataset sizes, and increasing spacing as we get larger, spanning out to 32000 datapoints to cluster (to begin with). Numpy provides convenient ways to get this done via arange and vector multiplication. We'll start with step sizes of 500, then shift to steps of 1000 past 3000 datapoints, and finally steps of 2000 past 6000 datapoints.
End of explanation
k_means = sklearn.cluster.KMeans(10)
k_means_data = benchmark_algorithm(dataset_sizes, k_means.fit, (), {})
dbscan = sklearn.cluster.DBSCAN(eps=1.25)
dbscan_data = benchmark_algorithm(dataset_sizes, dbscan.fit, (), {})
scipy_k_means_data = benchmark_algorithm(dataset_sizes, scipy.cluster.vq.kmeans, (10,), {})
scipy_single_data = benchmark_algorithm(dataset_sizes, scipy.cluster.hierarchy.single, (), {})
fastclust_data = benchmark_algorithm(dataset_sizes, fastcluster.linkage_vector, (), {})
hdbscan_ = hdbscan.HDBSCAN()
hdbscan_data = benchmark_algorithm(dataset_sizes, hdbscan_.fit, (), {})
debacl_data = benchmark_algorithm(dataset_sizes, debacl.geom_tree.geomTree, (5, 5), {'verbose':False})
agglomerative = sklearn.cluster.AgglomerativeClustering(10)
agg_data = benchmark_algorithm(dataset_sizes, agglomerative.fit, (), {}, sample_size=4)
spectral = sklearn.cluster.SpectralClustering(10)
spectral_data = benchmark_algorithm(dataset_sizes, spectral.fit, (), {}, sample_size=6)
affinity_prop = sklearn.cluster.AffinityPropagation()
ap_data = benchmark_algorithm(dataset_sizes, affinity_prop.fit, (), {}, sample_size=3)
Explanation: Now it is just a matter of running all the clustering algorithms via our benchmark function to collect up all the requisite data. This could be prettier, rolled up into functions appropriately, but sometimes brute force is good enough. More importantly (for me) since this can take a significant amount of compute time, I wanted to be able to comment out algorithms that were slow or I was uninterested in easily. Which brings me to a warning for you the reader and potential user of the notebook: this next step is very expensive. We are running ten different clustering algorithms multiple times each on twenty two different dataset sizes -- and some of the clustering algorithms are slow (we are capping out at forty five seconds per run). That means that the next cell can take an hour or more to run. That doesn't mean "Don't try this at home" (I actually encourage you to try this out yourself and play with dataset parameters and clustering parameters) but it does mean you should be patient if you're going to!
End of explanation
sns.regplot(x='x', y='y', data=k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=hdbscan_data, order=2, label='HDBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=fastclust_data, order=2, label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=scipy_single_data, order=2, label='Scipy Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=debacl_data, order=2, label='DeBaCl Geom Tree', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=spectral_data, order=2, label='Sklearn Spectral', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=agg_data, order=2, label='Sklearn Agglomerative', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=ap_data, order=2, label='Sklearn Affinity Propagation', x_estimator=np.mean)
plt.gca().axis([0, 34000, 0, 120])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Clustering Implementations')
plt.legend()
Explanation: Now we need to plot the results so we can see what is going on. The catch is that we have several datapoints for each dataset size and ultimately we would like to try and fit a curve through all of it to get the general scaling trend. Fortunately seaborn comes to the rescue here by providing regplot which plots a regression through a dataset, supports higher order regression (we should probably use order two as most algorithms are effectively quadratic) and handles multiple datapoints for each x-value cleanly (using the x_estimator keyword to put a point at the mean and draw an error bar to cover the range of data).
End of explanation
large_dataset_sizes = np.arange(1,16) * 4000
hdbscan_boruvka = hdbscan.HDBSCAN(algorithm='boruvka_kdtree')
large_hdbscan_boruvka_data = benchmark_algorithm(large_dataset_sizes,
hdbscan_boruvka.fit, (), {}, max_time=90, sample_size=1)
k_means = sklearn.cluster.KMeans(10)
large_k_means_data = benchmark_algorithm(large_dataset_sizes,
k_means.fit, (), {}, max_time=90, sample_size=1)
dbscan = sklearn.cluster.DBSCAN(eps=1.25, min_samples=5)
large_dbscan_data = benchmark_algorithm(large_dataset_sizes,
dbscan.fit, (), {}, max_time=90, sample_size=1)
large_fastclust_data = benchmark_algorithm(large_dataset_sizes,
fastcluster.linkage_vector, (), {}, max_time=90, sample_size=1)
large_scipy_k_means_data = benchmark_algorithm(large_dataset_sizes,
scipy.cluster.vq.kmeans, (10,), {}, max_time=90, sample_size=1)
large_scipy_single_data = benchmark_algorithm(large_dataset_sizes,
scipy.cluster.hierarchy.single, (), {}, max_time=90, sample_size=1)
Explanation: A few features stand out. First of all there appear to be essentially two classes of implementation, with DeBaCl being an odd case that falls in the middle. The fast implementations tend to be implementations of single linkage agglomerative clustering, K-means, and DBSCAN. The slow cases are largely from sklearn and include agglomerative clustering (in this case using Ward instead of single linkage).
For practical purposes this means that if you have much more than 10000 datapoints your clustering options are significantly constrained: sklearn spectral, agglomerative and affinity propagation are going to take far too long. DeBaCl may still be an option, but given that the hdbscan library provides "robust single linkage clustering" equivalent to what DeBaCl is doing (and with effectively the same runtime as hdbscan as it is a subset of that algorithm) it is probably not the best choice for large dataset sizes.
So let's drop out those slow algorithms so we can scale out a little further and get a closer look at the various algorithms that managed 32000 points in under thirty seconds. There is almost undoubtedly more to learn as we get ever larger dataset sizes.
Comparison of fast implementations
Let's compare the six fastest implementations now. We can scale out a little further as well; based on the curves above it looks like we should be able to comfortably get to 60000 data points without taking much more than a minute per run. We can also note that most of these implementations weren't that noisy so we can get away with a single run per dataset size.
End of explanation
sns.regplot(x='x', y='y', data=large_k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_hdbscan_boruvka_data, order=2, label='HDBSCAN Boruvka', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_fastclust_data, order=2, label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_single_data, order=2, label='Scipy Single Linkage', x_estimator=np.mean)
#sns.regplot(x='x', y='y', data=large_hdbscan_prims_data, order=2, label='HDBSCAN Prims', x_estimator=np.mean)
plt.gca().axis([0, 64000, 0, 150])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Fastest Clustering Implementations')
plt.legend()
Explanation: Again we can use seaborn to do curve fitting and plotting, exactly as before.
End of explanation
large_scipy_single_data.tail(10)
Explanation: Clearly something has gone woefully wrong with the curve fitting for the scipy single linkage implementation, but what exactly? If we look at the raw data we can see.
End of explanation
size_of_array = 44000 * (44000 - 1) / 2 # from pdist documentation
bytes_in_array = size_of_array * 8 # Since doubles use 8 bytes
gigabytes_used = bytes_in_array / (1024.0 ** 3) # divide out to get the number of GB
gigabytes_used
Explanation: It seems that at around 44000 points we hit a wall and the runtimes spiked. A hint is that I'm running this on a laptop with 8GB of RAM. Both single linkage algorithms use scipy.spatial.pdist to compute pairwise distances between points, which returns an array of shape (n(n-1)/2, 1) of doubles. A quick computation shows that that array of distances is quite large once we have 44000 points:
End of explanation
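As a quick, hedged illustration of why that condensed distance array gets so large, here is a tiny sketch showing the shape pdist returns on a small input (the 44000-point case is worked out analytically above):
import numpy as np
from scipy.spatial.distance import pdist

small = np.random.random((5, 2))
print(pdist(small).shape)   # (10,) == 5 * 4 / 2 condensed pairwise distances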
sns.regplot(x='x', y='y', data=large_k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_hdbscan_boruvka_data, order=2, label='HDBSCAN Boruvka', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_fastclust_data, order=2, label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_single_data[:8], order=2, label='Scipy Single Linkage', x_estimator=np.mean)
#sns.regplot(x='x', y='y', data=large_hdbscan_prims_data, order=2, label='HDBSCAN Prims', x_estimator=np.mean)
plt.gca().axis([0, 64000, 0, 150])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Fastest Clustering Implementations')
plt.legend()
Explanation: If we assume that my laptop is keeping much other than that distance array in RAM then clearly we are going to spend time paging out the distance array to disk and back and hence we will see the runtimes increase dramatically as we become disk IO bound. If we just leave off the last element we can get a better idea of the curve, but keep in mind that the scipy single linkage implementation does not scale past a limit set by your available RAM.
End of explanation
huge_dataset_sizes = np.arange(1,11) * 20000
k_means = sklearn.cluster.KMeans(10)
huge_k_means_data = benchmark_algorithm(huge_dataset_sizes,
k_means.fit, (), {}, max_time=120, sample_size=2, dataset_dimension=10)
dbscan = sklearn.cluster.DBSCAN(eps=1.25)
huge_dbscan_data = benchmark_algorithm(huge_dataset_sizes,
dbscan.fit, (), {}, max_time=120, sample_size=2, dataset_dimension=10)
huge_scipy_k_means_data = benchmark_algorithm(huge_dataset_sizes,
scipy.cluster.vq.kmeans, (10,), {}, max_time=120, sample_size=2, dataset_dimension=10)
hdbscan_boruvka = hdbscan.HDBSCAN(algorithm='boruvka_kdtree')
huge_hdbscan_data = benchmark_algorithm(huge_dataset_sizes,
hdbscan_boruvka.fit, (), {}, max_time=240, sample_size=4, dataset_dimension=10)
huge_fastcluster_data = benchmark_algorithm(huge_dataset_sizes,
fastcluster.linkage_vector, (), {}, max_time=240, sample_size=2, dataset_dimension=10)
sns.regplot(x='x', y='y', data=huge_k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_hdbscan_data, order=2, label='HDBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_fastcluster_data, order=2, label='Fastcluster', x_estimator=np.mean)
plt.gca().axis([0, 200000, 0, 240])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Fastest Clustering Implementations at Scale')
plt.legend()
Explanation: If we're looking for scaling we can write off the scipy single linkage implementation -- even if we didn't hit the RAM limit the $O(n^2)$ scaling is going to quickly catch up with us. Fastcluster has the same asymptotic scaling, but is heavily optimized to bring the constant down much lower -- at this point it is still keeping close to the faster algorithms. Its asymptotics will still catch up with it eventually, however.
In practice this is going to mean that for larger datasets you are going to be very constrained in what algorithms you can apply: if you get enough datapoints only K-Means, DBSCAN, and HDBSCAN will be left. This is somewhat disappointing, particularly as K-Means is not a particularly good clustering algorithm, especially for exploratory data analysis.
With this in mind it is worth looking at how these last several implementations perform at much larger sizes, to see, for example, when fastscluster starts to have its asymptotic complexity start to pull it away.
Comparison of high performance implementations
At this point we can scale out to 200000 datapoints easily enough, so let's push things at least that far so we can start to really see scaling effects.
End of explanation
import statsmodels.formula.api as sm
time_samples = [1000, 2000, 5000, 10000, 25000, 50000, 75000, 100000, 250000, 500000, 750000,
1000000, 2500000, 5000000, 10000000, 50000000, 100000000, 500000000, 1000000000]
def get_timing_series(data, quadratic=True):
if quadratic:
data['x_squared'] = data.x**2
model = sm.ols('y ~ x + x_squared', data=data).fit()
predictions = [model.params.dot([1.0, i, i**2]) for i in time_samples]
return pd.Series(predictions, index=pd.Index(time_samples))
else: # assume n log(n)
data['xlogx'] = data.x * np.log(data.x)
model = sm.ols('y ~ x + xlogx', data=data).fit()
predictions = [model.params.dot([1.0, i, i*np.log(i)]) for i in time_samples]
return pd.Series(predictions, index=pd.Index(time_samples))
Explanation: Now some differences become clear. The asymptotic complexity starts to kick in with fastcluster failing to keep up. In turn HDBSCAN and DBSCAN, while having sub-$O(n^2)$ complexity, can't achieve $O(n \log(n))$ at this dataset dimension, and start to curve upward precipitously. Finally it demonstrates again how much of a difference implementation can make: the sklearn implementation of K-Means is far better than the scipy implementation. Since HDBSCAN clustering is a lot better than K-Means (unless you have good reasons to assume that the clusters partition your data and are all drawn from Gaussian distributions) and the scaling is still pretty good I would suggest that unless you have a truly stupendous amount of data you wish to cluster then the HDBSCAN implementation is a good choice.
But should I get a coffee?
So we know which implementations scale and which don't; a more useful thing to know in practice is, given a dataset, what can I run interactively? What can I run while I go and grab some coffee? How about a run over lunch? What if I'm willing to wait until I get in tomorrow morning? Each of these represent significant breaks in productivity -- once you aren't working interactively anymore your productivity drops measurably, and so on.
We can build a table for this. To start we'll need to be able to approximate how long a given clustering implementation will take to run. Fortunately we already gathered a lot of that data; if we load up the statsmodels package we can fit the data (with a quadratic or $n\log n$ fit depending on the implementation; DBSCAN and HDBSCAN get caught here, since while they are under $O(n^2)$ scaling, they don't have an easily described model, so I'll model them as $n^2$ for now) and use the resulting model to make our predictions. Obviously this has some caveats: if you fill your RAM with a distance matrix your runtime isn't going to fit the curve.
I've hand built a time_samples list to give a reasonable set of potential data sizes that are nice and human readable. After that we just need a function to fit and build the curves.
End of explanation
ap_timings = get_timing_series(ap_data)
spectral_timings = get_timing_series(spectral_data)
agg_timings = get_timing_series(agg_data)
debacl_timings = get_timing_series(debacl_data)
fastclust_timings = get_timing_series(large_fastclust_data.ix[:10,:].copy())
scipy_single_timings = get_timing_series(large_scipy_single_data.ix[:10,:].copy())
hdbscan_boruvka = get_timing_series(huge_hdbscan_data, quadratic=True)
#scipy_k_means_timings = get_timing_series(huge_scipy_k_means_data, quadratic=False)
dbscan_timings = get_timing_series(huge_dbscan_data, quadratic=True)
k_means_timings = get_timing_series(huge_k_means_data, quadratic=False)
timing_data = pd.concat([ap_timings, spectral_timings, agg_timings, debacl_timings,
scipy_single_timings, fastclust_timings, hdbscan_boruvka,
dbscan_timings, k_means_timings
], axis=1)
timing_data.columns=['AffinityPropagation', 'Spectral', 'Agglomerative',
'DeBaCl', 'ScipySingleLinkage', 'Fastcluster',
'HDBSCAN', 'DBSCAN', 'SKLearn KMeans'
]
def get_size(series, max_time):
return series.index[series < max_time].max()
datasize_table = pd.concat([
timing_data.apply(get_size, max_time=30),
timing_data.apply(get_size, max_time=300),
timing_data.apply(get_size, max_time=3600),
timing_data.apply(get_size, max_time=8*3600)
], axis=1)
datasize_table.columns=('Interactive', 'Get Coffee', 'Over Lunch', 'Overnight')
datasize_table
Explanation: Now we run that for each of our pre-existing datasets to extrapolate out predicted performance on the relevant dataset sizes. A little pandas wrangling later and we've produced a table of roughly how large a dataset you can tackle in each time frame with each implementation. I had to leave out the scipy KMeans timings because the noise in timing results caused the model to be unrealistic at larger data sizes. Note how the $O(n\log n)$ algorithms utterly dominate here. In the meantime, for medium sized data sets you can still get quite a lot done with HDBSCAN.
End of explanation |
4,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 7
Step1: Here, range() will return a number of integers, starting from zero, up to (but not including) the number which we pass as an argument to the function. Using range() is of course much more convenient to generate such lists of numbers than writing e.g. a while-loop to achieve the same result. Note that we can pass more than one argument to range(), if we want to start counting from a number higher than zero (which will be the default when you only pass a single parameter to the function)
Step2: We can even specify a 'step size' as a third argument, which controls how much a variable will increase with each step
Step3: If you don't specify the step size explicitly, it will default to 1. If you want to store or print the result of calling range(), you have to cast it explicitly, for instance, to a list
Step4: Enumerate
Of course, range() can also be used to iterate over the items in a list or tuple, typically in combination with calling len() to avoid IndexErrors
Step5: Naturally, the same result can just as easily be obtained using a for-loop
Step6: One drawback of such an easy-to-write loop, however, is that it doesn't keep track of the index of the word that we are printing in one of the iterations. Suppose that we would like to print the index of each word in our example above, we would then have to work with a counter...
Step7: ... or indeed use a call to range() and len()
Step8: A function that makes life in Python much easier in this respect is enumerate(). If we pass a list to enumerate(), it will return a list of mini-tuples
Step9: Here -- as with range() -- we have to cast the result of enumerate() to e.g. a list before we can actually print it. Iterating over the result of enumerate(), on the other hand, is not a problem. Here, we print out each mini-tuple, consisting of an index and an item, in a for-loop
Step10: When using such for-loops and enumerate(), we can do something really cool. Remember that we can 'unpack' tuples
Step11: In our for-loop example, we can apply the same kind of unpacking in each iteration
Step12: However, there is also a super-convenient shortcut for this in Python, where we unpack each item in the for-statement already
Step13: How cool is that? Note how easy it becomes now, to solve our problem with the index above
Step14: Zip
Obviously, enumerate() can be really useful when you're working with lists or other kinds of data sequences. Another helpful function in this respect is zip(). Suppose that we have a small database of 5 books in the form of three lists
Step15: In each of these lists, the third item always corresponds to Dante's masterpiece and the last item to the Aeneid by Vergil, which inspired him. The use of zip() can now easily be illustrated
Step16: Do you see what happened here? In fact, zip() really functions like a 'zipper' in the real-world
Step17: How awesome is that? Here too
Step18: As you can understand, this is really useful functionality for dealing with long, complex lists and especially combinations of them.
Comprehensions
Now it's time to have a look at comprehensions in Python
Step19: We can create the exact same list of numbers using a list comprehension which only takes up one line of Python code
Step20: OK, impressive, but there are a lot of new things going on here. Let's go through this step by step. The first step is easy
Step21: Inside the squared brackets, we can find the actual comprehension which will determine what goes inside our new list. Note that it is not always possible to read these comprehensions from left to right, so you will have to get used to the way they are built up from a syntactic point of view. First of all, we add an expression that determines which elements will make it into our list, in this case
Step22: Moreover, we don't have to include the if-statement at the end (it is always optional)
Step23: In the comprehensions above, words is the only pre-existing input to our comprehension; all the other variables are created and manipulated inside the function. The new range() function which we saw at the beginning of this chapter is also often used as the input for a comprehension
Step24: Importantly, we can just as easily create a tuple using the same comprehension syntax, but this time calling tuple() on the comprehension, instead of using the squared brackets to create a normal list
Step25: This is very useful, especially if you can figure out why the following code block will generate an error...
Step26: Good programmers can do amazing things with comprehensions. With list comprehensions, it becomes really easy, for example, to create nested lists (lists that themselves consist of lists or tuples). Can you figure out what is happening in the following code block
Step27: In the first line above, we create a new list (nested_list) but we don't fill it with single numbers, but instead with mini-lists that contain two values. We could just as easily have done this with mini-tuples, by using round brackets. Can you spot the differences below?
Step28: Note that zip() can also be very useful in this respect, because you can unpack items inside the comprehension. Do you understand what is going on in the following code block
Step29: Again, more complex comprehensions are thinkable
Step30: Great
Step31: Finally, we should also mention that dictionaries and sets can also be filled in a one-liner using such comprehensions. For sets, the syntax runs entirely parallel to that of list and tuple comprehensions, but here, we use curly brackets to surround the expression
Step32: For dictionaries, which consist of key-value pairs, the syntax is only slightly more complicated. Here, you have to make sure that you link the correct key to the correct value using a colon, in the very first part of the comprehension. The following example will make this clearer
Step33: You've reached the end of Chapter 7! Ignore the code below, it's only here to make the page pretty | Python Code:
for i in range(10):
print(i)
Explanation: Chapter 7: More on Loops
In the previous chapters we have often discussed the powerful concept of looping in Python. Using loops, we can easily repeat certain actions when coding. With for-loops, for instance, it is really easy to visit the items in a list and print them, for example. In this chapter, we will discuss some more advanced forms of looping, as well as new, quick ways to create and deal with lists and other data sequences.
Range
The first new function that we will discuss here is range(). Using this function, we can quickly generate a list of numbers in a specific range:
End of explanation
for i in range(300, 306):
print(i)
Explanation: Here, range() will return a number of integers, starting from zero, up to (but not including) the number which we pass as an argument to the function. Using range() is of course much more convenient to generate such lists of numbers than writing e.g. a while-loop to achieve the same result. Note that we can pass more than one argument to range(), if we want to start counting from a number higher than zero (which will be the default when you only pass a single parameter to the function):
End of explanation
for i in range(15, 26, 3):
print(i)
Explanation: We can even specify a 'step size' as a third argument, which controls how much a variable will increase with each step:
End of explanation
numbers = list(range(10))
print(numbers[3:])
Explanation: If you don't specify the step size explicitly, it will default to 1. If you want to store or print the result of calling range(), you have to cast it explicitly, for instance, to a list:
End of explanation
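As a small aside (a sketch): in Python 3, range() is a lazy object, so printing it without casting only shows the range itself, not the numbers.
r = range(10)
print(r)                         # range(0, 10) -- the values are not materialized yet
print(list(r))                   # casting to a list produces the actual numbers
print(tuple(range(15, 26, 3)))   # casting to a tuple works just as well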
words = "Be yourself; everyone else is already taken".split()
for i in range(len(words)):
print(words[i])
Explanation: Enumerate
Of course, range() can also be used to iterate over the items in a list or tuple, typically in combination with calling len() to avoid IndexErrors:
End of explanation
for word in words:
print(word)
Explanation: Naturally, the same result can just as easily be obtained using a for-loop:
End of explanation
counter = 0
for word in words:
print(word, ": index", counter)
counter+=1
Explanation: One drawback of such an easy-to-write loop, however, is that it doesn't keep track of the index of the word that we are printing in one of the iterations. Suppose that we would like to print the index of each word in our example above, we would then have to work with a counter...
End of explanation
for i in range(len(words)):
print(words[i], ": index", i)
Explanation: ... or indeed use a call to range() and len():
End of explanation
print(list(enumerate(words)))
Explanation: A function that makes life in Python much easier in this respect is enumerate(). If we pass a list to enumerate(), it will return a list of mini-tuples: each mini-tuple will contain as its first element the index of the item, and as its second element the actual item:
End of explanation
for mini_tuple in enumerate(words):
print(mini_tuple)
Explanation: Here -- as with range() -- we have to cast the result of enumerate() to e.g. a list before we can actually print it. Iterating over the result of enumerate(), on the other hand, is not a problem. Here, we print out each mini-tuple, consisting of an index and an item, in a for-loop:
End of explanation
item = (5, 'already')
index, word = item # this is the same as: index, word = (5, "already")
print(index)
print(word)
Explanation: When using such for-loops and enumerate(), we can do something really cool. Remember that we can 'unpack' tuples: if a tuple consists of two elements, we can unpack it on one line of code to two different variables via the assignment operator:
End of explanation
for item in enumerate(words):
index, word = item
print(index)
print(word)
print("=======")
Explanation: In our for-loop example, we can apply the same kind of unpacking in each iteration:
End of explanation
for index, word in enumerate(words):
print(index)
print(word)
print("====")
Explanation: However, there is also a super-convenient shortcut for this in Python, where we unpack each item in the for-statement already:
End of explanation
for i, word in enumerate(words):
print(word, ": index", i)
Explanation: How cool is that? Note how easy it becomes now, to solve our problem with the index above:
End of explanation
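One small extra that often comes in handy (a sketch; this optional parameter is easy to overlook): enumerate() accepts a second argument that sets the number to start counting from.
for i, word in enumerate(words, 1):   # start counting at 1 instead of 0
    print(word, ": position", i)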
titles = ["Emma", "Stoner", "Inferno", "1984", "Aeneid"]
authors = ["J. Austen", "J. Williams", "D. Alighieri", "G. Orwell", "P. Vergilius"]
dates = ["1815", "2006", "Ca. 1321", "1949", "before 19 BC"]
Explanation: Zip
Obviously, enumerate() can be really useful when you're working with lists or other kinds of data sequences. Another helpful function in this respect is zip(). Suppose that we have a small database of 5 books in the form of three lists: the first list contains the titles of the books, the second the author, while the third list contains the dates of publication:
End of explanation
list(zip(titles, authors))
list(zip(titles, dates))
list(zip(authors, dates))
Explanation: In each of these lists, the third item always corresponds to Dante's masterpiece and the last item to the Aeneid by Vergil, which inspired him. The use of zip() can now easily be illustrated:
End of explanation
list(zip(authors, titles, dates))
Explanation: Do you see what happened here? In fact, zip() really functions like a 'zipper' in the real-world: it zips together multiple lists, and returns a list of mini-tuples, in which the correct authors, titles and dates will be combined with each other. Moreover, you can pass multiple sequences at once to zip():
End of explanation
for author, title in zip(authors, titles):
print(author)
print(title)
print("===")
Explanation: How awesome is that? Here too: don't forget to cast the result of zip() to a list or tuple, e.g. if you want to print it. As with enumerate() we can now also unzip each mini-tuple when declaring a for-loop:
End of explanation
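As a side note (a small sketch): combined with the * operator, zip() also works in the other direction and will 'unzip' a zipped list back into separate sequences.
zipped = list(zip(authors, titles))
unzipped_authors, unzipped_titles = zip(*zipped)   # * unpacks the mini-tuples again
print(unzipped_authors)
print(unzipped_titles)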
import string
words = "I have not failed . I’ve just found 10,000 ways that won’t work .".split()
word_lengths = []
for word in words:
if word not in string.punctuation:
word_lengths.append(len(word))
print(word_lengths)
Explanation: As you can understand, this is really useful functionality for dealing with long, complex lists and especially combinations of them.
Comprehensions
Now it's time to have a look at comprehensions in Python: comprehensions, such as list comprehensions or tuple comprehensions, provide an easy way to create and fill new lists. They are also often used to change one list into another. Typically, comprehensions can be written in a single line of Python code, which is why people often feel like they are more readable than normal Python code. Let's start with an example. Say that we would like to fill a list of numbers that represent the length of each word in a sentence, but only if that word isn't a punctuation mark. By now, we can of course easily create such a list using a for-loop:
End of explanation
word_lengths = [len(word) for word in words if word not in string.punctuation]
print(word_lengths)
Explanation: We can create the exact same list of numbers using a list comprehension which only takes up one line of Python code:
End of explanation
print(type(word_lengths))
Explanation: OK, impressive, but there are a lot of new things going on here. Let's go through this step by step. The first step is easy: we initialize a variable word_lengths to which we assign a value using the assignment operator. The type of that value will eventually be a list: this is indicated by the square brackets which enclose the list comprehension:
End of explanation
words_without_punc = [word for word in words if word not in string.punctuation]
print(words_without_punc)
Explanation: Inside the squared brackets, we can find the actual comprehension which will determine what goes inside our new list. Note that it is not always possible to read these comprehensions from left to right, so you will have to get used to the way they are built up from a syntactic point of view. First of all, we add an expression that determines which elements will make it into our list, in this case: len(word). The variable word, in this case, is generated by the following for-statement: for word in words:. Finally, we add a condition to our statement that will determine whether or not len(word) should be added to our list. In this case, len(word) will only be included in our list if the word is not a punctuation mark: if word not in string.punctuation. This is a full list comprehension, but simpler ones exist. We could for instance not have called len() on word before appending it to our list. Like this, we could, for example, easily remove all punctuation from our wordlist:
End of explanation
all_word_lengths = [len(word) for word in words]
print(all_word_lengths)
Explanation: Moreover, we don't have to include the if-statement at the end (it is always optional):
End of explanation
square_numbers = [x*x for x in range(10)]
print(square_numbers)
Explanation: In the comprehensions above, words is the only pre-existing input to our comprehension; all the other variables are created and manipulated inside the function. The new range() function which we saw at the beginning of this chapter is also often used as the input for a comprehension:
End of explanation
tuple_word_lengths = tuple(len(word) for word in words if word not in string.punctuation)
print(tuple_word_lengths)
print(type(tuple_word_lengths))
Explanation: Importantly, we can just as easily create a tuple using the same comprehension syntax, but this time calling tuple() on the comprehension, instead of using the squared brackets to create a normal list:
End of explanation
tuple_word_lengths = tuple()
for word in words:
if word not in string.punctuation:
tuple_word_lengths.append(len(word))
print(tuple_word_lengths)
Explanation: This is very useful, especially if you can figure out why the following code block will generate an error...
End of explanation
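(In case you are stuck on the error above: tuples are immutable, so they have no append() method. A tiny sketch:)
t = tuple()
try:
    t.append(1)
except AttributeError as e:
    print(e)   # 'tuple' object has no attribute 'append'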
nested_list = [[x,x+2] for x in range(10, 22, 3)]
print(nested_list)
print(type(nested_list))
print(type(nested_list[3]))
Explanation: Good programmers can do amazing things with comprehensions. With list comprehensions, it becomes really easy, for example, to create nested lists (lists that themselves consist of lists or tuples). Can you figure out what is happening in the following code block:
End of explanation
nested_tuple = [(x,x+2) for x in range(10, 22, 3)]
print(nested_tuple)
print(type(nested_tuple))
print(type(nested_tuple[3]))
nested_tuple = tuple((x,x+2) for x in range(10, 22, 3))
print(nested_tuple)
print(type(nested_tuple))
print(type(nested_tuple[3]))
Explanation: In the first line above, we create a new list (nested_list) but we don't fill it with single numbers, but instead with mini-lists that contain two values. We could just as easily have done this with mini-tuples, by using round brackets. Can you spot the differences below?
End of explanation
a = [2, 3, 5, 7, 0, 2, 8]
b = [3, 2, 1, 7, 0, 0, 9]
diffs = [a-b for a,b in zip(a, b)]
print(diffs)
Explanation: Note that zip() can also be very useful in this respect, because you can unpack items inside the comprehension. Do you understand what is going on in the following code block:
End of explanation
diffs = [abs(a-b) for a,b in zip(a, b) if (a & b)]
print(diffs)
Explanation: Again, more complex comprehensions are thinkable:
End of explanation
A = tuple([x-1,x+3] for x in range(10, 100, 3))
B = [(n*n, n+50) for n in range(10, 1000, 3) if n <= 100]
sums = sum(tuple(item_a[1]+item_b[0] for item_a, item_b in zip(A[:10], B[:10])))
print(sums)
Explanation: Great: you are starting to become a real pro at comprehensions! The following, very dense code block, however, might be more challenging: can you figure out what is going on?
End of explanation
text = "This text contains a lot of different characters, but probably not all of them."
chars = {char.lower() for char in text if char not in string.punctuation}
print(chars)
Explanation: Finally, we should also mention that dictionaries and sets can also be filled in a one-liner using such comprehensions. For sets, the syntax runs entirely parallel to that of list and tuple comprehensions, but here, we use curly brackets to surround the expression:
End of explanation
counts = {word:len(word) for word in words}
print(counts)
Explanation: For dictionaries, which consist of key-value pairs, the syntax is only slightly more complicated. Here, you have to make sure that you link the correct key to the correct value using a colon, in the very first part of the comprehension. The following example will make this clearer:
End of explanation
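Dictionary comprehensions also combine nicely with zip(); for instance, we could link the book titles from the zip section to their authors in a single line (a small sketch reusing those lists):
title_to_author = {title: author for title, author in zip(titles, authors)}
print(title_to_author)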
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: You've reached the end of Chapter 7! Ignore the code below, it's only here to make the page pretty:
End of explanation |
4,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Efficient Computation of Powers
The function power takes two natural numbers $m$ and $n$ and computes $m^n$. Our first implementation is inefficient and takes $n-1$ multiplications to compute $m^n$.
Step1: Next, we try a recursive implementation that is based on the following two equations | Python Code:
def power(m, n):
r = 1
for i in range(n):
r *= m
return r
power(2, 3), power(3, 2)
%%time
p = power(3, 500000)
p
Explanation: Efficient Computation of Powers
The function power takes two natural numbers $m$ and $n$ and computes $m^n$. Our first implementation is inefficient and takes $n-1$ multiplications to compute $m^n$.
End of explanation
def power(m, n):
if n == 0:
return 1
p = power(m, n // 2)
if n % 2 == 0:
return p * p
else:
return p * p * m
%%time
p = power(3, 500000)
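To see why the recursive version is so much faster, here is a small instrumented sketch (our own addition, not part of the original code) that counts the multiplications performed by the divide-and-conquer scheme:
def power_count(m, n):
    # Returns (m ** n, number of multiplications) for the recursive scheme.
    if n == 0:
        return 1, 0
    p, c = power_count(m, n // 2)
    if n % 2 == 0:
        return p * p, c + 1
    else:
        return p * p * m, c + 2

_, mults = power_count(3, 500000)
print(mults)   # roughly 2 * log2(500000), i.e. a few dozen, versus 500000 for the loop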
Explanation: Next, we try a recursive implementation that is based on the following two equations:
1. $m^0 = 1$
2. $m^n = \left\{\begin{array}{ll}
m^{n//2} \cdot m^{n//2} & \mbox{if $n$ is even}; \\
m^{n//2} \cdot m^{n//2} \cdot m & \mbox{if $n$ is odd}.
\end{array}\right.$
End of explanation |
4,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 13
Step1: With NumPy arrays, all the same functionality you know and love from lists is still there.
Step2: These operations all work whether you're using Python lists or NumPy arrays.
The first place in which Python lists and NumPy arrays differ is when we get to multidimensional arrays. We'll start with matrices.
To build matrices using Python lists, you basically needed "nested" lists, or a list containing lists
Step3: To build the NumPy equivalent, you can basically just feed the Python list-matrix into the NumPy array method
Step4: The real difference, though, comes with actually indexing these elements. With Python lists, you can index individual elements only in this way
Step5: With NumPy arrays, you can use that same notation...or you can use comma-separated indices
Step6: It's not earth-shattering, but enough to warrant a heads-up.
When you index NumPy arrays, the nomenclature used is that of an axis
Step7: Here's a great visual summary of slicing NumPy arrays, assuming you're starting from an array with shape (3, 3)
Step8: We know video is 3D because we can also access its ndim attribute.
Step9: Another example--to go straight to cutting-edge academic research--is 3D video microscope data of multiple tagged fluorescent markers. This would result in a five-axis NumPy object
Step10: We can also ask how many elements there are total, using the size attribute
Step11: These are extreme examples, but they're to illustrate how flexible NumPy arrays are.
If in doubt
Step12: Part 2
Step13: how does Python know that you want to add the scalar value 10 to each element of the vector x? Because (in a word) broadcasting.
Broadcasting is the operation through which a low(er)-dimensional array is in some way "replicated" to be the same shape as a high(er)-dimensional array.
We saw this in our previous example
Step14: In this example, the scalar value 1 is broadcast to all the elements of zeros, converting the operation to element-wise addition.
This all happens under the NumPy hood--we don't see it! It "just works"...most of the time.
There are some rules that broadcasting abides by. Essentially, dimensions of arrays need to be "compatible" in order for broadcasting to work. "Compatible" is defined as
both dimensions are of equal size (e.g., both have the same number of rows)
one of them is 1 (the scalar case)
If these rules aren't met, you get all kinds of strange errors
Step15: But on some intuitive level, this hopefully makes sense
Step16: In this example, the shape of x is (3, 4). The shape of y is just 4. Their trailing axes are both 4, therefore the "smaller" array will be broadcast to fit the size of the larger array, and the operation (addition, in this case) is performed element-wise.
Part 3
Step17: This is randomly generated data, yes, but it could easily be 7 data points in 4 dimensions. That is, we have 7 observations of variables with 4 descriptors. Perhaps it's 7 people who are described by their height, weight, age, and 40-yard dash time. Or it's a matrix of data on 7 video games, each described by their PC Gamer rating, Steam downloads count, average number of active players, and total cheating complaints.
Whatever our data, a common first step before any analysis involves some kind of preprocessing. In this case, if the example we're looking at is the video game scenario from the previous slide, then we know that any negative numbers are junk. After all, how can you have a negative rating? Or a negative number of active players?
So our first course of action might be to set all negative numbers in the data to 0. We could potentially set up a pair of loops, but it's much easier (and faster) to use boolean indexing.
First, we create a mask. This is what it sounds like
Step18: Now, we can use our mask to access only the indices we want to set to 0.
Step19: voilà! Every negative number has been set to 0, and all the other values were left unchanged. Now we can continue with whatever analysis we may have had in mind.
One small caveat with boolean indexing.
Yes, you can string multiple boolean conditions together, as you may recall doing in the lecture with conditionals.
But... and and or DO NOT WORK. You have to use the arithmetic versions of the operators
Step20: Fancy Indexing
"Fancy" indexing is a term coined by the NumPy community to refer to this little indexing trick. To explain is simple enough
Step21: We have 8 rows and 4 columns, where each row is a vector of the same value repeated across the columns, and that value is the index of the row.
In addition to slicing and boolean indexing, we can also use other NumPy arrays to very selectively pick and choose what elements we want, and even the order in which we want them.
Let's say I want rows 7, 0, 5, and 2. In that order.
Step22: Ta-daaa! Pretty spiffy!
But wait, there's more! Rather than just specifying one dimension, you can provide tuples of NumPy arrays that very explicitly pick out certain elements (in a certain order) from another NumPy array.
Step23: Ok, this will take a little explaining, bear with me | Python Code:
li = ["this", "is", "a", "list"]
print(li)
print(li[1:3]) # Print element 1 (inclusive) to 3 (exclusive)
print(li[2:]) # Print element 2 and everything after that
print(li[:-1]) # Print everything BEFORE element -1 (the last one)
Explanation: Lecture 13: Array Indexing, Slicing, and Broadcasting
CSCI 1360: Foundations for Informatics and Analytics
Overview and Objectives
Most of this lecture will be a review of basic indexing and slicing operations, albeit within the context of NumPy arrays. Therefore, there will be some additional functionalities that are critical to understand. By the end of this lecture, you should be able to:
Use "fancy indexing" in NumPy arrays
Create boolean masks to pull out subsets of a NumPy array
Understand array broadcasting for performing operations on subsets of NumPy arrays
Part 1: NumPy Array Indexing and Slicing
Hopefully, you recall basic indexing and slicing from L4.
End of explanation
import numpy as np
x = np.array([1, 2, 3, 4, 5])
print(x)
print(x[1:3])
print(x[2:])
print(x[:-1])
Explanation: With NumPy arrays, all the same functionality you know and love from lists is still there.
End of explanation
python_matrix = [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ]
print(python_matrix)
Explanation: These operations all work whether you're using Python lists or NumPy arrays.
The first place in which Python lists and NumPy arrays differ is when we get to multidimensional arrays. We'll start with matrices.
To build matrices using Python lists, you basically needed "nested" lists, or a list containing lists:
End of explanation
numpy_matrix = np.array(python_matrix)
print(numpy_matrix)
Explanation: To build the NumPy equivalent, you can basically just feed the Python list-matrix into the NumPy array method:
End of explanation
print(python_matrix) # The full list-of-lists
print(python_matrix[0]) # The inner-list at the 0th position of the outer-list
print(python_matrix[0][0]) # The 0th element of the 0th inner-list
Explanation: The real difference, though, comes with actually indexing these elements. With Python lists, you can index individual elements only in this way:
End of explanation
print(numpy_matrix)
print(numpy_matrix[0])
print(numpy_matrix[0, 0]) # Note the comma-separated format!
Explanation: With NumPy arrays, you can use that same notation...or you can use comma-separated indices:
End of explanation
x = np.array([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ])
print(x)
print()
print(x[:, 1]) # Take ALL of axis 0, and one index of axis 1.
Explanation: It's not earth-shattering, but enough to warrant a heads-up.
When you index NumPy arrays, the nomenclature used is that of an axis: you are indexing specific axes of a NumPy array object. In particular, when you access the .shape attribute on a NumPy array, it tells you two things:
1: How many axes there are. This number is len(ndarray.shape), or the number of elements in the tuple returned by .shape. In our above example, numpy_matrix.shape would return (3, 3), so it would have 2 axes.
2: How many elements are in each axis. In our above example, where numpy_matrix.shape returns (3, 3), there are 2 axes (since the length of that tuple is 2), and both axes have 3 elements (hence the numbers 3).
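For instance, a quick illustrative check on the numpy_matrix array built above (this snippet is just a sketch, not part of the original cells):
print(numpy_matrix.shape)       # (3, 3): the tuple has two elements, so two axes
print(len(numpy_matrix.shape))  # 2 -- the number of axes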
Here's the breakdown of axis notation and indices used in a 2D NumPy array:
As with lists, if you want an entire axis, just use the colon operator all by itself:
End of explanation
video = np.empty(shape = (1920, 1080, 5000))
print("Axis 0 length: {}".format(video.shape[0])) # How many rows?
print("Axis 1 length: {}".format(video.shape[1])) # How many columns?
print("Axis 2 length: {}".format(video.shape[2])) # How many frames?
Explanation: Here's a great visual summary of slicing NumPy arrays, assuming you're starting from an array with shape (3, 3):
Depending on your field, it's entirely possible that you'll go beyond 2D matrices. If so, it's important to be able to recognize what these structures "look" like.
For example, a video can be thought of as a 3D cube. Put another way, it's a NumPy array with 3 axes: the first axis is height, the second axis is width, and the third axis is number of frames.
End of explanation
print(video.ndim)
del video
Explanation: We know video is 3D because we can also access its ndim attribute.
End of explanation
tensor = np.empty(shape = (2, 640, 480, 360, 100))
print(tensor.shape)
# Axis 0: color channel--used to differentiate between fluorescent markers
# Axis 1: height--same as before
# Axis 2: width--same as before
# Axis 3: depth--capturing 3D depth at each time interval, like a 3D movie
# Axis 4: frame--same as before
Explanation: Another example--to go straight to cutting-edge academic research--is 3D video microscope data of multiple tagged fluorescent markers. This would result in a five-axis NumPy object:
End of explanation
print(tensor.size)
del tensor
Explanation: We can also ask how many elements there are total, using the size attribute:
End of explanation
example = np.empty(shape = (3, 5, 9))
print(example.shape)
sliced = example[0] # Indexed the first axis.
print(sliced.shape)
sliced_again = example[0, 0] # Indexed the first and second axes.
print(sliced_again.shape)
Explanation: These are extreme examples, but they're to illustrate how flexible NumPy arrays are.
If in doubt: once you index the first axis, the NumPy array you get back has the shape of all the remaining axes.
End of explanation
x = np.array([1, 2, 3, 4, 5])
x += 10
print(x)
Explanation: Part 2: NumPy Array Broadcasting
"Broadcasting" is a fancy term for how Python--specifically, NumPy--handles vectorized operations when arrays of differing shapes are involved. (this is, in some sense, "how the sausage is made")
When you write code like this:
End of explanation
zeros = np.zeros(shape = (3, 4))
ones = 1
zeros += ones
print(zeros)
Explanation: how does Python know that you want to add the scalar value 10 to each element of the vector x? Because (in a word) broadcasting.
Broadcasting is the operation through which a low(er)-dimensional array is in some way "replicated" to be the same shape as a high(er)-dimensional array.
We saw this in our previous example: the low-dimensional scalar was replicated, or broadcast, to each element of the array x so that the addition operation could be performed element-wise.
This concept can be generalized to higher-dimensional NumPy arrays.
End of explanation
x = np.zeros(shape = (3, 3))
y = np.ones(4)
x + y
Explanation: In this example, the scalar value 1 is broadcast to all the elements of zeros, converting the operation to element-wise addition.
This all happens under the NumPy hood--we don't see it! It "just works"...most of the time.
There are some rules that broadcasting abides by. Essentially, dimensions of arrays need to be "compatible" in order for broadcasting to work. "Compatible" is defined as
both dimensions are of equal size (e.g., both have the same number of rows)
one of them is 1 (the scalar case)
If these rules aren't met, you get all kinds of strange errors:
End of explanation
x = np.zeros(shape = (3, 4))
y = np.array([1, 2, 3, 4])
z = x + y
print(z)
Explanation: But on some intuitive level, this hopefully makes sense: there's no reasonable arithmetic operation that can be performed when you have one $3 \times 3$ matrix and a vector of length 4.
To be rigorous, though: it's the trailing dimensions / axes that you want to make sure line up.
End of explanation
x = np.random.standard_normal(size = (7, 4))
print(x)
Explanation: In this example, the shape of x is (3, 4). The shape of y is just 4. Their trailing axes are both 4, therefore the "smaller" array will be broadcast to fit the size of the larger array, and the operation (addition, in this case) is performed element-wise.
Part 3: "Fancy" Indexing
Hopefully you have at least an intuitive understanding of how indexing works so far. Unfortunately, it gets more complicated, but still retains a modicum of simplicity.
First: indexing by boolean masks.
Boolean indexing
We've already seen that you can index by integers. Using the colon operator, you can even specify ranges, slicing out entire swaths of rows and columns.
But suppose we want something very specific; data in our array which satisfies certain criteria, as opposed to data which is found at certain indices?
Put another way: can we pull data out of an array that meets certain conditions?
Let's say you have some data.
End of explanation
mask = x < 0
print(mask)
Explanation: This is randomly generated data, yes, but it could easily be 7 data points in 4 dimensions. That is, we have 7 observations of variables with 4 descriptors. Perhaps it's 7 people who are described by their height, weight, age, and 40-yard dash time. Or it's a matrix of data on 7 video games, each described by their PC Gamer rating, Steam downloads count, average number of active players, and total cheating complaints.
Whatever our data, a common first step before any analysis involves some kind of preprocessing. In this case, if the example we're looking at is the video game scenario from the previous slide, then we know that any negative numbers are junk. After all, how can you have a negative rating? Or a negative number of active players?
So our first course of action might be to set all negative numbers in the data to 0. We could potentially set up a pair of loops, but it's much easier (and faster) to use boolean indexing.
First, we create a mask. This is what it sounds like: it "masks" certain portions of the data we don't want to change (in this case, all the numbers greater than 0).
End of explanation
x[mask] = 0
print(x)
Explanation: Now, we can use our mask to access only the indices we want to set to 0.
End of explanation
mask = (x < 1) & (x > 0.5) # True for any value less than 1 but greater than 0.5
x[mask] = 99
print(x)
Explanation: voilà! Every negative number has been set to 0, and all the other values were left unchanged. Now we can continue with whatever analysis we may have had in mind.
One small caveat with boolean indexing.
Yes, you can string multiple boolean conditions together, as you may recall doing in the lecture with conditionals.
But... and and or DO NOT WORK. You have to use the arithmetic versions of the operators: & (for and) and | (for or).
End of explanation
matrix = np.empty(shape = (8, 4))
for i in range(8):
matrix[i] = i # Broadcasting is happening here!
print(matrix)
Explanation: Fancy Indexing
"Fancy" indexing is a term coined by the NumPy community to refer to this little indexing trick. To explain is simple enough: fancy indexing allows you to index arrays with other [integer] arrays.
Now, to demonstrate:
Let's build a 2D array that, for the sake of simplicity, has across each row the index of that row.
End of explanation
indices = np.array([7, 0, 5, 2])
print(matrix[indices])
Explanation: We have 8 rows and 4 columns, where each row is a vector of the same value repeated across the columns, and that value is the index of the row.
In addition to slicing and boolean indexing, we can also use other NumPy arrays to very selectively pick and choose what elements we want, and even the order in which we want them.
Let's say I want rows 7, 0, 5, and 2. In that order.
End of explanation
matrix = np.arange(32).reshape((8, 4))
print(matrix) # This 8x4 matrix has integer elements that increment by 1 column-wise, then row-wise.
indices = ( np.array([1, 7, 4]), np.array([3, 0, 1]) ) # This is a tuple of 2 NumPy arrays!
print(matrix[indices])
Explanation: Ta-daaa! Pretty spiffy!
But wait, there's more! Rather than just specifying one dimension, you can provide tuples of NumPy arrays that very explicitly pick out certain elements (in a certain order) from another NumPy array.
End of explanation
( np.array([1, 7, 4]), np.array([3, 0, 1]) )
Explanation: Ok, this will take a little explaining, bear with me:
When you pass in tuples as indices, they act as $(x, y)$ coordinate pairs: the first NumPy array of the tuple is the list of $x$ coordinates, while the second NumPy array is the list of corresponding $y$ coordinates.
In this way, the corresponding elements of the two NumPy arrays in the tuple give you the row and column indices to be selected from the original NumPy array.
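As a small illustrative check (using the same 8x4 np.arange(32) matrix and index arrays as above; this snippet is only a sketch):
rows = np.array([1, 7, 4])
cols = np.array([3, 0, 1])
print(matrix[rows, cols])   # [ 7 28 17] -- the elements at (1, 3), (7, 0) and (4, 1)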
In our previous example, this was our tuple of indices:
End of explanation |
4,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inspect Raw Netcdf
Playing around with efficient ways to merge and view netcdf data from the tower. This ipython notebook depends on the python script of the same name.
Step1: Using the xray dataset directly (only works for Table1)
Step2: Using a pandas dataframe with xray | Python Code:
usr = 'Julia'
FILEDIR = 'C:/Users/%s/Dropbox (PE)/KenyaLab/Data/Tower/TowerData/'%usr
NETCDFLOC = FILEDIR + 'raw_netcdf_output/'
DATALOC = 'F:/towerdata/'
Explanation: Inspect Raw Netcdf
Playing around with efficient ways to merge and view netcdf data from the tower. This ipython notebook depends on the python script of the same name.
End of explanation
import datetime as dt
from inspect_raw_netcdf import *
import matplotlib.pyplot as plt
%matplotlib inline
ds, start, end = process(NETCDFLOC)
L, places, ps, depths, colors, data_options = clean_Table1(ds)
data, data_list = pick_type(L, data_options)
fig = make_plots(FILEDIR,ds,start,end,places,ps,depths,colors,data,data_list)
Explanation: Using the xray dataset directly (only works for Table1)
End of explanation
from __future__ import print_function
import pandas as pd
import datetime as dt
import xray
def one_week(input_dir):
datas = ['lws','licor','Table1','Table1_rain']
#start = dt.datetime.utcnow()-dt.timedelta(7)
start = dt.datetime(2014, 1, 1)
end = dt.datetime(2014, 1, 10)
for data in datas:
try:
ds,df,params = inspect_raw(input_dir,data,start,end)
except:
print('\nThere doesn\'t seem to be any %s data for this interval'%data)
return ds,df,params
def inspect_raw(input_dir,data,start,end):
ds = grabDateRange(input_dir,data,start,end)
df = ds.to_dataframe().dropna(axis=1,how='all')
non_null = set(df.columns)
params = set(ds.vars)
null_params = list(params - non_null)
null_params.sort()
print('\n%s data ranges from:\n'%data,
ds.coords['time'].values[0], 'to\n',
ds.coords['time'].values[-1],
'\n and contains null values for:' )
for p in null_params:
print(' ', p)
return ds,df,params
Explanation: Using a pandas dataframe with xray
End of explanation |
4,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Parametric Regression
Notebook version
Step1: A quick note on the mathematical notation
In this notebook we will make extensive use of probability distributions. In general, we will use capital letters
${\bf X}$, $S$, $E$ ..., to denote random variables, and lower-case letters ${\bf x}$, $s$, $\epsilon$ ..., to denote the values they can take.
In general, we will use letter $p$ for probability density functions (pdf). When necessary, we will use, capital subindices to make the random variable explicit. For instance, $p_{{\bf X}, S}({\bf x}, s)$ would be the joint pdf of random variables ${\bf X}$ and $S$ at values ${\bf x}$ and $s$, respectively.
However, to avoid a notation overload, we will omit subindices when they are clear from the context. For instance, we will use $p({\bf x}, s)$ instead of $p_{{\bf X}, S}({\bf x}, s)$.
1. Model-based parametric regression
1.1. The regression problem.
Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing good predictions about some unknown variable $s$. To do so, we assume that a set of labelled training examples, ${{\bf x}k, s_k}{k=0}^{K-1}$ is available.
The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the test set) of labelled samples.
1.2. Model-based parametric regression
Model-based regression methods assume that all data in the training and test dataset have been generated by some stochastic process. In parametric regression, we assume that the probability distribution generating the data has a known parametric form, but the values of some parameters are unknown.
In particular, in this notebook we will assume the target variables in all pairs $({\bf x}_k, s_k)$ from the training and test sets have been generated independently from some posterior distribution $p(s| {\bf x}, {\bf w})$, were ${\bf w}$ is some unknown parameter. The training dataset is used to estimate ${\bf w}$.
<img src="figs/ParametricReg.png" width=300>
1.3. Model assumptions
In order to estimate ${\bf w}$ from the training data in a mathematicaly rigorous and compact form let us group the target variables into a vector
$$
{\bf s} = \left(s_0, \dots, s_{K-1}\right)^\top
$$
and the input vectors into a matrix
$$
{\bf X} = \left({\bf x}0, \dots, {\bf x}{K-1}\right)^\top
$$
We will make the following assumptions
Step2: The data observation will modify our belief about the true data model according to the posterior distribution. In the following we will analyze this in a Gaussian case.
4. Bayesian regression for a Gaussian model.
We will apply the above steps to derive a Bayesian regression algorithm for a Gaussian model.
4.1. Step 1
Step3: Fit a Bayesian linear regression model assuming $z= x$ and
Step4: To do so, compute the posterior weight distribution using the first $k$ samples in the complete dataset, for $k = 1,2,4,8,\ldots 128$. Draw all these posteriors along with the prior distribution in the same plot.
Step5: Exercise 3
Step6: 5. Maximum likelihood vs Bayesian Inference.
5.1. The Maximum Likelihood Estimate.
For comparative purposes, it is interesting to see here that the likelihood function is enough to compute the Maximum Likelihood (ML) estimate
\begin{align}
{\bf w}\text{ML} &= \arg \max{\bf w} p(\mathcal{D}|{\bf w}) \
&= \arg \min_{\bf w} \|{\bf s}-{\bf Z}{\bf w}\|^2
\end{align}
which leads to the Least Squares (LS) solution
$$
{\bf w}_\text{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}
$$
ML estimation is prone to overfiting. In general, if the number of parameters (i.e. the dimension of ${\bf w}$) is large in relation to the size of the training data, the predictor based on the ML estimate may have a small square error over the training set but a large error over the test set. Therefore, in practice, some cross validation procedure is required to keep the complexity of the predictor function under control depending on the size of the training set.
By defining a prior distribution over the unknown parameters, and using the Bayesian inference methods, the overfitting problems can be alleviated
5.2 Making predictions
Following an ML approach, we retain a single model, ${\bf w}{ML} = \arg \max{\bf w} p({\bf s}|{\bf w})$. Then, the predictive distribution of the target value for a new point would be obtained as
Step7: Let us assume that the cosine form of the noise-free signal is unknown, and we assume a polynomial model with a high degree. The following code plots the LS estimate
Step8: The following fragment of code computes the posterior weight distribution, draws random vectors from $p({\bf w}|{\bf s})$, and plots the corresponding regression curves along with the training points. Compare these curves with those extracted from the prior distribution of ${\bf w}$ and with the LS solution.
Step9: Not only do we obtain a better predictive model, but we also have confidence intervals (error bars) for the predictions.
Step10: Exercise 5
Step11: The above curve may change the position of its maximum from run to run.
We conclude the notebook by plotting the result of the Bayesian inference for $M=6$ | Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
from IPython import display
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
import pylab
import time
Explanation: Bayesian Parametric Regression
Notebook version: 1.5 (Sep 24, 2019)
Author: Jerónimo Arenas García ([email protected])
Jesús Cid-Sueiro ([email protected])
Changes: v.1.0 - First version
v.1.1 - ML Model selection included
v.1.2 - Some typos corrected
v.1.3 - Rewriting text, reorganizing content, some exercises.
v.1.4 - Revised introduction
v.1.5 - Revised notation. Solved exercise 5
Pending changes: * Include regression on the stock data
End of explanation
n_grid = 200
degree = 3
nplots = 20
# Prior distribution parameters
mean_w = np.zeros((degree+1,))
v_p = 0.2 ### Try increasing this value
var_w = v_p * np.eye(degree+1)
xmin = -1
xmax = 1
X_grid = np.linspace(xmin, xmax, n_grid)
fig = plt.figure()
ax = fig.add_subplot(111)
for k in range(nplots):
# Draw weights from the prior distribution
w_iter = np.random.multivariate_normal(mean_w, var_w)
S_grid_iter = np.polyval(w_iter, X_grid)
ax.plot(X_grid, S_grid_iter,'g-')
ax.set_xlim(xmin, xmax)
ax.set_ylim(-1, 1)
ax.set_xlabel('$x$')
ax.set_ylabel('$s$')
plt.show()
Explanation: A quick note on the mathematical notation
In this notebook we will make extensive use of probability distributions. In general, we will use capital letters
${\bf X}$, $S$, $E$ ..., to denote random variables, and lower-case letters ${\bf x}$, $s$, $\epsilon$ ..., to denote the values they can take.
In general, we will use letter $p$ for probability density functions (pdf). When necessary, we will use capital subindices to make the random variable explicit. For instance, $p_{{\bf X}, S}({\bf x}, s)$ would be the joint pdf of random variables ${\bf X}$ and $S$ at values ${\bf x}$ and $s$, respectively.
However, to avoid a notation overload, we will omit subindices when they are clear from the context. For instance, we will use $p({\bf x}, s)$ instead of $p_{{\bf X}, S}({\bf x}, s)$.
1. Model-based parametric regression
1.1. The regression problem.
Given an observation vector ${\bf x}$, the goal of the regression problem is to find a function $f({\bf x})$ providing good predictions about some unknown variable $s$. To do so, we assume that a set of labelled training examples, $\{{\bf x}_k, s_k\}_{k=0}^{K-1}$ is available.
The predictor function should make good predictions for new observations ${\bf x}$ not used during training. In practice, this is tested using a second set (the test set) of labelled samples.
1.2. Model-based parametric regression
Model-based regression methods assume that all data in the training and test dataset have been generated by some stochastic process. In parametric regression, we assume that the probability distribution generating the data has a known parametric form, but the values of some parameters are unknown.
In particular, in this notebook we will assume the target variables in all pairs $({\bf x}_k, s_k)$ from the training and test sets have been generated independently from some posterior distribution $p(s| {\bf x}, {\bf w})$, where ${\bf w}$ is some unknown parameter. The training dataset is used to estimate ${\bf w}$.
<img src="figs/ParametricReg.png" width=300>
1.3. Model assumptions
In order to estimate ${\bf w}$ from the training data in a mathematically rigorous and compact form, let us group the target variables into a vector
$$
{\bf s} = \left(s_0, \dots, s_{K-1}\right)^\top
$$
and the input vectors into a matrix
$$
{\bf X} = \left({\bf x}_0, \dots, {\bf x}_{K-1}\right)^\top
$$
We will make the following assumptions:
A1. All samples in ${\cal D}$ have been generated by the same distribution, $p({\bf x}, s \mid {\bf w})$
A2. Input variables ${\bf x}$ do not depend on ${\bf w}$. This implies that
$$
p({\bf X} \mid {\bf w}) = p({\bf X})
$$
A3. Targets $s_0, \dots, s_{K-1}$ are statistically independent, given ${\bf w}$ and the inputs ${\bf x}_0,\ldots, {\bf x}_{K-1}$, that is:
$$
p({\bf s} \mid {\bf X}, {\bf w}) = \prod_{k=0}^{K-1} p(s_k \mid {\bf x}_k, {\bf w})
$$
2. Bayesian inference.
2.1. The Bayesian approach
The main idea of Bayesian inference is the following: assume we want to estimate some unknown variable $U$ given an observed variable $O$. If $U$ and $O$ are random variables, we can describe the relation between $U$ and $O$ through the following functions:
Prior distribution: $p_U(u)$. It describes our uncertainty on the true value of $U$ before observing $O$.
Likelihood function: $p_{O \mid U}(o \mid u)$. It describes how the value of the observation is generated for a given value of $U$.
Posterior distribution: $p_{U|O}(u \mid o)$. It describes our uncertainty on the true value of $U$ once the true value of $O$ is observed.
The major component of the Bayesian inference is the posterior distribution. All Bayesian estimates are computed as some of its central statistics (e.g. the mean, the median or the mode), for instance
Maximum A Posteriori (MAP) estimate: $\qquad{\widehat{u}}_{\text{MAP}} = \arg\max_u p_{U \mid O}(u \mid o)$
Minimum Mean Square Error (MSE) estimate: $\qquad\widehat{u}_{\text{MSE}} = \mathbb{E}\{U \mid O=o\}$
The choice between the MAP or the MSE estimate may depend on practical or computational considerations. From a theoretical point of view, $\widehat{u}_{\text{MSE}}$ has some nice properties: it minimizes $\mathbb{E}\{(U-\widehat{u})^2\}$ among all possible estimates, $\widehat{u}$, so it is a natural choice. However, it involves the computation of an integral, which may not have a closed-form solution. In such cases, the MAP estimate can be a better choice.
The prior and the likelihood function are auxiliary distributions: if the posterior distribution is unknown, it can be computed from them using the Bayes rule:
\begin{equation}
p_{U|O}(u \mid o) = \frac{p_{O|U}(o \mid u) \cdot p_{U}(u)}{p_{O}(o)}
\end{equation}
In the next two sections we show that the Bayesian approach can be applied to both the prediction and the estimation problems.
2.2. Bayesian prediction under a known model
Assuming that the model parameters ${\bf w}$ are known, we can apply the Bayesian approach to predict ${\bf s}$ for an input ${\bf x}$. In that case, we can take
Unknown variable: ${\bf s}$, and
Observations: ${\bf x}$
the MAP and MSE predictions become
Maximum A Posterior (MAP): $\qquad\widehat{s}_{\text{MAP}} = \arg\max_s p(s| {\bf x}, {\bf w})$
Minimum Mean Square Error (MSE): $\qquad\widehat{s}_{\text{MSE}} = \mathbb{E}\{S \mid {\bf x}, {\bf w}\}$
Exercise 1:
Assuming
$$
p(s\mid x, w) = \frac{s}{w x^2} \exp\left({-\frac{s^2}{2 w x^2}}\right), \qquad s \geq 0,
$$
compute the MAP and MSE predictions of $s$ given $x$ and $w$.
Solution:
<SOL>
</SOL>
2.2.1. The Gaussian case
A particularly interesting case arises when the data model is Gaussian:
$$p(s|{\bf x}, {\bf w}) =
\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}
\exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right)
$$
where ${\bf z}=T({\bf x})$ is a vector with components which can be computed directly from the observed variables. For a Gaussian distribution (and for any unimodal symmetric distribution) the mean and the mode are the same and, thus,
$$
\widehat{s}_\text{MAP} = \widehat{s}_\text{MSE} = {\bf w}^\top{\bf z}
$$
Such expression includes a linear regression model, where ${\bf z} = [1; {\bf x}]$, as well as any other non-linear model as long as it can be expressed as a <i>"linear in the parameters"</i> model.
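As a tiny sketch (with made-up numbers, purely for illustration), the prediction is just a dot product, assuming numpy has been imported as np:
w = np.array([0.5, 2.0])    # illustrative weights, not taken from the notebook
z = np.array([1.0, 3.0])    # z = [1, x] with x = 3
s_hat = w.dot(z)            # MAP = MSE prediction under the Gaussian model, here 6.5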
2.3. Bayesian Inference for Parameter Estimation
In a similar way, we can apply Bayesian inference to estimate the model parameters ${\bf w}$ from a given dataset, $\cal{D}$. In that case
the unknown variable is ${\bf w}$, and
the observation is ${\cal D} \equiv \{{\bf X}, {\bf s}\}$
so that
Maximum A Posterior (MAP): $\qquad\widehat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}| {\cal D})$
Minimum Mean Square Error (MSE): $\qquad\widehat{\bf w}_{\text{MSE}} = \mathbb{E}\{{\bf W} | {\cal D}\}$
3. Bayesian parameter estimation
NOTE: Since the training data inputs are known, all probability density functions and expectations in the remainder of this notebook will be conditioned on the data matrix, ${\bf X}$. To simplify the mathematical notation, from now on we will remove ${\bf X}$ from all conditions. For instance, we will write $p({\bf s}|{\bf w})$ instead of $p({\bf s}|{\bf w}, {\bf X})$, etc. Keep in mind that, in any case, all probabilities and expectations may depend on ${\bf X}$ implicitely.
Summarizing, the steps to design a Bayesian parametric regression algorithm are the following:
Assume a parametric data model $p(s| {\bf x},{\bf w})$ and a prior distribution $p({\bf w})$.
Using the data model and the i.i.d. assumption, compute $p({\bf s}|{\bf w})$.
Applying the Bayes rule, compute the posterior distribution $p({\bf w}|{\bf s})$.
Compute the MAP or the MSE estimate of ${\bf w}$ given ${\bf x}$.
Compute predictions using the selected estimate.
3.1. Bayesian Inference and Maximum Likelihood.
Applying the Bayes rule the MAP estimate can be alternatively expressed as
\begin{align}
\qquad\widehat{\bf w}_{\text{MAP}}
&= \arg\max_{\bf w} \frac{p({\cal D}| {\bf w}) \cdot p({\bf w})}{p({\cal D})} \\
&= \arg\max_{\bf w} p({\cal D}| {\bf w}) \cdot p({\bf w})
\end{align}
By comparison, the ML (Maximum Likelihood) estimate has the form:
$$
\widehat{\bf w}_{\text{ML}} = \arg \max_{\bf w} p(\mathcal{D}|{\bf w})
$$
This shows that the MAP estimate takes into account the prior distribution on the unknown parameter.
Another advantage of the Bayesian approach is that it provides not only a point estimate of the unknown parameter, but a whole function, the posterior distribution, which encompasses our belief on the unknown parameter given the data. For instance, we can take second order statistics like the variance of the posterior distribution to measure the uncertainty on the true value of the parameter around the mean.
3.2. The prior distribution
Since each value of ${\bf w}$ determines a regression function, by stating a prior distribution over the weights we state also a prior distribution over the space of regression functions.
For instance, assume that the data likelihood follows the Gaussian model in sec. 2.2.1, with $T(x) = (1, x, x^2, x^3)$, i.e. the regression functions have the form
$$
w_0 + w_1 x + w_2 x^2 + w_3 x^3
$$
Each value of ${\bf w}$ determines a specific polynomial of degree 3. Thus, the prior distribution over ${\bf w}$ describes which polynomials are more likely before observing the data.
For instance, assume a Gaussian prior with zero mean and variance ${\bf V}_p$, i.e.,
$$
p({\bf w}) = \frac{1}{(2\pi)^{D/2} |{\bf V}_p|^{1/2}}
\exp \left(-\frac{1}{2} {\bf w}^\intercal {\bf V}_{p}^{-1}{\bf w} \right)
$$
where $D$ is the dimension of ${\bf w}$. To abbreviate, we will also express this as
$${\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)$$
The following code samples ${\bf w}$ according to this distribution for ${\bf V}_p = 0.002 \, {\bf I}$, and plots the resulting polynomial over the scatter plot of an arbitrary dataset.
You can check the effect of modifying the variance of the prior distribution.
End of explanation
# True data parameters
w_true = 3
std_n = 0.4
# Generate the whole dataset
n_max = 64
X_tr = 3 * np.random.random((n_max,1)) - 0.5
S_tr = w_true * X_tr + std_n * np.random.randn(n_max,1)
# Plot data
plt.figure()
plt.plot(X_tr, S_tr, 'b.')
plt.xlabel('$x$')
plt.ylabel('$s$')
plt.show()
Explanation: The data observation will modify our belief about the true data model according to the posterior distribution. In the following we will analyze this in a Gaussian case.
4. Bayesian regression for a Gaussian model.
We will apply the above steps to derive a Bayesian regression algorithm for a Gaussian model.
4.1. Step 1: The Gaussian model.
Let us assume that the likelihood function is given by the Gaussian model described in Sec. 1.3.2.
$$
s~|~{\bf w} \sim {\cal N}\left({\bf z}^\top{\bf w}, \sigma_\varepsilon^2 \right)
$$
that is
$$p(s|{\bf x}, {\bf w}) =
\frac{1}{\sqrt{2\pi}\sigma_\varepsilon}
\exp\left(-\frac{(s-{\bf w}^\top{\bf z})^2}{2\sigma_\varepsilon^2}\right)
$$
Assume, also, that the prior is Gaussian
$$
{\bf w} \sim {\cal N}\left({\bf 0},{\bf V}_{p} \right)
$$
4.2. Step 2: Complete data likelihood
Using the assumptions A1, A2 and A3, it can be shown that
$$
{\bf s}~|~{\bf w} \sim {\cal N}\left({\bf Z}{\bf w},\sigma_\varepsilon^2 {\bf I} \right)
$$
that is
$$
p({\bf s}| {\bf w})
= \frac{1}{\left(\sqrt{2\pi}\sigma_\varepsilon\right)^K}
\exp\left(-\frac{1}{2\sigma_\varepsilon^2}\|{\bf s}-{\bf Z}{\bf w}\|^2\right)
$$
4.3. Step 3: Posterior weight distribution
The posterior distribution of the weights can be computed using the Bayes rule
$$p({\bf w}|{\bf s}) = \frac{p({\bf s}|{\bf w})~p({\bf w})}{p({\bf s})}$$
Since both $p({\bf s}|{\bf w})$ and $p({\bf w})$ follow a Gaussian distribution, we know also that the joint distribution and the posterior distribution of ${\bf w}$ given ${\bf s}$ are also Gaussian. Therefore,
$${\bf w}~|~{\bf s} \sim {\cal N}\left({\bf w}_\text{MSE}, {\bf V}_{\bf w}\right)$$
After some algebra, it can be shown that the mean and the covariance matrix of the distribution are:
$${\bf V}_{\bf w} = \left[\frac{1}{\sigma_\varepsilon^2} {\bf Z}^{\top}{\bf Z}
+ {\bf V}_p^{-1}\right]^{-1}$$
$${\bf w}_\text{MSE} = {\sigma_\varepsilon^{-2}} {\bf V}_{\bf w} {\bf Z}^\top {\bf s}$$
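In NumPy these two expressions translate almost literally. The following is a minimal sketch only, assuming the data matrix Z, the target vector s, the noise standard deviation sigma_eps and the prior covariance V_p are already defined (these names are assumptions, not taken from the code below):
V_w = np.linalg.inv(Z.T.dot(Z) / sigma_eps**2 + np.linalg.inv(V_p))   # posterior covariance
w_MSE = V_w.dot(Z.T).dot(s) / sigma_eps**2                            # posterior mean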
Exercise 2:
Consider the dataset with one-dimensional inputs given by
End of explanation
# Model parameters
sigma_eps = 0.4
mean_w = np.zeros((1,))
sigma_p = 1e6
Var_p = sigma_p**2* np.eye(1)
Explanation: Fit a Bayesian linear regression model assuming $z= x$ and
End of explanation
# No. of points to analyze
n_points = [1, 2, 4, 8, 16, 32, 64]
# Prepare plots
w_grid = np.linspace(2.7, 3.4, 5000) # Sample the w axis
plt.figure()
# Compute the prior distribution over the grid points in w_grid
# p = <FILL IN>
plt.plot(w_grid, p,'g-')
for k in n_points:
# Select the first k samples
Zk = X_tr[0:k, :]
Sk = S_tr[0:k]
# Parameters of the posterior distribution
# 1. Compute the posterior variance.
# (Make sure that the resulting variable, Var_w, is a 1x1 numpy array.)
# Var_w = <FILL IN>
# 2. Compute the posterior mean.
# (Make sure that the resulting variable, w_MSE, is a scalar)
# w_MSE = <FILL IN>
# Compute the posterior distribution over the grid points in w_grid
sigma_w = np.sqrt(Var_w.flatten()) # First we take a scalar standard deviation
# p = <FILL IN>
plt.plot(w_grid, p,'g-')
plt.fill_between(w_grid, 0, p, alpha=0.8, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=1, antialiased=True)
plt.title('Posterior distribution after {} samples'.format(k))
plt.xlim(w_grid[0], w_grid[-1])
plt.ylim(0, np.max(p))
plt.xlabel('$w$')
plt.ylabel('$p(w|s)$')
display.clear_output(wait=True)
display.display(plt.gcf())
time.sleep(2.0)
# Remove the temporary plots and fix the last one
display.clear_output(wait=True)
plt.show()
Explanation: To do so, compute the posterior weight distribution using the first $k$ samples in the complete dataset, for $k = 1,2,4,8,\ldots 128$. Draw all these posteriors along with the prior distribution in the same plot.
End of explanation
# <SOL>
# </SOL>
Explanation: Exercise 3:
Note that, in the example above, the model assumptions are correct: the target variables have been generated by a linear model with noise standard deviation sigma_n which is exactly equal to the value assumed by the model, stored in variable sigma_eps. Check what happens if we take sigma_eps=4*sigma_n or sigma_eps=sigma_n/4.
Does the algorithm fail in those cases?
What differences can you observe with respect to the ideal case sigma_eps=sigma_n?
4.4. Step 4: Weight estimation.
Since the posterior weight distribution is Gaussian, both the MAP and the MSE estimates are equal to the posterior mean, which has been already computed in step 3:
$$\widehat{\bf w}_\text{MAP} = \widehat{\bf w}_\text{MSE} = {\sigma_\varepsilon^{-2}} {\bf V}_{\bf w} {\bf Z}^\top {\bf s}$$
4.5. Step 5: Prediction
Using the MSE estimate, the final predictions are given by
$$
\widehat{s}_\text{MSE} = \widehat{\bf w}_\text{MSE}^\top{\bf z}
$$
Exercise 4:
Plot the minimum MSE predictions of $s$ for inputs $x$ in the interval [-1, 3].
End of explanation
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
# Data generation
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
# Signal
xmin = np.min(X_tr) - 0.1
xmax = np.max(X_tr) + 0.1
X_grid = np.linspace(xmin, xmax, n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
# Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z = np.asmatrix(Z)
# Plot data
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
# Plot noise-free function
ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal')
# Set axes
ax.set_xlim(xmin, xmax)
ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2)
ax.legend(loc='best')
plt.show()
Explanation: 5. Maximum likelihood vs Bayesian Inference.
5.1. The Maximum Likelihood Estimate.
For comparative purposes, it is interesting to see here that the likelihood function is enough to compute the Maximum Likelihood (ML) estimate
\begin{align}
{\bf w}_\text{ML} &= \arg \max_{\bf w} p(\mathcal{D}|{\bf w}) \\
&= \arg \min_{\bf w} \|{\bf s}-{\bf Z}{\bf w}\|^2
\end{align}
which leads to the Least Squares (LS) solution
$$
{\bf w}_\text{ML} = ({\bf Z}^\top{\bf Z})^{-1}{\bf Z}^\top{\bf s}
$$
ML estimation is prone to overfitting. In general, if the number of parameters (i.e. the dimension of ${\bf w}$) is large in relation to the size of the training data, the predictor based on the ML estimate may have a small square error over the training set but a large error over the test set. Therefore, in practice, some cross validation procedure is required to keep the complexity of the predictor function under control depending on the size of the training set.
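For reference, a minimal NumPy sketch of this LS solution (assuming Z and s are already built; the names are illustrative):
w_ML = np.linalg.inv(Z.T.dot(Z)).dot(Z.T).dot(s)   # normal equations
# or, numerically preferable,
w_ML = np.linalg.lstsq(Z, s, rcond=None)[0]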
By defining a prior distribution over the unknown parameters, and using the Bayesian inference methods, the overfitting problems can be alleviated
5.2 Making predictions
Following an ML approach, we retain a single model, ${\bf w}_{ML} = \arg \max_{\bf w} p({\bf s}|{\bf w})$. Then, the predictive distribution of the target value for a new point would be obtained as:
$$p({s^*}|{\bf w}_{ML},{\bf x}^*) $$
For the generative model of Section 3.1.2 (additive i.i.d. Gaussian noise), this distribution is:
$$p({s^*}|{\bf w}_{ML},{\bf x}^*) = \frac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}_{ML}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$$
The mean of $s^*$ is just the same as the prediction of the LS model, and the same uncertainty is assumed independently of the observation vector (i.e., the variance of the noise of the model).
If a single value is to be kept, we would probably keep the mean of the distribution, which is equivalent to the LS prediction.
Using <b>Bayesian inference</b>, we retain all models. Then, the inference of the value $s^* = s({\bf x}^*)$ is carried out by mixing all models, according to the weights given by the posterior distribution.
\begin{align}
p({s^*}|{\bf x}^*,{\bf s})
& = \int p({s^*}~|~{\bf w},{\bf x}^*) p({\bf w}~|~{\bf s}) d{\bf w}
\end{align}
where:
$p({s^*}|{\bf w},{\bf x}^*) = \dfrac{1}{\sqrt{2\pi\sigma_\varepsilon^2}} \exp \left(-\frac{\left(s^* - {\bf w}^\top {\bf z}^*\right)^2}{2 \sigma_\varepsilon^2} \right)$
$p({\bf w} \mid {\bf s})$ is the posterior distribution of the weights, that can be computed using Bayes' Theorem.
In general the integral expression of the posterior distribution $p({s^*}|{\bf x}^*,{\bf s})$ cannot be computed analytically. Fortunately, for the Gaussian model, the computation of the posterior is simple, as we will show in the following section.
6. Posterior distribution of the target variable
In the same way that we have computed a distribution on ${\bf w}$, we can compute a distribution on the target variable for a given input ${\bf x}$ and given the whole dataset.
Since ${\bf w}$ is a random variable, the noise-free component of the target variable for an arbitrary input ${\bf x}$, that is, $f = f({\bf x}) = {\bf w}^\top{\bf z}$ is also a random variable, and we can compute its distribution from the posterior distribution of ${\bf w}$
Since ${\bf w}$ is Gaussian and $f$ is a linear transformation of ${\bf w}$, $f$ is also a Gaussian random variable, whose posterior mean and variance can be calculated as follows:
\begin{align}
\mathbb{E}\{f \mid {\bf s}, {\bf z}\}
&= \mathbb{E}\{{\bf w}^\top {\bf z}~|~{\bf s}, {\bf z}\}
= \mathbb{E}\{{\bf w} ~|~{\bf s}, {\bf z}\}^\top {\bf z} \\
&= \widehat{\bf w}_\text{MSE} ^\top {\bf z} \\
% &= {\sigma_\varepsilon^{-2}} {{\bf z}}^\top {\bf V}_{\bf w} {\bf Z}^\top {\bf s}
\end{align}
\begin{align}
\text{Cov}\left[{{\bf z}}^\top {\bf w}~|~{\bf s}, {\bf z}\right]
&= {\bf z}^\top \text{Cov}\left[{\bf w}~|~{\bf s}\right] {\bf z} \\
&= {\bf z}^\top {\bf V}_{\bf w} {{\bf z}}
\end{align}
Therefore,
$$
f^*~|~{\bf s}, {\bf x}
\sim {\cal N}\left(\widehat{\bf w}_\text{MSE} ^\top {\bf z}, ~~
{\bf z}^\top {\bf V}_{\bf w} {\bf z} \right)
$$
Finally, for $s = f + \varepsilon$, the posterior distribution is
$$
s ~|~{\bf s}, {\bf z}^*
\sim {\cal N}\left(\widehat{\bf w}_\text{MSE} ^\top {\bf z}, ~~
{\bf z}^\top {\bf V}_{\bf w} {\bf z} + \sigma_\varepsilon^2\right)
$$
Example:
The next figure shows a one-dimensional dataset with 15 points, which are noisy samples from a cosine signal (shown in the dotted curve)
End of explanation
degree = 12
# We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_LS = np.polyval(w_LS,X_grid)
# Plot data
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
# Plot noise-free function
ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal')
# Plot LS regression function
ax.plot(X_grid, S_grid_LS, 'm-', label='LS regression')
# Set axis
ax.set_xlim(xmin, xmax)
ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2)
ax.legend(loc='best')
plt.show()
Explanation: Let us assume that the cosine form of the noise-free signal is unknown, and we assume a polynomial model with a high degree. The following code plots the LS estimate
End of explanation
nplots = 6
# Prior distribution parameters
sigma_eps = 0.2
mean_w = np.zeros((degree+1,))
sigma_p = .5
Var_p = sigma_p**2 * np.eye(degree+1)
# Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z = np.asmatrix(Z)
#Compute posterior distribution parameters
Var_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(Var_p))
posterior_mean = Var_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
# Plot data
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
# Plot noise-free function
ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal')
# Plot LS regression function
ax.plot(X_grid, S_grid_LS, 'm-', label='LS regression')
for k in range(nplots):
# Draw weights from the posterior distribution
w_iter = np.random.multivariate_normal(posterior_mean, Var_w)
# Note that polyval assumes the first element of weight vector is the coefficient of
# the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(w_iter[::-1], X_grid)
ax.plot(X_grid,S_grid_iter,'g-')
# Set axis
ax.set_xlim(xmin, xmax)
ax.set_ylim(S_tr[0] - 2, S_tr[-1] + 2)
ax.legend(loc='best')
plt.show()
Explanation: The following fragment of code computes the posterior weight distribution, draws random vectors from $p({\bf w}|{\bf s})$, and plots the corresponding regression curves along with the training points. Compare these curves with those extracted from the prior distribution of ${\bf w}$ and with the LS solution.
End of explanation
# Compute standard deviation
std_x = []
for el in X_grid:
x_ast = np.array([el**k for k in range(degree+1)])
std_x.append(np.sqrt(x_ast.dot(Var_w).dot(x_ast)[0,0]))
std_x = np.array(std_x)
# Plot data
fig = plt.figure(figsize=(10,6))
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
# Plot the posterior mean
# Note that polyval assumes the first element of weight vector is the coefficient of
# the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(posterior_mean[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI')
#Plot confidence intervals for the Bayesian Inference
plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x,
alpha=0.4, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=2, antialiased=True)
#We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
# Plot noise-free function
ax.plot(X_grid, S_grid, 'b:', label='Noise-free signal')
# Plot LS regression function
ax.plot(X_grid, S_grid_LS, 'm-', label='LS regression')
# Set axis
ax.set_xlim(xmin, xmax)
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.set_title('Predicting the target variable')
ax.set_xlabel('Input variable')
ax.set_ylabel('Target variable')
ax.legend(loc='best')
plt.show()
Explanation: Not only do we obtain a better predictive model, but we also have confidence intervals (error bars) for the predictions.
End of explanation
from math import pi
n_points = 15
frec = 3
std_n = 0.2
max_degree = 12
#Prior distribution parameters
sigma_eps = 0.2
mean_w = np.zeros((degree+1,))
sigma_p = 0.5
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Evaluate the posterior evidence
logE = []
for deg in range(max_degree):
Z_iter = Z[:,:deg+1]
logE_iter = -((deg+1)*np.log(2*pi)/2) \
-np.log(np.linalg.det((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points)))/2 \
-S_tr.T.dot(np.linalg.inv((sigma_p**2)*Z_iter.dot(Z_iter.T) + (sigma_eps**2)*np.eye(n_points))).dot(S_tr)/2
logE.append(logE_iter[0,0])
plt.plot(np.array(range(max_degree))+1,logE)
plt.xlabel('Polynomial degree')
plt.ylabel('log evidence')
plt.show()
Explanation: Exercise 5:
Assume the dataset ${\cal{D}} = \left\{ x_k, s_k \right\}_{k=0}^{K-1}$ containing $K$ i.i.d. samples from a distribution
$$p(s|x,w) = w x \exp(-w x s), \qquad s>0,\quad x> 0,\quad w> 0$$
We model also our uncertainty about the value of $w$ assuming a prior distribution for $w$ following a Gamma distribution with parameters $\alpha>0$ and $\beta>0$.
$$
w \sim \text{Gamma}\left(\alpha, \beta \right)
= \frac{\beta^\alpha}{\Gamma(\alpha)} w^{\alpha-1} \exp\left(-\beta w\right), \qquad w>0
$$
Note that the mean and the mode of a Gamma distribution can be calculated in closed-form as
$$
\mathbb{E}\left\{w\right\}=\frac{\alpha}{\beta}; \qquad
$$
$$
\text{mode}\{w\} = \arg\max_w p(w) = \frac{\alpha-1}{\beta}
$$
1. Determine an expression for the likelihood function.
Solution:
2. Determine the maximum likelihood coefficient, $\widehat{w}_{\text{ML}}$.
Solution:
3. Obtain the posterior distribution $p(w|{\bf s})$. Note that you do not need to calculate $p({\bf s})$ since the posterior distribution can be readily identified as another Gamma distribution.
Solution:
4. Determine the MSE and MAP a posteriori estimators of $w$: $w_\text{MSE}=\mathbb{E}\left\{w|{\bf s}\right\}$ and $w_\text{MAP} = \arg\max_w p(w|{\bf s})$.
Solution:
5. Compute the following estimators of $S$:
$\qquad\widehat{s}_1 = \mathbb{E}\{s|w_\text{ML},x\}$
$\qquad\widehat{s}_2 = \mathbb{E}\{s|w_\text{MSE},x\}$
$\qquad\widehat{s}_3 = \mathbb{E}\{s|w_\text{MAP},x\}$
Solution:
7. Maximum evidence model selection
We have already addressed with Bayesian Inference the following two issues:
For a given degree, how do we choose the weights?
Should we focus on just one model, or can we use several models at once?
However, we still needed some assumptions: a parametric model (i.e., polynomial function and <i>a priori</i> degree selection) and several parameters needed to be adjusted.
Though we can resort to cross-validation, Bayesian inference opens the door to other strategies.
We could argue that rather than keeping single selections of these parameters, we could use simultaneously several sets of parameters (and/or several parametric forms), and average them in a probabilistic way ... (like we did with the models)
We will follow a simpler strategy, selecting just the most likely set of parameters according to an ML criterion
7.1 Model evidence
The evidence of a model is defined as
$$L = p({\bf s}~|~{\cal M})$$
where ${\cal M}$ denotes the model itself and any free parameters it may have. For instance, for the polynomial model we have assumed so far, ${\cal M}$ would represent the degree of the polynomial, the variance of the additive noise, and the <i>a priori</i> covariance matrix of the weights
Applying the Theorem of Total probability, we can compute the evidence of the model as
$$L = \int p({\bf s}~|~{\bf f},{\cal M}) p({\bf f}~|~{\cal M}) d{\bf f} $$
For the linear model $f({\bf x}) = {\bf w}^\top{\bf z}$, the evidence can be computed as
$$L = \int p({\bf s}~|~{\bf w},{\cal M}) p({\bf w}~|~{\cal M}) d{\bf w} $$
It is important to notice that these probability density functions are exactly the ones we computed on the previous section. We are just making explicit that they depend on a particular model and the selection of its parameters. Therefore:
$p({\bf s}~|~{\bf w},{\cal M})$ is the likelihood of ${\bf w}$
$p({\bf w}~|~{\cal M})$ is the <i>a priori</i> distribution of the weights
7.2 Model selection via evidence maximization
As we have already mentioned, we could propose a prior distribution for the model parameters, $p({\cal M})$, and use it to infer the posterior. However, this can be very involved (usually no closed-form expressions can be derived)
Alternatively, maximizing the evidence is normally good enough
$${\cal M}_\text{ML} = \arg\max_{\cal M} p(s~|~{\cal M})$$
Note that we are using the subscript 'ML' because the evidence can also be referred to as the likelihood of the model
7.3 Example: Selection of the degree of the polynomial
For the previous example we had (we consider a spherical Gaussian for the weights):
${\bf s}~|~{\bf w},{\cal M}~\sim~{\cal N}\left({\bf Z}{\bf w},~\sigma_\varepsilon^2 {\bf I} \right)$
${\bf w}~|~{\cal M}~\sim~{\cal N}\left({\bf 0},~\sigma_p^2 {\bf I} \right)$
In this case, $p({\bf s}~|~{\cal M})$ follows also a Gaussian distribution, and it can be shown that
$L = p({\bf s}~|~{\cal M}) = {\cal N}\left({\bf 0},\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I} \right)$
If we just pursue the maximization of $L$, this is equivalent to maximizing the log of the evidence
$$\log(L) = -\frac{M}{2} \log(2\pi) -{\frac{1}{2}}\log\mid\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\mid - \frac{1}{2} {\bf s}^\top \left(\sigma_p^2 {\bf Z} {\bf Z}^\top+\sigma_\varepsilon^2 {\bf I}\right)^{-1} {\bf s}$$
where $M$ denotes the length of vector ${\bf z}$ (i.e., the degree of the polynomial plus one).
The following fragment of code evaluates the evidence of the model as a function of the degree of the polynomial
End of explanation
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
degree = 5 #M-1
nplots = 6
#Prior distribution parameters
sigma_eps = 0.1
mean_w = np.zeros((degree+1,))
sigma_p = .5 * np.eye(degree+1)
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-1,3,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
#Compute matrix with training input data for the polynomial model
Z = []
for x_val in X_tr.tolist():
Z.append([x_val[0]**k for k in range(degree+1)])
Z=np.asmatrix(Z)
#Compute posterior distribution parameters
Sigma_w = np.linalg.inv(np.dot(Z.T,Z)/(sigma_eps**2) + np.linalg.inv(sigma_p))
posterior_mean = Sigma_w.dot(Z.T).dot(S_tr)/(sigma_eps**2)
posterior_mean = np.array(posterior_mean).flatten()
#Plot the posterior mean
#Note that polyval assumes the first element of weight vector is the coefficient of
#the highest degree term. Thus, we need to reverse w_iter
S_grid_iter = np.polyval(posterior_mean[::-1],X_grid)
ax.plot(X_grid,S_grid_iter,'g-',label='Predictive mean, BI')
#Plot confidence intervals for the Bayesian Inference
std_x = []
for el in X_grid:
x_ast = np.array([el**k for k in range(degree+1)])
std_x.append(np.sqrt(x_ast.dot(Sigma_w).dot(x_ast)[0,0]))
std_x = np.array(std_x)
plt.fill_between(X_grid, S_grid_iter-std_x, S_grid_iter+std_x,
alpha=0.2, edgecolor='#1B2ACC', facecolor='#089FFF',
linewidth=4, linestyle='dashdot', antialiased=True)
#We plot also the least square solution
w_LS = np.polyfit(X_tr.flatten(), S_tr.flatten(), degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'m-',label='LS regression')
ax.set_xlim(-1,3)
ax.set_ylim(S_tr[0]-2,S_tr[-1]+2)
ax.legend(loc='best')
plt.show()
Explanation: The above curve may change the position of its maximum from run to run.
We conclude the notebook by plotting the result of the Bayesian inference for $M=6$
End of explanation |
4,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction To Probabalistic Graph Models
Scott Hendrickson
2016-Aug-19
Requirements
Step1: Why is this formalism a useful probabalistic problem solving tool?
This tool can model a much general set of Joint Probability Distributions than our simple student-height problem.
Some advantages of managing statustical thinking with graphs include
Step2: And now, for some data
Survival data from Titanic based on age, sex and travel class.
Step3: A pivot table might give another useful summary
Step4: Learn Parameters of Graph Model give Data
Guess a graph for the model. I guess this
Step5: Some choices
Step6: Now the nodes have conditional probability information stored in them. For example,
Step7: Now let's look at a downstream node.
Step8: Causal Reasoning
Set some assumptions and see how this changes marginal probabilities associated with the other nodes in the garph.
Note
Step9: Learn Graph Structure
One general strategy uses a score to determine structures
1. chose a score (for example AIC or BIC).
2. grab a node, make an edge
3. calculate a model based on data
4. calculate score
5. if AIC goes down, keep the edge
6. go to 2 until you run out of edges to try
This is a lot of computation. People have tried all kinds of heuristics, simplification and sophisticated calculation caching schemes. Some cases can to 1000s of nodes in hours.
Another scheme uses constraints optimizaiton. That's what we will do...
Step10: different model, so learn new parameters
Step11: Queries with New Model | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
G=nx.DiGraph()
G.add_edge('sex','height',weight=0.6)
nx.draw_networkx(G, node_color='y',node_size=2000, width=3)
plt.axis('off')
plt.show()
Explanation: Introduction To Probabalistic Graph Models
Scott Hendrickson
2016-Aug-19
Requirements:
* numpy
* pandas
* libpgm
* networkx (for plotting)
Notes: This is a introductory survey. Unlike my favorite kind of RST, we won't build up models from first assumptions and small steps as much as try I will try to survey the basic concepts with examples, demonstrations and tie-ins to things with which you already are familiar.
Why PGMs?
When we started statistics, we talked about measurements, for example, of the height of students in you school, and noted that instances of these meansurements--events--fall into a probabiliy distribution. In this case, it was likely a Normal-looking distribution with many measurements around the average and a few measurements larger or smaller than average.
This introduced the idea of a random variable.
What do we when measurements of a random variable fall according to some complex probability distribution? This happens all the time. Sometimes we approximate. Other times, we realize that the random variable we are measuring has some comlex underlying dependencies, possibly on other distributions, and we can address these with a model.
For example, imagine that we measure the heights of 7th graders in your school and call this our random variable. Later, we realize that the average height of males and females is different by a few inches. We need a model that accounts for the depencence of height on sex at this age.
From our data we can might write:
$$p(male) = .49$$
$$p(female) = .51$$
$$p(h) \propto \exp{\frac{(h-\bar{h})^2}{2 \sigma_h^2}}$$
And now, we know that $\bar{h}$ and $\sigma_h$ will have different values for males and females.
This kind of problem is common! It turns out there are powerful general strategies for dealing with whole classes of proglems of dependencies in the distribution of random variables.
Probabalistic Graph Models (PGMs)
In general, we want to model the joint probability distribution for a random variable in terms of other random variables and model parameters.
$$X = P(x_1, x_2, \dots ,x_n)$$
for $n$ random variables.
(Let's assume they are all observable for today.)
Now, we can decompose this probability by applying the chain rule:
$$P(x_1, x_2, \dots ,x_n) = P(x_1| x_2, \dots ,x_n) P(x_2 | x_3, \dots ,x_n) \dots P(x_n)$$
If we unroll all of the terms, this is a pretty big mess. But, for many problems there is another simplification. In general, if $x_1$ is independent from $x_2$ then,
$$P(x_1|x_2) = P(x_1)$$
$$P(x_1, x_2) = P(x_1)P(x_2)$$
Also, with Bayes rule, we can invert dependencies:
$$P(y|x) = \frac{P(y)P(x|y)}{P(x)}$$
Back to our height example...
For height and gender, we have,
$$P(h,s) = P(h|s)P(s)$$
Observations:
* We could have decomposed $P(h,s)$ the other way and ended up with $P(s|h)P(h)$
* Bayes lets us swap "sex is dependent on height" case to the case where our model says "height is dependent on sex"
* Direction is somewhat of a choice for simultaneous observations (determining causality is still hard in PGMs)
In general, we can represent this decomposition by a graph.
To do this, map it like this:
* Nodes: random variables.
* Each node has a Conditional Probability Distribution (CPD).
* Edges: dependencies
There are more tractable graphs and less tractable graphs for problem solving. An example of one important property is whether P factorizes over G. We say a JPD P factorizes over graph G if P can be encoded by:
$$P(x_1, x_2, \dots ,x_n) = \Pi_{i=1}^n P(x_i|Par_G(x_i))$$
Where $Par_G(x_i)$ denotes the parents of $x_i$ in G.
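As a concrete sketch of what the factorization buys us, the two-node sex/height model can be written directly as a product of small conditional tables. The numbers below are made up purely for illustration:
# Illustrative toy CPDs for the two-node sex -> height model (numbers are made up).
p_sex = {"M": 0.49, "F": 0.51}
p_height_given_sex = {("tall", "M"): 0.3, ("short", "M"): 0.7,
                      ("tall", "F"): 0.2, ("short", "F"): 0.8}
def joint(height, sex):
    # P(height, sex) = P(height | sex) * P(sex), the factorization over the graph.
    return p_height_given_sex[(height, sex)] * p_sex[sex]
print(joint("tall", "M"))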
End of explanation
import numpy as np
import pandas as pd
import csv
import json
from libpgm.graphskeleton import GraphSkeleton
from libpgm.nodedata import NodeData
from libpgm.discretebayesiannetwork import DiscreteBayesianNetwork
from libpgm.tablecpdfactorization import TableCPDFactorization
from libpgm.pgmlearner import PGMLearner
Explanation: Why is this formalism a useful probabilistic problem solving tool?
This tool can model a much more general set of Joint Probability Distributions than our simple student-height problem.
Some advantages of managing statistical thinking with graphs include:
We can use graphs for book-keeping about distributions and dependencies. Practical models can have many nodes. Graph models help us manage complexity.
We can talk generally about classes of graphs and sub-graphs that solve problems. E.g.
* Bi-directional Graphs are Markov Models
* Directed Acyclic Graphs (DAGs) are called Bayesian Models (we will work here the rest of the day)
We can state reasoning rules that are consistent with the statistical assumptions and that allow us to understand and transform nodes or groups of nodes. E.g. by using rules of graph manipulation to reduce, simplify or cut graphs.
Learning and Reasoning Tasks
There are some jobs we need to figure out how to do in order to make this tool practically useful. Maybe in order of obviousness:
Ask questions about the probabilities represented by the model. I.e. Inference:
Causal Reasoning is looking for downstream effects of assumptions about parents. E.g. Given a male student, how likely is it he will be 165 cm in height?
Evidential Reasoning is looking for upstream effects of assumptions about children. E.g. Given a height of 174 cm, what is the probability that the student is female?
Intercausal. Given two observations of a common cause, what can we say about the likelihood of one or other of the "causal" measurements?
Learn parameters of a model given graph structure and representative data (E.g. naive bayes, LSA topic modeling)
Learn the structure of the graph given representative data
Build heuristics and rules by which we can reason about graphs. E.g. How to convert Bayes models to Markov models? What sort of structures allow influence to propagate and which do not? How to simplify complex sub-graphs? In general, learn how to reason about statistics by learning the rules of reasoning about PGMs.
Tasks and Examples
Let's try to look at 1-3. Also, I am going to switch 0 and 1 so we can have a graph model with parameters to use for inference.
Practical problem solving often proceeds from guessing the graph. Graph discovery may require big, big data and be computationally challenging.
End of explanation
titanic = pd.DataFrame.from_csv("./data/titanic3.csv", index_col = None)
titanic.head()
titanic.describe()
Explanation: And now, for some data
Survival data from Titanic based on age, sex and travel class.
End of explanation
ptable = pd.pivot_table(titanic, values=["name"], columns=["survived", "pclass","sex"], aggfunc=lambda x: len(x.unique()), margins=True)
print ptable
# housekeeping
# libpgm needs data as node:value list for each row
with open("./data/titanic3.csv") as f:
rdr = csv.reader(f, )
headers = next(rdr, None)
data = [{k:float(v) for k,v in zip(headers, row) if k !="name"} for row in rdr]
headers.remove("name") # not going to model survival based on name
#print data
Explanation: A pivot table might give another useful summary
End of explanation
pgn = {
"V": headers,
"E": [["age", "pclass"],
["sex", "survived"],
["pclass", "survived"]],
"Vdata": None }
# print pgn
G=nx.DiGraph()
for f,t in pgn["E"]:
G.add_edge(f,t)
nx.draw_networkx(G, node_color='y',node_size=2000, width=3)
plt.axis('off')
plt.show()
Explanation: Learn Parameters of Graph Model given Data
Guess a graph for the model. I guess this:
* age determines class -- older people have more money and less patience
* survival is determined by sex -- "women and children first"
* survival is determined by class of travel -- people in steerage had farther to go to get out
End of explanation
skel = GraphSkeleton()
skel.V = pgn["V"]
skel.E = pgn["E"]
skel.toporder()
learner = PGMLearner()
result = learner.discrete_mle_estimateparams(skel, data)
Explanation: Some choices:
* Bayesian model (directed graph, probabilities "propagate")
* Discrete distributions on the nodes (continuous is another world)
* A common algorithm for fitting parameters is a maximum likelihood algorithm. There are others.
While it is totally worth seeing how it is done, here we just do it to make sure we get to look at examples of most of our tasks.
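For intuition, maximum likelihood estimation of a discrete CPD is essentially normalized counting. The sketch below is an assumption about the idea, not the libpgm internals; `rows` is a list of dicts like the `data` built earlier:
from collections import Counter
def mle_cpd(rows, child, parent):
    # P(child | parent) estimated by normalized counts of (parent, child) pairs.
    pair_counts = Counter((row[parent], row[child]) for row in rows)
    parent_counts = Counter(row[parent] for row in rows)
    return {pc: n / float(parent_counts[pc[0]]) for pc, n in pair_counts.items()}
# e.g. mle_cpd(data, child="survived", parent="sex")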
End of explanation
pd.DataFrame(result.Vdata["sex"]["cprob"]).transpose()
pd.DataFrame(result.Vdata["age"]["cprob"]).transpose()
Explanation: Now the nodes have conditional probability information stored in them. For example,
End of explanation
pd.DataFrame(result.Vdata["pclass"]["cprob"]).transpose()
Explanation: Now let's look at a downstream node.
End of explanation
# use our solutions from above
nd = NodeData()
nd.Vdata = result.Vdata
nd.alldata = None
bn = DiscreteBayesianNetwork(skel, nd)
# query alters tables
tcpd = TableCPDFactorization(bn)
print "What is p(male=0)? {:.3%}".format(
tcpd.specificquery(dict(sex=[1]), dict())
)
tcpd = TableCPDFactorization(bn)
print "What is p(female=1)? {:.3%}".format(
tcpd.specificquery(dict(sex=[0]), dict())
)
# query alters tables
tcpd = TableCPDFactorization(bn)
print "What is p(female=1,survived=1)? {:.3%}".format(
tcpd.specificquery(dict(sex=[1]), dict(survived=1))
)
# query alters tables
tcpd = TableCPDFactorization(bn)
print "What is p(male=0,survived=0)? {:.3%}".format(
tcpd.specificquery(dict(sex=[0]), dict(survived=0))
)
# query alters tables
tcpd = TableCPDFactorization(bn)
print "What is p(male=0,class=3,survived=0)? {:.3%}".format(
tcpd.specificquery(dict(sex=[0],pclass=[3.0]), dict(survived=0))
)
# maybe useful for comparison
pd.pivot_table(titanic, values=["name"], columns=["sex", "pclass","survived"], aggfunc=lambda x: len(x.unique()))
Explanation: Causal Reasoning
Set some assumptions and see how this changes marginal probabilities associated with the other nodes in the graph.
Note: Querying changes the graph information--the graph has to be recalculated for the stated assumptions.
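Because of that mutation, one convenient pattern (a sketch, not part of libpgm itself) is to rebuild the factorization for every query; `fresh_query` below is a hypothetical helper name:
def fresh_query(query_dict, evidence):
    # Rebuild the factorization each time so one query's table updates don't leak into the next.
    return TableCPDFactorization(bn).specificquery(query_dict, evidence)
# e.g. fresh_query(dict(sex=[0]), dict(survived=0))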
End of explanation
# instantiate my learner
learner = PGMLearner()
# estimate structure
result = learner.lg_constraint_estimatestruct(data, indegree=1)
# output
print json.dumps(result.E, indent=2)
print json.dumps(result.V, indent=2)
G=nx.DiGraph()
for f,t in result.E:
G.add_edge(f,t,weight=0.6)
nx.draw_networkx(G, node_color='y',node_size=2000, width=3)
plt.axis('off')
plt.show()
Explanation: Learn Graph Structure
One general strategy uses a score to determine structures:
1. choose a score (for example AIC or BIC).
2. grab a node, make an edge
3. calculate a model based on data
4. calculate score
5. if AIC goes down, keep the edge
6. go to 2 until you run out of edges to try
This is a lot of computation. People have tried all kinds of heuristics, simplifications and sophisticated calculation caching schemes. Some cases can scale to 1000s of nodes in hours.
Another scheme uses constraint-based optimization. That's what we will do...
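For contrast with the constraint-based learner used below, here is a rough sketch of the greedy score-based loop described above. `score` is a stand-in for an AIC/BIC-style scorer (lower is better) and is an assumption, not an implemented function; cycle checks are omitted:
def greedy_structure_search(nodes, data, score):
    edges, best = [], score([], data)
    for a in nodes:
        for b in nodes:
            if a == b or [a, b] in edges:
                continue                      # skip self-loops and existing edges
            candidate = edges + [[a, b]]      # 2. grab a node, make an edge
            s = score(candidate, data)        # 3.-4. fit a model on the data, compute its score
            if s < best:                      # 5. if the score goes down, keep the edge
                edges, best = candidate, s
    return edges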
End of explanation
skel = GraphSkeleton()
skel.V = result.V
skel.E = result.E
skel.toporder()
learner = PGMLearner()
result = learner.discrete_mle_estimateparams(skel, data)
Explanation: different model, so learn new parameters
End of explanation
nd = NodeData()
nd.Vdata = result.Vdata
nd.alldata = None
bn = DiscreteBayesianNetwork(skel, nd)
# query alters tables
tcpd = TableCPDFactorization(bn)
print "What is p(male=0)? {:.3%}".format(
tcpd.specificquery(dict(sex=[1]), dict())
)
tcpd = TableCPDFactorization(bn)
print "What is p(female=1)? {:.3%}".format(
tcpd.specificquery(dict(sex=[0]), dict())
)
# query alters tables
tcpd = TableCPDFactorization(bn)
print "What is p(female=1,survived=1)? {:.3%}".format(
tcpd.specificquery(dict(sex=[1]), dict(survived=1))
)
# query alters tables
tcpd = TableCPDFactorization(bn)
print "What is p(male=0,survived=0)? {:.3%}".format(
tcpd.specificquery(dict(sex=[0]), dict(survived=0))
)
# query alters tables
tcpd = TableCPDFactorization(bn)
print "What is p(male=0,class=3,survived=0)? {:.3%}".format(
tcpd.specificquery(dict(sex=[0],pclass=[3.0]), dict(survived=0))
)
Explanation: Queries with New Model
End of explanation |
4,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ok, we've had a little peek at our dataset, let's prep it for our model.
Step1: Prep is done, time for the model.
Step2: We've defined the cost and accuracy functions, time to train our model. | Python Code:
randinds = np.random.permutation(len(digits.target))
# shuffle the values
from sklearn.utils import shuffle
data, targets = shuffle(digits.data, digits.target, random_state=0)
# scale the data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(data)
data_scaled = scaler.transform(data)
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data_scaled, targets, test_size=0.20, random_state=0)
X_train.shape, y_train.shape
Explanation: Ok, we've had a little peek at our dataset, let's prep it for our model.
End of explanation
# `cgt` and its `nn` module are used below; importing them here keeps the snippet self-contained.
import cgt
from cgt import nn
from cgt.distributions import categorical
def model(X, y):
# relu(W*x + b)
np.random.seed(0)
h1 = nn.rectify(nn.Affine(64, 512, weight_init=nn.IIDGaussian(std=.1))(X))
h2 = nn.rectify(nn.Affine(512, 512, weight_init=nn.IIDGaussian(std=.1))(h1))
# softmax probabilities
probs = nn.softmax(nn.Affine(512, 10)(h2))
# our prediction is the highest probability
ypreds = cgt.argmax(probs, axis=1)
acc = cgt.cast(cgt.equal(ypreds, y), cgt.floatX).mean()
cost = -categorical.loglik(y, probs).mean()
return cost, acc
X = cgt.matrix(name='X', fixed_shape=(None, 64))
y = cgt.vector(name='y', dtype='i8')
cost, acc = model(X, y)
Explanation: Prep is done, time for the model.
End of explanation
learning_rate = 1e-3
epochs = 100
batch_size = 64
# get all the weight parameters for our model
params = nn.get_parameters(cost)
# train via SGD, use 1e-3 as the learning rate
updates = nn.sgd(cost, params, learning_rate)
# Functions
trainf = cgt.function(inputs=[X,y], outputs=[], updates=updates)
cost_and_accf = cgt.function(inputs=[X,y], outputs=[cost,acc])
import time
for i in xrange(epochs):
t1 = time.time()
for srt in xrange(0, X_train.shape[0], batch_size):
end = batch_size+srt
trainf(X_train[srt:end], y_train[srt:end])
elapsed = time.time() - t1
costval, accval = cost_and_accf(X_test, y_test)
print("Epoch {} took {}, test cost = {}, test accuracy = {}".format(i+1, elapsed, costval, accval))
Explanation: We've defined the cost and accuracy functions, time to train our model.
End of explanation |
4,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute and visualize ERDS maps
This example calculates and displays ERDS maps of event-related EEG data. ERDS
(sometimes also written as ERD/ERS) is short for event-related
desynchronization (ERD) and event-related synchronization (ERS)
Step1: As usual, we import everything we need.
Step2: First, we load and preprocess the data. We use runs 6, 10, and 14 from
subject 1 (these runs contain hand and feet motor imagery).
Step3: Now we can create 5s epochs around events of interest.
Step4: Here we set suitable values for computing ERDS maps.
Step5: Finally, we perform time/frequency decomposition over all epochs.
Step6: Similar to ~mne.Epochs objects, we can also export data from
~mne.time_frequency.EpochsTFR and ~mne.time_frequency.AverageTFR objects
to a
Step7: This allows us to use additional plotting functions like
Step8: Having the data as a DataFrame also facilitates subsetting,
grouping, and other transforms.
Here, we use seaborn to plot the average ERDS in the motor imagery interval
as a function of frequency band and imagery condition | Python Code:
# Authors: Clemens Brunner <[email protected]>
# Felix Klotzsche <[email protected]>
#
# License: BSD-3-Clause
Explanation: Compute and visualize ERDS maps
This example calculates and displays ERDS maps of event-related EEG data. ERDS
(sometimes also written as ERD/ERS) is short for event-related
desynchronization (ERD) and event-related synchronization (ERS)
:footcite:PfurtschellerLopesdaSilva1999. Conceptually, ERD corresponds to a
decrease in power in a specific frequency band relative to a baseline.
Similarly, ERS corresponds to an increase in power. An ERDS map is a
time/frequency representation of ERD/ERS over a range of frequencies
:footcite:GraimannEtAl2002. ERDS maps are also known as ERSP (event-related
spectral perturbation) :footcite:Makeig1993.
In this example, we use an EEG BCI data set containing two different motor
imagery tasks (imagined hand and feet movement). Our goal is to generate ERDS
maps for each of the two tasks.
First, we load the data and create epochs of 5s length. The data set contains
multiple channels, but we will only consider C3, Cz, and C4. We compute maps
containing frequencies ranging from 2 to 35Hz. We map ERD to red color and ERS
to blue color, which is customary in many ERDS publications. Finally, we
perform cluster-based permutation tests to estimate significant ERDS values
(corrected for multiple comparisons within channels).
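Numerically, the "percent" baseline mode used below amounts to (P - mean(P_baseline)) / mean(P_baseline). A tiny NumPy sketch of that idea on toy data (illustrative only, not the MNE implementation):
import numpy as np
power = np.random.rand(200)            # toy power time course for one frequency bin
baseline = power[:40].mean()           # mean power over the baseline interval
erds = (power - baseline) / baseline   # negative values -> ERD, positive values -> ERS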
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm
import pandas as pd
import seaborn as sns
import mne
from mne.datasets import eegbci
from mne.io import concatenate_raws, read_raw_edf
from mne.time_frequency import tfr_multitaper
from mne.stats import permutation_cluster_1samp_test as pcluster_test
Explanation: As usual, we import everything we need.
End of explanation
fnames = eegbci.load_data(subject=1, runs=(6, 10, 14))
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in fnames])
raw.rename_channels(lambda x: x.strip('.')) # remove dots from channel names
events, _ = mne.events_from_annotations(raw, event_id=dict(T1=2, T2=3))
Explanation: First, we load and preprocess the data. We use runs 6, 10, and 14 from
subject 1 (these runs contain hand and feet motor imagery).
End of explanation
tmin, tmax = -1, 4
event_ids = dict(hands=2, feet=3) # map event IDs to tasks
epochs = mne.Epochs(raw, events, event_ids, tmin - 0.5, tmax + 0.5,
picks=('C3', 'Cz', 'C4'), baseline=None, preload=True)
Explanation: Now we can create 5s epochs around events of interest.
End of explanation
freqs = np.arange(2, 36) # frequencies from 2-35Hz
vmin, vmax = -1, 1.5 # set min and max ERDS values in plot
baseline = [-1, 0] # baseline interval (in s)
cnorm = TwoSlopeNorm(vmin=vmin, vcenter=0, vmax=vmax) # min, center & max ERDS
kwargs = dict(n_permutations=100, step_down_p=0.05, seed=1,
buffer_size=None, out_type='mask') # for cluster test
Explanation: Here we set suitable values for computing ERDS maps.
End of explanation
tfr = tfr_multitaper(epochs, freqs=freqs, n_cycles=freqs, use_fft=True,
return_itc=False, average=False, decim=2)
tfr.crop(tmin, tmax).apply_baseline(baseline, mode="percent")
for event in event_ids:
# select desired epochs for visualization
tfr_ev = tfr[event]
fig, axes = plt.subplots(1, 4, figsize=(12, 4),
gridspec_kw={"width_ratios": [10, 10, 10, 1]})
for ch, ax in enumerate(axes[:-1]): # for each channel
# positive clusters
_, c1, p1, _ = pcluster_test(tfr_ev.data[:, ch], tail=1, **kwargs)
# negative clusters
_, c2, p2, _ = pcluster_test(tfr_ev.data[:, ch], tail=-1, **kwargs)
# note that we keep clusters with p <= 0.05 from the combined clusters
# of two independent tests; in this example, we do not correct for
# these two comparisons
c = np.stack(c1 + c2, axis=2) # combined clusters
p = np.concatenate((p1, p2)) # combined p-values
mask = c[..., p <= 0.05].any(axis=-1)
# plot TFR (ERDS map with masking)
tfr_ev.average().plot([ch], cmap="RdBu", cnorm=cnorm, axes=ax,
colorbar=False, show=False, mask=mask,
mask_style="mask")
ax.set_title(epochs.ch_names[ch], fontsize=10)
ax.axvline(0, linewidth=1, color="black", linestyle=":") # event
if ch != 0:
ax.set_ylabel("")
ax.set_yticklabels("")
fig.colorbar(axes[0].images[-1], cax=axes[-1]).ax.set_yscale("linear")
fig.suptitle(f"ERDS ({event})")
plt.show()
Explanation: Finally, we perform time/frequency decomposition over all epochs.
End of explanation
df = tfr.to_data_frame(time_format=None)
df.head()
Explanation: Similar to ~mne.Epochs objects, we can also export data from
~mne.time_frequency.EpochsTFR and ~mne.time_frequency.AverageTFR objects
to a :class:Pandas DataFrame <pandas.DataFrame>. By default, the time
column of the exported data frame is in milliseconds. Here, to be consistent
with the time-frequency plots, we want to keep it in seconds, which we can
achieve by setting time_format=None:
End of explanation
df = tfr.to_data_frame(time_format=None, long_format=True)
# Map to frequency bands:
freq_bounds = {'_': 0,
'delta': 3,
'theta': 7,
'alpha': 13,
'beta': 35,
'gamma': 140}
df['band'] = pd.cut(df['freq'], list(freq_bounds.values()),
labels=list(freq_bounds)[1:])
# Filter to retain only relevant frequency bands:
freq_bands_of_interest = ['delta', 'theta', 'alpha', 'beta']
df = df[df.band.isin(freq_bands_of_interest)]
df['band'] = df['band'].cat.remove_unused_categories()
# Order channels for plotting:
df['channel'] = df['channel'].cat.reorder_categories(('C3', 'Cz', 'C4'),
ordered=True)
g = sns.FacetGrid(df, row='band', col='channel', margin_titles=True)
g.map(sns.lineplot, 'time', 'value', 'condition', n_boot=10)
axline_kw = dict(color='black', linestyle='dashed', linewidth=0.5, alpha=0.5)
g.map(plt.axhline, y=0, **axline_kw)
g.map(plt.axvline, x=0, **axline_kw)
g.set(ylim=(None, 1.5))
g.set_axis_labels("Time (s)", "ERDS (%)")
g.set_titles(col_template="{col_name}", row_template="{row_name}")
g.add_legend(ncol=2, loc='lower center')
g.fig.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.08)
Explanation: This allows us to use additional plotting functions like
:func:seaborn.lineplot to plot confidence bands:
End of explanation
df_mean = (df.query('time > 1')
.groupby(['condition', 'epoch', 'band', 'channel'])[['value']]
.mean()
.reset_index())
g = sns.FacetGrid(df_mean, col='condition', col_order=['hands', 'feet'],
margin_titles=True)
g = (g.map(sns.violinplot, 'channel', 'value', 'band', n_boot=10,
palette='deep', order=['C3', 'Cz', 'C4'],
hue_order=freq_bands_of_interest,
linewidth=0.5).add_legend(ncol=4, loc='lower center'))
g.map(plt.axhline, **axline_kw)
g.set_axis_labels("", "ERDS (%)")
g.set_titles(col_template="{col_name}", row_template="{row_name}")
g.fig.subplots_adjust(left=0.1, right=0.9, top=0.9, bottom=0.3)
Explanation: Having the data as a DataFrame also facilitates subsetting,
grouping, and other transforms.
Here, we use seaborn to plot the average ERDS in the motor imagery interval
as a function of frequency band and imagery condition:
End of explanation |
4,264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: Request 2
Step2: Request 3
Step3: Request 4
Step4: On a side note | Python Code:
fullbase = requests.compat.urljoin(baseurl, endpoint_datatypes)
r = requests.get(
fullbase,
headers=custom_headers,
# params={'limit':1000},
params={'limit':1000, 'datasetid':"NORMAL_DLY"},
)
r.headers
r.text
json.loads(r.text)
Explanation: https://www.ncdc.noaa.gov/cdo-web/api/v2/data?datasetid=GSOM&datatypeid=EMNT&datatypeid=EMXT&datatypeid=EMXP&startdate=2017-07-01&enddate=2017-07-31&stationid=GHCND:USW00026615
Request 1: Available datatypes for a dataset, e.g. daily or monthly normals
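The snippets below assume a small amount of setup that is not shown here. Something along these lines would make them self-contained; the variable names are guesses, and the "token" header is the personal API token the CDO v2 API expects:
import os
import json
import requests

baseurl = "https://www.ncdc.noaa.gov/cdo-web/api/v2/"
endpoint_datatypes = "datatypes"
endpoint_datasets = "datasets"
endpoint_data = "data"
endpoint_stations = "stations"
custom_headers = {"token": os.environ.get("NOAA_CDO_TOKEN", "")}   # personal CDO API token
all_stations = ["USW00026615"]                                      # example station from the URL above
params = {"datasetid": "GSOM", "stationid": "GHCND:USW00026615",
          "startdate": "2017-07-01", "enddate": "2017-07-31"}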
End of explanation
fullbase = requests.compat.urljoin(baseurl, endpoint_data)
r = requests.get(
fullbase,
headers=custom_headers,
params=params,
)
json.loads(r.text)
r.headers
Explanation: Request 2: Data for a station
End of explanation
fullbase = requests.compat.urljoin(baseurl, endpoint_datasets)
r = requests.get(
fullbase,
headers=custom_headers,
)
json.loads(r.text)
Explanation: Request 3: Available datasets
End of explanation
for station in all_stations:
path = os.path.join(endpoint_stations, "GHCND:{}".format(station))
fullbase = requests.compat.urljoin(baseurl, path)
r = requests.get(
fullbase,
headers=custom_headers,
)
print(json.dumps(json.loads(r.text), indent=2))
fullbase = requests.compat.urljoin(baseurl, endpoint_stations, "GHCND:{}".format(station))
fullbase
Explanation: Request 4: Information about our stations
End of explanation
0o77
Explanation: On a side note: Python integers that start with 0o (zero - lowercase o) are octal. 7 * 8 + 7 = 63.
End of explanation |
4,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Network Part2
Step1: Normalization
Q1. Apply l2_normalize to x.
Step2: Q2. Calculate the mean and variance of x based on the sufficient statistics.
Step3: Q3. Calculate the mean and variance of x.
Step4: Q4. Calculate the mean and variance of x using unique_x and counts.
Step5: Q5. The code below is to implement the mnist classification task. Complete it by adding batch normalization.
Step6: Losses
Q06. Compute half the L2 norm of x without the sqrt.
Step7: Classification
Q7. Compute softmax cross entropy between logits and labels. Note that the rank of them is not the same.
Step8: Q8. Compute softmax cross entropy between logits and labels.
Step9: Embeddings
Q9. Map tensor x to the embedding. | Python Code:
from __future__ import print_function
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
from datetime import date
date.today()
author = "kyubyong. https://github.com/Kyubyong/tensorflow-exercises"
tf.__version__
np.__version__
Explanation: Neural Network Part2
End of explanation
_x = np.arange(1, 11)
epsilon = 1e-12
x = tf.convert_to_tensor(_x, tf.float32)
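# One possible answer (assuming the TF 1.x API, where the axis argument is called `dim`):
output = tf.nn.l2_normalize(x, dim=0, epsilon=epsilon)
with tf.Session() as sess:
    print(sess.run(output))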
Explanation: Normalization
Q1. Apply l2_normalize to x.
End of explanation
_x = np.arange(1, 11)
x = tf.convert_to_tensor(_x, tf.float32)
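# One possible answer using the sufficient-statistics helpers (assumption: TF 1.x API):
count_, sum_x, sum_x2, _ = tf.nn.sufficient_statistics(x, [0])
mean, variance = tf.nn.normalize_moments(count_, sum_x, sum_x2, shift=None)
with tf.Session() as sess:
    print(sess.run([mean, variance]))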
Explanation: Q2. Calculate the mean and variance of x based on the sufficient statistics.
End of explanation
tf.reset_default_graph()
_x = np.arange(1, 11)
x = tf.convert_to_tensor(_x, tf.float32)
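# One possible answer: tf.nn.moments returns the mean and variance along the given axes.
mean, variance = tf.nn.moments(x, [0])
with tf.Session() as sess:
    print(sess.run([mean, variance]))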
Explanation: Q3. Calculate the mean and variance of x.
End of explanation
tf.reset_default_graph()
x = tf.constant([1, 1, 2, 2, 2, 3], tf.float32)
# From `x`
mean, variance = tf.nn.moments(x, [0])
with tf.Session() as sess:
print(sess.run([mean, variance]))
# From unique elements and their counts
unique_x, _, counts = tf.unique_with_counts(x)
mean, variance = tf.nn.weighted_moments(unique_x, [0], tf.to_float(counts))  # weight each unique value by its count
with tf.Session() as sess:
print(sess.run([mean, variance]))
Explanation: Q4. Calculate the mean and variance of x using unique_x and counts.
End of explanation
# Load data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=False)
# build graph
class Graph:
def __init__(self, is_training=False):
# Inputs and labels
self.x = tf.placeholder(tf.float32, shape=[None, 784])
self.y = tf.placeholder(tf.int32, shape=[None])
# Layer 1
w1 = tf.get_variable("w1", shape=[784, 100], initializer=tf.truncated_normal_initializer())
output1 = tf.matmul(self.x, w1)
        output1 = tf.contrib.layers.batch_norm(output1, center=True, scale=True, updates_collections=None, is_training=is_training)  # one possible completion (assumption: TF 1.x contrib API)
#Layer 2
w2 = tf.get_variable("w2", shape=[100, 10], initializer=tf.truncated_normal_initializer())
logits = tf.matmul(output1, w2)
preds = tf.to_int32(tf.arg_max(logits, dimension=1))
# training
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.y, logits=logits)
self.train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
self.acc = tf.reduce_mean(tf.to_float(tf.equal(self.y, preds)))
# Training
tf.reset_default_graph()
g = Graph(is_training=True)
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init_op)
saver = tf.train.Saver()
for i in range(1, 10000+1):
batch = mnist.train.next_batch(60)
sess.run(g.train_op, {g.x: batch[0], g.y: batch[1]})
# Evaluation
if i % 100 == 0:
print("training steps=", i, "Acc. =", sess.run(g.acc, {g.x: mnist.test.images, g.y: mnist.test.labels}))
save_path = saver.save(sess, './my-model')
# Inference
tf.reset_default_graph()
g2 = Graph(is_training=False)
with tf.Session() as sess:
saver = tf.train.Saver()
saver.restore(sess, save_path)
hits = 0
for i in range(100):
hits += sess.run(g2.acc, {g2.x: [mnist.test.images[i]], g2.y: [mnist.test.labels[i]]})
print(hits)
Explanation: Q5. The code below is to implement the mnist classification task. Complete it by adding batch normalization.
End of explanation
tf.reset_default_graph()
x = tf.constant([1, 1, 2, 2, 2, 3], tf.float32)
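# One possible answer: tf.nn.l2_loss(x) computes sum(x ** 2) / 2, i.e. half the L2 norm without the sqrt.
output = tf.nn.l2_loss(x)
with tf.Session() as sess:
    print(sess.run(output))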
Explanation: Losses
Q06. Compute half the L2 norm of x without the sqrt.
End of explanation
tf.reset_default_graph()
logits = tf.random_normal(shape=[2, 5, 10])
labels = tf.convert_to_tensor(np.random.randint(0, 10, size=[2, 5]), tf.int32)
output = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
with tf.Session() as sess:
print(sess.run(output))
Explanation: Classification
Q7. Compute softmax cross entropy between logits and labels. Note that the rank of them is not the same.
End of explanation
logits = tf.random_normal(shape=[2, 5, 10])
labels = tf.convert_to_tensor(np.random.randint(0, 10, size=[2, 5]), tf.int32)
labels = tf.one_hot(labels, depth=10)
output = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
with tf.Session() as sess:
print(sess.run(output))
Explanation: Q8. Compute softmax cross entropy between logits and labels.
End of explanation
tf.reset_default_graph()
x = tf.constant([0, 2, 1, 3, 4], tf.int32)
embedding = tf.constant([0, 0.1, 0.2, 0.3, 0.4], tf.float32)
output = tf.nn.embedding_lookup(embedding, x)
with tf.Session() as sess:
print(sess.run(output))
Explanation: Embeddings
Q9. Map tensor x to the embedding.
End of explanation |
4,266 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Catching errors and unit tests
In this tutorial are a few examples of how to catch errors and how to perform unit tests in Python.
When you code in Python, keep in mind
Step1: An example of correct use follows.
Step2: Example of incorrect use that finishes without error follows.
Step3: Example of incorrect use that results in an error follows
Step4: As you can see, if the function is used in an incorrect way (differently than planned) it may or may not cause an error. If such a function is part of the application, both options are dangerous. A different version of the function above follows. This function allows only variables that are convertible to float.
Step5: In some cases you cannot allow the application to crash, but providing an incorrect (predefined) result is OK. An example follows.
Step6: In practice, you should still report/log these issues somehow, otherwise you will have no information about errors silenced like this.
Unit tests
Unit test is a term describing tests for particular code units (functions, classes, blocks). In this tutorial are examples of such tests. For the design of the tests was used <a href="https
Step13: Example of unit tests for functions created before (sum_together1, sum_together2, sum_together3) follows. | Python Code:
def sum_together1(a, b):
return a + b
Explanation: Catching errors and unit tests
In this tutorial are a few examples of how to catch errors and how to perform unit tests in Python.
When you code in Python, keep in mind:
Errors should never pass silently. Unless explicitly silenced. (<a href="https://www.python.org/dev/peps/pep-0020/">PEP20 - The Zen of Python</a>)
Error catching and silencing
The following function is designed to sum two variables (float or integer numbers) together and return the result as a float.
End of explanation
sum_together1(1., 2)
Explanation: An example of correct use follows.
End of explanation
sum_together1("a", "b")
Explanation: Example of incorrect use that finishes without error follows.
End of explanation
sum_together1("a", 1)
Explanation: Example of incorrect use that results in an error follows.
End of explanation
def sum_together2(a, b):
a = float(a)
b = float(b)
return a + b
sum_together2(1, 2)
sum_together2("a", "b")
Explanation: As you can see, if the function is used in an incorrect way (differently than planned) it may or may not cause an error. If such a function is part of the application, both options are dangerous. A different version of the function above follows. This function allows only variables that are convertible to float.
End of explanation
def sum_together3(a, b):
try:
a = float(a)
b = float(b)
return a + b
except:
return 0.0
sum_together3(1, 2)
sum_together3("a", "b")
Explanation: In some cases you cannot allow the application to crash, but providing an incorrect (predefined) result is OK. An example follows.
End of explanation
import unittest
class Test1(unittest.TestCase):
def test_type_error_number_and_string(self):
with self.assertRaises(TypeError):
1 + "a"
def test_type_error_number_and_number(self): # wrong test!
with self.assertRaises(TypeError):
1 + 1
def test_float_and_int_equality(self):
self.assertEquals(0, 0.0)
def test_equality(self): # this test is wrong!
self.assertEquals(0., 1.)
suite = unittest.TestLoader().loadTestsFromTestCase(Test1)
unittest.TextTestRunner(verbosity=3).run(suite)
Explanation: In practice, you should still report/log these issues somehow, otherwise you will have no information about errors silenced like this.
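One lightweight way to do that reporting is to log the exception before returning the default. This is a sketch using the standard logging module; `sum_together4` is a hypothetical name, not part of the original functions:
import logging

def sum_together4(a, b):
    try:
        return float(a) + float(b)
    except (TypeError, ValueError):
        logging.exception("sum_together4 got unsumable inputs: %r, %r", a, b)
        return 0.0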
Unit tests
Unit test is a term describing tests for particular code units (functions, classes, blocks). In this tutorial are examples of such tests. For the design of the tests, <a href="https://docs.python.org/2.7/library/unittest.html">unittest</a>, which is a standard Python library, was used. A few simple tests follow. Some tests are designed to fail - they are obviously wrong.
End of explanation
import unittest
class Test2(unittest.TestCase):
def test_nan_sum_together1(self):
        """Check if it throws error only for unsumable inputs."""
# this should pass
sum_together1("a", "b")
# this should fail
with self.assertRaises(TypeError):
sum_together1(1, "b")
def test_nan_sum_together2(self):
        """Check if it throws error every time."""
with self.assertRaises(ValueError):
sum_together2("a", "b")
with self.assertRaises(ValueError):
sum_together2(1, "b")
def test_nan_sum_together3(self):
        """Check if it provides correct default."""
self.assertEquals(sum_together3("a", "b"), 0.0)
self.assertEquals(sum_together3(1, "b"), 0.0)
def test_validity_sum_together1(self):
        """Check if it returns correct values."""
self.assertEquals(sum_together1(0, 0), 0.0)
self.assertEquals(sum_together1(1, 0), 1.0)
def test_validity_sum_together2(self):
        """Check if it returns correct values."""
self.assertEquals(sum_together2(0, 0), 0.0)
self.assertEquals(sum_together2(1, 0), 1.0)
def test_validity_sum_together3(self):
        """Check if it returns correct values."""
self.assertEquals(sum_together3(0, 0), 0.0)
self.assertEquals(sum_together3(1, 0), 1.0)
suite = unittest.TestLoader().loadTestsFromTestCase(Test2)
unittest.TextTestRunner(verbosity=3).run(suite)
Explanation: Example of unit tests for functions created before (sum_together1, sum_together2, sum_together3) follows.
End of explanation |
4,267 | Given the following text description, write Python code to implement the functionality described.
Description:
Number of triangles that can be formed with given N points
Python3 implementation of the above approach ; This function returns the required number of triangles ; Hash Map to store the frequency of slope corresponding to a point ( X , Y ) ; Iterate over all possible points ; Calculate slope of all elements with current element ; find the slope with reduced fraction ; Total number of ways to form a triangle having one point as current element ; Subtracting the total number of ways to form a triangle having the same slope or are collinear ; Driver Code
| Python Code:
from collections import defaultdict
from math import gcd
def countTriangles(P , N ) :
mp = defaultdict(lambda : 0 )
ans = 0
for i in range(0 , N ) :
mp . clear()
for j in range(i + 1 , N ) :
X = P[i ][0 ] - P[j ][0 ]
Y = P[i ][1 ] - P[j ][1 ]
g = gcd(X , Y )
X //= g
Y //= g
mp [(X , Y ) ] += 1
num = N -(i + 1 )
ans +=(num *(num - 1 ) ) // 2
for j in mp :
ans -=(mp[j ] *(mp[j ] - 1 ) ) // 2
return ans
if __name__ == "__main__":
P =[[ 0 , 0 ] ,[2 , 0 ] ,[1 , 1 ] ,[2 , 2 ] ]
N = len(P )
print(countTriangles(P , N ) )
|
4,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification performance evaluation
Unlike regression analysis, classification problems do not lend themselves easily to t-tests on parameters or confidence interval estimation, so a variety of performance evaluation criteria are needed to compensate for this.
Classification performance evaluation commands supported by Scikit-Learn
the sklearn.metrics subpackage
confusion_matrix()
classification_report()
accuracy_score(y_true, y_pred)
precision_score(y_true, y_pred)
recall_score(y_true, y_pred)
fbeta_score(y_true, y_pred, beta)
f1_score(y_true, y_pred)
Confusion Matrix (classification result table)
The confusion matrix counts how many samples fall into each combination of the target's true class and the class predicted by the model.
True classes are shown as rows and predicted classes as columns.
| | Predicted class 0 | Predicted class 1 | Predicted class 2 |
|-|-|-|-|
| True class 0 | <small>number of samples with true class 0 and predicted class 0</small> | <small>number of samples with true class 0 and predicted class 1</small> | <small>number of samples with true class 0 and predicted class 2</small> |
| True class 1 | <small>number of samples with true class 1 and predicted class 0</small> | <small>number of samples with true class 1 and predicted class 1</small> | <small>number of samples with true class 1 and predicted class 2</small> |
| True class 2 | <small>number of samples with true class 2 and predicted class 0</small> | <small>number of samples with true class 2 and predicted class 1</small> | <small>number of samples with true class 2 and predicted class 2</small> |
Step1: Binary Confusion Matrix
When there are only two classes, 0 and 1, the class names are conventionally written as "Positive" and "Negative".
In addition, when the model's prediction is correct -- Positive predicted as Positive, or Negative predicted as Negative -- it is called "True"; when the prediction is wrong -- Positive predicted as Negative, or Negative predicted as Positive -- it is called "False".
The names and result table for binary classification are as follows.
| | Predicted Positive | Predicted Negative |
|-|-|-|
| Actual Positive | True Positive | False Negative |
| Actual Negative | False Positive | True Negative |
Example: an FDS (Fraud Detection System)
An FDS (Fraud Detection System) is a system that finds incorrect or fraudulent transactions in financial transactions, accounting books, and so on. If the FDS prediction is Positive, the transaction was predicted to be fraudulent; if Negative, it was predicted to be normal. Depending on whether this result matches reality, we use the following terms.
True Positive
Step2: ROC curve
The ROC (Receiver Operating Characteristic) curve visualizes how Fall-out and Recall change as the class decision threshold changes.
Every binary classification model has a discriminant function corresponding to the distance from the decision surface; a sample is assigned to class 0 when the value is negative and to class 1 when it is positive, so 0 is the class decision threshold. The ROC curve shows how the classification results would change if this threshold were varied.
Scikit-Learn classifiers provide a decision_function method that computes this discriminant value. The ROC curve is built from these values as follows.
Compute the discriminant function value for every sample.
Sort the computed values.
If the smallest (non-zero) discriminant value is used as the class decision threshold, every sample becomes class 1 (Positive).
Computing Fall-out and Recall at this point gives 1 for both.
Using the second-smallest value as the threshold, every sample except the one with the smallest value becomes class 1 (Positive); compute and record Fall-out and Recall in the same way.
Repeat until the largest discriminant value becomes the threshold; at that point every sample is classified as class 0 (Negative) and both Recall and Fall-out are 0.
In general, Recall and Fall-out increase or decrease together as the decision threshold changes. A model with high Recall and low Fall-out can be considered a good model.
Step3: Multi-Class example
Step4: AUC (Area Under the Curve)
AUC is the area under the ROC curve. The larger Recall is relative to Fall-Out, the closer the AUC is to 1 and the better the model.
Step5: Precision-Recall curve
The Precision-Recall curve is computed the same way as the ROC curve: it looks at how Precision and Recall change as the decision threshold changes.
As the decision threshold increases, Recall always increases (or stays the same), but Precision may decrease. | Python Code:
from sklearn.metrics import confusion_matrix
y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
confusion_matrix(y_true, y_pred)
y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
confusion_matrix(y_true, y_pred, labels=["ant", "bird", "cat"])
Explanation: Classification performance evaluation
Unlike regression analysis, classification problems do not lend themselves easily to t-tests on parameters or confidence interval estimation, so a variety of performance evaluation criteria are needed to compensate for this.
Classification performance evaluation commands supported by Scikit-Learn
the sklearn.metrics subpackage
confusion_matrix()
classification_report()
accuracy_score(y_true, y_pred)
precision_score(y_true, y_pred)
recall_score(y_true, y_pred)
fbeta_score(y_true, y_pred, beta)
f1_score(y_true, y_pred)
Confusion Matrix (classification result table)
The confusion matrix counts how many samples fall into each combination of the target's true class and the class predicted by the model.
True classes are shown as rows and predicted classes as columns.
| | Predicted class 0 | Predicted class 1 | Predicted class 2 |
|-|-|-|-|
| True class 0 | <small>number of samples with true class 0 and predicted class 0</small> | <small>number of samples with true class 0 and predicted class 1</small> | <small>number of samples with true class 0 and predicted class 2</small> |
| True class 1 | <small>number of samples with true class 1 and predicted class 0</small> | <small>number of samples with true class 1 and predicted class 1</small> | <small>number of samples with true class 1 and predicted class 2</small> |
| True class 2 | <small>number of samples with true class 2 and predicted class 0</small> | <small>number of samples with true class 2 and predicted class 1</small> | <small>number of samples with true class 2 and predicted class 2</small> |
End of explanation
from sklearn.metrics import classification_report
y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))
y_true = ["cat", "ant", "cat", "cat", "ant", "bird"]
y_pred = ["ant", "ant", "cat", "cat", "ant", "cat"]
print(classification_report(y_true, y_pred, target_names=["ant", "bird", "cat"]))
Explanation: Binary Confusion Matrix
When there are only two classes, 0 and 1, the class names are conventionally written as "Positive" and "Negative".
In addition, when the model's prediction is correct -- Positive predicted as Positive, or Negative predicted as Negative -- it is called "True"; when the prediction is wrong -- Positive predicted as Negative, or Negative predicted as Positive -- it is called "False".
The names and result table for binary classification are as follows.
| | Predicted Positive | Predicted Negative |
|-|-|-|
| Actual Positive | True Positive | False Negative |
| Actual Negative | False Positive | True Negative |
Example: an FDS (Fraud Detection System)
An FDS (Fraud Detection System) is a system that finds incorrect or fraudulent transactions in financial transactions, accounting books, and so on. If the FDS prediction is Positive, the transaction was predicted to be fraudulent; if Negative, it was predicted to be normal. Depending on whether this result matches reality, we use the following terms.
True Positive: a fraudulent transaction correctly predicted as fraud
True Negative: a normal transaction correctly predicted as normal
False Positive: a normal transaction incorrectly predicted as fraud
False Negative: a fraudulent transaction incorrectly predicted as normal
| | Predicted as fraud | Predicted as normal |
| --------------------| ------------------------ | --------------------------------- |
| Actually fraudulent | True Positive | False Negative |
| Actually normal | False Positive | True Negative |
Evaluation scores
Accuracy
The fraction of all samples for which the model's output is correct.
$$\text{accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$$
Precision
Of the samples the model assigned to the class, the fraction that actually belong to the class.
For the FDS example, the fraction of transactions flagged as fraud that really are fraud (a "conviction rate").
$$\text{precision} = \dfrac{TP}{TP + FP}$$
Recall
TPR: true positive rate
Of the samples that actually belong to the class, the fraction that the model assigned to the class.
For the FDS example, the fraction of real fraudulent transactions that were predicted as fraud (a "detection rate").
Also called sensitivity.
$$\text{recall} = \dfrac{TP}{TP + FN}$$
Fall-Out
FPR: false positive rate
Of the samples that do not belong to the class, the fraction that the model assigned to the class.
For the FDS example, the fraction of normal transactions that the FDS flagged as fraud (a "false accusation rate").
$$\text{fallout} = \dfrac{FP}{FP + TN}$$
F (beta) score
The weighted harmonic mean of precision and recall.
$$
F_\beta = (1 + \beta^2) \, ({\text{precision} \times \text{recall}}) \, / \, ({\beta^2 \, \text{precision} + \text{recall}})
$$
F1 score
beta = 1
$$
F_1 = 2 \cdot \text{precision} \cdot \text{recall} \, / \, (\text{precision} + \text{recall})
$$
End of explanation
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_features=1, n_redundant=0, n_informative=1, n_clusters_per_class=1, random_state=4)
model = LogisticRegression().fit(X, y)
print(confusion_matrix(y, model.predict(X)))
print(classification_report(y, model.predict(X)))
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y, model.decision_function(X))
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], 'k--', label="random guess")
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('Receiver operating characteristic example')
plt.show()
Explanation: ROC curve
The ROC (Receiver Operating Characteristic) curve visualizes how Fall-out and Recall change as the class decision threshold changes.
Every binary classification model has a discriminant function corresponding to the distance from the decision surface; a sample is assigned to class 0 when the value is negative and to class 1 when it is positive, so 0 is the class decision threshold. The ROC curve shows how the classification results would change if this threshold were varied.
Scikit-Learn classifiers provide a decision_function method that computes this discriminant value. The ROC curve is built from these values as follows.
Compute the discriminant function value for every sample.
Sort the computed values.
If the smallest (non-zero) discriminant value is used as the class decision threshold, every sample becomes class 1 (Positive).
Computing Fall-out and Recall at this point gives 1 for both.
Using the second-smallest value as the threshold, every sample except the one with the smallest value becomes class 1 (Positive); compute and record Fall-out and Recall in the same way.
Repeat until the largest discriminant value becomes the threshold; at that point every sample is classified as class 0 (Negative) and both Recall and Fall-out are 0.
In general, Recall and Fall-out increase or decrease together as the decision threshold changes. A model with high Recall and low Fall-out can be considered a good model.
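A minimal sketch of that threshold sweep, applied to the `model`, `X`, `y` defined above (illustrative only; sklearn's roc_curve in the code does the same job more carefully):
import numpy as np
scores = model.decision_function(X)
recall, fallout = [], []
for threshold in np.sort(scores):
    y_hat = (scores >= threshold).astype(int)
    recall.append(((y_hat == 1) & (y == 1)).sum() / float((y == 1).sum()))
    fallout.append(((y_hat == 1) & (y == 0)).sum() / float((y == 0).sum()))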
End of explanation
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
iris = load_iris()
model = LogisticRegression().fit(iris.data, iris.target)
from sklearn.metrics import roc_curve
fpr0, tpr0, thresholds0 = roc_curve(iris.target, model.decision_function(iris.data)[:, 0], pos_label=0)
fpr1, tpr1, thresholds1 = roc_curve(iris.target, model.decision_function(iris.data)[:, 1], pos_label=1)
fpr2, tpr2, thresholds2 = roc_curve(iris.target, model.decision_function(iris.data)[:, 2], pos_label=2)
fpr0, tpr0, thresholds0
plt.plot(fpr0, tpr0, "r-", label="class 0 ")
plt.plot(fpr1, tpr1, "g-", label="class 1")
plt.plot(fpr2, tpr2, "b-", label="class 2")
plt.plot([0, 1], [0, 1], 'k--', label="random guess")
plt.xlim(-0.05, 1.0)
plt.ylim(0, 1.05)
plt.xlabel('False Positive Rate (Fall-Out)')
plt.ylabel('True Positive Rate (Recall)')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
print(confusion_matrix(iris.target, model.predict(iris.data)))
print(classification_report(iris.target, model.predict(iris.data)))
from sklearn.preprocessing import label_binarize
yb0 = label_binarize(iris.target, classes=[0, 1, 2])
yb1 = label_binarize(model.predict(iris.data), classes=[0, 1, 2])
print(yb0[:, 0].sum(), yb1[:, 0].sum())
plt.plot(yb0[:, 0], 'ro-', markersize=10, alpha=0.4, label="actual class 0")
plt.plot(yb1[:, 0], 'bs-', markersize=10, alpha=0.4, label="predicted class 0")
plt.legend()
plt.xlim(0, len(iris.target)-1);
plt.ylim(-0.1, 1.1);
print(yb0[:, 1].sum(), yb1[:, 1].sum())
plt.plot(yb0[:, 1], 'ro-', markersize=10, alpha=0.6, label="actual class 1")
plt.plot(yb1[:, 1], 'bs-', markersize=10, alpha=0.6, label="predicted class 1")
plt.legend()
plt.xlim(45, 145);
plt.ylim(-0.1, 1.1);
Explanation: Multi-Class example
End of explanation
from sklearn.metrics import auc
auc(fpr0, tpr0), auc(fpr1, tpr1), auc(fpr2, tpr2)
Explanation: AUC (Area Under the Curve)
AUC is the area under the ROC curve. The larger Recall is relative to Fall-Out, the closer the AUC is to 1 and the better the model.
End of explanation
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_features=1, n_redundant=0, n_informative=1, n_clusters_per_class=1, weights=[0.9, 0.1], random_state=4)
model = LogisticRegression().fit(X, y)
from sklearn.metrics import precision_recall_curve
pre, rec, thresholds = precision_recall_curve(y, model.decision_function(X))
plt.plot(rec, pre)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curve')
plt.show()
y.sum(), len(y)
Explanation: Precision-Recall 커브
The Precision-Recall curve is computed in the same way as the ROC curve: it looks at how Precision and Recall change as the decision threshold changes.
As the decision threshold increases, Recall always increases (or stays the same), but Precision may decrease.
End of explanation |
4,269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Buildings / Addresses
I want to understand how buildings and addresses are represented in OSM data.
Required reading
Step1: Nodes tagged as buildings / with addresses
For the Isle of Wight dataset, there are rather few nodes tagged as buildings. There are quite a few more nodes which have address information, however.
Step2: Same for ways
We expected buildings to mostly have a polygonal outline, and so be "ways".
The situation here is reversed
Step3: Finally for relations
Rather few. These tend to be special cases
Step4: Process to a pandas dataframe
In a separate notebook, we explore how to use geopandas to also store and manipulate geometry...
Step5: Do the same for USA / Chicago data
Let's look at a different country, namely the state of Illinois in the USA.
Download the Illinois data, in osm.bz2 format, from http
Step6: Nodes
Step7: Ways
Step8: Relations
Step9: Process into a useful form
Ultimately, I want
Step10: Explore inconsistencies
Many of the "housenumber" values are not integers.
From the 5 examples below, "2211 W" should be "2211"; "145 N." should be "145" but the street should be "North Curtis Avenue". For the remaining 3, I do not know enough about North American addresses to be certain.
All these examples come from areas of data where we don't have house-level building data. So in many ways, the data is not useful for my application anyway.
Step11: The addresses with "#" in appear to be neighbours in a "trailer park", judging from satelite images. This addressing scheme follows how apartments/flats are addressed.
The address "14543-59" is a church, i.e. a large building which probably occupied more than one notional address.
Step12: So let's filter out things which match. (Stand-back, I know regular expressions).
- digits, space, #, digits "\d+\s+#\d+"
- digits, one of NSEW, digits "\d+[NSEW]\d+"
- digits, maybe a space, a single letter. maybe a full stop "\d+\s[a-zA-Z]\."
- digits, maybe a space, "East"/"West" etc.
- things like "123 1/2"
Step13: This leaves a lot left over.
What interests me is whether there are any "way"s which "interpolate" addresses, see http
Step14: Finally, save the data out. We can use any supported fiona driver. Here I use GeoJSON, as it's human readable, and no less space efficient than a Shapefile. A Shapefile can be imported into QGis etc., of course. | Python Code:
import osmdigest.digest as digest
Explanation: Buildings / Addresses
I want to understand how buildings and addresses are represented in OSM data.
Required reading: http://wiki.openstreetmap.org/wiki/Addresses
End of explanation
import os
#filename = os.path.join("//media", "disk", "OSM_Data", "isle-of-wight-latest.osm.xz")
filename = os.path.join("..", "..", "..", "Data", "isle-of-wight-latest.osm.xz")
building_node_ids = []
addr_node_ids = []
for x in digest.parse(filename):
if isinstance(x, digest.Node):
if "building" in x.tags:
building_node_ids.append(x)
if any(key.startswith("addr:") for key in x.tags):
addr_node_ids.append(x)
len(building_node_ids), building_node_ids[:5]
len(addr_node_ids), addr_node_ids[:5]
Explanation: Nodes tagged as buildings / with addresses
For the Isle of Wight dataset, there are rather few nodes tagged as buildings. There are quite a few more nodes which have address information, however.
End of explanation
building_way_ids = []
addr_way_ids = []
for x in digest.parse(filename):
if isinstance(x, digest.Way):
if "building" in x.tags:
building_way_ids.append(x)
if any(key.startswith("addr:") for key in x.tags):
addr_way_ids.append(x)
len(building_way_ids), building_way_ids[:5]
len(addr_way_ids), addr_way_ids[:5]
Explanation: Same for ways
We expected buildings to mostly have a polygonal outline, and so be "ways".
The situation here is reversed: lots of buildings, and fewer addresses. From eyeballing a few ways which have address information, but which are not buildings, we find that the "way" gives the total outline of, say, a school, which may contain a number of buildings, playing fields etc.
End of explanation
building_rel_ids = []
addr_rel_ids = []
for x in digest.parse(filename):
if isinstance(x, digest.Relation):
if "building" in x.tags:
building_rel_ids.append(x)
if any(key.startswith("addr:") for key in x.tags):
addr_rel_ids.append(x)
len(building_rel_ids), building_rel_ids[:5]
len(addr_rel_ids), addr_rel_ids[:5]
Explanation: Finally for relations
Rather few. These tend to be special cases: usually when the building is non-convex (say a stately home, with an inner courtyard) so a relation is required to specify the "inner" and "outer" ways.
End of explanation
import numpy as np
import pandas as pd
gen = digest.parse(filename)
print(next(gen))
print(next(gen))
possible_address_tags = set()
for x in gen:
for key in x.tags:
if key.startswith("addr:"):
possible_address_tags.add(key)
possible_address_tags
gen = digest.parse(filename)
osm = next(gen)
bounds = next(gen)
address_data = { key : [] for key in possible_address_tags }
address_data["osm_id"] = []
for x in gen:
addr = {key : x.tags[key] for key in x.tags if key.startswith("addr:")}
if len(addr) > 0:
address_data["osm_id"].append(x.name+"/"+str(x.osm_id))
for key in possible_address_tags:
if key in addr:
address_data[key].append(addr[key])
else:
address_data[key].append(np.nan)
data = pd.DataFrame(address_data)
data = data.set_index("osm_id")
data[:5]
Explanation: Process to a pandas dataframe
In a separate notebook, we explore how to use geopandas to also store and manipulate geometry...
End of explanation
import osmdigest.sqlite as sq
import os
filename = os.path.join("//tmp", "aaa", "illinois-latest.db")
#filename = os.path.join("..", "..", "..", "Data", "illinois-latest.db")
db = sq.OSM_SQLite(filename)
Explanation: Do the same for USA / Chicago data
Let's look at a different country, namely the state of Illinois in the USA.
Download the Illinois data, in osm.bz2 format, from http://download.geofabrik.de/north-america/us/illinois.html
Run the script convert_to_db.py on this file, generating "illinois-latest.db"
We'll then load the data from the SQLite database, as this is far more memory efficient. We'll find that we still end up with massive python data structures...
The overall pattern is the same, even though the dataset is much larger.
- Most buildings are "ways" and most addresses are "ways".
- There are a few point addresses stored as nodes, and complicated structures (like an airforce base) are stored as relations.
End of explanation
def iterate_over_tags(iterator):
buildings, addresses = [], []
for element in iterator:
if any(key.startswith("building") for key in element.tags):
buildings.append(element)
if any(key.startswith("addr") for key in element.tags):
addresses.append(element)
return buildings, addresses
building_nodes, address_nodes = iterate_over_tags(db.nodes())
len(building_nodes), building_nodes[:5]
len(address_nodes), address_nodes[:5]
Explanation: Nodes
End of explanation
building_ways, address_ways = iterate_over_tags(db.ways())
len(building_ways), building_ways[:5]
len(address_ways), address_ways[:5]
Explanation: Ways
End of explanation
building_rels, address_rels = iterate_over_tags(db.relations())
len(building_rels), building_rels[:5]
len(address_rels), address_rels[:5]
Explanation: Relations
End of explanation
features = []
def make_feature(el, centroid):
return { "properties": {
"street": el.tags["addr:street"],
"housenumber": el.tags["addr:housenumber"],
"osm_id": "{}/{}".format(el.name, el.osm_id)
},
"geometry": { "type": "Point",
"coordinates": centroid } }
for el in db.search_node_tag_keys({"addr:street", "addr:housenumber"}):
features.append(make_feature(el, [el.longitude, el.latitude]))
for el in db.search_way_tag_keys({"addr:street", "addr:housenumber"}):
way = db.complete_way(el)
features.append(make_feature(el, way.centroid()))
for el in db.search_relation_tag_keys({"addr:street", "addr:housenumber"}):
rel = db.complete_relation(el)
features.append(make_feature(el, rel.centroid()))
import geopandas as gpd
frame = gpd.GeoDataFrame.from_features(features)
#frame = frame.set_geometry("centroid")
frame[:5]
Explanation: Process into a useful form
Ultimately, I want:
Elements which have a street name and a housenumber. These are addr:street and addr:housenumber.
A single coordinate to use. For a way we can take the centroid. For a relation, we have to perform a bit of exploring: it is tempting to hope there is always one way with role "outer". This doesn't work, so we settle on looking at each member and taking its centroid, working recursively if necessary. This will not be particularly meaningful in some situations, but should suffice for now.
We construct a GeoJSON-like data structure, and then import it into geoPandas.
End of explanation
unexpected_addresses = frame[~ frame.housenumber.map(lambda x : all(y>='0' and y<='9' for y in x))]
unexpected_addresses.head()
Explanation: Explore inconsistencies
Many of the "housenumber" values are not integers.
From the 5 examples below, "2211 W" should be "2211"; "145 N." should be "145" but the street should be "North Curtis Avenue". For the remaining 3, I do not know enough about North American addresses to be certain.
All these examples come from areas of data where we don't have house-level building data. So in many ways, the data is not useful for my application anyway.
End of explanation
unexpected_addresses[5:10]
Explanation: The addresses with "#" in appear to be neighbours in a "trailer park", judging from satelite images. This addressing scheme follows how apartments/flats are addressed.
The address "14543-59" is a church, i.e. a large building which probably occupied more than one notional address.
End of explanation
import re
one = re.compile("^\\d+\\s+#\\d+$")
assert one.match("4262 #12") is not None
assert one.match("4 62 #12") is None
two = re.compile("^\\d+[NSEW]\\d+$")
assert two.match("19N479") is not None
assert two.match("19NS479") is None
assert two.match("19 479") is None
three = re.compile("^\\d+\\s*[a-zA-Z]\\.*$")
assert three.match("152A") is not None
assert three.match("152 A") is not None
assert three.match("152 A.") is not None
assert three.match("152Ac") is None
four = re.compile("^\\d+\\s*1/2")
matches = {one, two, three, four}
left = unexpected_addresses[~ unexpected_addresses.housenumber.map(lambda x : any(m.match(x) is not None for m in matches))]
left.head()
Explanation: So let's filter out things which match. (Stand-back, I know regular expressions).
- digits, space, #, digits "\d+\s+#\d+"
- digits, one of NSEW, digits "\d+[NSEW]\d+"
- digits, maybe a space, a single letter. maybe a full stop "\d+\s[a-zA-Z]\."
- digits, maybe a space, "East"/"West" etc.
- things like "123 1/2"
End of explanation
for way in db.search_way_tag_keys({"addr:interpolation"}):
print(way)
Explanation: This leaves a lot left over.
What interests me is whether there are any "way"s which "interpolate" addresses, see http://wiki.openstreetmap.org/wiki/Addresses
The answer is: "no". Eyeballing these few in OSM doesn't show anything interesting.
End of explanation
import fiona
fiona.supported_drivers
frame.to_file("illinois_building.json", driver="GeoJSON")
Explanation: Finally, save the data out. We can use any supported fiona driver. Here I use GeoJSON, as it's human readable, and no less space efficient than a Shapefile. A Shapefile can be imported into QGis etc., of course.
End of explanation |
4,270 | Given the following text description, write Python code to implement the functionality described.
Description:
Count of subarrays which forms a permutation from given Array elements
Function returns the required count ; Store the indices of the elements present in A [ ] . ; Store the maximum and minimum index of the elements from 1 to i . ; Update maxi and mini , to store minimum and maximum index for permutation of elements from 1 to i + 1 ; If difference between maxi and mini is equal to i ; Increase count ; Return final count ; Driver Code
| Python Code:
def PermuteTheArray(A, n):
    # arr[v - 1] holds the index at which value v occurs in A
    arr = [0] * n
    for i in range(n):
        arr[A[i] - 1] = i
    mini = n
    maxi = 0
    count = 0
    for i in range(n):
        # mini/maxi track the positions occupied by the values 1..i+1
        mini = min(mini, arr[i])
        maxi = max(maxi, arr[i])
        # If those positions span exactly i+1 slots, that window of A is a
        # permutation of 1..i+1, so count it.
        if maxi - mini == i:
            count += 1
    return count
if __name__ == "__main__":
    A = [4, 5, 1, 3, 2, 6]
    print(PermuteTheArray(A, 6))
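# Quick brute-force cross-check (not part of the original source): for each
# length k, test whether some window of length k is a permutation of 1..k.
def brute_force(A, n):
    count = 0
    for k in range(1, n + 1):
        target = set(range(1, k + 1))
        if any(set(A[s:s + k]) == target for s in range(n - k + 1)):
            count += 1
    return count

assert brute_force([4, 5, 1, 3, 2, 6], 6) == PermuteTheArray([4, 5, 1, 3, 2, 6], 6)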
|
4,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 DeepMind Technologies Limited.
Step1: Environments
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: Stack Before Writing
The simplest approach is to simply stack the frames before writing them to Reverb.
If there is no overlap between trajectories or if the overlap never "breaks"
stacks then this approach might be the most efficient as it reduces the post
processing after trajectories have been sampled.
Step5: Store flat and stack when sampled
If there is overlap between trajectories then it is probably more efficient to
store flat sequences of data and create the frame stacking after the data has
been received. Consider for example a trajectory with the following data | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install tf
!pip install dm-tree
!pip install dm-reverb
Explanation: Copyright 2019 DeepMind Technologies Limited.
End of explanation
from collections import deque
import numpy as np
import reverb
import tensorflow as tf
FRAME_SHAPE = (16, 16) # [width, height]
FRAME_DTYPE = np.uint8
def frame_generator(max_num_frames: int = 1000):
for i in range(1, max_num_frames + 1):
yield np.ones(FRAME_SHAPE, dtype=FRAME_DTYPE) * i
Explanation: Environments
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/deepmind/reverb/blob/master/examples/frame_stacking.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/deepmind/reverb/blob/master/examples/frame_stacking.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
Frame Stacking using Reverb
This contains minimal examples of how frame stacking can be implemented using Reverb.
Setup
End of explanation
def store_stacked(stack_size: int, stride: int, sequence_length: int):
Simple example where frames are stacked before being sent to Reverb.
If `stride` < `stack_size` then stacks will "overlap".
If `stride` == `stack_size` then stacks will be adjacent.
If `stride` > `stack_size` then frames between stacks will be dropped.
Args:
stack_size: The number of frames to stack.
stride: The number of frames between the creation of consecutive stacks.
sequence_length: The number of stacks in each sampleable item.
server = reverb.Server([reverb.Table.queue('stacked_frames', 100)])
client = server.localhost_client()
with client.trajectory_writer(sequence_length) as writer:
# Create a circular buffer of the `stack_size` most recent frames.
buffer = deque(maxlen=stack_size)
for i, frame in enumerate(frame_generator(5 * stride * sequence_length)):
buffer.append(frame)
# We can't insert anything before the first stack is full.
if len(buffer) < stack_size or (i + 1) % stride != 0:
continue
# Stack the frames in buffer and insert the data into Reverb. The shape of
# the stack is [stack_size, width, height].
writer.append(np.stack(buffer))
# If `sequence_length` full stacks have been written then insert an item
# that can be sampled.
stacks_written = (i + 1) // stride - (stack_size - 1) // stride
if stacks_written >= sequence_length:
writer.create_item(table='stacked_frames',
trajectory=writer.history[-sequence_length:],
priority=1.0)
# Create a dataset that samples sequences of stacked frames.
dataset = reverb.TrajectoryDataset(
server_address=client.server_address,
table='stacked_frames',
max_in_flight_samples_per_worker=2,
dtypes=tf.as_dtype(FRAME_DTYPE),
shapes=tf.TensorShape((sequence_length, stack_size) + FRAME_SHAPE),
max_samples=2)
# Print the result.
for sequence in dataset.take(2):
print(sequence.data)
# Create trajectories with 4 frames stacked together, no frames shared
# between stacks and create sequences of 3 stacks. For example, the first 16
# steps will result in the following 2 samplable items:
#
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]
#
# -> [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
# -> [[5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
#
store_stacked(stack_size=4, stride=4, sequence_length=3)
# Create trajectories with 4 frames stacked together, 2 frames shared between
# stacks and create sequences of 3 stacks. Note that since we stack the frames
# BEFORE sending it to Reverb, most stacks will be stored twice resulting in
# double the storage (before compression is applied).
#
# For example, the first 12 steps will result in the following 3 samplable
# items:
#
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
#
# -> [[1, 2, 3, 4], [3, 4, 5, 6], [5, 6, 7, 8]]
# -> [[3, 4, 5, 6], [5, 6, 7, 8], [7, 8, 9, 10]]
# -> [[5, 6, 7, 8], [7, 8, 9, 10], [9, 10, 11, 12]]
#
store_stacked(stack_size=4, stride=2, sequence_length=3)
# Create trajectories with 2 frames stacked together, a stride of 3 and create
# sequences of 3 stacks. Note that this means that some frames will be dropped.
#
# For example, the first 12 steps will result in the following 3 samplable
# items:
#
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
#
# -> [[1, 2], [4, 5], [6, 7]]
# -> [[4, 5], [6, 7], [8, 9]]
# -> [[6, 7], [8, 9], [11, 12]]
#
store_stacked(stack_size=2, stride=3, sequence_length=3)
Explanation: Stack Before Writing
The simplest approach is to simply stack the frames before writing them to Reverb.
If there is no overlap between trajectories or if the overlap never "breaks"
stacks then this approach might be the most efficient as it reduces the post
processing after trajectories have been sampled.
End of explanation
def store_flat(stack_size: int, sequence_length: int):
Simple example where frames are sent to Reverb and stacked after being sampled.
Args:
stack_size: The number of frames to stack.
sequence_length: The number of stacks in each sampleable item.
server = reverb.Server([reverb.Table.queue('flat_frames', 100)])
client = server.localhost_client()
# Insert flat sequences that can be stacked into the desired shape after
# sampling.
flat_sequence_length = sequence_length + stack_size - 1
with client.trajectory_writer(flat_sequence_length) as writer:
for i, frame in enumerate(frame_generator(flat_sequence_length * 5)):
writer.append(frame)
if i + 1 >= flat_sequence_length:
writer.create_item(table='flat_frames',
trajectory=writer.history[-flat_sequence_length:],
priority=1.0)
# Create a dataset that samples sequences of flat frames.
flat_dataset = reverb.TrajectoryDataset(
server_address=client.server_address,
table='flat_frames',
max_in_flight_samples_per_worker=2,
dtypes=tf.as_dtype(FRAME_DTYPE),
shapes=tf.TensorShape((flat_sequence_length,) + FRAME_SHAPE),
max_samples=2)
# Create a transformation that stacks the frames.
def _stack(sample):
stacks = []
for i in range(sequence_length):
stacks.append(sample.data[i:i+stack_size])
return reverb.ReplaySample(
info=sample.info,
data=tf.stack(stacks))
stacked_dataset = flat_dataset.map(_stack)
# Print the result.
for sequence in stacked_dataset:
print(sequence.data)
# Create trajectories of 3 stacks each with 2 frames stacked together. The data
# is stored as a flat sequence and then stacked when sampled.
#
# For example, the first 6 steps will result in the following 3 sequences:
#
# [1, 2, 3, 4, 5, 6]
#
# -> [1, 2, 3, 4] -> [[1, 2], [2, 3], [3, 4]]
# -> [2, 3, 4, 5] -> [[2, 3], [3, 4], [4, 5]]
# -> [3, 4, 5, 6] -> [[3, 4], [4, 5], [5, 6]]
#
store_flat(stack_size=2, sequence_length=3)
Explanation: Store flat and stack when sampled
If there is overlap between trajectories then it is probably more efficient to
store flat sequences of data and create the frame stacking after the data has
been received. Consider for example a trajectory with the following data:
[[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6]]
If each frame has size B then the total size of the trajectory is 4 * 3 * B = 12 * B. This
cost has to be paid both in terms of memory and in network traffic every time the data is transmitted.
It is easy to see that even though the size is 12 * B it only holds 6 * B distinct
data. We could therefore send [1, 2, 3, 4, 5, 6] and with some processing on
the receiver side achieve the same result.
For the general case, assuming maximum overlap, the length of the flat sequence $L_{flat}$ needed to construct a stacked one $L_{stacked}$ with $H$ frames in each stack is:
$L_{flat} = L_{stacked} + H - 1$
For the example this becomes 4 + 3 - 1 = 6.
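Purely as an illustration of that count (not from the original notebook), a NumPy-only sketch shows that a flat sequence of length sequence_length + stack_size - 1 reconstructs exactly sequence_length overlapping stacks:
import numpy as np

stack_size, sequence_length = 3, 4
flat = np.arange(1, sequence_length + stack_size - 1 + 1)  # [1, 2, ..., 6]
stacks = np.stack([flat[i:i + stack_size] for i in range(sequence_length)])
print(stacks)  # [[1 2 3], [2 3 4], [3 4 5], [4 5 6]]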
End of explanation |
4,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iris Dataset
From Wikipedia
Step1: read_html
Wikipedia has the same dataset as a html table at https
Step2: Plotting
Let's use pandas to plot the sepal_length vs the petal_length.
Step3: It would be nice to encode by color and plot all combinations of values, but this isn't easy with matplotlib. Instead, let's use seaborn (conda install seaborn).
Step4: Exercise
Visit the https
Step5: Classification
Let's say that we are an amateur botanist and we'd like to determine the species of Iris in our front yard, but that all we have available to us to make that classification is this dataset and a ruler.
Approach
This is a classic machine learning / classification problem where we want to use a collection of "labeled" data to help us sort through new data that we receive. In this case, the new data is a set of four measurements for a flower in our yard.
Because we have labeled data, this is a "supervised learning" problem. If we did not know which species each point in the dataset belonged to, we could still use machine learning for "unsupervised learning".
Let's reimport the data using scikit learn.
Step6: Try Different Classifiers
Step7: Which Classifier is Best?
First, let's predict the species from the measurements. Because the classifier is clearly not perfect, we expect some mis-classifications.
Step8: Inaccuracy Score
Because we only have two classes, we can find the accuracy by taking the mean of the magnitude of the difference. This value is the percentage of the time we are inaccurate. A lower score is better.
Step9: Exercise
In the above code we excluded species==0 and we only classified based on the sepal dimensions. Complete the following
Step10: Clustering
Instead of using the labels, we could ignore the labels and do blind clustering on the dataset. Let's try that with sklearn.
Step11: Visualize Clusters
Now let's visualize how we did. We'd hope that the cluster color would be as well-separated as the original data labels.
Step12: Accuracy
The plot looks good, but it isn't clear how good the labels are until we compare them with the true labels.
Step13: Exercise
Visit http | Python Code:
import pandas as pd
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
df = pd.read_csv(url,names=['sepal_length',
'sepal_width',
'petal_length',
'petal_width',
'species'])
df.head()
Explanation: Iris Dataset
From Wikipedia:
The Iris flower data set or Fisher's Iris data set is a multivariate data set introduced by the British statistician and biologist Ronald Fisher in his 1936 paper The use of multiple measurements in taxonomic problems as an example of linear discriminant analysis. It is sometimes called Anderson's Iris data set because Edgar Anderson collected the data to quantify the morphologic variation of Iris flowers of three related species. Two of the three species were collected in the Gaspé Peninsula "all from the same pasture, and picked on the same day and measured at the same time by the same person with the same apparatus".
Pandas
Pandas is a library modeled after the R dataframe API that enables the quick exploration and processing of heterogenous data.
One of the many great things about pandas is that is has many functions for grabbing data--including functions for grabbing data from the internet. In the cell below, we grabbed data from the https://archive.ics.uci.edu/ml/datasets/Iris, which has the data as a csv (without headers).
End of explanation
df_w = pd.read_html('https://en.wikipedia.org/wiki/Iris_flower_data_set',header=0)[0]
df_w.head()
Explanation: read_html
Wikipedia has the same dataset as a html table at https://en.wikipedia.org/wiki/Iris_flower_data_set. Let's use http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html to grab the data directly from Wikipedia.
You might have to run the following command first:
conda install html5lib BeautifulSoup4 lxml
End of explanation
import pylab as plt
%matplotlib inline
plt.scatter(df.sepal_length, df.petal_length)
Explanation: Plotting
Let's use pandas to plot the sepal_length vs the petal_length.
End of explanation
import seaborn as sns
sns.pairplot(df,vars=['sepal_length',
'sepal_width',
'petal_length',
'petal_width'],hue='species')
sns.swarmplot(x="species", y="petal_length", data=df)
from pandas.tools.plotting import radviz
radviz(df, "species",)
Explanation: It would be nice to encode by color and plot all combinations of values, but this isn't easy with matplotlib. Instead, let's use seaborn (conda install seaborn).
End of explanation
## Plot 1 Here
sns.violinplot(x="species", y="petal_length", data=df)
## Plot 2 Here
sns.interactplot("petal_length", 'petal_width', "sepal_width", data=df)
Explanation: Exercise
Visit the https://seaborn.pydata.org/ and make two new plots with this Iris dataset using seaborn functions we haven't used above.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, svm
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
# keep only two features and keep only two species
X = X[y != 0, :2]
y = y[y != 0]
X,y, X.shape
Explanation: Classification
Let's say that we are an amateur botanist and we'd like to determine the species of Iris in our front yard, but that all we have available to us to make that classification is this dataset and a ruler.
Approach
This is a classic machine learning / classification problem where we want to use a collection of "labeled" data to help us sort through new data that we receive. In this case, the new data is a set of four measurements for a flower in our yard.
Because we have labeled data, this is a "supervised learning" problem. If we did not know which species each point in the dataset belonged to, we could still use machine learning for "unsupervised learning".
Let's reimport the data using scikit learn.
End of explanation
# fit the model
for fig_num, kernel in enumerate(('linear', 'rbf', 'poly')):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X, y)
plt.figure(fig_num)
plt.clf()
plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10)
plt.axis('tight')
x_min = X[:, 0].min()
x_max = X[:, 0].max()
y_min = X[:, 1].min()
y_max = X[:, 1].max()
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
plt.title(kernel)
plt.show()
Explanation: Try Different Classifiers
End of explanation
y_pred = clf.predict(X)
print(y,y_pred)
Explanation: Which Classifier is Best?
First, let's predict the species from the measurements. Because the classifier is clearly not perfect, we expect some mis-classifications.
End of explanation
for kernel in ('linear', 'rbf', 'poly'):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X, y)
y_pred = clf.predict(X)
print(kernel,np.mean(np.abs(y-y_pred))*100,'%')
Explanation: Inaccuracy Score
Because we only have two classes, we can find the accuracy by taking the mean of the magnitude of the difference. This value is the percentage of the time we are inaccurate. A lower score is better.
End of explanation
## species==1
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
# keep only two features and keep only two species
X = X[y != 1, :2] # changed here
y = y[y != 1] # changed here
# fit the model
for fig_num, kernel in enumerate(('linear', 'rbf', 'poly')):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X, y)
plt.figure(fig_num)
plt.clf()
plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10)
plt.axis('tight')
x_min = X[:, 0].min()
x_max = X[:, 0].max()
y_min = X[:, 1].min()
y_max = X[:, 1].max()
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
plt.title(kernel)
y_pred = clf.predict(X)
print(kernel,np.mean(np.abs(y-y_pred))*100,'%')
plt.show()
## petals
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
# keep only two features and keep only two species
X = X[y != 0, 2:] # changed here
y = y[y != 0]
# fit the model
for fig_num, kernel in enumerate(('linear', 'rbf', 'poly')):
clf = svm.SVC(kernel=kernel, gamma=10)
clf.fit(X, y)
plt.figure(fig_num)
plt.clf()
plt.scatter(X[:, 0], X[:, 1], c=y, zorder=10)
plt.axis('tight')
x_min = X[:, 0].min()
x_max = X[:, 0].max()
y_min = X[:, 1].min()
y_max = X[:, 1].max()
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.decision_function(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.pcolormesh(XX, YY, Z > 0, cmap=plt.cm.Paired)
plt.contour(XX, YY, Z, colors=['k', 'k', 'k'], linestyles=['--', '-', '--'],
levels=[-.5, 0, .5])
plt.title(kernel)
y_pred = clf.predict(X)
print(kernel,np.mean(np.abs(y-y_pred))*100,'%')
plt.show()
Explanation: Exercise
In the above code we excluded species==0 and we only classified based on the sepal dimensions. Complete the following:
Copy the code cells from above and exclude species==1
Copy the code cells from above and use the petal dimensions for classification
For each case, use the inaccuracy score to see how good the classification works.
End of explanation
from sklearn.cluster import KMeans, DBSCAN
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
estimators = {'k_means_iris_3': KMeans(n_clusters=3),
'k_means_iris_8': KMeans(n_clusters=8),
'dbscan_iris_1': DBSCAN(eps=1)}
for name, est in estimators.items():
est.fit(X)
labels = est.labels_
df[name] = labels
Explanation: Clustering
Instead of using the labels, we could ignore the labels and do blind clustering on the dataset. Let's try that with sklearn.
End of explanation
sns.pairplot(df,vars=['sepal_length',
'sepal_width',
'petal_length',
'petal_width'],hue='dbscan_iris_1')
Explanation: Visualize Clusters
Now let's visualize how we did. We'd hope that the cluster color would be as well-separated as the original data labels.
End of explanation
from sklearn.metrics import homogeneity_score
for name, est in estimators.items():
print('completeness', name, homogeneity_score(df[name],df['species']))
print('homogeneity', name, homogeneity_score(df['species'],df[name]))
Explanation: Accuracy
The plot looks good, but it isn't clear how good the labels are until we compare them with the true labels.
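As an additional check (not in the original notebook), a symmetric, chance-adjusted comparison such as the adjusted Rand index can also be used to compare cluster labels against the true species:
# Added check: adjusted Rand index between cluster labels and species labels.
from sklearn.metrics import adjusted_rand_score

for name in estimators:
    print('adjusted Rand index', name, adjusted_rand_score(df['species'], df[name]))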
End of explanation
## Algo One
from sklearn.cluster import AgglomerativeClustering, Birch
iris = datasets.load_iris()
X = iris.data
y = iris.target.astype(float)
estimators = {'k_means_iris_3': KMeans(n_clusters=3),
'k_means_iris_8': KMeans(n_clusters=8),
'dbscan_iris_1': DBSCAN(eps=1),
'AgglomerativeClustering': AgglomerativeClustering(n_clusters=3),
'Birch': Birch()}
for name, est in estimators.items():
est.fit(X)
labels = est.labels_
df[name] = labels
name='Birch'
sns.pairplot(df,vars=['sepal_length',
'sepal_width',
'petal_length',
'petal_width'],hue=name)
print('completeness', name, homogeneity_score(df[name],df['species']))
print('homogeneity', name, homogeneity_score(df['species'],df[name]))
## Algo Two
name='AgglomerativeClustering'
sns.pairplot(df,vars=['sepal_length',
'sepal_width',
'petal_length',
'petal_width'],hue=name)
print('completeness', name, homogeneity_score(df[name],df['species']))
print('homogeneity', name, homogeneity_score(df['species'],df[name]))
Explanation: Exercise
Visit http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_comparison.html and add two more clustering algorithms of your choice to the comparisons above.
End of explanation |
4,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Stats (2nd edition) exercises (thinkstats2.com, think-stat.xwmooc.org)<br>
Allen Downey / 이광춘 (xwMOOC)
Step1: Exercise 5.1
In the BRFSS dataset (see Section 5.4), the distribution of heights is roughly normal with parameters µ = 178 cm and σ = 7.7 cm for men, and µ = 163 cm and σ = 7.3 cm for women.
To join Blue Man Group, a man has to be between 5'10" and 6'1". (http
Step2: For example, <tt>scipy.stats.norm</tt> represents a normal distribution.
Step3: A "frozen random variable" can compute its mean and standard deviation.
Step4: It can also evaluate the CDF. How many people are more than one standard deviation below the mean? About 16%
Step5: How many people are between 5'10" and 6'1"?
Step6: Exercise 5.2
To get a feel for the Pareto distribution, let's see how different the world would be if the distribution of human height were Pareto. With parameters $x_m = 1$ m and $α = 1.7$, we get a distribution with a reasonable minimum of 1 m and a median of 1.5 m.
Plot this distribution. What is the mean human height in Pareto world? What fraction of the population is shorter than the mean? If there are 7 billion people in Pareto world, how many do we expect to be taller than 1 km? How tall do we expect the tallest person to be?
<tt>scipy.stats.pareto</tt> represents a Pareto distribution. In Pareto world, the distribution of human heights has parameters $x_m = 1$ m and $α = 1.7$, so the shortest person is 100 cm and the median is 150 cm.
Step7: What is the mean height in Pareto world?
Step8: What fraction of people are shorter than the mean?
Step9: Out of 7 billion people, how many do we expect to be taller than 1 km? Use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
Step10: How tall do we expect the tallest person to be? Hint
Step11: Exercise 5.3
The Weibull distribution is a generalization of the exponential distribution that comes up in failure analysis (http
Step12: The thinkplot.Cdf method provides a transform that makes a Weibull CDF look like a straight line.
Step13: Draw a random selection from the cdf.
Step14: Draw a random sample from the cdf.
Step15: Draw a random sample from the cdf, compute the percentile rank of each value, and plot the distribution of the percentile ranks.
Step16: Generate 1000 random numbers using random.random() and plot the PMF of the sample.
Step17: Assuming the PMF does not work well, try plotting the CDF instead.
Step18: Exercise 5.4
For small values of n, we don't expect an empirical distribution to fit an analytic distribution exactly. One way to evaluate the quality of fit is to generate a sample from an analytic distribution and see how well it matches the data.
For example, in Section 5.1 we plotted the distribution of time between births and saw that it is approximately exponential. But the distribution is based on only 44 data points. To see whether the data might have come from an exponential distribution, generate 44 values from an exponential distribution with the same mean as the data, about 33 minutes between births.
Plot the distribution of the random sample and compare it to the actual distribution. Use random.expovariate to generate the values.
Step19: Exercise 5.5
The code in mystery.py generates random data files. | Python Code:
from __future__ import print_function, division
import thinkstats2
import thinkplot
%matplotlib inline
Explanation: Think Stats (2nd edition) exercises (thinkstats2.com, think-stat.xwmooc.org)<br>
Allen Downey / 이광춘 (xwMOOC)
End of explanation
import scipy.stats
Explanation: Exercise 5.1
In the BRFSS dataset (see Section 5.4), the distribution of heights is roughly normal with parameters µ = 178 cm and σ = 7.7 cm for men, and µ = 163 cm and σ = 7.3 cm for women.
To join Blue Man Group, a man has to be between 5'10" and 6'1" (see http://bluemancasting.com). What percentage of the US male population is in this range? Hint: use scipy.stats.norm.cdf.
The <tt>scipy.stats</tt> module contains objects that represent analytic distributions.
End of explanation
mu = 178
sigma = 7.7
dist = scipy.stats.norm(loc=mu, scale=sigma)
type(dist)
Explanation: For example, <tt>scipy.stats.norm</tt> represents a normal distribution.
End of explanation
dist.mean(), dist.std()
Explanation: "고정된 확률변수(frozen random variable)"는 평균과 표준편차를 계산할 수 있다.
End of explanation
dist.cdf(mu-sigma)
Explanation: It can also evaluate the CDF. How many people are more than one standard deviation below the mean? About 16%
End of explanation
low = dist.cdf(177.8)
high = dist.cdf(185.4)
print("177.8 - 185.4 : ", dist.cdf(185.4) - dist.cdf(177.8))
# 5'10'' (177.8cm), 6'1'' (185.4cm)
Explanation: 5'10"과 6'1" 사이 얼마나 많은 사람이 분포하고 있는가?
End of explanation
alpha = 1.7
xmin = 1
dist = scipy.stats.pareto(b=alpha, scale=xmin)
dist.median()
xs, ps = thinkstats2.RenderParetoCdf(xmin, alpha, 0, 10.0, n=100)
thinkplot.Plot(xs, ps, label=r'$\alpha=%g$' % alpha)
thinkplot.Config(xlabel='height (m)', ylabel='CDF')
Explanation: Exercise 5.2
To get a feel for the Pareto distribution, let's see how different the world would be if the distribution of human height were Pareto. With parameters $x_m = 1$ m and $α = 1.7$, we get a distribution with a reasonable minimum of 1 m and a median of 1.5 m.
Plot this distribution. What is the mean human height in Pareto world? What fraction of the population is shorter than the mean? If there are 7 billion people in Pareto world, how many do we expect to be taller than 1 km? How tall do we expect the tallest person to be?
<tt>scipy.stats.pareto</tt> represents a Pareto distribution. In Pareto world, the distribution of human heights has parameters $x_m = 1$ m and $α = 1.7$, so the shortest person is 100 cm and the median is 150 cm.
End of explanation
dist.mean()
Explanation: What is the mean height in Pareto world?
End of explanation
dist.cdf(dist.mean())
Explanation: What fraction of people are shorter than the mean?
End of explanation
(1- dist.cdf(1000)) * 7e9 # 7 billion = 7e9
dist.sf(1000) * 7e9
Explanation: Out of 7 billion people, how many do we expect to be taller than 1 km? Use <tt>dist.cdf</tt> or <tt>dist.sf</tt>.
End of explanation
dist.sf(600000) * 7e9
# sf is the survival function; it is more accurate than 1-cdf. (http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pareto.html#scipy.stats.pareto)
Explanation: How tall do we expect the tallest person to be? Hint: find the height at which we expect only one person.
End of explanation
import random
import thinkstats2
import thinkplot
Explanation: Exercise 5.3
The Weibull distribution is a generalization of the exponential distribution that comes up in failure analysis (see http://wikipedia.org/wiki/Weibull_distribution). Its CDF is:
$CDF(x) = 1 - \exp(-(x / \lambda)^k)$
Generate a sample from a Weibull distribution and plot it using a transform that makes a Weibull distribution look like a straight line.
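Why the transform straightens the curve (added note, not in the original): taking the complementary CDF and two logarithms gives
$\log(-\log(1 - CDF(x))) = k\,(\log x - \log \lambda)$,
so plotting $\log(-\log(1 - CDF))$ against $\log x$ yields a straight line with slope $k$.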
End of explanation
sample = [random.weibullvariate(2,1) for _ in range(1000)]
cdf = thinkstats2.Cdf(sample)
thinkplot.Cdf(cdf, transform='weibull')
thinkplot.Show(legend=False)
Explanation: The thinkplot.Cdf method provides a transform that makes a Weibull CDF look like a straight line.
End of explanation
cdf.Random()
Explanation: Draw a random selection from the cdf.
End of explanation
cdf.Sample(10)
Explanation: Draw a random sample from the cdf.
End of explanation
prs = [cdf.PercentileRank(x) for x in cdf.Sample(1000)]
pr_cdf = thinkstats2.Cdf(prs)
thinkplot.Cdf(pr_cdf)
Explanation: Draw a random sample from the cdf, compute the percentile rank of each value, and plot the distribution of the percentile ranks.
End of explanation
values = [random.random() for _ in range(1000)]
pmf = thinkstats2.Pmf(values)
thinkplot.Pmf(pmf, linewidth=0.1)
Explanation: Generate 1000 random numbers using random.random() and plot the PMF of the sample.
End of explanation
cdf = thinkstats2.Cdf(values)
thinkplot.Cdf(cdf)
thinkplot.Show(legend=False)
Explanation: Assuming the PMF does not work well, try plotting the CDF instead.
End of explanation
import analytic
df = analytic.ReadBabyBoom()
diffs = df.minutes.diff()
cdf = thinkstats2.Cdf(diffs, label='actual')
n = len(diffs)
Iam = 44.0 / 24 / 60
sample = [random.expovariate(Iam) for _ in range(n)]
model = thinkstats2.Cdf(sample, label='model')
thinkplot.PrePlot(2)
thinkplot.Cdfs([cdf,model], complement=True)
thinkplot.Show(title='Time between births',
xlabel='minutes',
ylabel='CCDF',
yscale='log')
Explanation: Exercise 5.4
For small values of n, we don't expect an empirical distribution to fit an analytic distribution exactly. One way to evaluate the quality of fit is to generate a sample from an analytic distribution and see how well it matches the data.
For example, in Section 5.1 we plotted the distribution of time between births and saw that it is approximately exponential. But the distribution is based on only 44 data points. To see whether the data might have come from an exponential distribution, generate 44 values from an exponential distribution with the same mean as the data, about 33 minutes between births.
Plot the distribution of the random sample and compare it to the actual distribution. Use random.expovariate to generate the values.
End of explanation
from mystery import *
funcs = [uniform_sample, triangular_sample, expo_sample,
gauss_sample, lognorm_sample, pareto_sample,
weibull_sample, gumbel_sample]
for i in range(len(funcs)):
sample = funcs[i](1000)
filename = 'mystery%d.dat' % i
print(filename, funcs[i].__name__)
Explanation: Exercise 5.5
The code in mystery.py generates random data files.
End of explanation |
4,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project 1
Step1: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation
Step3: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset
Step4: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable
Step5: Answer
Step6: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint
Step7: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint
Step9: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint
Step10: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
Step11: Answer
Step12: Answer | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import visuals as vs # Supplementary code
from sklearn.cross_validation import ShuffleSplit
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print "Boston housing dataset has {} data points with {} variables each.".format(*data.shape)
Explanation: Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project 1: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.
The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:
- 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed.
- 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed.
- The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded.
- The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# TODO: Minimum price of the data
minimum_price = np.min(prices)
# TODO: Maximum price of the data
maximum_price = np.max(prices)
# TODO: Mean price of the data
mean_price = np.mean(prices)
# TODO: Median price of the data
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print "Statistics for Boston housing dataset:\n"
print "Minimum price: ${:,.2f}".format(minimum_price)
print "Maximum price: ${:,.2f}".format(maximum_price)
print "Mean price: ${:,.2f}".format(mean_price)
print "Median price ${:,.2f}".format(median_price)
print "Standard deviation of prices: ${:,.2f}".format(std_price)
Explanation: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation: Calculate Statistics
For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.
In the code cell below, you will need to implement the following:
- Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices.
- Store each calculation in their respective variable.
End of explanation
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
Calculates and returns the performance score between
true and predicted values based on the metric chosen.
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
Explanation: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):
- 'RM' is the average number of rooms among homes in the neighborhood.
- 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.
Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each.
Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7?
Answer:
* Increasing the value of the feature RM would lead to an increase in the value of MEDV, because houses with more rooms are worth more than houses with fewer rooms.
* The class of homeowners in the neighborhood usually indicates the overall desirability of that geographical location. An area with a larger "lower class" population usually indicates that houses in that area are less desirable. So an increase in the value of LSTAT would lead to a decrease in the value of MEDV.
* A high student-to-teacher ratio usually indicates poor quality of education. So an increase in PTRATIO would also lead to a decrease in the value of MEDV.
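These intuitions can be checked directly with the data loaded above (a small added sketch, not part of the original submission):
# Added sanity check: correlation of each feature with prices.
for col in ['RM', 'LSTAT', 'PTRATIO']:
    print col, features[col].corr(prices)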
Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 always fails to predict the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is no better than one that naively predicts the mean of the target variable.
For the performance_metric function in the code cell below, you will need to implement the following:
- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.
- Assign the performance score to the score variable.
End of explanation
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print "Model has a coefficient of determination, R^2, of {:.3f}.".format(score)
Explanation: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Would you consider this model to have successfully captured the variation of the target variable? Why or why not?
Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination.
End of explanation
# TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=0)
# Success
print "Training and testing split was successful."
Explanation: Answer:
It seems that the model has successfully captured the variation of the target variable: the predictions are quite close to the true values, which is confirmed by the R^2 score of 0.923 being close to 1.
Implementation: Shuffle and Split Data
Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.
For the code cell below, you will need to implement the following:
- Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.
- Split the data into 80% training and 20% testing.
- Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.
- Assign the train and testing splits to X_train, X_test, y_train, and y_test.
End of explanation
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
Explanation: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint: What could go wrong with not having a way to test your model?
Answer:
One of the reasons we need a testing subset is to check whether the model is overfitting the training set and failing to generalize to unseen data. In general, the testing subset is used to see how well the model makes predictions on unseen data.
Analyzing Model Performance
In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use these graphs to answer the following question.
End of explanation
vs.ModelComplexity(X_train, y_train)
Explanation: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint: Are the learning curves converging to particular scores?
Answer:
With reference to the graph in the upper right, that is the graph with max_depth = 3. It shows that the learning curve for the training set slowly declines as the number of points in the training set increases. On its slow decline, it appears to approach roughly an R^2 score of 0.80. A decline (or at best stagnation) is expected at any depth level, as we go from one point in the training set to more. Basically, one rule could easily classify a few points; it's when you have many points and variation that more rules are needed.
In this case, adding training points doesn't do much to change the R^2 score of the training set, especially once we have around 200 points. Changing focus to the testing set's learning curve, we see a ramp up to an R^2 score above 0.60 as we add just 50 points to the training set.
From there, the testing set learning curve slowly increases to right below an R^2 score of 0.80. The initial ramp up makes sense, because training on just a few points would fail to accurately capture variation in the data, so new data would most likely not be accurately predicted.
Overall, the graph with max_depth = 3 seems to be the best of these four graphs. The training and testing set learning curves seem to converge at about 0.80, which is the best R^2 score with convergence out of the graphs.
Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function.
Run the code cell below and use this graph to answer the following two questions.
End of explanation
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.cross_validation import ShuffleSplit
from sklearn.metrics import make_scorer
from sklearn.tree import DecisionTreeRegressor
from sklearn.grid_search import GridSearchCV
def fit_model(X, y):
Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y].
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor()
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth': range(1, 11)}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search object
grid = GridSearchCV(regressor, param_grid=params, scoring=scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint: How do you know when a model is suffering from high bias or high variance?
Answer:
From the above graph, we can see that both training score and validation score are quite low when the model is trained with a maximum depth of 1, that means it suffers from high bias.
On the other hand, when the model is trained with a maximum depth of 10, the training score is very high (close to 1.0) and the difference between the training score and the validation score is also quite significant. This suggests that the model suffers from high variance.
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer?
Answer:
max_depth of 4 results in a model that best generalizes to unseen data because the validation score is pretty high and the training score is high as well which means the model isn't overfitting the training data and predicts pretty well for unseen data.
Evaluating Model Performance
In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.
Question 7 - Grid Search
What is the grid search technique and how it can be applied to optimize a learning algorithm?
Answer:
When we optimize/learn an algorithm, we are minimizing/maximizing some objective function by solving for model parameters (i.e. the slope and intercept in a simple univariate linear regression). Sometimes, the methods we use to solve these learning algorithms have other "parameters" we need to choose/set. One example would be the learning rate in stochastic gradient descent. These "parameters" are called hyperparameters. They can be set manually or chosen through some external model mechanism.
One such mechanism is grid search. Grid search is a traditional way of performing hyperparameter optimization (sometimes called parameter sweep), which is simply an exhaustive search through a manually specified subset of the hyperparameter space of a learning algorithm. Grid search is guided by some performance metric, typically measured by cross-validation on the training set or evaluation on a held-out validation set.
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?
Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?
Answer:
In k-fold cross-validation we divide the training set into k equal-sized subsets, train the model on k-1 of them, and evaluate the error on the remaining subset, repeating this so that each subset is held out once. We then average the error over the k held-out subsets, which gives us an estimate of the model's error on all the training data. Then we run the same procedure for different sets of parameters and get estimated errors for each parameter set, after which we choose the parameter set that gives the lowest average error.
One advantage of this method is how the data is split: each data point gets to be in a test set exactly once, and gets to be in a training set k-1 times. Therefore, the variance of the resulting estimate is reduced as k is increased. One disadvantage is that the training algorithm has to be rerun k times, which makes it more costly.
Using k-fold cross-validation as the performance measure for grid search gives us more confidence (less variance) in our hyperparameter choice than a single train/validation split. It lets us average the error for a given hyperparameter over k folds, leading to a better (less variable) choice of hyperparameter. Therefore, we do not choose our parameter based on just one instance of the learned model.
Also, not using any form of cross-validation for grid search would leave us unaware of whether our hyperparameter would be any good on new data (most likely, it would not be very good).
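A minimal sketch of the procedure described above (not part of the project template), using scikit-learn's KFold on this project's training data:
# Added sketch: average R^2 over k folds for each candidate max_depth.
from sklearn.cross_validation import KFold

candidate_depths = range(1, 11)
kf = KFold(len(X_train), n_folds=5, shuffle=True, random_state=0)
for depth in candidate_depths:
    scores = []
    for train_idx, test_idx in kf:
        tree = DecisionTreeRegressor(max_depth=depth)
        tree.fit(X_train.iloc[train_idx], y_train.iloc[train_idx])
        scores.append(performance_metric(y_train.iloc[test_idx],
                                         tree.predict(X_train.iloc[test_idx])))
    print depth, np.mean(scores)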
Implementation: Fitting a Model
Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.
For the fit_model function in the code cell below, you will need to implement the following:
- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.
- Assign this object to the 'regressor' variable.
- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable.
- Use make_scorer from sklearn.metrics to create a scoring function object.
- Pass the performance_metric function as a parameter to the object.
- Assign this scoring function to the 'scoring_fnc' variable.
- Use GridSearchCV from sklearn.grid_search to create a grid search object.
- Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object.
- Assign the GridSearchCV object to the 'grid' variable.
End of explanation
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print "Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth'])
Explanation: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
End of explanation
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
Explanation: Answer:
The optimal model has a maximum depth of 4. My initial guess was right that a maximum depth of 4 is the optimal parameter for the model.
Question 10 - Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response.
Run the code block below to have your optimized model make predictions for each client's home.
End of explanation
vs.PredictTrials(features, prices, fit_model, client_data)
Explanation: Answer:
So the predicted selling prices would be: Client 1 sells for USD 391,183.33, Client 2 for USD 189,123.53 and Client 3 for USD 942,666.67. These prices look reasonable if we consider the number of rooms each house has and the other features discussed earlier. The student-to-teacher ratio is lower for higher-priced houses, and the neighborhood poverty level is lower for higher-priced houses. Also, looking at the descriptive statistics for the prices, the predicted price for client 3's home is pretty close to the maximum value, which can be explained by the high number of rooms, the low student-to-teacher ratio and the wealth of the neighbourhood. Client 1's predicted price is within 1 standard deviation of the mean price, which tells us it is pretty close to an average home, and looking at the features that makes sense. Client 2's predicted price is within 2 standard deviations below the mean, which means it is not an outlier but is closer to the low-price homes, which conforms with the feature values of the home.
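The distance of each prediction from the mean can be computed directly (a small added check, not in the original submission):
# Added check: how many standard deviations each predicted price is from the mean.
for i, price in enumerate(reg.predict(client_data)):
    print "Client {}: {:+.2f} standard deviations from the mean".format(
        i + 1, (price - mean_price) / std_price)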
Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.
End of explanation |
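For reference, a rough sketch of what a PredictTrials-style sensitivity check might look like, assuming `features`, `prices`, `fit_model`, and `client_data` are defined as in the cells above (the `vs.PredictTrials` helper does something similar internally; depending on the scikit-learn version, `train_test_split` may instead live in `sklearn.cross_validation`):

```python
# A rough sensitivity sketch, assuming `features`, `prices`, `fit_model`,
# and `client_data` are defined as in the cells above.
from sklearn.model_selection import train_test_split

client_1_prices = []
for trial in range(10):
    # Re-split the data with a different random seed on each trial
    X_train, X_test, y_train, y_test = train_test_split(
        features, prices, test_size=0.2, random_state=trial)
    reg = fit_model(X_train, y_train)
    price = reg.predict([client_data[0]])[0]
    client_1_prices.append(price)
    print "Trial {}: ${:,.2f}".format(trial + 1, price)

print "Range in predicted prices: ${:,.2f}".format(max(client_1_prices) - min(client_1_prices))
```

The spread of predictions across trials gives a feel for how much the model depends on the particular train/test split it was fitted on.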
4,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 映画レビューのテキスト分類
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 感情分析
このノートブックでは、映画レビューのテキストを使用して、それが肯定的であるか否定的であるかに分類するように感情分析モデルをトレーニングします。これは二項分類の例で、機械学習問題では重要な分類法として広く適用されます。
ここでは、Internet Movie Database から抽出した 50,000 件の映画レビューを含む、大規模なレビューデータセットを使います。レビューはトレーニング用とテスト用に 25,000 件ずつに分割されています。トレーニング用とテスト用のデータは<strong>均衡</strong>しています。言い換えると、それぞれが同数の肯定的及び否定的なレビューを含んでいます。
IMDB データセットをダウンロードして調べる
データセットをダウンロードして抽出してから、ディレクトリ構造を調べてみましょう。
Step3: aclImdb/train/pos および aclImdb/train/neg ディレクトリには多くのテキストファイルが含まれており、それぞれが 1 つの映画レビューです。それらの 1 つを見てみましょう。
Step4: データセットを読み込む
次に、データをディスクから読み込み、トレーニングに適した形式に準備します。これを行うには、便利な text_dataset_from_directory ユーティリティを使用します。このユーティリティは、次のようなディレクトリ構造を想定しています。
main_directory/
...class_a/
......a_text_1.txt
......a_text_2.txt
...class_b/
......b_text_1.txt
......b_text_2.txt
二項分類用のデータセットを準備するには、ディスクに class_a および class_bに対応する 2 つのフォルダが必要です。これらは、aclImdb/train/pos および aclImdb/train/neg にある肯定的および否定的な映画レビューになります。IMDB データセットには追加のフォルダーが含まれているため、このユーティリティを使用する前にそれらを削除します。
Step5: 次に、text_dataset_from_directory ユーティリティを使用して、ラベル付きの tf.data.Dataset を作成します。tf.data は、データを操作するための強力なツールのコレクションです。
機械学習実験を実行するときは、データセットをトレーニング、検証、および、テストの 3 つに分割することをお勧めします。
IMDB データセットはすでにトレーニング用とテスト用に分割されていますが、検証セットはありません。以下の validation_split 引数を使用して、トレーニングデータの 80
Step6: 上記のように、トレーニングフォルダには 25,000 の例があり、そのうち 80% (20,000) をトレーニングに使用します。以下に示すとおり、データセットを model.fit に直接渡すことで、モデルをトレーニングできます。tf.data を初めて使用する場合は、データセットを繰り返し処理して、次のようにいくつかの例を出力することもできます。
Step7: レビューには生のテキストが含まれていることに注意してください(句読点や <br/> などのような HTML タグが付いていることもあります)。次のセクションでは、これらの処理方法を示します。
ラベルは 0 または 1 です。これらのどれが肯定的および否定的な映画レビューに対応するかを確認するには、データセットの class_names プロパティを確認できます。
Step8: 次に、検証およびテスト用データセットを作成します。トレーニング用セットの残りの 5,000 件のレビューを検証に使用します。
注意
Step9: トレーニング用データを準備する
次に、便利な tf.keras.layers.TextVectorization レイヤーを使用して、データを標準化、トークン化、およびベクトル化します。
標準化とは、テキストを前処理することを指します。通常、句読点や HTML 要素を削除して、データセットを簡素化します。トークン化とは、文字列をトークンに分割することです (たとえば、空白で分割することにより、文を個々の単語に分割します)。ベクトル化とは、トークンを数値に変換して、ニューラルネットワークに入力できるようにすることです。これらのタスクはすべて、このレイヤーで実行できます。
前述のとおり、レビューには <br /> のようなさまざまな HTML タグが含まれています。これらのタグは、TextVectorization レイヤーのデフォルトの標準化機能によって削除されません (テキストを小文字に変換し、デフォルトで句読点を削除しますが、HTML は削除されません)。HTML を削除するカスタム標準化関数を作成します。
注意
Step10: 次に、TextVectorization レイヤーを作成します。このレイヤーを使用して、データを標準化、トークン化、およびベクトル化します。output_mode を int に設定して、トークンごとに一意の整数インデックスを作成します。
デフォルトの分割関数と、上記で定義したカスタム標準化関数を使用していることに注意してください。また、明示的な最大値 sequence_length など、モデルの定数をいくつか定義します。これにより、レイヤーはシーケンスを正確に sequence_length 値にパディングまたは切り捨てます。
Step11: 次に、adapt を呼び出して、前処理レイヤーの状態をデータセットに適合させます。これにより、モデルは文字列から整数へのインデックスを作成します。
注意
Step12: このレイヤーを使用して一部のデータを前処理した結果を確認する関数を作成します。
Step13: 上記のように、各トークンは整数に置き換えられています。レイヤーで .get_vocabulary() を呼び出すことにより、各整数が対応するトークン(文字列)を検索できます。
Step14: モデルをトレーニングする準備がほぼ整いました。最後の前処理ステップとして、トレーニング、検証、およびデータセットのテストのために前に作成した TextVectorization レイヤーを適用します。
Step15: パフォーマンスのためにデータセットを構成する
以下は、I/O がブロックされないようにするためにデータを読み込むときに使用する必要がある 2 つの重要な方法です。
.cache() はデータをディスクから読み込んだ後、データをメモリに保持します。これにより、モデルのトレーニング中にデータセットがボトルネックになることを回避できます。データセットが大きすぎてメモリに収まらない場合は、この方法を使用して、パフォーマンスの高いオンディスクキャッシュを作成することもできます。これは、多くの小さなファイルを読み込むより効率的です。
.prefetch() はトレーニング中にデータの前処理とモデルの実行をオーバーラップさせます。
以上の 2 つの方法とデータをディスクにキャッシュする方法についての詳細は、データパフォーマンスガイドを参照してください。
Step16: モデルを作成する
ニューラルネットワークを作成します。
Step17: これらのレイヤーは、分類器を構成するため一列に積み重ねられます。
最初のレイヤーは Embedding (埋め込み)レイヤーです。このレイヤーは、整数にエンコードされた語彙を受け取り、それぞれの単語インデックスに対応する埋め込みベクトルを検索します。埋め込みベクトルは、モデルのトレーニングの中で学習されます。ベクトル化のために、出力行列には次元が1つ追加されます。その結果、次元は、(batch, sequence, embedding) となります。埋め込みの詳細については、単語埋め込みチュートリアルを参照してください。
次は、GlobalAveragePooling1D(1次元のグローバル平均プーリング)レイヤーです。このレイヤーは、それぞれのサンプルについて、シーケンスの次元方向に平均値をもとめ、固定長のベクトルを返します。この結果、モデルは最も単純な形で、可変長の入力を扱うことができるようになります。
この固定長の出力ベクトルは、16 個の非表示ユニットを持つ全結合 (Dense) レイヤーに受け渡されます。
最後のレイヤーは、単一の出力ノードと密に接続されています。
損失関数とオプティマイザ
モデルをトレーニングするには、損失関数とオプティマイザが必要です。これは二項分類問題であり、モデルは確率(シグモイドアクティベーションを持つ単一ユニットレイヤー)を出力するため、losses.BinaryCrossentropy 損失関数を使用します。
損失関数の候補はこれだけではありません。例えば、mean_squared_error(平均二乗誤差)を使うこともできます。しかし、一般的には、確率を扱うにはbinary_crossentropyの方が適しています。binary_crossentropyは、確率分布の間の「距離」を測定する尺度です。今回の場合には、真の分布と予測値の分布の間の距離ということになります。
Step18: モデルをトレーニングする
dataset オブジェクトを fit メソッドに渡すことにより、モデルをトレーニングします。
Step19: モデルを評価する
モデルがどのように実行するか見てみましょう。2 つの値が返されます。損失(誤差、値が低いほど良)と正確度です。
Step20: この、かなり素朴なアプローチでも 86% 前後の正解度を達成しました。
経時的な正解度と損失のグラフを作成する
model.fit() は、トレーニング中に発生したすべての情報を詰まったディクショナリを含む History オブジェクトを返します。
Step21: トレーニングと検証中に監視されている各メトリックに対して 1 つずつ、計 4 つのエントリがあります。このエントリを使用して、トレーニングと検証の損失とトレーニングと検証の正解度を比較したグラフを作成することができます。
Step22: このグラフでは、点はトレーニングの損失と正解度を表し、実線は検証の損失と正解度を表します。
トレーニングの損失がエポックごとに下降し、トレーニングの正解度がエポックごとに上昇していることに注目してください。これは、勾配下降最適化を使用しているときに見られる現象で、イテレーションごとに希望する量を最小化します。
これは検証の損失と精度には当てはまりません。これらはトレーニング精度の前にピークに達しているようです。これが過適合の例で、モデルが、遭遇したことのないデータよりもトレーニングデータで優れたパフォーマンスを発揮する現象です。この後、モデルは過度に最適化し、テストデータに一般化しないトレーニングデータ特有の表現を学習します。
この特定のケースでは、検証の正解度が向上しなくなったときにトレーニングを停止することにより、過適合を防ぐことができます。これを行うには、tf.keras.callbacks.EarlyStopping コールバックを使用することができます。
モデルをエクスポートする
上記のコードでは、モデルにテキストをフィードする前に、TextVectorization レイヤーをデータセットに適用しました。モデルで生の文字列を処理できるようにする場合 (たとえば、展開を簡素化するため)、モデル内に TextVectorization レイヤーを含めることができます。これを行うには、トレーニングしたばかりの重みを使用して新しいモデルを作成します。
Step23: 新しいデータの推論
新しい例の予測を取得するには、model.predict()を呼び出します。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import os
import re
import shutil
import string
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import losses
print(tf.__version__)
Explanation: 映画レビューのテキスト分類
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a>
</td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/keras/text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td>
</table>
このチュートリアルでは、ディスクに保存されているプレーンテキストファイルを使用してテキストを分類する方法について説明します。IMDB データセットでセンチメント分析を実行するように、二項分類器をトレーニングします。ノートブックの最後には、Stack Overflow のプログラミングに関する質問のタグを予測するためのマルチクラス分類器をトレーニングする演習があります。
End of explanation
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file("aclImdb_v1", url,
untar=True, cache_dir='.',
cache_subdir='')
dataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')
os.listdir(dataset_dir)
train_dir = os.path.join(dataset_dir, 'train')
os.listdir(train_dir)
Explanation: 感情分析
このノートブックでは、映画レビューのテキストを使用して、それが肯定的であるか否定的であるかに分類するように感情分析モデルをトレーニングします。これは二項分類の例で、機械学習問題では重要な分類法として広く適用されます。
ここでは、Internet Movie Database から抽出した 50,000 件の映画レビューを含む、大規模なレビューデータセットを使います。レビューはトレーニング用とテスト用に 25,000 件ずつに分割されています。トレーニング用とテスト用のデータは<strong>均衡</strong>しています。言い換えると、それぞれが同数の肯定的及び否定的なレビューを含んでいます。
IMDB データセットをダウンロードして調べる
データセットをダウンロードして抽出してから、ディレクトリ構造を調べてみましょう。
End of explanation
sample_file = os.path.join(train_dir, 'pos/1181_9.txt')
with open(sample_file) as f:
print(f.read())
Explanation: aclImdb/train/pos および aclImdb/train/neg ディレクトリには多くのテキストファイルが含まれており、それぞれが 1 つの映画レビューです。それらの 1 つを見てみましょう。
End of explanation
remove_dir = os.path.join(train_dir, 'unsup')
shutil.rmtree(remove_dir)
Explanation: データセットを読み込む
次に、データをディスクから読み込み、トレーニングに適した形式に準備します。これを行うには、便利な text_dataset_from_directory ユーティリティを使用します。このユーティリティは、次のようなディレクトリ構造を想定しています。
main_directory/
...class_a/
......a_text_1.txt
......a_text_2.txt
...class_b/
......b_text_1.txt
......b_text_2.txt
二項分類用のデータセットを準備するには、ディスクに class_a および class_bに対応する 2 つのフォルダが必要です。これらは、aclImdb/train/pos および aclImdb/train/neg にある肯定的および否定的な映画レビューになります。IMDB データセットには追加のフォルダーが含まれているため、このユーティリティを使用する前にそれらを削除します。
End of explanation
batch_size = 32
seed = 42
raw_train_ds = tf.keras.utils.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
Explanation: 次に、text_dataset_from_directory ユーティリティを使用して、ラベル付きの tf.data.Dataset を作成します。tf.data は、データを操作するための強力なツールのコレクションです。
機械学習実験を実行するときは、データセットをトレーニング、検証、および、テストの 3 つに分割することをお勧めします。
IMDB データセットはすでにトレーニング用とテスト用に分割されていますが、検証セットはありません。以下の validation_split 引数を使用して、トレーニングデータの 80:20 分割を使用して検証セットを作成しましょう。
End of explanation
for text_batch, label_batch in raw_train_ds.take(1):
for i in range(3):
print("Review", text_batch.numpy()[i])
print("Label", label_batch.numpy()[i])
Explanation: 上記のように、トレーニングフォルダには 25,000 の例があり、そのうち 80% (20,000) をトレーニングに使用します。以下に示すとおり、データセットを model.fit に直接渡すことで、モデルをトレーニングできます。tf.data を初めて使用する場合は、データセットを繰り返し処理して、次のようにいくつかの例を出力することもできます。
End of explanation
print("Label 0 corresponds to", raw_train_ds.class_names[0])
print("Label 1 corresponds to", raw_train_ds.class_names[1])
Explanation: レビューには生のテキストが含まれていることに注意してください(句読点や <br/> などのような HTML タグが付いていることもあります)。次のセクションでは、これらの処理方法を示します。
ラベルは 0 または 1 です。これらのどれが肯定的および否定的な映画レビューに対応するかを確認するには、データセットの class_names プロパティを確認できます。
End of explanation
raw_val_ds = tf.keras.utils.text_dataset_from_directory(
'aclImdb/train',
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
raw_test_ds = tf.keras.utils.text_dataset_from_directory(
'aclImdb/test',
batch_size=batch_size)
Explanation: 次に、検証およびテスト用データセットを作成します。トレーニング用セットの残りの 5,000 件のレビューを検証に使用します。
注意: validation_split および subset 引数を使用する場合は、必ずランダムシードを指定するか、shuffle=False を渡して、検証とトレーニング分割に重複がないようにします。
End of explanation
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')
return tf.strings.regex_replace(stripped_html,
'[%s]' % re.escape(string.punctuation),
'')
Explanation: トレーニング用データを準備する
次に、便利な tf.keras.layers.TextVectorization レイヤーを使用して、データを標準化、トークン化、およびベクトル化します。
標準化とは、テキストを前処理することを指します。通常、句読点や HTML 要素を削除して、データセットを簡素化します。トークン化とは、文字列をトークンに分割することです (たとえば、空白で分割することにより、文を個々の単語に分割します)。ベクトル化とは、トークンを数値に変換して、ニューラルネットワークに入力できるようにすることです。これらのタスクはすべて、このレイヤーで実行できます。
前述のとおり、レビューには <br /> のようなさまざまな HTML タグが含まれています。これらのタグは、TextVectorization レイヤーのデフォルトの標準化機能によって削除されません (テキストを小文字に変換し、デフォルトで句読点を削除しますが、HTML は削除されません)。HTML を削除するカスタム標準化関数を作成します。
注意: トレーニング/テストスキュー(トレーニング/サービングスキューとも呼ばれます)を防ぐには、トレーニング時とテスト時にデータを同じように前処理することが重要です。これを容易にするためには、このチュートリアルの後半で示すように、TextVectorization レイヤーをモデル内に直接含めます。
End of explanation
max_features = 10000
sequence_length = 250
vectorize_layer = layers.TextVectorization(
standardize=custom_standardization,
max_tokens=max_features,
output_mode='int',
output_sequence_length=sequence_length)
Explanation: 次に、TextVectorization レイヤーを作成します。このレイヤーを使用して、データを標準化、トークン化、およびベクトル化します。output_mode を int に設定して、トークンごとに一意の整数インデックスを作成します。
デフォルトの分割関数と、上記で定義したカスタム標準化関数を使用していることに注意してください。また、明示的な最大値 sequence_length など、モデルの定数をいくつか定義します。これにより、レイヤーはシーケンスを正確に sequence_length 値にパディングまたは切り捨てます。
End of explanation
# Make a text-only dataset (without labels), then call adapt
train_text = raw_train_ds.map(lambda x, y: x)
vectorize_layer.adapt(train_text)
Explanation: 次に、adapt を呼び出して、前処理レイヤーの状態をデータセットに適合させます。これにより、モデルは文字列から整数へのインデックスを作成します。
注意: Adapt を呼び出すときは、トレーニング用データのみを使用することが重要です(テスト用セットを使用すると情報が漏洩します)。
End of explanation
def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text), label
# retrieve a batch (of 32 reviews and labels) from the dataset
text_batch, label_batch = next(iter(raw_train_ds))
first_review, first_label = text_batch[0], label_batch[0]
print("Review", first_review)
print("Label", raw_train_ds.class_names[first_label])
print("Vectorized review", vectorize_text(first_review, first_label))
Explanation: このレイヤーを使用して一部のデータを前処理した結果を確認する関数を作成します。
End of explanation
print("1287 ---> ",vectorize_layer.get_vocabulary()[1287])
print(" 313 ---> ",vectorize_layer.get_vocabulary()[313])
print('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary())))
Explanation: 上記のように、各トークンは整数に置き換えられています。レイヤーで .get_vocabulary() を呼び出すことにより、各整数が対応するトークン(文字列)を検索できます。
End of explanation
train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text)
Explanation: モデルをトレーニングする準備がほぼ整いました。最後の前処理ステップとして、トレーニング、検証、およびデータセットのテストのために前に作成した TextVectorization レイヤーを適用します。
End of explanation
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)
Explanation: パフォーマンスのためにデータセットを構成する
以下は、I/O がブロックされないようにするためにデータを読み込むときに使用する必要がある 2 つの重要な方法です。
.cache() はデータをディスクから読み込んだ後、データをメモリに保持します。これにより、モデルのトレーニング中にデータセットがボトルネックになることを回避できます。データセットが大きすぎてメモリに収まらない場合は、この方法を使用して、パフォーマンスの高いオンディスクキャッシュを作成することもできます。これは、多くの小さなファイルを読み込むより効率的です。
.prefetch() はトレーニング中にデータの前処理とモデルの実行をオーバーラップさせます。
以上の 2 つの方法とデータをディスクにキャッシュする方法についての詳細は、データパフォーマンスガイドを参照してください。
End of explanation
embedding_dim = 16
model = tf.keras.Sequential([
layers.Embedding(max_features + 1, embedding_dim),
layers.Dropout(0.2),
layers.GlobalAveragePooling1D(),
layers.Dropout(0.2),
layers.Dense(1)])
model.summary()
Explanation: モデルを作成する
ニューラルネットワークを作成します。
End of explanation
model.compile(loss=losses.BinaryCrossentropy(from_logits=True),
optimizer='adam',
metrics=tf.metrics.BinaryAccuracy(threshold=0.0))
Explanation: これらのレイヤーは、分類器を構成するため一列に積み重ねられます。
最初のレイヤーは Embedding (埋め込み)レイヤーです。このレイヤーは、整数にエンコードされた語彙を受け取り、それぞれの単語インデックスに対応する埋め込みベクトルを検索します。埋め込みベクトルは、モデルのトレーニングの中で学習されます。ベクトル化のために、出力行列には次元が1つ追加されます。その結果、次元は、(batch, sequence, embedding) となります。埋め込みの詳細については、単語埋め込みチュートリアルを参照してください。
次は、GlobalAveragePooling1D(1次元のグローバル平均プーリング)レイヤーです。このレイヤーは、それぞれのサンプルについて、シーケンスの次元方向に平均値をもとめ、固定長のベクトルを返します。この結果、モデルは最も単純な形で、可変長の入力を扱うことができるようになります。
この固定長の出力ベクトルは、16 個の非表示ユニットを持つ全結合 (Dense) レイヤーに受け渡されます。
最後のレイヤーは、単一の出力ノードと密に接続されています。
損失関数とオプティマイザ
モデルをトレーニングするには、損失関数とオプティマイザが必要です。これは二項分類問題であり、モデルは確率(シグモイドアクティベーションを持つ単一ユニットレイヤー)を出力するため、losses.BinaryCrossentropy 損失関数を使用します。
損失関数の候補はこれだけではありません。例えば、mean_squared_error(平均二乗誤差)を使うこともできます。しかし、一般的には、確率を扱うにはbinary_crossentropyの方が適しています。binary_crossentropyは、確率分布の間の「距離」を測定する尺度です。今回の場合には、真の分布と予測値の分布の間の距離ということになります。
End of explanation
epochs = 10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs)
Explanation: モデルをトレーニングする
dataset オブジェクトを fit メソッドに渡すことにより、モデルをトレーニングします。
End of explanation
loss, accuracy = model.evaluate(test_ds)
print("Loss: ", loss)
print("Accuracy: ", accuracy)
Explanation: モデルを評価する
モデルがどのように実行するか見てみましょう。2 つの値が返されます。損失(誤差、値が低いほど良)と正確度です。
End of explanation
history_dict = history.history
history_dict.keys()
Explanation: この、かなり素朴なアプローチでも 86% 前後の正解度を達成しました。
経時的な正解度と損失のグラフを作成する
model.fit() は、トレーニング中に発生したすべての情報を詰まったディクショナリを含む History オブジェクトを返します。
End of explanation
acc = history_dict['binary_accuracy']
val_acc = history_dict['val_binary_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
Explanation: トレーニングと検証中に監視されている各メトリックに対して 1 つずつ、計 4 つのエントリがあります。このエントリを使用して、トレーニングと検証の損失とトレーニングと検証の正解度を比較したグラフを作成することができます。
End of explanation
export_model = tf.keras.Sequential([
vectorize_layer,
model,
layers.Activation('sigmoid')
])
export_model.compile(
loss=losses.BinaryCrossentropy(from_logits=False), optimizer="adam", metrics=['accuracy']
)
# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_test_ds)
print(accuracy)
Explanation: このグラフでは、点はトレーニングの損失と正解度を表し、実線は検証の損失と正解度を表します。
トレーニングの損失がエポックごとに下降し、トレーニングの正解度がエポックごとに上昇していることに注目してください。これは、勾配下降最適化を使用しているときに見られる現象で、イテレーションごとに希望する量を最小化します。
これは検証の損失と精度には当てはまりません。これらはトレーニング精度の前にピークに達しているようです。これが過適合の例で、モデルが、遭遇したことのないデータよりもトレーニングデータで優れたパフォーマンスを発揮する現象です。この後、モデルは過度に最適化し、テストデータに一般化しないトレーニングデータ特有の表現を学習します。
この特定のケースでは、検証の正解度が向上しなくなったときにトレーニングを停止することにより、過適合を防ぐことができます。これを行うには、tf.keras.callbacks.EarlyStopping コールバックを使用することができます。
モデルをエクスポートする
上記のコードでは、モデルにテキストをフィードする前に、TextVectorization レイヤーをデータセットに適用しました。モデルで生の文字列を処理できるようにする場合 (たとえば、展開を簡素化するため)、モデル内に TextVectorization レイヤーを含めることができます。これを行うには、トレーニングしたばかりの重みを使用して新しいモデルを作成します。
End of explanation
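A minimal sketch (not part of the original tutorial) of the `tf.keras.callbacks.EarlyStopping` callback mentioned above, assuming `model`, `train_ds`, and `val_ds` are defined as in the earlier cells; this re-runs training and is shown only to illustrate the callback:

```python
# Sketch: stop training when the validation loss stops improving.
# Assumes `model`, `train_ds`, and `val_ds` from the cells above.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss',          # watch the validation loss
    patience=2,                  # allow 2 epochs without improvement
    restore_best_weights=True)   # roll back to the best epoch

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=20,
    callbacks=[early_stop])
```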
examples = [
"The movie was great!",
"The movie was okay.",
"The movie was terrible..."
]
export_model.predict(examples)
Explanation: 新しいデータの推論
新しい例の予測を取得するには、model.predict()を呼び出します。
End of explanation |
4,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3D Shape Classification with Sublevelset Filtrations
In this module, we will explore how TDA can be used to classify 3D shapes. We will begine by clustering triangle meshes of humans in different poses by pose. We will then explore how to cluster a collection of shapes which are undergoing nonrigid transformations, or "articulations."
As always, let's first import all of the necessary libraries.
Step1: Now, let's include some code that performs a sublevelset filtration by some scalar function on the vertices of a triangle mesh.
Step3: Let's also define a function which will plot a particular scalar function on XY and XZ slices of the mesh
Step4: Experiment 1
Step5: Now let's load in all of the meshes and sort them so that contiguous groups of 10 meshes are the same pose (by default they are sorted by subject).
Step6: Finally, we compute the 0D sublevelset filtration on all of the shapes, followed by a Wasserstein distance computation between all pairs to examine how different shapes cluster together. We also display the result of 3D multidimensional scaling using the matrix of all pairs of Wasserstein distances.
Questions
Look at the pairwise Wasserstein distances and the corresponding 3D MDS plot. Which pose classes are similar to each other by our metric? Can you go back above and pull out example poses from different subjects that show why this might be the case?
Step7: Experiment 2
Step8: Let's now load in a few of the nonrigid meshes and compute the sublevelset function of their heat kernel signatures
Step9: Finally, we plot the results | Python Code:
import numpy as np
%matplotlib notebook
import scipy.io as sio
from scipy import sparse
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import sys
sys.path.append("pyhks")
from HKS import *
from GeomUtils import *
from ripser import ripser
from persim import plot_diagrams, wasserstein
from sklearn.manifold import MDS
from sklearn.decomposition import PCA
import warnings
warnings.filterwarnings('ignore')
Explanation: 3D Shape Classification with Sublevelset Filtrations
In this module, we will explore how TDA can be used to classify 3D shapes. We will begin by clustering triangle meshes of humans in different poses by pose. We will then explore how to cluster a collection of shapes which are undergoing nonrigid transformations, or "articulations."
As always, let's first import all of the necessary libraries.
End of explanation
def do0DSublevelsetFiltrationMesh(VPos, ITris, fn):
x = fn(VPos, ITris)
N = VPos.shape[0]
# Add edges between adjacent points in the mesh
I, J = getEdges(VPos, ITris)
V = np.maximum(x[I], x[J])
# Add vertex birth times along the diagonal of the distance matrix
I = np.concatenate((I, np.arange(N)))
J = np.concatenate((J, np.arange(N)))
V = np.concatenate((V, x))
#Create the sparse distance matrix
D = sparse.coo_matrix((V, (I, J)), shape=(N, N)).tocsr()
return ripser(D, distance_matrix=True, maxdim=0)['dgms'][0]
Explanation: Now, let's include some code that performs a sublevelset filtration by some scalar function on the vertices of a triangle mesh.
End of explanation
def plotPCfn(VPos, fn, cmap = 'afmhot'):
    """
    plot an XY slice of a mesh with the scalar function used in a
    sublevelset filtration
    """
x = fn - np.min(fn)
x = x/np.max(x)
c = plt.get_cmap(cmap)
C = c(np.array(np.round(x*255.0), dtype=np.int64))
plt.scatter(VPos[:, 0], VPos[:, 1], 10, c=C)
plt.axis('equal')
ax = plt.gca()
ax.set_facecolor((0.3, 0.3, 0.3))
Explanation: Let's also define a function which will plot a particular scalar function on XY and XZ slices of the mesh
End of explanation
subjectNum = 1
poseNum = 9
i = subjectNum*10 + poseNum
fn = lambda VPos, ITris: VPos[:, 1] #Return the y coordinate as a function
(VPos, _, ITris) = loadOffFile("shapes/tr_reg_%.03d.off"%i)
x = fn(VPos, ITris)
I = do0DSublevelsetFiltrationMesh(VPos, ITris, fn)
plt.figure(figsize=(10, 4))
plt.subplot(131)
plotPCfn(VPos, x, cmap = 'afmhot')
plt.title("Subject %i Pose %i"%(subjectNum, poseNum))
plt.subplot(132)
plotPCfn(VPos[:, [2, 1, 0]], x, cmap = 'afmhot')
plt.subplot(133)
plot_diagrams([I])
plt.show()
Explanation: Experiment 1: Clustering of Human Poses
In the first experiment, we will load surfaces of 10 different people, each performing one of 10 different poses, for 100 total. To classify by pose, we will use the height function as our sublevelset function. Let's load a few examples to see what they look like. The code below loads in all of the triangle meshes in the "shapes" directory
Questions
After looking at some examples, why would filtering by height be a good idea for picking up on these poses?
End of explanation
meshes = []
for poseNum in range(10):
for subjectNum in range(10):
i = subjectNum*10 + poseNum
VPos, _, ITris = loadOffFile("shapes/tr_reg_%.03d.off"%i)
meshes.append((VPos, ITris))
Explanation: Now let's load in all of the meshes and sort them so that contiguous groups of 10 meshes are the same pose (by default they are sorted by subject).
End of explanation
dgms = []
N = len(meshes)
print("Computing persistence diagrams...")
for i, (VPos, ITris) in enumerate(meshes):
x = fn(VPos, ITris)
I = do0DSublevelsetFiltrationMesh(VPos, ITris, fn)
I = I[np.isfinite(I[:, 1]), :]
dgms.append(I)
# Compute Wasserstein distances in order of pose
DWass = np.zeros((N, N))
for i in range(N):
if i%10 == 0:
print("Comparing pose %i..."%(i/10))
for j in range(i+1, N):
DWass[i, j] = wasserstein(dgms[i], dgms[j])
DWass = DWass + DWass.T
# Re-sort by class
# Now do MDS and PCA, respectively
mds = MDS(n_components=3, dissimilarity='precomputed')
mds.fit_transform(DWass)
XWass = mds.embedding_
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.imshow(DWass, cmap = 'afmhot', interpolation = 'none')
plt.title("Wasserstein")
ax1 = plt.gca()
ax2 = plt.subplot(122, projection='3d')
ax2.set_title("Wasserstein By Pose")
for i in range(10):
X = XWass[i*10:(i+1)*10, :]
ax2.scatter(X[:, 0], X[:, 1], X[:, 2])
Is = (i*10 + np.arange(10)).tolist() + (-2*np.ones(10)).tolist()
Js = (-2*np.ones(10)).tolist() + (i*10 + np.arange(10)).tolist()
ax1.scatter(Is, Js, 10)
plt.show()
Explanation: Finally, we compute the 0D sublevelset filtration on all of the shapes, followed by a Wasserstein distance computation between all pairs to examine how different shapes cluster together. We also display the result of 3D multidimensional scaling using the matrix of all pairs of Wasserstein distances.
Questions
Look at the pairwise Wasserstein distances and the corresponding 3D MDS plot. Which pose classes are similar to each other by our metric? Can you go back above and pull out example poses from different subjects that show why this might be the case?
End of explanation
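One rough way to explore the question above, assuming `DWass` is ordered as 10 contiguous pose classes of 10 subjects each (as constructed in the preceding cell; numpy is already imported as `np`):

```python
# Average Wasserstein distance between each pair of pose classes,
# assuming rows/columns of DWass are grouped as 10 poses x 10 subjects.
block_means = np.zeros((10, 10))
for a in range(10):
    for b in range(10):
        block = DWass[a*10:(a+1)*10, b*10:(b+1)*10]
        block_means[a, b] = np.mean(block)

# Most similar pair of *different* pose classes under this metric
off_diag = block_means + np.diag([np.inf]*10)
a, b = np.unravel_index(np.argmin(off_diag), off_diag.shape)
print("Most similar pose classes under this metric: %i and %i" % (a, b))
```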
classNum = 0
articulationNum = 1
classes = ['ant', 'hand', 'human', 'octopus', 'pliers', 'snake', 'shark', 'bear', 'chair']
i = classNum*10 + articulationNum
fn = lambda VPos, ITris: -getHKS(VPos, ITris, 20, t = 30)
(VPos, _, ITris) = loadOffFile("shapes_nonrigid/%.3d.off"%i)
x = fn(VPos, ITris)
I = do0DSublevelsetFiltrationMesh(VPos, ITris, fn)
plt.figure(figsize=(8, 8))
plt.subplot(221)
plotPCfn(VPos, x, cmap = 'afmhot')
plt.title("Class %i Articulation %i"%(classNum, articulationNum))
plt.subplot(222)
plotPCfn(VPos[:, [2, 1, 0]], x, cmap = 'afmhot')
plt.subplot(223)
plotPCfn(VPos[:, [0, 2, 1]], x, cmap = 'afmhot')
plt.subplot(224)
plot_diagrams([I])
plt.show()
Explanation: Experiment 2: Clustering of Nonrigid Shapes
In this experiment, we will use a different sublevelset which is blind to <i>intrinsic isometries</i>. This can be used to cluster shapes in a way which is invariant to articulated poses, which is complementary to the previous approach. As our scalar function will use the "heat kernel signature," which is a numerically stable way to compute curvature at multiple scales. We will actually negate this signature, since we care more about local maxes than local mins in the scalar function. So sublevelsets will start at regions of high curvature.
Let's explore a few examples below in a dataset which is a subset of the McGill 3D Shape Benchmark with 10 shapes in 10 different articulations. In particular, we will load all of the shapes from the "shapes_nonrigid" folder within the TDALabs folder. Run the code and change the "classNum" and "articulationNum" variables to explore different shapes
Questions
Does it seem like the persistence diagrams stay mostly the same within each class? If so, why?
End of explanation
N = 90
meshesNonrigid = []
for i in range(N):
(VPos, _, ITris) = loadOffFile("shapes_nonrigid/%.3d.off"%i)
meshesNonrigid.append((VPos, ITris))
dgmsNonrigid = []
N = len(meshesNonrigid)
print("Computing persistence diagrams...")
for i, (VPos, ITris) in enumerate(meshesNonrigid):
if i%10 == 0:
print("Finished first %i meshes"%i)
x = fn(VPos, ITris)
I = do0DSublevelsetFiltrationMesh(VPos, ITris, lambda VPos, ITris: -getHKS(VPos, ITris, 20, t = 30))
I = I[np.isfinite(I[:, 1]), :]
dgmsNonrigid.append(I)
# Compute Wasserstein distances
print("Computing Wasserstein distances...")
DWassNonrigid = np.zeros((N, N))
for i in range(N):
if i%10 == 0:
print("Finished first %i distances"%i)
for j in range(i+1, N):
DWassNonrigid[i, j] = wasserstein(dgmsNonrigid[i], dgmsNonrigid[j])
DWassNonrigid = DWassNonrigid + DWassNonrigid.T
# Now do MDS and PCA, respectively
mds = MDS(n_components=3, dissimilarity='precomputed')
mds.fit_transform(DWassNonrigid)
XWassNonrigid = mds.embedding_
Explanation: Let's now load in a few of the nonrigid meshes and compute the sublevelset function of their heat kernel signatures
End of explanation
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.imshow(DWassNonrigid, cmap = 'afmhot', interpolation = 'none')
ax1 = plt.gca()
plt.xticks(5+10*np.arange(10), classes, rotation='vertical')
plt.yticks(5+10*np.arange(10), classes)
plt.title("Wasserstein Distances")
ax2 = plt.subplot(122, projection='3d')
ax2.set_title("3D MDS")
for i in range(9):
X = XWassNonrigid[i*10:(i+1)*10, :]
ax2.scatter(X[:, 0], X[:, 1], X[:, 2])
Is = (i*10 + np.arange(10)).tolist() + (-2*np.ones(10)).tolist()
Js = (91*np.ones(10)).tolist() + (i*10 + np.arange(10)).tolist()
ax1.scatter(Is, Js, 10)
plt.show()
Explanation: Finally, we plot the results
End of explanation |
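As a rough sanity check on how well this filtration separates the articulation classes, a leave-one-out 1-nearest-neighbor classification can be run directly on the Wasserstein distance matrix (assuming `DWassNonrigid` is ordered as the 9 classes of 10 articulations loaded above):

```python
# Leave-one-out 1-NN classification from the Wasserstein distance matrix.
# Assumes DWassNonrigid is ordered as 9 classes x 10 articulations.
labels = np.repeat(np.arange(9), 10)
D = np.array(DWassNonrigid, dtype=float)
np.fill_diagonal(D, np.inf)          # exclude each shape from its own neighbors
nearest = np.argmin(D, axis=1)
accuracy = np.mean(labels[nearest] == labels)
print("Leave-one-out 1-NN accuracy: %.3f" % accuracy)
```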
4,277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create datasets for the Content-based Filter
This notebook builds the data we will use for creating our content-based model. We'll collect the data via a collection of SQL queries from the publicly available Kurier.at dataset in BigQuery.
Kurier.at is an Austrian news site. The goal of these labs is to recommend an article for a visitor to the site. In this lab we collect the data for training; in the subsequent notebook we train the recommender model.
This notebook illustrates
* how to pull data from a BigQuery table and write to local files
* how to make reproducible train and test splits
Step1: We will use this helper function to write lists containing article ids, categories, and authors for each article in our database to a local file.
Step3: Pull data from BigQuery
The cell below creates a local text file containing all the article ids (i.e. 'content ids') in the dataset.
Have a look at the original dataset in BigQuery. Then read through the query below and make sure you understand what it is doing.
Step5: There should be 15,634 articles in the database.
Next, we'll create a local file which contains a list of article categories and a list of article authors.
Note the change in the index when pulling the article category or author information. Also, we are using the first author of the article to create our author list.
Refer back to the original dataset, use the hits.customDimensions.index field to verify the correct index.
Step7: The categories are 'News', 'Stars & Kultur', and 'Lifestyle'.
When creating the author list, we'll only use the first author information for each article.
Step10: There should be 385 authors in the database.
Create train and test sets.
In this section, we will create the train/test split of our data for training our model. We use the concatenated values for visitor id and content id to create a farm fingerprint, taking approximately 90% of the data for the training set and 10% for the test set.
Step11: Let's have a look at the two csv files we just created containing the training and test set. We'll also do a line count of both files to confirm that we have achieved an approximate 90/10 train/test split.
In the next notebook, Content Based Filtering we will build a model to recommend an article given information about the current article being read, such as the category, title, author, and publish date. | Python Code:
import os
import tensorflow as tf
import numpy as np
from google.cloud import bigquery
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: Create datasets for the Content-based Filter
This notebook builds the data we will use for creating our content-based model. We'll collect the data via a collection of SQL queries from the publicly available Kurier.at dataset in BigQuery.
Kurier.at is an Austrian news site. The goal of these labs is to recommend an article for a visitor to the site. In this lab we collect the data for training; in the subsequent notebook we train the recommender model.
This notebook illustrates
* how to pull data from a BigQuery table and write to local files
* how to make reproducible train and test splits
End of explanation
def write_list_to_disk(my_list, filename):
with open(filename, 'w') as f:
for item in my_list:
line = "%s\n" % item
f.write(line.encode('utf8'))
Explanation: We will use this helper function to write lists containing article ids, categories, and authors for each article in our database to a local file.
End of explanation
sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
content_id
"""
content_ids_list = bigquery.Client().query(sql).to_dataframe()['content_id'].tolist()
write_list_to_disk(content_ids_list, "content_ids.txt")
print("Some sample content IDs {}".format(content_ids_list[:3]))
print("The total number of articles is {}".format(len(content_ids_list)))
Explanation: Pull data from BigQuery
The cell below creates a local text file containing all the article ids (i.e. 'content ids') in the dataset.
Have a look at the original dataset in BigQuery. Then read through the query below and make sure you understand what it is doing.
End of explanation
sql = """
#standardSQL
SELECT
(SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
category
"""
categories_list = bigquery.Client().query(sql).to_dataframe()['category'].tolist()
write_list_to_disk(categories_list, "categories.txt")
print(categories_list)
Explanation: There should be 15,634 articles in the database.
Next, we'll create a local file which contains a list of article categories and a list of article authors.
Note the change in the index when pulling the article category or author information. Also, we are using the first author of the article to create our author list.
Refer back to the original dataset, use the hits.customDimensions.index field to verify the correct index.
End of explanation
sql = """
#standardSQL
SELECT
REGEXP_EXTRACT((SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)), r"^[^,]+") AS first_author
FROM `cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND (SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
GROUP BY
first_author
"""
authors_list = bigquery.Client().query(sql).to_dataframe()['first_author'].tolist()
write_list_to_disk(authors_list, "authors.txt")
print("Some sample authors {}".format(authors_list[:10]))
print("The total number of authors is {}".format(len(authors_list)))
Explanation: The categories are 'News', 'Stars & Kultur', and 'Lifestyle'.
When creating the author list, we'll only use the first author information for each article.
End of explanation
sql = """
WITH site_history as (
SELECT
fullVisitorId as visitor_id,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id,
(SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category,
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title,
(SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list,
SPLIT(RPAD((SELECT MAX(IF(index=4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') as year_month_array,
LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) as nextCustomDimensions
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND
fullVisitorId IS NOT NULL
AND
hits.time != 0
AND
hits.time IS NOT NULL
AND
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
)
SELECT
visitor_id,
content_id,
category,
REGEXP_REPLACE(title, r",", "") as title,
REGEXP_EXTRACT(author_list, r"^[^,]+") as author,
DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970,1,1), MONTH) as months_since_epoch,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) as next_content_id
FROM
site_history
WHERE (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(CONCAT(visitor_id, content_id)), 10)) < 9
"""
training_set_df = bigquery.Client().query(sql).to_dataframe()
training_set_df.to_csv('training_set.csv', header=False, index=False, encoding='utf-8')
training_set_df.head()
sql = """
WITH site_history as (
SELECT
fullVisitorId as visitor_id,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id,
(SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category,
(SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title,
(SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list,
SPLIT(RPAD((SELECT MAX(IF(index=4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') as year_month_array,
LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) as nextCustomDimensions
FROM
`cloud-training-demos.GA360_test.ga_sessions_sample`,
UNNEST(hits) AS hits
WHERE
# only include hits on pages
hits.type = "PAGE"
AND
fullVisitorId IS NOT NULL
AND
hits.time != 0
AND
hits.time IS NOT NULL
AND
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL
)
SELECT
visitor_id,
content_id,
category,
REGEXP_REPLACE(title, r",", "") as title,
REGEXP_EXTRACT(author_list, r"^[^,]+") as author,
DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970,1,1), MONTH) as months_since_epoch,
(SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) as next_content_id
FROM
site_history
WHERE (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL
AND ABS(MOD(FARM_FINGERPRINT(CONCAT(visitor_id, content_id)), 10)) >= 9
"""
test_set_df = bigquery.Client().query(sql).to_dataframe()
test_set_df.to_csv('test_set.csv', header=False, index=False, encoding='utf-8')
test_set_df.head()
Explanation: There should be 385 authors in the database.
Create train and test sets.
In this section, we will create the train/test split of our data for training our model. We use the concatenated values for visitor id and content id to create a farm fingerprint, taking approximately 90% of the data for the training set and 10% for the test set.
End of explanation
%%bash
wc -l *_set.csv
!head *_set.csv
Explanation: Let's have a look at the two csv files we just created containing the training and test set. We'll also do a line count of both files to confirm that we have achieved an approximate 90/10 train/test split.
In the next notebook, Content Based Filtering we will build a model to recommend an article given information about the current article being read, such as the category, title, author, and publish date.
End of explanation |
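A rough pandas double-check of the split proportions (the column names below are taken from the SELECT list of the queries above and are an assumption about the CSV layout):

```python
# Rough check of the train/test proportions from the two CSVs written above.
import pandas as pd

cols = ['visitor_id', 'content_id', 'category', 'title',
        'author', 'months_since_epoch', 'next_content_id']
train = pd.read_csv('training_set.csv', names=cols)
test = pd.read_csv('test_set.csv', names=cols)

total = len(train) + len(test)
print("Train fraction: {:.3f}".format(len(train) / float(total)))
print("Test fraction:  {:.3f}".format(len(test) / float(total)))
```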
4,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to pychord
Create a Chord
Step1: Transpose a Chord
Step2: Get Component Notes
Step3: Find Chords
Step4: Chord Progressions
Step5: Create a Chord from Note Index in a Scale
Step6: Overwrite the default Quality components with yours
```python
from pychord import Chord, QualityManager
Chord("C11").components()
['C', 'G', 'Bb', 'D', 'F']
quality_manager = QualityManager()
quality_manager.set_quality("11", (0, 4, 7, 10, 14, 17))
Chord("C11").components()
['C', 'E', 'G', 'Bb', 'D', 'F']
```
Hoffman's - A Letter for Alexandr(i)a
Step7: Hoffman's Physics Song | Python Code:
# Assumption: pychord is imported under the alias `mus` in a cell not shown
# in this snippet, e.g.:
import pychord as mus

c = mus.Chord("Am7")
print(c.info())
Explanation: Intro to pychord
Create a Chord
End of explanation
c = mus.Chord("C")
c.transpose(2)
c
c = mus.Chord("Dm/G")
c.transpose(3)
c
Explanation: Transpose a Chord
End of explanation
c = mus.Chord("Dm7")
c.components()
Explanation: Get Component Notes
End of explanation
c = mus.note_to_chord(['C', 'E', 'G'])
print(c)
print(mus.note_to_chord(["F", "G", "C"]))
print("Ddim7:", mus.note_to_chord(["D", "F", "Ab", "Cb"]))
# mus.Chord("Ddim7") # => ValueError: Unknown quality: dim7
print(mus.Chord("Ddim6").components())
print(mus.note_to_chord(['D', 'F', 'Ab', 'Bb']))
print("Dbdim7=Db E G Bb: ", mus.note_to_chord(["Db", "E", "G", "Bb"]))
print("BbM6=Bb D F G", mus.note_to_chord(["Bb", "D", "F", "G"]))
Explanation: Find Chords
End of explanation
cp = mus.ChordProgression(["C", "G/B", "Am"])
print(cp)
cp.append("Em/G")
print(cp)
cp.transpose(+3)
print(cp)
Explanation: Chord Progressions
End of explanation
print(mus.Chord.from_note_index(note=1, quality="", scale="Cmaj"))
print(mus.Chord.from_note_index(note=3, quality="m7", scale="Fmaj"))
print(mus.Chord.from_note_index(note=5, quality="7", scale="Amin"))
Explanation: Create a Chord from Note Index in a Scale
End of explanation
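The cells below call two helpers, `chorder` and `create_midi`, that are not defined anywhere in this snippet. A hypothetical sketch of what they might look like is given here, using the third-party MIDIUtil package (an assumption; the author's original helpers may be implemented differently):

```python
# Hypothetical stand-ins for the undefined `chorder` and `create_midi` helpers.
# Assumes the third-party MIDIUtil package (pip install MIDIUtil).
from midiutil import MIDIFile

NOTE_VALS = {'C': 0, 'C#': 1, 'Db': 1, 'D': 2, 'D#': 3, 'Eb': 3, 'E': 4,
             'F': 5, 'F#': 6, 'Gb': 6, 'G': 7, 'G#': 8, 'Ab': 8,
             'A': 9, 'A#': 10, 'Bb': 10, 'B': 11, 'Cb': 11}

def chorder(chord_string):
    # 'Bm7, D9, GM7' -> [Chord('Bm7'), Chord('D9'), Chord('GM7')]
    return [mus.Chord(name.strip()) for name in chord_string.split(',')]

def note_to_midi(note_with_octave):
    # 'Eb4' -> 63; MIDI number = 12 * (octave + 1) + pitch class
    name, octave = note_with_octave[:-1], int(note_with_octave[-1])
    return 12 * (octave + 1) + NOTE_VALS[name]

def create_midi(path, chords, beats_per_chord=2, tempo=90):
    mf = MIDIFile(1)
    mf.addTempo(0, 0, tempo)
    for i, chord in enumerate(chords):
        for note in chord.components_with_pitch(4):
            mf.addNote(0, 0, note_to_midi(note), i * beats_per_chord,
                       beats_per_chord, 100)
    with open(path, 'wb') as f:
        mf.writeFile(f)
```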
chordList = [ "EbM7", "Cm6", "Dm7", "Dbdim6", "Cm7", "BM7", "Bb6"]
chords = [mus.Chord(c) for c in chordList]
for c in chordList:
print(f"{c} - ", mus.Chord(c).components())
create_midi('/Users/af59986/Dev/presentations/music/hoffman-a_letter_for_alex.midi', chords)
Explanation: Overwrite the default Quality components with yours
```python
from pychord import Chord, QualityManager
Chord("C11").components()
['C', 'G', 'Bb', 'D', 'F']
quality_manager = QualityManager()
quality_manager.set_quality("11", (0, 4, 7, 10, 14, 17))
Chord("C11").components()
['C', 'E', 'G', 'Bb', 'D', 'F']
```
Hoffman's - A Letter for Alexandr(i)a
End of explanation
# working around pychord
print("Gaug7:", mus.note_to_chord('G B D# F'.split()))
print("Bbdim7:", mus.note_to_chord("Bb Db E G".split()))
print("Bb7:", mus.Chord("Bb7").components())
print("Bb7+11:", mus.Chord("Bb7+11").components())
print("BbM7:", mus.Chord("BbM7").components_with_pitch(4))
# print("BbM7+11:", mus.Chord("BbM7+11").components())
print("BbM7+11:", mus.note_to_chord(['Bb', 'D', 'F', 'A', 'E']))
print("Bbadd11:", mus.Chord("Bbadd11").components())
# print("BbMadd9:", mus.Chord("BbMadd").components())
# print("BbM7add11:", mus.Chord("BbM7add11").components())
# print("Bb7add11:", mus.Chord("Bb7add11").components())
# print("Bbm7add11:", mus.Chord("Bbm7add11").components())
# print("BbM7add#11", mus.note_to_chord('Bb D F A E'.split()))
print("Bm7#5:", mus.note_to_chord("B D G A".split()))
print("B6:", mus.Chord("B6").components())
print("Bb6:", mus.Chord("Bb6").components())
print("Bm7b5:", mus.Chord("Bm7b5").components())
print("Absus2-7#5", mus.note_to_chord('Ab Bb D Gb'.split()))
# section_a = chorder('Bm7, D9, AbM7, GM7, G7+5, Bbdim6')
section_a = chorder('Bm7, D9, GM7, G7+5')
section_b = chorder('Bm7, Bbm7, Bm7, C7')
# section_c = chorder('Dm7, Em7, Bm7b5, BbM7, Absus2, Gmaj7')
section_c = chorder('Dm7, Em7, Bm7b5, BbM7, Bb7+5/Ab, Gmaj7')
# song = sum([section_a,section_b, section_a, section_b, section_c], [])
song = section_a + section_b + section_a + section_b + section_c
print(song)
create_midi('/Users/af59986/Dev/presentations/music/hoffman-physics2.midi', song)
Explanation: Hoffman's Physics Song
End of explanation |
4,279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Data
Seaborn comes with built-in data sets!
Step2: distplot
The distplot shows the distribution of a univariate set of observations.
Step3: To remove the kde layer and just have the histogram use
Step4: jointplot
jointplot() allows you to basically match up two distplots for bivariate data. With your choice of what kind parameter to compare with
Step5: pairplot
pairplot will plot pairwise relationships across an entire dataframe (for the numerical columns) and supports a color hue argument (for categorical columns).
Step6: rugplot
rugplots are actually a very simple concept: they just draw a dash mark for every point on a univariate distribution. They are the building block of a KDE plot
Step7: kdeplot
kdeplots are Kernel Density Estimation plots. These KDE plots replace every single observation with a Gaussian (Normal) distribution centered around that value. For example
Step8: So with our tips dataset | Python Code:
import seaborn as sns
%matplotlib inline
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Distribution Plots
Let's discuss some plots that allow us to visualize the distribution of a data set. These plots are:
distplot
jointplot
pairplot
rugplot
kdeplot
Imports
End of explanation
tips = sns.load_dataset('tips')
tips.head()
Explanation: Data
Seaborn comes with built-in data sets!
End of explanation
sns.distplot(tips['total_bill'])
# Safe to ignore warnings
Explanation: distplot
The distplot shows the distribution of a univariate set of observations.
End of explanation
sns.distplot(tips['total_bill'],kde=False,bins=30)
Explanation: To remove the kde layer and just have the histogram use:
End of explanation
sns.jointplot(x='total_bill',y='tip',data=tips,kind='scatter')
sns.jointplot(x='total_bill',y='tip',data=tips,kind='hex')
sns.jointplot(x='total_bill',y='tip',data=tips,kind='reg')
Explanation: jointplot
jointplot() allows you to basically match up two distplots for bivariate data. With your choice of what kind parameter to compare with:
* “scatter”
* “reg”
* “resid”
* “kde”
* “hex”
End of explanation
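For completeness, the 'kde' and 'resid' kinds are not demonstrated above; they can be used the same way on the tips data:

```python
sns.jointplot(x='total_bill', y='tip', data=tips, kind='kde')
sns.jointplot(x='total_bill', y='tip', data=tips, kind='resid')
```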
sns.pairplot(tips)
sns.pairplot(tips,hue='sex',palette='coolwarm')
Explanation: pairplot
pairplot will plot pairwise relationships across an entire dataframe (for the numerical columns) and supports a color hue argument (for categorical columns).
End of explanation
sns.rugplot(tips['total_bill'])
Explanation: rugplot
rugplots are actually a very simple concept: they just draw a dash mark for every point on a univariate distribution. They are the building block of a KDE plot:
End of explanation
# Don't worry about understanding this code!
# It's just for the diagram below
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#Create dataset
dataset = np.random.randn(25)
# Create another rugplot
sns.rugplot(dataset);
# Set up the x-axis for the plot
x_min = dataset.min() - 2
x_max = dataset.max() + 2
# 100 equally spaced points from x_min to x_max
x_axis = np.linspace(x_min,x_max,100)
# Set up the bandwidth, for info on this:
url = 'http://en.wikipedia.org/wiki/Kernel_density_estimation#Practical_estimation_of_the_bandwidth'
bandwidth = ((4*dataset.std()**5)/(3*len(dataset)))**.2
# Create an empty kernel list
kernel_list = []
# Plot each basis function
for data_point in dataset:
# Create a kernel for each point and append to list
kernel = stats.norm(data_point,bandwidth).pdf(x_axis)
kernel_list.append(kernel)
#Scale for plotting
kernel = kernel / kernel.max()
kernel = kernel * .4
plt.plot(x_axis,kernel,color = 'grey',alpha=0.5)
plt.ylim(0,1)
# To get the kde plot we can sum these basis functions.
# Plot the sum of the basis function
sum_of_kde = np.sum(kernel_list,axis=0)
# Plot figure
fig = plt.plot(x_axis,sum_of_kde,color='indianred')
# Add the initial rugplot
sns.rugplot(dataset,c = 'indianred')
# Get rid of y-tick marks
plt.yticks([])
# Set title
plt.suptitle("Sum of the Basis Functions")
Explanation: kdeplot
kdeplots are Kernel Density Estimation plots. These KDE plots replace every single observation with a Gaussian (Normal) distribution centered around that value. For example:
End of explanation
sns.kdeplot(tips['total_bill'])
sns.rugplot(tips['total_bill'])
sns.kdeplot(tips['tip'])
sns.rugplot(tips['tip'])
Explanation: So with our tips dataset:
End of explanation |
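Tying back to the bandwidth used in the manual construction above, kdeplot also exposes the bandwidth directly. A small hedged experiment (older seaborn versions take `bw=` and `shade=`; newer ones use `bw_adjust=` and `fill=`):

```python
# Effect of bandwidth on the KDE of total_bill.
sns.kdeplot(tips['total_bill'], shade=True, bw=2, label='bw=2')
sns.kdeplot(tips['total_bill'], bw=10, label='bw=10')
```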
4,280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Performing Clean-up and Analysis on Native Ad Data Scraped "From Around the Web"
Step1: Data Load and Cleaning
Step2: As a side note, the headlines from zergnet all have some newlines we need to get rid of and they appear to have concatenated the headline with the provider. So let's clean those up.
Step3: OK, that's better.
The img_file column values also have ./imgs/ appended to the front of each file name. Let's get rid of those
Step4: Now, let's check, do we have any null values?
Step5: For now only the orig_article column has nulls, as we had not collected those consistently
Step6: Already we can see some interesting trends here. Out of 129399 unique records, only 18022 of the headlines are unique, but 43315 of the links are unique and 23866 of the image files are unique (assuming for sure that there were issues with downloading images).
So it seems already that there are content links which might reuse the same headline, or image for different destination articles.
Also, because we want to inspect the hosts from which the articles and images are coming from, let's parse those out in the data.
Data Preparation
Step7: Next, let's classify each site by a very relaxed set of tags based on perceived political bias. I might be a little off on some, I referenced https
Step8: Now let's remove duplicates based on a subset of the columns using pandas' drop_duplicates for DataFrames
Step9: And let's just check on those null values again...
Step10: Out of curiousity, as we're only left with 43630 records after deduping, let's take a look at the rate of success for our record collection.
Step11: Crud, doing a harvest yields results where only 33% of our sample is worth examining further.
Data Exploration
Let's get the top 10 headlines grouped by img
Step12: But hang on. let's just see what the top headlines are. There's certainly overlap, but it's not a one to one relationship between headlines and their images (or at least maybe it's the same image, but coming from a different URL).
Step13: Note
Step14: TMZ is a bit over-represented here
And what about by classification
Step15: Looks like the over-representation of TMZ is pushing on Tabloids a bit. Not terribly even between left, right, and center, either.
Let's take a look at the sources again as broken down by bother provider and our classification.
Step16: OK so what are the most frequent and least images per classification?
Step17: Yawn! I have to admit this isnt's as interesting as I thought it might be.
Explore over time
Next perhaps let's explore trends over time. First we'll want to make a version of the Data Frame that is indexed by date
Step18: See what dates we're working with
Step19: Let's examine the distribution of the classifications over time
Step20: I think what we're mostly seeing here is that our scraper was most active during the month of June.
Let's see the same distribution for provider.
Step21: Same, we're seeing that our results are biased towards June.
What about if we check all results mentioning certain people
Step22: Again, seeing more of a trend around our data collection. There is an interesting trend that Trump articles are appearing on way more Tabloid articles than we might expect. Obama is appearing a lot on Right classified site articles, but again this is for June, so might just be an artifact of increased data collection. Finally, we see way more results for "Hillary" than we do "Clinton", and most of those are on Tabloid sites in April.
And let's check out some bucketed headline trends, both largest and smallest overall and for the various classifications.
Step24: Finally, we wanted to see if any headlines had more than one image. Let's check a few.
Step25: Well, that was edifying.
Export the data
Step26: Finally, let's generate a json file where each item is an individual image, and for each image we are listing out all the original sources, dates, headlines, classifications, and final locations for it. | Python Code:
import pandas as pd
from datetime import datetime
import dateutil
import matplotlib.pyplot as plt
from IPython.core.display import display, HTML
import re
from urllib.parse import urlparse
import json
Explanation: Performing Clean-up and Analysis on Native Ad Data Scraped "From Around the Web"
End of explanation
data = pd.read_csv('../data/in/native_ad_data.csv')
data.head()
Explanation: Data Load and Cleaning
End of explanation
data['headline'] = data['headline'].apply(lambda x: re.sub('(?<=[a-z])\.?([A-Z](.*))' , '', x.strip()))
data.head()
Explanation: As a side note, the headlines from zergnet all have some newlines we need to get rid of and they appear to have concatenated the headline with the provider. So let's clean those up.
End of explanation
data['img_file'] = data['img_file'].apply(lambda x: re.sub('\.\/imgs\/' , '', str(x).strip()))
Explanation: OK, that's better.
The img_file column values also have ./imgs/ appended to the front of each file name. Let's get rid of those:
End of explanation
for col in data.columns:
print((col, sum(data[col].isnull())))
Explanation: Now, let's check, do we have any null values?
End of explanation
data.describe()
Explanation: For now only the orig_article column has nulls, as we had not collected those consistently
End of explanation
data['img_host'] = data['img'].apply(lambda x: urlparse(x).netloc)
data['link_host'] = data['final_link'].apply(lambda x: urlparse(x).netloc)
Explanation: Already we can see some interesting trends here. Out of 129399 unique records, only 18022 of the headlines are unique, but 43315 of the links are unique and 23866 of the image files are unique (assuming for sure that there were issues with downloading images).
So it seems already that there are content links which might reuse the same headline, or image for different destination articles.
Also, because we want to inspect the hosts from which the articles and images are coming from, let's parse those out in the data.
Data Preparation
End of explanation
left = ['http://www.politico.com/magazine/', 'https://www.washingtonpost.com/', 'http://www.huffingtonpost.com/', 'http://gothamist.com/news', 'http://www.metro.us/news', 'http://www.politico.com/politics', 'http://www.nydailynews.com/news', 'http://www.thedailybeast.com/']
right = ['http://www.breitbart.com', 'http://www.rt.com', 'https://nypost.com/news/', 'http://www.infowars.com/', 'https://www.therebel.media/news', 'http://observer.com/latest/']
center = ['http://www.ibtimes.com/', 'http://www.businessinsider.com/', 'http://thehill.com']
tabloid = ['http://tmz.com', 'http://www.dailymail.co.uk/', 'https://downtrend.com/', 'http://reductress.com/', 'http://preventionpulse.com/', 'http://elitedaily.com/', 'http://worldstarhiphop.com/videos/']
def get_classification(source):
if source in left:
return 'left'
if source in right:
return 'right'
if source in center:
return 'center'
if source in tabloid:
return 'tabloid'
data['source_class'] = data['source'].apply(lambda x: get_classification(x))
data.head()
Explanation: Next, let's classify each site by a very relaxed set of tags based on perceived political bias. I might be a little off on some; I referenced https://www.allsides.com/ where possible, but that was not entirely helpful in all cases. Otherwise, I just went with my own idea of where I felt a site fell on the political spectrum (e.g., left, right, or center). There is also a tag for tabloids, or primarily sites that probably don't really have an editorial perspective so much as a desire to publish whatever gets the most traffic.
End of explanation
deduped = data.drop_duplicates(subset=['headline', 'link', 'img', 'provider', 'source', 'img_file', 'final_link'], keep=False)
deduped.describe()
Explanation: Now let's remove duplicates based on a subset of the columns using pandas' drop_duplicates for DataFrames
End of explanation
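Note that `keep=False` drops every row that belongs to a duplicated group, not just the extra copies, which is part of why the row count falls so sharply. A tiny illustration:

```python
# keep=False removes *all* members of a duplicated group, not just the extras.
demo = pd.DataFrame({'headline': ['a', 'a', 'b'], 'img': [1, 1, 2]})
print(demo.drop_duplicates(keep='first'))  # keeps one 'a' row and the 'b' row
print(demo.drop_duplicates(keep=False))    # keeps only the 'b' row
```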
for col in deduped.columns:
print((col, sum(deduped[col].isnull())))
Explanation: And let's just check on those null values again...
End of explanation
(43630/129399)*100
Explanation: Out of curiosity, since we're only left with 43,630 records after deduping, let's look at what fraction of our collected records survived.
End of explanation
deduped['headline'].groupby(deduped['img']).value_counts().nlargest(10)
Explanation: Crud, only about 33% of the harvested sample is worth examining further.
Data Exploration
Let's get the top 10 headlines grouped by img
End of explanation
deduped['headline'].value_counts().nlargest(10)
Explanation: But hang on. Let's just see what the top headlines are. There's certainly overlap, but it's not a one-to-one relationship between headlines and their images (or at least it may be the same image served from a different URL).
End of explanation
deduped['source'].value_counts().nlargest(25)
Explanation: Note: one thing we may want to look into is how many different headline/image combinations there are. I am particularly interested in the reuse of images across different headlines.
And how are our sources distributed?
End of explanation
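# Following up on the note above: how many distinct headlines has each image URL been
# paired with? Images reused across many different headlines are the interesting cases.
deduped.groupby('img')['headline'].nunique().sort_values(ascending=False).head(10)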
deduped['source_class'].value_counts()
Explanation: TMZ is a bit over-represented here.
And what about by classification?
End of explanation
deduped.groupby(['provider', 'source_class'])['source'].value_counts()
Explanation: Looks like the over-representation of TMZ is pushing on Tabloids a bit. Not terribly even between left, right, and center, either.
Let's take a look at the sources again, broken down by both provider and our classification.
End of explanation
IMG_MAX=5
topimgs_center = deduped['img'][deduped['source_class'].isin(['center'])].value_counts().nlargest(IMG_MAX).index.tolist()
bottomimgs_center = deduped['img'][deduped['source_class'].isin(['center'])].value_counts().nsmallest(IMG_MAX).index.tolist()
topimgs_left = deduped['img'][deduped['source_class'].isin(['left'])].value_counts().nlargest(IMG_MAX).index.tolist()
bottomimgs_left = deduped['img'][deduped['source_class'].isin(['left'])].value_counts().nsmallest(IMG_MAX).index.tolist()
topimgs_right = deduped['img'][deduped['source_class'].isin(['right'])].value_counts().nlargest(IMG_MAX).index.tolist()
bottomimgs_right = deduped['img'][deduped['source_class'].isin(['right'])].value_counts().nsmallest(IMG_MAX).index.tolist()
topimgs_tabloid = deduped['img'][deduped['source_class'].isin(['tabloid'])].value_counts().nlargest(IMG_MAX).index.tolist()
bottomimgs_tabloid = deduped['img'][deduped['source_class'].isin(['tabloid'])].value_counts().nsmallest(IMG_MAX).index.tolist()
# Display helper: render a list of image URLs inline at a fixed width.
def show_images(urls, width=200):
    for url in urls:
        displaystring = '<img src={} width="{}"/>'.format(url, width)
        display(HTML(displaystring))

for img_list in [topimgs_center, bottomimgs_center,
                 topimgs_left, bottomimgs_left,
                 topimgs_right, bottomimgs_right,
                 topimgs_tabloid, bottomimgs_tabloid]:
    show_images(img_list)
Explanation: OK, so what are the most and least frequent images per classification?
End of explanation
deduped_date_idx = deduped.copy(deep=False)
deduped_date_idx['date'] = pd.to_datetime(deduped_date_idx.date)
deduped_date_idx.set_index('date',inplace=True)
Explanation: Yawn! I have to admit this isn't as interesting as I thought it might be.
Explore over time
Next perhaps let's explore trends over time. First we'll want to make a version of the Data Frame that is indexed by date
End of explanation
"Start: {} - End: {}".format(deduped_date_idx.index.min(), deduped_date_idx.index.max())
Explanation: See what dates we're working with
End of explanation
deduped_date_idx['2017-03-01':'2017-07-07'].groupby('source_class').resample('M').size().plot(kind='bar')
plt.show()
Explanation: Let's examine the distribution of the classifications over time
End of explanation
deduped_date_idx['2017-03-01':'2017-07-07'].groupby(['provider']).resample('M').size().plot(kind='bar')
plt.show()
Explanation: I think what we're mostly seeing here is that our scraper was most active during the month of June.
Let's see the same distribution for provider.
End of explanation
(deduped_date_idx[deduped_date_idx['headline'].str.contains('Trump')]['2017-03-01':'2017-07-07']).groupby('source_class').resample('M').size().plot(title="Headlines Containing 'Trump' By Month and Classification", kind='bar', color="pink")
plt.show()
(deduped_date_idx[deduped_date_idx['headline'].str.contains('Clinton')]['2017-03-01':'2017-07-07']).groupby('source_class').resample('M').size().plot(title="Headlines Containing 'Clinton' By Month and Classification", kind='bar', color="gray")
plt.show()
(deduped_date_idx[deduped_date_idx['headline'].str.contains('Hillary')]['2017-03-01':'2017-07-07']).groupby('source_class').resample('M').size().plot(title="Headlines Containing 'Hillary' By Month and Classification" ,kind='bar', color="gray")
plt.show()
(deduped_date_idx[deduped_date_idx['headline'].str.contains('Obama')]['2017-03-01':'2017-07-07']).groupby('source_class').resample('M').size().plot(title="Headlines Containing 'Obama' By Month and Classification", kind='bar')
plt.show()
Explanation: Same, we're seeing that our results are biased towards June.
What about if we check all results mentioning certain people
End of explanation
(deduped_date_idx['2017-03-27':'2017-07-07'])['headline'].value_counts().nlargest(15)
(deduped_date_idx['2017-03-27':'2017-07-07'])['headline'].value_counts().nsmallest(15)
deduped['headline'][deduped['source_class'].isin(['center'])].value_counts().nlargest(25)
deduped['headline'][deduped['source_class'].isin(['center'])].value_counts().nsmallest(25)
deduped['headline'][deduped['source_class'].isin(['left'])].value_counts().nlargest(25)
deduped['headline'][deduped['source_class'].isin(['left'])].value_counts().nsmallest(25)
deduped['headline'][deduped['source_class'].isin(['right'])].value_counts().nlargest(25)
deduped['headline'][deduped['source_class'].isin(['right'])].value_counts().nsmallest(25)
deduped['headline'][deduped['source_class'].isin(['tabloid'])].value_counts().nlargest(25)
deduped['headline'][deduped['source_class'].isin(['tabloid'])].value_counts().nsmallest(25)
Explanation: Again, we're mostly seeing a trend driven by our data collection. One interesting pattern is that Trump headlines appear on far more tabloid-classified sites than we might expect. Obama appears a lot on right-classified sites, but again this is concentrated in June, so it may just be an artifact of increased data collection. Finally, we see far more results for "Hillary" than for "Clinton", and most of those are on tabloid sites in April.
And let's check out some bucketed headline trends, both largest and smallest overall and for the various classifications.
End of explanation
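# The monthly mention counts above are dominated by how much we scraped in June. A rough
# way to control for that (illustrative only) is to plot the share of each month's
# headlines that mention 'Trump' rather than the raw count.
monthly_total = deduped_date_idx['2017-03-01':'2017-07-07'].resample('M').size()
monthly_trump = deduped_date_idx[deduped_date_idx['headline'].str.contains('Trump')]['2017-03-01':'2017-07-07'].resample('M').size()
(monthly_trump / monthly_total).plot(title="Share of Headlines Mentioning 'Trump' by Month", kind='bar')
plt.show()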
def imgs_from_headlines(headline):
A function to spit out all the different images used for a headline, assuming there's no more than 50/headline
all_images = deduped['img'][deduped['headline'].isin([headline])].value_counts().nlargest(50).index.tolist()
for i in all_images:
displaystring = '<img src={} width="200"/>'.format(i)
display(HTML(displaystring))
imgs_from_headlines("Trump Voters Shocked After Watching This Leaked Video")
imgs_from_headlines("What Tiger Woods' Ex-Wife Looks Like Now Left Us With No Words")
imgs_from_headlines("Nicole Kidman's Yacht Is Far From You'd Expect")
imgs_from_headlines("He Never Mentions His Son, Here's Why")
imgs_from_headlines("Do This Tonight to Make Fungus Disappear by Morning (Try Today)")
Explanation: Finally, we wanted to see if any headlines had more than one image. Let's check a few.
End of explanation
timestamp = datetime.now().strftime('%Y-%m-%d-%H_%M')
datefile = '../data/out/{}_native_ad_data_deduped.csv'.format(timestamp)
deduped.to_csv(datefile, index=False)
Explanation: Well, that was edifying.
Export the data
End of explanation
img_json_data = {}
for index, row in deduped.iterrows():
img_json_data[row['img_file']] = {'url':row['img'],
'dates':[],
'sources':[],
'providers':[],
'classifications':[],
'headlines':[],
'locations':[],
}
print(len(img_json_data.keys()))
for index, row in deduped.iterrows():
record = img_json_data[row['img_file']]
if row['date'] not in record['dates']:
record['dates'].append(row['date'])
if row['headline'] not in record['headlines']:
record['headlines'].append(row['headline'])
if row['provider'] not in record['providers']:
record['providers'].append(row['provider'])
if row['source_class'] not in record['classifications']:
record['classifications'].append(row['source_class'])
if row['source'] not in record['sources']:
record['sources'].append(row['source'])
if row['final_link'] not in record['locations']:
record['locations'].append(row['final_link'])
for i in list(img_json_data.keys())[0:5]:
print(img_json_data[i])
hl_json_data = {}
for index, row in deduped.iterrows():
hl_json_data[row['headline']] = {'img_urls':[],
'dates':[],
'sources':[],
'providers':[],
'classifications':[],
'imgs':[],
'locations':[],
}
print(len(hl_json_data.keys()))
for index, row in deduped.iterrows():
record = hl_json_data[row['headline']]
if row['img'] not in record['img_urls']:
record['img_urls'].append(row['img'])
if row['date'] not in record['dates']:
record['dates'].append(row['date'])
if row['img_file'] not in record['imgs']:
record['imgs'].append(row['img_file'])
if row['provider'] not in record['providers']:
record['providers'].append(row['provider'])
if row['source_class'] not in record['classifications']:
record['classifications'].append(row['source_class'])
if row['source'] not in record['sources']:
record['sources'].append(row['source'])
if row['final_link'] not in record['locations']:
record['locations'].append(row['final_link'])
for i in list(hl_json_data.keys())[0:5]:
print(i, " = " ,hl_json_data[i])
def to_json_file(json_data, prefix):
filename = "../data/out/{}_grouped_data.json".format(prefix)
with open(filename, 'w') as outfile:
json.dump(json_data, outfile, indent=4)
to_json_file(img_json_data, "images")
to_json_file(hl_json_data, "headlines")
Explanation: Finally, let's generate a json file where each item is an individual image, and for each image we are listing out all the original sources, dates, headlines, classifications, and final locations for it.
End of explanation |
4,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
Returns accuracy score for input truth and predictions.
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
Model with no features. Always predicts a passenger did not survive.
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
#print predictions
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Sex')
Explanation: Answer: 61.62%.
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
Model with one feature:
- Predict a passenger survived if they are female.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'male':
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
#print predictions
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: 78.68%.
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'male':
if passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'female'"])
survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age > 10", "Pclass == 3","Parch == 0"])
survival_stats(data, outcomes, 'Pclass', ["Sex == 'female'"])
# females from classes one and two will survive
survival_stats(data, outcomes, 'Parch', ["Sex == 'female'", "Pclass == 3"])
# in third class, if Parch equals 0 you are more likely to survive
survival_stats(data, outcomes, 'Fare', ["Sex == 'female'", "Pclass == 3", "Parch != 0"])
# Fare less than 20 will survive
Explanation: Answer: 79.35%.
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
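# One more exploratory cut worth checking (illustrative only): does family size matter for
# third-class females? SibSp counts siblings and spouses aboard.
survival_stats(data, outcomes, 'SibSp', ["Sex == 'female'", "Pclass == 3"])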
def predictions_3(data):
Model with multiple features. Makes a prediction with an accuracy of at least 80%.
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == 'male':
if passenger['Age'] < 10:
predictions.append(1)
elif passenger['Pclass'] == 1 and passenger['Age'] < 40 and passenger['Age'] >20:
predictions.append(1)
elif passenger['Pclass'] == 3 and passenger['Parch'] == 1 and passenger['Age'] < 30 and passenger['Age'] >20:
predictions.append(1)
else:
predictions.append(0)
else:
if passenger['Pclass'] == 3:
if passenger['Age'] > 40 and passenger['Age'] < 60:
predictions.append(0)
elif passenger['Parch'] == 0:
predictions.append(1)
else:
if passenger['Fare'] < 20:
predictions.append(1)
else:
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation |
4,282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using sklearn's Iris Dataset with neon
Tony Reina<br>
28 JUNE 2017
Here's an example of how we can load one of the standard sklearn datasets into a neon model. We'll be using the iris dataset, a classification model which tries to predict the type of iris flower species (Setosa, Versicolour, and Virginica) based on 4 continuous parameters
Step1: Use sklearn to split the data into training and testing sets
Step2: Make sure that the features are scaled to mean of 0 and standard deviation of 1
This is standard pre-processing for multi-layered perceptron inputs.
Step3: Generate a backend for neon to use
This sets up either our GPU or CPU connection to neon. If we don't start with this, then ArrayIterator won't execute.
We're asking neon to use the cpu, but can change that to a gpu if it is available. Batch size refers to how many data points are taken at a time. Here's a primer on Gradient Descent.
Technical note
Step4: Let's pass the data to neon
We pass our data (both features and labels) into neon's ArrayIterator class. By default, ArrayIterator one-hot encodes the labels (which saves us a step). Once we get our ArrayIterators, then we can pass them directly into neon models.
Step5: Import the neon libraries we need for this MLP
Step6: Initialize the weights and bias variables
We could use numbers from the Gaussian distribution ($\mu=0, \sigma=0.3$) to initialize the weights and bias terms for our regression model. However, we can also use other initializations like GlorotUniform.
Step7: Define a multi-layered perceptron (MLP) model
We just use a simple Python list to add our different layers to the model. The nice thing is that we've already put our data into a neon ArrayIterator. That means the model will automatically know how to handle the input layer.
In this model, the input layer feeds into a 5-neuron rectified linear unit affine layer. That feeds into an 8 neuron hyperbolic tangent layer (with 50% dropout). Finally, that outputs to a softmax of the nClasses. We'll predict based on the argmax of the softmax layer.
I've just thrown together a model haphazardly. There is no reason the model has to be like this. In fact, I would suggest playing with adding different layers, different # of neurons, and different activation functions to see if you can get a better model. What's nice about neon is that we can easily alter the model architecture without much change to our code.
Step8: Cost function
How "close" is the model's prediction is to the true value? For the case of multi-class prediction we typically use Cross Entropy.
Step9: Gradient descent
All of our models will use gradient descent. We will iteratively update the model weights and biases in order to minimize the cost of the model.
There are many optimizing algorithms we can use for gradient descent. Here we'll use Adam.
Step10: Callbacks
Callbacks allow us to run custom code at certain points during the training. For example, in the code below we want to find out how well the model is performing against the testing data after every 2 callbacks of training. If the cross entropy error goes up, then we stop the training early. Otherwise, we might be overfitting the model to the training set.
I've added a patience parameter to the early stopping. If the model's performance has not improved after a certain number of callbacks, then we will stop training early.
Step11: Run the model
This starts gradient descent. The number of epochs is how many times we want to perform gradient descent on our entire training dataset. So 100 epochs means that we repeat gradient descent on our data 100 times in a row.
Step12: Run the model on the testing data
Let's run the model on the testing data and get the predictions. We can then compare those predictions with the true values to see how well our model has performed.
Step13: Save the model
Let's save the model and the parameters.
Step14: Here's the text description of the model.
You could use this to draw a graph of the network. | Python Code:
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data
Y = iris.target
nClasses = len(iris.target_names) # Setosa, Versicolour, and Virginica iris species
Explanation: Using sklearn's Iris Dataset with neon
Tony Reina
28 JUNE 2017
Here's an example of how we can load one of the standard sklearn datasets into a neon model. We'll be using the iris dataset, a classification problem in which we try to predict the iris species (Setosa, Versicolour, or Virginica) from 4 continuous features: Sepal Length, Sepal Width, Petal Length and Petal Width. It is based on Ronald Fisher's 1936 paper describing Linear Discriminant Analysis. The dataset is now considered one of the gold standards for benchmarking a new classification method.
In this notebook, we'll walk through loading the data from sklearn into neon's ArrayIterator class and then passing that to a simple multi-layer perceptron model. We should get a misclassification rate of 2% to 8%.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Load the iris dataset from sklearn
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33) # 66% training, 33% testing
Explanation: Use sklearn to split the data into training and testing sets
End of explanation
from sklearn.preprocessing import StandardScaler
scl = StandardScaler()
X_train = scl.fit_transform(X_train)
X_test = scl.transform(X_test)
Explanation: Make sure that the features are scaled to mean of 0 and standard deviation of 1
This is standard pre-processing for multi-layered perceptron inputs.
End of explanation
from neon.data import ArrayIterator
from neon.backends import gen_backend
be = gen_backend(backend='cpu', batch_size=X_train.shape[0]//10) # Change to 'gpu' if you have gpu support
Explanation: Generate a backend for neon to use
This sets up either our GPU or CPU connection to neon. If we don't start with this, then ArrayIterator won't execute.
We're asking neon to use the cpu, but can change that to a gpu if it is available. Batch size refers to how many data points are taken at a time. Here's a primer on Gradient Descent.
Technical note: Your batch size must always be much less than the number of points in your data. So if you have 50 points, then set your batch size to something much less than 50. I'd suggest setting the batch size to no more than 10% of the number of data points. You can always just set your batch size to 1. In that case, you are no longer performing mini-batch gradient descent, but are performing the standard stochastic gradient descent.
End of explanation
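# With roughly 100 training rows after the 66/33 split, X_train.shape[0]//10 works out to
# about 10 samples per mini-batch, comfortably below the total as recommended above.
print('batch size used: {}'.format(X_train.shape[0]//10))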
training_data = ArrayIterator(X=X_train, y=y_train, nclass=nClasses, make_onehot=True)
testing_data = ArrayIterator(X=X_test, y=y_test, nclass=nClasses, make_onehot=True)
print ('I am using this backend: {}'.format(be))
Explanation: Let's pass the data to neon
We pass our data (both features and labels) into neon's ArrayIterator class. By default, ArrayIterator one-hot encodes the labels (which saves us a step). Once we get our ArrayIterators, then we can pass them directly into neon models.
End of explanation
from neon.initializers import GlorotUniform, Gaussian
from neon.layers import GeneralizedCost, Affine, Dropout
from neon.models import Model
from neon.optimizers import GradientDescentMomentum, Adam
from neon.transforms import Softmax, CrossEntropyMulti, Rectlin, Tanh
from neon.callbacks.callbacks import Callbacks, EarlyStopCallback
from neon.transforms import Misclassification
Explanation: Import the neon libraries we need for this MLP
End of explanation
init = GlorotUniform() #Gaussian(loc=0, scale=0.3)
Explanation: Initialize the weights and bias variables
We could use numbers from the Gaussian distribution ($\mu=0, \sigma=0.3$) to initialize the weights and bias terms for our regression model. However, we can also use other initializations like GlorotUniform.
End of explanation
layers = [
Affine(nout=5, init=init, bias=init, activation=Rectlin()), # Affine layer with 5 neurons (ReLU activation)
Affine(nout=8, init=init, bias=init, activation=Tanh()), # Affine layer with 8 neurons (Tanh activation)
Dropout(0.5), # Dropout layer
Affine(nout=nClasses, init=init, bias=init, activation=Softmax()) # Affine layer with softmax
]
mlp = Model(layers=layers)
Explanation: Define a multi-layered perceptron (MLP) model
We just use a simple Python list to add our different layers to the model. The nice thing is that we've already put our data into a neon ArrayIterator. That means the model will automatically know how to handle the input layer.
In this model, the input layer feeds into a 5-neuron rectified linear unit affine layer. That feeds into an 8 neuron hyperbolic tangent layer (with 50% dropout). Finally, that outputs to a softmax of the nClasses. We'll predict based on the argmax of the softmax layer.
I've just thrown together a model haphazardly. There is no reason the model has to be like this. In fact, I would suggest playing with adding different layers, different # of neurons, and different activation functions to see if you can get a better model. What's nice about neon is that we can easily alter the model architecture without much change to our code.
End of explanation
cost = GeneralizedCost(costfunc=CrossEntropyMulti())
Explanation: Cost function
How "close" is the model's prediction is to the true value? For the case of multi-class prediction we typically use Cross Entropy.
End of explanation
#optimizer = GradientDescentMomentum(0.1, momentum_coef=0.2)
optimizer = Adam(learning_rate=0.1, beta_1=0.9, beta_2=0.999)
Explanation: Gradient descent
All of our models will use gradient descent. We will iteratively update the model weights and biases in order to minimize the cost of the model.
There are many optimizing algorithms we can use for gradient descent. Here we'll use Adam.
End of explanation
# define stopping function
# it takes as input a tuple (State,val[t])
# which describes the cumulative validation state (generated by this function)
# and the validation error at time t
# and returns as output a tuple (State', Bool),
# which represents the new state and whether to stop
def stop_func(s, v):
patience = 4 # If model performance has not improved in this many callbacks, then early stop.
if s is None:
return ([v], False)
if (all(v < i for i in s)): # Check whether this value is smaller than every value in the history
history = [v] # New value is smaller so let's reset the history
print('Model improved performance: {}'.format(v))
else:
history = s + [v] # New value is not smaller, so let's add to current history
print('Model has not improved in {} callbacks.'.format(len(history)-1))
if len(history) > patience: # If our history is greater than the patience, then early terminate.
stop = True
print('Stopping training early.')
else:
stop = False # Otherwise, keep training.
return (history, stop)
# The model trains on the training set, but every 2 epochs we calculate
# its performance against the testing set. If the performance increases, then
# we want to stop early because we are overfitting our model.
callbacks = Callbacks(mlp, eval_set=testing_data, eval_freq=2) # Run the callback every 2 epochs
callbacks.add_callback(EarlyStopCallback(stop_func)) # Add our early stopping function call
Explanation: Callbacks
Callbacks allow us to run custom code at certain points during the training. For example, in the code below we want to find out how well the model is performing against the testing data after every 2 callbacks of training. If the cross entropy error goes up, then we stop the training early. Otherwise, we might be overfitting the model to the training set.
I've added a patience parameter to the early stopping. If the model's performance has not improved after a certain number of callbacks, then we will stop training early.
End of explanation
mlp.fit(training_data, optimizer=optimizer, num_epochs=100, cost=cost, callbacks=callbacks)
Explanation: Run the model
This starts gradient descent. The number of epochs is how many times we want to perform gradient descent on our entire training dataset. So 100 epochs means that we repeat gradient descent on our data 100 times in a row.
End of explanation
results = mlp.get_outputs(testing_data)
prediction = results.argmax(1)
error_pct = 100 * mlp.eval(testing_data, metric=Misclassification())[0]
print ('The model misclassified {:.2f}% of the test data.'.format(error_pct))
Explanation: Run the model on the testing data
Let's run the model on the testing data and get the predictions. We can then compare those predictions with the true values to see how well our model has performed.
End of explanation
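# Optional extra check (scikit-learn, not neon): a confusion matrix comparing the argmax
# predictions against the true test labels, class by class.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, prediction))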
mlp.save_params('iris_model.prm')
Explanation: Save the model
Let's save the model and the parameters.
End of explanation
mlp.get_description()['model']
Explanation: Here's the text description of the model.
You could use this to draw a graph of the network.
End of explanation |
4,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Addition Similarity
Step1: Question
Step2: Top 10 most similar additions
Step3: 10 Least Similar additions
Step4: Similarity of a specific combo
Step5: But is that good or bad? How does it compare to others? | Python Code:
# Import libraries
import numpy as np
import pandas as pd
# Import the data
import WTBLoad
wtb = WTBLoad.load()
Explanation: Addition Similarity
End of explanation
import math
# Square the difference of each row, and then return the mean of the column.
# This is the average difference between the two.
# It will be higher if they are different, and lower if they are similar
def similarity(additionA, additionB):
diff = np.square(wtb[additionA] - wtb[additionB])
return diff.mean()
res = []
# Loop through each addition pair
for additionA in wtb.columns:
for additionB in wtb.columns:
# Skip if additionA and combo B are the same.
# To prevent duplicates, skip if A is after B alphabetically
if additionA != additionB and additionA < additionB:
res.append([additionA, additionB, similarity(additionA, additionB)])
df = pd.DataFrame(res, columns=["additionA", "additionB", "similarity"])
Explanation: Question: I want to know how similar 2 additions are. For instance, I'm thinking of brewing a beer with plums and vanilla, and I want to know how similar they are.
How to get there: The dataset shows the percentage of votes that said a style-addition combo would likely taste good. So, we can compare the votes on each style for the two additions, and see how similar they are.
End of explanation
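# A tiny worked example of the measure above with made-up vote fractions for three styles:
# columns that agree closely score near 0, columns that disagree score higher.
a = pd.Series([0.9, 0.2, 0.5])
b = pd.Series([0.8, 0.3, 0.5])
c = pd.Series([0.1, 0.9, 0.5])
print(np.square(a - b).mean())  # similar pair -> small score
print(np.square(a - c).mean())  # dissimilar pair -> larger score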
df.sort_values("similarity").head(10)
Explanation: Top 10 most similar additions
End of explanation
df.sort_values("similarity", ascending=False).head(10)
Explanation: 10 Least Similar additions
End of explanation
def comboSimilarity(additionA, additionB):
# additionA needs to be before additionB alphabetically
if additionA > additionB:
addition_temp = additionA
additionA = additionB
additionB = addition_temp
return df.loc[df['additionA'] == additionA].loc[df['additionB'] == additionB]
comboSimilarity('plum', 'vanilla')
Explanation: Similarity of a specific combo
End of explanation
df.describe()
Explanation: But is that good or bad? How does it compare to others?
End of explanation |
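# One way to answer that (assuming the plum/vanilla pair exists in df): its standing among
# all pairwise scores, where a lower score means a more similar pair.
score = comboSimilarity('plum', 'vanilla')['similarity'].iloc[0]
print('{:.1f}% of pairs are more similar than plum/vanilla'.format((df['similarity'] < score).mean() * 100))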
4,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 6.2 - Using a pre-trained model with Keras
In this section of the lab, we will load the model we trained in the previous section, along with the training data and mapping dictionaries, and use it to generate longer sequences of text.
Let's start by importing the libraries we will be using
Step1: Next, we will import the data we saved previously using the pickle library.
Step2: Now we need to define the Keras model. Since we will be loading parameters from a pre-trained model, this needs to match exactly the definition from the previous lab section. The only difference is that we will comment out the dropout layer so that the model uses all the hidden neurons when doing the predictions.
Step3: Next we will load the parameters from the model we trained previously, and compile it with the same loss and optimizer function.
Step4: We also need to rewrite the sample() and generate() helper functions so that we can use them in our code
Step5: Now we can use the generate() function to generate text of any length based on our imported pre-trained model and a seed text of our choice. For best result, the length of the seed text should be the same as the length of training sequences (100 in the previous lab section).
In this case, we will test the overfitting of the model by supplying it two seeds | Python Code:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
import sys
import re
import pickle
Explanation: Lab 6.2 - Using a pre-trained model with Keras
In this section of the lab, we will load the model we trained in the previous section, along with the training data and mapping dictionaries, and use it to generate longer sequences of text.
Let's start by importing the libraries we will be using:
End of explanation
pickle_file = '-basic_data.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
X = save['X']
y = save['y']
char_to_int = save['char_to_int']
int_to_char = save['int_to_char']
del save # hint to help gc free up memory
print('Training set', X.shape, y.shape)
Explanation: Next, we will import the data we saved previously using the pickle library.
End of explanation
# define the LSTM model
model = Sequential()
model.add(LSTM(128, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))
# model.add(Dropout(0.50))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
Explanation: Now we need to define the Keras model. Since we will be loading parameters from a pre-trained model, this needs to match exactly the definition from the previous lab section. The only difference is that we will comment out the dropout layer so that the model uses all the hidden neurons when doing the predictions.
End of explanation
# load the parameters from the pretrained model
filename = "-basic_LSTM.hdf5"
model.load_weights(filename)
model.compile(loss='categorical_crossentropy', optimizer='adam')
Explanation: Next we will load the parameters from the model we trained previously, and compile it with the same loss and optimizer function.
End of explanation
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
def generate(sentence, sample_length=50, diversity=0.35):
generated = sentence
sys.stdout.write(generated)
for i in range(sample_length):
x = np.zeros((1, X.shape[1], X.shape[2]))
for t, char in enumerate(sentence):
x[0, t, char_to_int[char]] = 1.
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = int_to_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print
Explanation: We also need to rewrite the sample() and generate() helper functions so that we can use them in our code:
End of explanation
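# A small illustration (made-up probabilities) of what the temperature argument to sample()
# does: low temperature sharpens the distribution, high temperature flattens it.
probs = np.asarray([0.6, 0.3, 0.1]).astype('float64')
for t in [0.2, 1.0, 2.0]:
    scaled = np.exp(np.log(probs) / t)
    print t, scaled / np.sum(scaled)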
prediction_length = 500
seed_from_text = "america has shown that progress is possible. last year, income gains were larger for households at t"
seed_original = "and as people around the world began to hear the tale of the lowly colonists who overthrew an empire"
for seed in [seed_from_text, seed_original]:
generate(seed, prediction_length, .50)
print "-" * 20
Explanation: Now we can use the generate() function to generate text of any length based on our imported pre-trained model and a seed text of our choice. For best result, the length of the seed text should be the same as the length of training sequences (100 in the previous lab section).
In this case, we will test the overfitting of the model by supplying it two seeds:
one which comes verbatim from the training text, and
one which comes from another earlier speech by Obama
If the model has not overfit our training data, we should expect it to produce reasonable results for both seeds. If it has overfit, it might produce pretty good results for something coming directly from the training set, but perform poorly on a new seed. This means that it has learned to replicate our training text, but cannot generalize to produce text based on other inputs. Since the original article was very short, however, the entire vocabulary of the model might be very limited, which is why as input we use a part of another speech given by Obama, instead of completely random text.
Since we have not trained the model for that long, we will also use a lower temperature to get the model to generate more accurate if less diverse results. Try running the code a few times with different temperature settings to generate different results.
End of explanation |
4,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing argmax in Python
Which fruit is the most frequent in this basket?
Step1: Returns a tuple, let's get its first element.
Step2: Most common element
Which item appears most times in this list?
Step3: Second solution
Step4: Highest element's index
Which coordinate is the highest in this vector? | Python Code:
basket = [("apple", 12), ("pear", 3), ("plum", 14)]
max(basket, key=lambda pair: pair[1])
Explanation: Computing argmax in Python
Which fruit is the most frequent in this basket?
End of explanation
max(basket, key=lambda pair: pair[1])[0]
Explanation: Returns a tuple, let's get its first element.
End of explanation
basket = ["apple", "apple", "plum", "pear", "plum", "plum"]
max((basket.count(fruit), fruit) for fruit in set(basket))[1]
Explanation: Most common element
Which item appears most times in this list?
End of explanation
max(set(basket), key=basket.count)
Explanation: Second solution
End of explanation
vec = [2.3, -1, 0, 3.4, 1]
max((v, i) for (i, v) in enumerate(vec))[1]
Explanation: Highest element's index
Which coordinate is the highest in this vector?
End of explanation |
4,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Modular neural nets
In the previous exercise, we computed the loss and gradient for a two-layer neural network in a single monolithic function. This isn't very difficult for a small two-layer network, but would be tedious and error-prone for larger networks. Ideally we want to build networks using a more modular design so that we can snap together different types of layers and loss functions in order to quickly experiment with different architectures.
In this exercise we will implement this approach, and develop a number of different layer types in isolation that can then be easily plugged together. For each layer we will implement forward and backward functions. The forward function will receive data, weights, and other parameters, and will return both an output and a cache object that stores data needed for the backward pass. The backward function will recieve upstream derivatives and the cache object, and will return gradients with respect to the data and all of the weights. This will allow us to write code that looks like this
Step2: Affine layer
Step3: Affine layer
Step4: ReLU layer
Step5: ReLU layer
Step6: Loss layers
Step7: Convolution layer
Step9: Aside
Step10: Convolution layer
Step11: Max pooling layer
Step12: Max pooling layer
Step13: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory
Step14: Sandwich layers
There are a couple common layer "sandwiches" that frequently appear in ConvNets. For example convolutional layers are frequently followed by ReLU and pooling, and affine layers are frequently followed by ReLU. To make it more convenient to use these common patterns, we have defined several convenience layers in the file cs231n/layer_utils.py. Lets grad-check them to make sure that they work correctly | Python Code:
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
returns relative error
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Modular neural nets
In the previous exercise, we computed the loss and gradient for a two-layer neural network in a single monolithic function. This isn't very difficult for a small two-layer network, but would be tedious and error-prone for larger networks. Ideally we want to build networks using a more modular design so that we can snap together different types of layers and loss functions in order to quickly experiment with different architectures.
In this exercise we will implement this approach, and develop a number of different layer types in isolation that can then be easily plugged together. For each layer we will implement forward and backward functions. The forward function will receive data, weights, and other parameters, and will return both an output and a cache object that stores data needed for the backward pass. The backward function will receive upstream derivatives and the cache object, and will return gradients with respect to the data and all of the weights. This will allow us to write code that looks like this:
```python
def two_layer_net(X, W1, b1, W2, b2, reg):
# Forward pass; compute scores
s1, fc1_cache = affine_forward(X, W1, b1)
a1, relu_cache = relu_forward(s1)
scores, fc2_cache = affine_forward(a1, W2, b2)
# Loss functions return data loss and gradients on scores
data_loss, dscores = svm_loss(scores, y)
# Compute backward pass
da1, dW2, db2 = affine_backward(dscores, fc2_cache)
ds1 = relu_backward(da1, relu_cache)
dX, dW1, db1 = affine_backward(ds1, fc1_cache)
# A real network would add regularization here
# Return loss and gradients
return loss, dW1, db1, dW2, db2
```
End of explanation
# Test the affine_forward function
num_inputs = 2
input_shape = (4, 5, 6)
output_dim = 3
input_size = num_inputs * np.prod(input_shape)
weight_size = output_dim * np.prod(input_shape)
x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape)
w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim)
b = np.linspace(-0.3, 0.1, num=output_dim)
out, _ = affine_forward(x, w, b)
correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297],
[ 3.25553199, 3.5141327, 3.77273342]])
# Compare your output with ours. The error should be around 1e-9.
print('Testing affine_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: Affine layer: forward
Open the file cs231n/layers.py and implement the affine_forward function.
Once you are done you can test your implementation by running the following:
End of explanation
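# For reference, a minimal sketch of the idea behind affine_forward (not necessarily the
# exact graded solution): flatten each example to a row vector, then one matrix multiply
# plus the bias, caching the inputs for the backward pass.
def affine_forward_sketch(x, w, b):
    out = x.reshape(x.shape[0], -1).dot(w) + b
    cache = (x, w, b)
    return out, cache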
# Test the affine_backward function
x = np.random.randn(10, 2, 3)
w = np.random.randn(6, 5)
b = np.random.randn(5)
dout = np.random.randn(10, 5)
dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout)
_, cache = affine_forward(x, w, b)
dx, dw, db = affine_backward(dout, cache)
# The error should be less than 1e-10
print('Testing affine_backward function:')
print('dx error:', rel_error(dx_num, dx))
print('dw error:', rel_error(dw_num, dw))
print('db error:', rel_error(db_num, db))
Explanation: Affine layer: backward
Now implement the affine_backward function. You can test your implementation using numeric gradient checking.
End of explanation
# Test the relu_forward function
x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4)
out, _ = relu_forward(x)
correct_out = np.array([[ 0., 0., 0., 0., ],
[ 0., 0., 0.04545455, 0.13636364,],
[ 0.22727273, 0.31818182, 0.40909091, 0.5, ]])
# Compare your output with ours. The error should be around 1e-8
print('Testing relu_forward function:')
print('difference: ', rel_error(out, correct_out))
Explanation: ReLU layer: forward
Implement the relu_forward function and test your implementation by running the following:
End of explanation
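# Likewise, a minimal sketch of the ReLU forward pass: an elementwise max with zero,
# caching x so the backward pass can zero out gradients where the input was negative.
def relu_forward_sketch(x):
    out = np.maximum(0, x)
    cache = x
    return out, cache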
x = np.random.randn(10, 10)
dout = np.random.randn(*x.shape)
dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout)
_, cache = relu_forward(x)
dx = relu_backward(dout, cache)
# The error should be around 1e-12
print('Testing relu_backward function:')
print('dx error: ', rel_error(dx_num, dx))
Explanation: ReLU layer: backward
Implement the relu_backward function and test your implementation using numeric gradient checking:
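A sketch of the corresponding backward pass: the gradient only flows through positions where the input was positive.

```python
def relu_backward(dout, cache):
    x = cache
    dx = dout * (x > 0)
    return dx
```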
End of explanation
num_classes, num_inputs = 10, 50
x = 0.001 * np.random.randn(num_inputs, num_classes)
y = np.random.randint(num_classes, size=num_inputs)
dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False)
loss, dx = svm_loss(x, y)
# Test svm_loss function. Loss should be around 9 and dx error should be 1e-9
print('Testing svm_loss:')
print('loss:', loss)
print('dx error:', rel_error(dx_num, dx))
dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False)
loss, dx = softmax_loss(x, y)
# Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8
print('\nTesting softmax_loss:')
print('loss: ', loss)
print('dx error: ', rel_error(dx_num, dx))
Explanation: Loss layers: Softmax and SVM
You implemented these loss functions in the last assignment, so we'll give them to you for free here. It's still a good idea to test them to make sure they work correctly.
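For reference, a numerically stable softmax loss is commonly written roughly like this (a sketch, not necessarily the exact code shipped in cs231n/layers.py):

```python
def softmax_loss(x, y):
    # Shift by the row-wise max before exponentiating for numerical stability.
    shifted = x - x.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    N = x.shape[0]
    loss = -np.log(probs[np.arange(N), y]).mean()
    dx = probs.copy()
    dx[np.arange(N), y] -= 1
    dx /= N
    return loss, dx
```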
End of explanation
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]]])
# Compare your output to ours; difference should be around 1e-8
print('Testing conv_forward_naive')
print('difference:', rel_error(out, correct_out))
Explanation: Convolution layer: forward naive
We are now ready to implement the forward pass for a convolutional layer. Implement the function conv_forward_naive in the file cs231n/layers.py.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
You can test your implementation by running the following:
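For orientation, a naive loop-based forward pass can look like the sketch below (an illustration that assumes the usual (N, C, H, W) layout and symmetric zero padding; your own version may be organized differently):

```python
def conv_forward_naive(x, w, b, conv_param):
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                      # for every image
        for f in range(F):                  # for every filter
            for i in range(H_out):          # slide vertically
                for j in range(W_out):      # slide horizontally
                    hs, ws = i * stride, j * stride
                    window = x_pad[n, :, hs:hs + HH, ws:ws + WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    cache = (x, w, b, conv_param)
    return out, cache
```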
End of explanation
from scipy.misc import imread, imresize
kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]  # integer division so the slice indices are ints under Python 3
img_size = 200 # Make this smaller if it runs too slow
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1))
x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1))
# Set up a convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_noax(img, normalize=True):
Tiny helper to show images as uint8 and remove axis labels
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_noax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_noax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_noax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_noax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_noax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_noax(out[1, 1])
plt.show()
Explanation: Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
End of explanation
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around 1e-9
print('Testing conv_backward_naive function')
print('dx error: ', rel_error(dx, dx_num))
print('dw error: ', rel_error(dw, dw_num))
print('db error: ', rel_error(db, db_num))
Explanation: Convolution layer: backward naive
Next you need to implement the function conv_backward_naive in the file cs231n/layers.py. As usual, we will check your implementation with numeric gradient checking.
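A sketch of the matching backward pass (it assumes the forward pass cached (x, w, b, conv_param), as in the forward sketch above):

```python
def conv_backward_naive(dout, cache):
    x, w, b, conv_param = cache
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    _, _, H_out, W_out = dout.shape
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode='constant')
    dx_pad = np.zeros_like(x_pad)
    dw = np.zeros_like(w)
    db = dout.sum(axis=(0, 2, 3))           # bias gradient: sum over N, H', W'
    for n in range(N):
        for f in range(F):
            for i in range(H_out):
                for j in range(W_out):
                    hs, ws = i * stride, j * stride
                    window = x_pad[n, :, hs:hs + HH, ws:ws + WW]
                    dw[f] += window * dout[n, f, i, j]
                    dx_pad[n, :, hs:hs + HH, ws:ws + WW] += w[f] * dout[n, f, i, j]
    dx = dx_pad[:, :, pad:pad + H, pad:pad + W]   # strip the padding again
    return dx, dw, db
```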
End of explanation
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be around 1e-8.
print('Testing max_pool_forward_naive function:')
print('difference: ', rel_error(out, correct_out))
Explanation: Max pooling layer: forward naive
The last layer we need for a basic convolutional neural network is the max pooling layer. First implement the forward pass in the function max_pool_forward_naive in the file cs231n/layers.py.
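A naive forward pass can be as simple as the following sketch (no padding; the pooling window and stride come from pool_param):

```python
def max_pool_forward_naive(x, pool_param):
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H, W = x.shape
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            hs, ws = i * stride, j * stride
            out[:, :, i, j] = x[:, :, hs:hs + ph, ws:ws + pw].max(axis=(2, 3))
    cache = (x, pool_param)
    return out, cache
```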
End of explanation
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be around 1e-12
print('Testing max_pool_backward_naive function:')
print('dx error: ', rel_error(dx, dx_num))
Explanation: Max pooling layer: backward naive
Implement the backward pass for a max pooling layer in the function max_pool_backward_naive in the file cs231n/layers.py. As always we check the correctness of the backward pass using numerical gradient checking.
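A sketch of the backward pass (assuming the cache is (x, pool_param)): each upstream gradient is routed to the position that held the maximum inside its pooling window.

```python
def max_pool_backward_naive(dout, cache):
    x, pool_param = cache
    ph, pw = pool_param['pool_height'], pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H_out, W_out = dout.shape
    dx = np.zeros_like(x)
    for n in range(N):
        for c in range(C):
            for i in range(H_out):
                for j in range(W_out):
                    hs, ws = i * stride, j * stride
                    window = x[n, c, hs:hs + ph, ws:ws + pw]
                    mask = (window == window.max())   # ties would share the gradient
                    dx[n, c, hs:hs + ph, ws:ws + pw] += mask * dout[n, c, i, j]
    return dx
```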
End of explanation
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print('Testing conv_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting conv_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
print('dw difference: ', rel_error(dw_naive, dw_fast))
print('db difference: ', rel_error(db_naive, db_fast))
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print('Testing pool_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting pool_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
Explanation: Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory:
bash
python setup.py build_ext --inplace
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
End of explanation
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
print('Testing conv_relu_pool_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
print('Testing conv_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward
x = np.random.randn(2, 3, 4)
w = np.random.randn(12, 10)
b = np.random.randn(10)
dout = np.random.randn(2, 10)
out, cache = affine_relu_forward(x, w, b)
dx, dw, db = affine_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout)
print('Testing affine_relu_forward:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
Explanation: Sandwich layers
There are a couple of common layer "sandwiches" that frequently appear in ConvNets. For example, convolutional layers are frequently followed by ReLU and pooling, and affine layers are frequently followed by ReLU. To make it more convenient to use these common patterns, we have defined several convenience layers in the file cs231n/layer_utils.py. Let's grad-check them to make sure that they work correctly:
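To give a sense of what these wrappers look like, the affine + ReLU pair is essentially just a composition of the primitives from earlier (a sketch; the shipped layer_utils.py may differ in small details):

```python
def affine_relu_forward(x, w, b):
    a, fc_cache = affine_forward(x, w, b)
    out, relu_cache = relu_forward(a)
    return out, (fc_cache, relu_cache)

def affine_relu_backward(dout, cache):
    fc_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)
    return affine_backward(da, fc_cache)
```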
End of explanation |
4,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dimensionality of the inputs to the filter
One of the main strengths of PyMC3 is its dependence on Theano. Theano allows us to compute arithmetic operations on arbitrary tensors. This might not sound very impressive, but in the process
Step1: Vectorial observation + vectorial state
Step2: Scalar observation + vectorial state
Step3: Scalar observation + scalar state | Python Code:
import numpy as np
import theano
import theano.tensor as tt
import kalman
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
%matplotlib inline
# True values
T = 500 # Time steps
sigma2_eps0 = 3 # Variance of the observation noise
sigma2_eta0 = 10 # Variance in the update of the mean
# Simulate data
np.random.seed(12345)
eps = np.random.normal(scale=sigma2_eps0**0.5, size=T)
eta = np.random.normal(scale=sigma2_eta0**0.5, size=T)
mu = np.cumsum(eta)
y = mu + eps
# Plot the time series
fig, ax = plt.subplots(figsize=(13,2))
ax.fill_between(np.arange(T), 0, y, facecolor=(0.7,0.7,1), edgecolor=(0,0,1))
ax.set(xlabel='$T$', title='Simulated series');
Explanation: Dimensionality of the inputs to the filter
One of the main strengths of PyMC3 is its dependence on Theano. Theano allows us to compute arithmetic operations on arbitrary tensors. This might not sound very impressive, but in the process (see the small illustration after the list below):
It can apply the chain rule to calculate the gradient of a scalar function with respect to the unknown parameters
Elementwise operations on tensors can be extended to any number of dimensions
Smart optimizations on expressions are applied before compiling, reducing the computing time
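As a tiny illustration of the first point (a toy example unrelated to the Kalman filter code below), Theano can differentiate a symbolic expression for us and compile it to a fast function:

```python
import theano
import theano.tensor as tt

x = tt.dvector('x')
cost = tt.sum(x ** 2)           # a scalar function of a tensor
grad = theano.grad(cost, x)     # symbolic gradient, obtained via the chain rule
f = theano.function([x], [cost, grad])
print(f([1.0, 2.0, 3.0]))       # cost = 14.0, gradient = [2., 4., 6.]
```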
Here, we will apply the Kalman filter to scalar observations and/or scalar state spaces. This will result in a noticeable speed improvement with respect to the general vector-vector case.
We will use the same example as in the previous notebook:
End of explanation
# Measurement equation
Z, d, H = tt.dmatrix(name='Z'), tt.dvector(name='d'), tt.dmatrix(name='H')
# Transition equation
T, c, R, Q = tt.dmatrix(name='T'), tt.dvector(name='c'), \
tt.dmatrix(name='R'), tt.dmatrix(name='Q')
# Tensors for the initial state mean and uncertainty
a0, P0 = tt.dvector(name='a0'), tt.dmatrix(name='P0')
# Values for the actual calculation
args = dict(Z = np.array([[1.]]), d = np.array([0.]), H = np.array([[3.]]),
T = np.array([[1.]]), c = np.array([0.]), R = np.array([[1.]]),
Q = np.array([[10.]]),
a0 = np.array([0.]), P0 = np.array([[1e6]]))
# Create function to calculate log-likelihood
kalmanTheano = kalman.KalmanTheano(Z, d, H, T, c, R, Q, a0, P0)
(_,_,lliks),_ = kalmanTheano.filter(y[:,None])
f = theano.function([Z, d, H, T, c, R, Q, a0, P0], lliks[1:].sum())
# Evaluate
%timeit f(**args)
print('Log-likelihood:', f(**args))
Explanation: Vectorial observation + vectorial state
End of explanation
# Measurement equation
Z, d, H = tt.dvector(name='Z'), tt.dscalar(name='d'), tt.dscalar(name='H')
# Transition equation
T, c, R, Q = tt.dmatrix(name='T'), tt.dvector(name='c'), \
tt.dmatrix(name='R'), tt.dmatrix(name='Q')
# Tensors for the initial state mean and uncertainty
a0, P0 = tt.dvector(name='a0'), tt.dmatrix(name='P0')
# Values for the actual calculation
args = dict(Z = np.array([1.]), d = np.array(0.), H = np.array(3.),
T = np.array([[1.]]), c = np.array([0.]), R = np.array([[1.]]),
Q = np.array([[10.]]),
a0 = np.array([0.]), P0 = np.array([[1e6]]))
# Create function to calculate log-likelihood
kalmanTheano = kalman.KalmanTheano(Z, d, H, T, c, R, Q, a0, P0)
(_,_,lliks),_ = kalmanTheano.filter(y)
f = theano.function([Z, d, H, T, c, R, Q, a0, P0], lliks[1:].sum())
# Evaluate
%timeit f(**args)
print('Log-likelihood:', f(**args))
Explanation: Scalar observation + vectorial state
End of explanation
# Measurement equation
Z, d, H = tt.dscalar(name='Z'), tt.dscalar(name='d'), tt.dscalar(name='H')
# Transition equation
T, c, R, Q = tt.dscalar(name='T'), tt.dscalar(name='c'), \
tt.dscalar(name='R'), tt.dscalar(name='Q')
# Tensors for the initial state mean and uncertainty
a0, P0 = tt.dscalar(name='a0'), tt.dscalar(name='P0')
# Values for the actual calculation
args = dict(Z = np.array(1.), d = np.array(0.), H = np.array(3.),
T = np.array(1.), c = np.array(0.), R = np.array(1.),
Q = np.array(10.),
a0 = np.array(0.), P0 = np.array(1e6))
# Create function to calculate log-likelihood
kalmanTheano = kalman.KalmanTheano(Z, d, H, T, c, R, Q, a0, P0)
(_,_,lliks),_ = kalmanTheano.filter(y)
f = theano.function([Z, d, H, T, c, R, Q, a0, P0], lliks[1:].sum())
# Evaluate
%timeit f(**args)
print('Log-likelihood:', f(**args))
Explanation: Scalar observation + scalar state
End of explanation |
4,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 6 - Data Retrieval Functions
Step1: Data Retrieval
globus_download
If you want to access the raw data underlying entries in MDF, you can use globus_download() and provide the results from search() or aggregate(). You can customize how the data files are delivered by specifying a destination path to dest (default local directory) and/or setting preserve_dir=True if you want to recreate the directory structure of the original data.
In order to use globus_download() to download to your computer, you must be running Globus Connect Personal . If you want to download to a different computer (which must be a Globus Endpoint), you have to specify dest_ep=ID_of_destination_endpoint.
Please note that while almost all data in MDF is accessible through a Globus Endpoint, there may be some entries that are not. A few datasets may be hosted elsewhere and only accessible through HTTP (see http_download()) or hosted elsewhere in a custom, non-programmatic configuration.
Step2: http_download
For small data, using Globus is not necessary. You can instead download data using HTTP(S). Except for the endpoint ID, the arguments are the same as globus_download().
Step3: http_stream
If you want to use the data you're downloading directly in your code, you can use http_stream() to have the data yield-ed to you one entry at a time. | Python Code:
from mdf_forge.forge import Forge
mdf = Forge()
Explanation: Part 6 - Data Retrieval Functions
End of explanation
# NBVAL_SKIP
# Running this example will save a file in the current directory.
res = mdf.search("dft.converged:true AND mdf.resource_type:record", limit=10)
mdf.globus_download(res)
Explanation: Data Retrieval
globus_download
If you want to access the raw data underlying entries in MDF, you can use globus_download() and provide the results from search() or aggregate(). You can customize how the data files are delivered by specifying a destination path to dest (default local directory) and/or setting preserve_dir=True if you want to recreate the directory structure of the original data.
In order to use globus_download() to download to your computer, you must be running Globus Connect Personal. If you want to download to a different computer (which must be a Globus Endpoint), you have to specify dest_ep=ID_of_destination_endpoint.
Please note that while almost all data in MDF is accessible through a Globus Endpoint, there may be some entries that are not. A few datasets may be hosted elsewhere and only accessible through HTTP (see http_download()) or hosted elsewhere in a custom, non-programmatic configuration.
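As a usage sketch based on the options described above (the paths and the endpoint ID below are placeholders, not real values):

```python
res = mdf.search("dft.converged:true AND mdf.resource_type:record", limit=10)

# Download into a chosen local folder and keep the original directory structure.
mdf.globus_download(res, dest="./mdf_data", preserve_dir=True)

# Or deliver the files to another Globus Endpoint instead of this machine:
# mdf.globus_download(res, dest_ep="destination-endpoint-uuid", dest="/shared/mdf_data")
```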
End of explanation
# NBVAL_SKIP
# Running this example will save a file in the current directory.
res = mdf.search("mdf.source_name:oqmd* AND mdf.resource_type:record", limit=1)
mdf.http_download(res)
Explanation: http_download
For small data, using Globus is not necessary. You can instead download data using HTTP(S). Except for the endpoint ID, the arguments are the same as globus_download().
End of explanation
# NBVAL_SKIP
res = mdf.search("Al", limit=1)
raw_data = mdf.http_stream(res)
next(raw_data)
Explanation: http_stream
If you want to use the data you're downloading directly in your code, you can use http_stream() to have the data yield-ed to you one entry at a time.
End of explanation |
4,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic damage detection in Wikipedia
This notebook demonstrates the basic construction of a vandalism classification system using the revscoring library that we have developed specifically for classification models of MediaWiki stuff.
The basic process that we'll follow is this
Step1: OK. Now that we have a set of revisions, we need to label them. In this case, we're going to label them as reverted/not. We want to exclude a few different types of reverts -- e.g. when a user reverts themself or when an edit is reverted back to by someone else. For this, we'll use the mwreverts and mwapi libraries.
Step2: Eeek! This takes too long. You get the idea. So, I uploaded a dataset that has already been labeled here @ ../datasets/demo/enwiki.rev_reverted.20k_2015.tsv.bz2
Step3: OK. It looks like we got an error when trying to extract the reverted status of ~132 edits, which is an acceptable loss. Now just to make sure we haven't gone crazy, let's check some of the reverted edits
Step4: OK. In order to train the machine learning model, we'll need to give it a source of signal. This is where "features" come into play. A feature represents a simple numerical statistic that we can extract from our observations that we think will be predictive of our outcome. Luckily, revscoring provides a whole suite of features that work well for damage detection. In this case, we'll be looking at features of the edit diff.
Step5: Now, we'll need to turn to revscoring's feature extractor to get feature values for each revision.
Step6: Eeek! Again this takes too long, so again, I uploaded a dataset with features already extracted @ ../datasets/demo/enwiki.features_reverted.training.20k_2015.tsv.bz2
Step7: Part 3
Step8: We now have a trained model that we can play around with. Let's try a few edits from our test set.
Step9: Part 4
Step10: Accuracy -- The proportion of correct predictions
Precision -- The proportion of correct positive predictions
Recall -- The proportion of positive examples predicted as positive
Filter rate at 90% recall -- The proportion of observations that can be ignored while still catching 90% of "reverted" edits.
We'll use revscoring statistics to measure these against the test set.
Step11: Bonus round! Let's listen to Wikipedia's vandalism!
So we don't have the most powerful damage detection classifier, but then again, we're only including 9 features. Usually we run with ~60 features and get to much higher levels of fitness. but this model is still useful and it should help us detect the most egregious vandalism in Wikipedia. In order to listen to Wikipedia, we'll need to connect to RCStream -- the same live feed that powers listen to Wikipedia. | Python Code:
# Magical ipython notebook stuff puts the result of this command into a variable
revids_f = !wget http://quarry.wmflabs.org/run/65415/output/0/tsv?download=true -qO-
revids = [int(line) for line in revids_f[1:]]
len(revids)
Explanation: Basic damage detection in Wikipedia
This notebook demonstrates the basic construction of a vandalism classification system using the revscoring library that we have developed specifically for classification models of MediaWiki stuff.
The basic process that we'll follow is this:
Gather examples of human judgement applied to Wikipedia edits. In this case, we'll take advantage of reverts.
Split the data into a training and testing set
Training the machine learning model
Testing the machine learning model
And then we'll have some fun applying the model to some edits using RCStream. The following diagram gives a good sense for the whole process of training and evaluating a model.
<img style="text-align: center;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/09/Supervised_machine_learning_in_a_nutshell.svg/640px-Supervised_machine_learning_in_a_nutshell.svg.png" />
Part 1: Getting labeled observations
<img style="float: right; margin: 1ex;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/0f/Machine_learning_nutshell_--_Gather_labeled_observations.svg/300px-Machine_learning_nutshell_--_Gather_labeled_observations.svg.png" />
Regretfully, running SQL queries isn't something we can do directly from the notebook yet. So, we'll use Quarry to generate a nice random sample of edits. 20,000 observations should do just fine. Here's the query I want to run:
SQL
USE enwiki_p;
SELECT rev_id
FROM revision
WHERE rev_timestamp BETWEEN "20150201" AND "20160201"
ORDER BY RAND()
LIMIT 20000;
See http://quarry.wmflabs.org/query/7530. By clicking around the UI, I can see that this URL will download my tab-separated file: http://quarry.wmflabs.org/run/65415/output/0/tsv?download=true
End of explanation
import sys, traceback
import mwreverts.api
import mwapi
# We'll use the mwreverts API check. In order to do that, we need an API session
session = mwapi.Session("https://en.wikipedia.org",
user_agent="Revert detection demo <[email protected]>")
# For each revision, find out if it was "reverted" and label it so.
rev_reverteds = []
for rev_id in revids[:20]: # NOTE: Limiting to the first 20!!!!
try:
_, reverted, reverted_to = mwreverts.api.check(
session, rev_id, radius=5, # most reverts within 5 edits
window=48*60*60, # 2 days
rvprop={'user', 'ids'}) # Some properties we'll make use of
except (RuntimeError, KeyError) as e:
sys.stderr.write(str(e))
continue
if reverted is not None:
reverted_doc = [r for r in reverted.reverteds
if r['revid'] == rev_id][0]
if 'user' not in reverted_doc or 'user' not in reverted.reverting:
continue
# self-reverts
self_revert = \
reverted_doc['user'] == reverted.reverting['user']
# revisions that are reverted back to by others
reverted_back_to = \
reverted_to is not None and \
'user' in reverted_to.reverting and \
reverted_doc['user'] != \
reverted_to.reverting['user']
# If we are reverted, not by self or reverted back to by someone else,
# then, let's assume it was damaging.
damaging_reverted = not (self_revert or reverted_back_to)
else:
damaging_reverted = False
rev_reverteds.append((rev_id, damaging_reverted))
sys.stderr.write("r" if damaging_reverted else ".")
Explanation: OK. Now that we have a set of revisions, we need to label them. In this case, we're going to label them as reverted/not. We want to exclude a few different types of reverts -- e.g. when a user reverts themself or when an edit is reverted back to by someone else. For this, we'll use the mwreverts and mwapi libraries.
End of explanation
rev_reverteds_f = !bzcat ../datasets/demo/enwiki.rev_reverted.20k_2015.tsv.bz2
rev_reverteds = [line.strip().split("\t") for line in rev_reverteds_f[1:]]
rev_reverteds = [(int(rev_id), reverted == "True") for rev_id, reverted in rev_reverteds]
len(rev_reverteds)
Explanation: Eeek! This takes too long. You get the idea. So, I uploaded a dataset that has already been labeled here @ ../datasets/demo/enwiki.rev_reverted.20k_2015.tsv.bz2
End of explanation
train_set = rev_reverteds[:15000]
test_set = rev_reverteds[15000:]
print("training:", len(train_set))
print("testing:", len(test_set))
Explanation: OK. It looks like we got an error when trying to extract the reverted status of ~132 edits, which is an acceptable loss. Now just to make sure we haven't gone crazy, let's check some of the reverted edits:
https://en.wikipedia.org/wiki/?diff=695071713 (section blanking)
https://en.wikipedia.org/wiki/?diff=667375206 (unexplained addition of nonsense)
https://en.wikipedia.org/wiki/?diff=670204366 (vandalism "I don't know")
https://en.wikipedia.org/wiki/?diff=680329354 (adds non-existent category)
https://en.wikipedia.org/wiki/?diff=668682186 (test edit -- removes punctuation)
https://en.wikipedia.org/wiki/?diff=666882037 (adds spamlink)
https://en.wikipedia.org/wiki/?diff=663302354 (adds nonsense special char)
https://en.wikipedia.org/wiki/?diff=675803278 (unconstructive link changes)
https://en.wikipedia.org/wiki/?diff=680203994 (vandalism -- "Pepe meme")
https://en.wikipedia.org/wiki/?diff=656734057 ("JELENAS BOOTY UNDSO")
OK. Looks like we are doing pretty good. :)
Part 2: Split the data into a training and testing set
<img style="float: right; margin: 1ex;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Machine_learning_nutshell_--_Split_into_train-test_set.svg/320px-Machine_learning_nutshell_--_Split_into_train-test_set.svg.png" />
Before we move on with training, it's important that we hold back some of the data for testing later. If we train on the same data we'll test with, we risk overfitting and not noticing!
In this section, we'll both split the training and testing set and gather predictive features for each of the labeled observations.
End of explanation
from revscoring.features import wikitext, revision_oriented, temporal
from revscoring.languages import english
features = [
# Catches long key mashes like kkkkkkkkkkkk
wikitext.revision.diff.longest_repeated_char_added,
# Measures the size of the change in added words
wikitext.revision.diff.words_added,
# Measures the size of the change in removed words
wikitext.revision.diff.words_removed,
# Measures the proportional change in "badwords"
english.badwords.revision.diff.match_prop_delta_sum,
# Measures the proportional change in "informals"
english.informals.revision.diff.match_prop_delta_sum,
# Measures the proportional change meaningful words
english.stopwords.revision.diff.non_stopword_prop_delta_sum,
# Is the user anonymous
revision_oriented.revision.user.is_anon,
# Is the user a bot or a sysop
revision_oriented.revision.user.in_group({'bot', 'sysop'}),
# How long ago did the user register?
temporal.revision.user.seconds_since_registration
]
Explanation: OK. In order to train the machine learning model, we'll need to give it a source of signal. This is where "features" come into play. A feature represents a simple numerical statistic that we can extract from our observations that we think will be predictive of our outcome. Luckily, revscoring provides a whole suite of features that work well for damage detection. In this case, we'll be looking at features of the edit diff.
End of explanation
from revscoring.extractors import api
api_extractor = api.Extractor(session)
revisions = [695071713, 667375206]
for rev_id in revisions:
print("https://en.wikipedia.org/wiki/?diff={0}".format(rev_id))
print(list(api_extractor.extract(rev_id, features)))
# Now for the whole set!
training_features_reverted = []
for rev_id, reverted in train_set[:20]:
try:
feature_values = list(api_extractor.extract(rev_id, features))
observation = {"rev_id": rev_id, "cache": feature_values, "reverted": reverted}
except RuntimeError as e:
sys.stderr.write(str(e))
continue
sys.stderr.write(".")
training_features_reverted.append(observation)
# Uncomment to regenerate the observations file.
#import bz2
#from revscoring.utilities.util import dump_observation
#
#f = bz2.open("../datasets/demo/enwiki.features_reverted.training.20k_2015.json.bz2", "wt")
#for observation in training_features_reverted:
# dump_observation(observation, f)
#f.close()
Explanation: Now, we'll need to turn to revscoring's feature extractor to get feature values for each revision.
End of explanation
from revscoring.utilities.util import read_observations
training_features_reverted_f = !bzcat ../datasets/demo/enwiki.features_reverted.training.20k_2015.json.bz2
training_features_reverted = list(read_observations(training_features_reverted_f))
len(training_features_reverted)
Explanation: Eeek! Again this takes too long, so again, I uploaded a dataset with features already extracted @ ../datasets/demo/enwiki.features_reverted.training.20k_2015.tsv.bz2
End of explanation
from revscoring.scoring.models import GradientBoosting
is_reverted = GradientBoosting(features, labels=[True, False], version="live demo!",
learning_rate=0.01, max_features="log2",
n_estimators=700, max_depth=5,
population_rates={False: 0.5, True: 0.5}, scale=True, center=True)
training_unpacked = [(o["cache"], o["reverted"]) for o in training_features_reverted]
is_reverted.train(training_unpacked)
Explanation: Part 3: Training the model
<img style="float: right; margin: 1ex;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/7/7a/Machine_learning_nutshell_--_Train_a_machine_learning_model.svg/320px-Machine_learning_nutshell_--_Train_a_machine_learning_model.svg.png" />
Now that we have a set of features extracted for our training set, it's time to train a model. revscoring provides a set of different classifier algorithms. From past experience, I know a gradient boosting classifier works well, so we'll use that.
End of explanation
reverted_obs = [rev_id for rev_id, reverted in test_set if reverted]
non_reverted_obs = [rev_id for rev_id, reverted in test_set if not reverted]
for rev_id in reverted_obs[:10]:
feature_values = list(api_extractor.extract(rev_id, features))
score = is_reverted.score(feature_values)
print(True, "https://en.wikipedia.org/wiki/?diff=" + str(rev_id),
score['prediction'], round(score['probability'][True], 2))
for rev_id in non_reverted_obs[:10]:
feature_values = list(api_extractor.extract(rev_id, features))
score = is_reverted.score(feature_values)
print(False, "https://en.wikipedia.org/wiki/?diff=" + str(rev_id),
score['prediction'], round(score['probability'][True], 2))
Explanation: We now have a trained model that we can play around with. Let's try a few edits from our test set.
End of explanation
testing_features_reverted_f = !bzcat ../datasets/demo/enwiki.features_reverted.testing.20k_2015.json.bz2
testing_features_reverted = list(read_observations(testing_features_reverted_f))
testing_unpacked = [(o["cache"], o["reverted"]) for o in testing_features_reverted]
len(testing_unpacked)
Explanation: Part 4: Testing the model
So, the above analysis can help give us a sense for whether the model is working or not, but it's hard to standardize between models. So, we can apply some metrics that are specially crafted for machine learning models.
<center>
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/49/Machine_learning_nutshell_--_Test_the_machine_learning_model.svg/640px-Machine_learning_nutshell_--_Test_the_machine_learning_model.svg.png" />
</center>
But first, I'll need to load the pre-generated feature values.
End of explanation
is_reverted.test(testing_unpacked)
print(is_reverted.info.format())
Explanation: Accuracy -- The proportion of correct predictions
Precision -- The proportion of correct positive predictions
Recall -- The proportion of positive examples predicted as positive
Filter rate at 90% recall -- The proportion of observations that can be ignored while still catching 90% of "reverted" edits.
We'll use revscoring statistics to measure these against the test set.
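For reference, these are the standard definitions, shown here with made-up counts (not the actual confusion matrix of this model):

```python
# Toy counts: true positives, false positives, false negatives, true negatives
tp, fp, fn, tn = 30, 20, 10, 940
accuracy = (tp + tn) / (tp + fp + fn + tn)   # 0.97
precision = tp / (tp + fp)                   # 0.60
recall = tp / (tp + fn)                      # 0.75
# "Filter rate at 90% recall": pick the score threshold that still catches 90%
# of reverted edits, then measure the fraction of edits that falls below it
# and can therefore be skipped by patrollers.
print(accuracy, precision, recall)
```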
End of explanation
import json
from sseclient import SSEClient as EventSource
url = 'https://stream.wikimedia.org/v2/stream/recentchange'
for event in EventSource(url):
if event.event == 'message':
try:
change = json.loads(event.data)
if change['type'] not in ('new', 'edit'):
continue
rev_id = change['revision']['new']
feature_values = list(api_extractor.extract(rev_id, features))
score = is_reverted.score(feature_values)
if score['prediction']:
print("!!!Please review", "https://en.wikipedia.org/wiki/?diff=" + str(rev_id),
round(score['probability'][True], 2), flush=True)
else:
print("Good edit", "https://en.wikipedia.org/wiki/?diff=" + str(rev_id),
round(score['probability'][True], 2), flush=True)
except ValueError:
pass
Explanation: Bonus round! Let's listen to Wikipedia's vandalism!
So we don't have the most powerful damage detection classifier, but then again, we're only including 9 features. Usually we run with ~60 features and get to much higher levels of fitness. But this model is still useful and it should help us detect the most egregious vandalism in Wikipedia. In order to listen to Wikipedia, we'll need to connect to Wikimedia's recent changes feed (the EventStreams service that replaced RCStream) -- the same live feed that powers Listen to Wikipedia.
End of explanation |
4,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Machine Learning 2nd Edition by Sebastian Raschka, Packt Publishing Ltd. 2017
Code Repository
Step1: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see
Step2: Preprocessing the movie dataset into more convenient format
Step3: Shuffling the DataFrame
Step4: Optional
Step5: <hr>
Note
If you have problems with creating the movie_data.csv, you can download a zip archive at
https
Step6: Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts
Step7: As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words that are mapped to integer indices. Next let us print the feature vectors that we just created
Step8: <br>
Assessing word relevancy via term frequency-inverse document frequency
Step9: When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency
Step10: As we saw in the previous subsection, the word is had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word is is
now associated with a relatively small tf-idf (0.45) in document 3 since it is
also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information.
However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the TfidfTransformer calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are
Step11: If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vectors
Step12: <br>
Cleaning text data
Step13: <br>
Processing documents into tokens
Step14: <br>
<br>
Training a logistic regression model for document classification
Strip HTML and punctuation to speed up the GridSearch later
Step15: Important Note about n_jobs
Please note that it is highly recommended to use n_jobs=-1 (instead of n_jobs=1) in the previous code example to utilize all available cores on your machine and speed up the grid search. However, some Windows users reported issues when running the previous code with the n_jobs=-1 setting related to pickling the tokenizer and tokenizer_porter functions for multiprocessing on Windows. Another workaround would be to replace those two functions, [tokenizer, tokenizer_porter], with [str.split]. However, note that the replacement by the simple str.split would not support stemming.
Important Note about the running time
Executing the following code cell may take up to 30-60 min depending on your machine, since based on the parameter grid we defined, there are 2*2*2*3*5 + 2*2*2*3*5 = 240 models to fit.
If you do not wish to wait so long, you could reduce the size of the dataset by decreasing the number of training samples, for example, as follows
Step16: <hr>
<hr>
Start comment
Step17: By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (cv5_idx) to the cross_val_score scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds.
Next, let us use the GridSearchCV object and feed it the same 5 cross-validation sets (via the pre-generated cv5_idx indices)
Step18: As we can see, the scores for the 5 folds are exactly the same as the ones from cross_val_score earlier.
Now, the best_score_ attribute of the GridSearchCV object, which becomes available after fitting, returns the average accuracy score of the best model
Step19: As we can see, the result above is consistent with the average score computed by cross_val_score.
Step20: End comment.
<hr>
<hr>
<br>
<br>
Working with bigger data - online algorithms and out-of-core learning
Step21: Note
You can replace Perceptron(n_iter, ...) by Perceptron(max_iter, ...) in scikit-learn >= 0.19.
Step22: Topic modeling
Decomposing text documents with Latent Dirichlet Allocation
Latent Dirichlet Allocation with scikit-learn
Step23: Based on reading the 5 most important words for each topic, we may guess that the LDA identified the following topics
Step24: Using the preceding code example, we printed the first 300 characters from the top 3 horror movies and indeed, we can see that the reviews -- even though we don't know which exact movie they belong to -- sound like reviews of horror movies. (However, one might argue that movie #2 could also belong to topic category 1.)
<br>
<br>
Summary
...
Readers may ignore the next cell. | Python Code:
%load_ext watermark
%watermark -a "Sebastian Raschka" -u -d -v -p numpy,pandas,sklearn,nltk
Explanation: Python Machine Learning 2nd Edition by Sebastian Raschka, Packt Publishing Ltd. 2017
Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-edition
Code License: MIT License
Python Machine Learning - Code Examples
Chapter 8 - Applying Machine Learning To Sentiment Analysis
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
import os
import sys
import tarfile
import time
source = 'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'
target = 'aclImdb_v1.tar.gz'
def reporthook(count, block_size, total_size):
global start_time
if count == 0:
start_time = time.time()
return
duration = time.time() - start_time
progress_size = int(count * block_size)
speed = progress_size / (1024.**2 * duration)
percent = count * block_size * 100. / total_size
sys.stdout.write("\r%d%% | %d MB | %.2f MB/s | %d sec elapsed" %
(percent, progress_size / (1024.**2), speed, duration))
sys.stdout.flush()
if not os.path.isdir('aclImdb') and not os.path.isfile('aclImdb_v1.tar.gz'):
if (sys.version_info < (3, 0)):
import urllib
urllib.urlretrieve(source, target, reporthook)
else:
import urllib.request
urllib.request.urlretrieve(source, target, reporthook)
if not os.path.isdir('aclImdb'):
with tarfile.open(target, 'r:gz') as tar:
tar.extractall()
Explanation: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.
<br>
<br>
Overview
Preparing the IMDb movie review data for text processing
Obtaining the IMDb movie review dataset
Preprocessing the movie dataset into more convenient format
Introducing the bag-of-words model
Transforming words into feature vectors
Assessing word relevancy via term frequency-inverse document frequency
Cleaning text data
Processing documents into tokens
Training a logistic regression model for document classification
Working with bigger data – online algorithms and out-of-core learning
Topic modeling
Decomposing text documents with Latent Dirichlet Allocation
Latent Dirichlet Allocation with scikit-learn
Summary
<br>
<br>
Preparing the IMDb movie review data for text processing
Obtaining the IMDb movie review dataset
The IMDB movie review set can be downloaded from http://ai.stanford.edu/~amaas/data/sentiment/.
After downloading the dataset, decompress the files.
A) If you are working with Linux or MacOS X, open a new terminal windowm cd into the download directory and execute
tar -zxf aclImdb_v1.tar.gz
B) If you are working with Windows, download an archiver such as 7Zip to extract the files from the download archive.
Optional code to download and unzip the dataset via Python:
End of explanation
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
basepath = 'aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in sorted(os.listdir(path)):
with open(os.path.join(path, file),
'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]],
ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
Explanation: Preprocessing the movie dataset into more convenient format
End of explanation
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
Explanation: Shuffling the DataFrame:
End of explanation
df.to_csv('movie_data.csv', index=False, encoding='utf-8')
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
df.shape
Explanation: Optional: Saving the assembled data as CSV file:
End of explanation
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
Explanation: <hr>
Note
If you have problems with creating the movie_data.csv, you can download a zip archive at
https://github.com/rasbt/python-machine-learning-book-2nd-edition/tree/master/code/ch08/
<hr>
<br>
<br>
Introducing the bag-of-words model
...
Transforming documents into feature vectors
By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:
1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
End of explanation
print(count.vocabulary_)
Explanation: Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
End of explanation
print(bag.toarray())
Explanation: As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next let us print the feature vectors that we just created:
Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 resembles the count of the word and, which only occurs in the last document, and the word is at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: tf (t,d)—the number of times a term t occurs in a document d.
End of explanation
np.set_printoptions(precision=2)
Explanation: <br>
Assessing word relevancy via term frequency-inverse document frequency
End of explanation
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True,
norm='l2',
smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs))
.toarray())
Explanation: When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:
$$\text{tf-idf}(t,d)=\text{tf}(t,d)\times \text{idf}(t,d)$$
Here the tf(t, d) is the term frequency that we introduced in the previous section,
and the inverse document frequency idf(t, d) can be calculated as:
$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$
where $n_d$ is the total number of documents, and df(d, t) is the number of documents d that contain the term t. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight.
Scikit-learn implements yet another transformer, the TfidfTransformer, that takes the raw term frequencies from CountVectorizer as input and transforms them into tf-idfs:
End of explanation
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
Explanation: As we saw in the previous subsection, the word is had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word is is
now associated with a relatively small tf-idf (0.45) in document 3 since it is
also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information.
However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the TfidfTransformer calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are:
$$\text{idf}(t,d) = \text{log}\frac{1 + n_d}{1 + \text{df}(d, t)}$$
The tf-idf equation that was implemented in scikit-learn is as follows:
$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$
While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the TfidfTransformer normalizes the tf-idfs directly.
By default (norm='l2'), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector v by its L2-norm:
$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big(\sum_{i=1}^{n} v_{i}^{2}\big)^{\frac{1}{2}}}$$
To make sure that we understand how TfidfTransformer works, let us walk
through an example and calculate the tf-idf of the word is in the 3rd document.
The word is has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term is occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:
$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$
Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:
$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
End of explanation
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
Explanation: If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vectors: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows:
$$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0, 1.69, 1.29]}{\sqrt{[3.39^2, 3.0^2, 3.39^2, 1.29^2, 1.29^2, 1.29^2, 2.0^2, 1.69^2, 1.29^2]}}$$
$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$
$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$
As we can see, the results match the results returned by scikit-learn's TfidfTransformer (below). Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
End of explanation
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)',
text)
text = (re.sub('[\W]+', ' ', text.lower()) +
' '.join(emoticons).replace('-', ''))
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
Explanation: <br>
Cleaning text data
End of explanation
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
Explanation: <br>
Processing documents into tokens
End of explanation
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
Explanation: <br>
<br>
Training a logistic regression model for document classification
Strip HTML and punctuation to speed up the GridSearch later:
End of explanation
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to generate more
## "logging" output when this notebook is run
## on the Travis Continuous Integration
## platform to test the code as well as
## speeding up the run using a smaller
## dataset for debugging
if 'TRAVIS' in os.environ:
gs_lr_tfidf.verbose=2
X_train = df.loc[:250, 'review'].values
y_train = df.loc[:250, 'sentiment'].values
X_test = df.loc[25000:25250, 'review'].values
y_test = df.loc[25000:25250, 'sentiment'].values
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
Explanation: Important Note about n_jobs
Please note that it is highly recommended to use n_jobs=-1 (instead of n_jobs=1) in the previous code example to utilize all available cores on your machine and speed up the grid search. However, some Windows users reported issues when running the previous code with the n_jobs=-1 setting related to pickling the tokenizer and tokenizer_porter functions for multiprocessing on Windows. Another workaround would be to replace those two functions, [tokenizer, tokenizer_porter], with [str.split]. However, note that the replacement by the simple str.split would not support stemming.
Important Note about the running time
Executing the following code cell may take up to 30-60 min depending on your machine, since based on the parameter grid we defined, there are 2·2·2·3·5 + 2·2·2·3·5 = 240 models to fit.
If you do not wish to wait so long, you could reduce the size of the dataset by decreasing the number of training samples, for example, as follows:
X_train = df.loc[:2500, 'review'].values
y_train = df.loc[:2500, 'sentiment'].values
However, note that decreasing the training set size to such a small number will likely result in poorly performing models. Alternatively, you can delete parameters from the grid above to reduce the number of models to fit -- for example, by using the following:
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0]},
]
End of explanation
from sklearn.linear_model import LogisticRegression
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
Explanation: <hr>
<hr>
Start comment:
Please note that gs_lr_tfidf.best_score_ is the average k-fold cross-validation score. I.e., if we have a GridSearchCV object with 5-fold cross-validation (like the one above), the best_score_ attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
End of explanation
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
Explanation: By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (cv5_idx) to the cross_val_score scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds.
Next, let us use the GridSearchCV object and feed it the same 5 cross-validation sets (via the pre-generated cv5_idx indices):
End of explanation
gs.best_score_
Explanation: As we can see, the scores for the 5 folds are exactly the same as the ones from cross_val_score earlier.
Now, the best_score_ attribute of the GridSearchCV object, which becomes available after fitting, returns the average accuracy score of the best model:
End of explanation
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
Explanation: As we can see, the result above is consistent with the average score computed by cross_val_score.
End of explanation
# This cell is not contained in the book but
# added for convenience so that the notebook
# can be executed starting here, without
# executing prior code in this notebook
import os
import gzip
if not os.path.isfile('movie_data.csv'):
if not os.path.isfile('movie_data.csv.gz'):
print('Please place a copy of the movie_data.csv.gz '
'in this directory. You can obtain it by '
'a) executing the code in the beginning of this '
'notebook or b) by downloading it from GitHub: '
'https://github.com/rasbt/python-machine-learning-'
'book-2nd-edition/blob/master/code/ch08/movie_data.csv.gz')
else:
with gzip.open('movie_data.csv.gz', 'rb') as in_f, \
open('movie_data.csv', 'wb') as out_f:
out_f.write(in_f.read())
import numpy as np
import re
from nltk.corpus import stopwords
# The `stop` is defined as earlier in this chapter
# Added it here for convenience, so that this section
# can be run as standalone without executing prior code
# in the directory
stop = stopwords.words('english')
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
Explanation: End comment.
<hr>
<hr>
<br>
<br>
Working with bigger data - online algorithms and out-of-core learning
End of explanation
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.18':
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
else:
clf = SGDClassifier(loss='log', random_state=1, max_iter=1)
doc_stream = stream_docs(path='movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
Explanation: Note
You can replace SGDClassifier(n_iter, ...) by SGDClassifier(max_iter, ...) in scikit-learn >= 0.19, as done in the version check above.
End of explanation
import pandas as pd
df = pd.read_csv('movie_data.csv', encoding='utf-8')
df.head(3)
## @Readers: PLEASE IGNORE THIS CELL
##
## This cell is meant to create a smaller dataset if
## the notebook is run on the Travis Continuous Integration
## platform to test the code on a smaller dataset
## to prevent timeout errors and just serves a debugging tool
## for this notebook
if 'TRAVIS' in os.environ:
df.loc[:500].to_csv('movie_data.csv')
df = pd.read_csv('movie_data.csv', nrows=500)
print('SMALL DATA SUBSET CREATED FOR TESTING')
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer(stop_words='english',
max_df=.1,
max_features=5000)
X = count.fit_transform(df['review'].values)
from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_topics=10,
random_state=123,
learning_method='batch')
X_topics = lda.fit_transform(X)
lda.components_.shape
n_top_words = 5
feature_names = count.get_feature_names()
for topic_idx, topic in enumerate(lda.components_):
print("Topic %d:" % (topic_idx + 1))
print(" ".join([feature_names[i]
for i in topic.argsort()\
[:-n_top_words - 1:-1]]))
Explanation: Topic modeling
Decomposing text documents with Latent Dirichlet Allocation
Latent Dirichlet Allocation with scikit-learn
End of explanation
horror = X_topics[:, 5].argsort()[::-1]
for iter_idx, movie_idx in enumerate(horror[:3]):
print('\nHorror movie #%d:' % (iter_idx + 1))
print(df['review'][movie_idx][:300], '...')
Explanation: Based on reading the 5 most important words for each topic, we may guess that the LDA identified the following topics:
Generally bad movies (not really a topic category)
Movies about families
War movies
Art movies
Crime movies
Horror movies
Comedies
Movies somehow related to TV shows
Movies based on books
Action movies
To confirm that the categories make sense based on the reviews, let's print excerpts from the top 3 movies in the horror movie category (category 6 at index position 5):
End of explanation
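As an extra illustrative check (not in the original notebook), we could also look up the dominant topic of any single review from the document-topic matrix:
# X_topics has shape (n_documents, n_topics); argmax gives the most probable topic
print('Dominant topic of review 0: %d' % (X_topics[0].argmax() + 1))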
! python ../.convert_notebook_to_script.py --input ch08.ipynb --output ch08.py
Explanation: Using the preceding code example, we printed the first 300 characters from the top 3 horror movies, and we can see that the reviews -- even though we don't know which exact movie they belong to -- indeed sound like reviews of horror movies. (However, one might argue that movie #2 could also belong to topic category 1.)
<br>
<br>
Summary
...
Readers may ignore the next cell.
End of explanation |
4,291 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>GarrissonWow</h1>
Script to get world of warcraft data and display it in a website.
Step1: Input character name and realm name.
Step2: If faction is alliance change CSS to BLUE background.
If faction is horde change css to RED background.
This script displays a wow character given by input of character name and realm name.
Step3: Make background red if horde - blue if alliance.
Depend on class. | Python Code:
import battlenet
import dominate
from dominate.tags import *
import json
import arrow
import requests
import datetime
from battlenet import Character
from battlenet import Realm
#Realm.to_json()
realm = Realm(battlenet.UNITED_STATES, "jubei'thos")
realm
print realm.is_online()
print realm.to_json()
rejs = json.loads(realm.to_json())
rejs
rtza = rejs['timezone']
arrow.get('US/Pacific')
arrow.get(rtza)
realm.slug
realm.population
realm.type
realm.status
relmjs = json.loads(realm.to_json())['timezone']
#charget = battlenet.
Explanation: <h1>GarrissonWow</h1>
Script to get world of warcraft data and display it in a website.
End of explanation
lookuprealm = raw_input("Enter US realm name: ")
lookupchar = raw_input("Enter character: ")
bookchar = Character(battlenet.UNITED_STATES, lookuprealm, lookupchar)
bospnam = bookchar.get_spec_name
bookchar.achievement_points
bokjs = bookchar.to_json()
import json
bokjsz = json.loads(bokjs)
bokjst = bokjsz['stats']
bokza = bokjst.keys()
for bok in bokza:
print bok
print bokjst[bok]
Explanation: Input character name and realm name.
End of explanation
if bookchar.faction == 'Alliance':
print ('Alliance has a blue background')
else:
print ('Horde has a red background')
bookchar.to_json()
opcharjs = open('/home/wcmckee/github/garrison-wow-track/charstats.json')
opcharjs.read()
bookchar.get_thumbnail_url()
bookdatz = bookchar.last_modified.date()
bookchar.last_modified.year
bookdatz.month
bookdatz.min
bookchar.last_modified.microsecond
bookchar.gender
battlenet.things.Reputation()
geninfo = dict()
if bookchar.gender == 1:
geninfo.update({'gender': 'female'})
if bookchar.gender == 0:
geninfo.update({'gender': 'male'})
geninfo
for bocha in bookchar.professions['primary']:
print bocha.recipes
profz = bocha.recipes
import requests
recreqs = requests.get('http://us.battle.net/api/wow/recipe/33994')
recreqs
requests.get('http://google.com')
battlenet.q
bookchar.last_modified.day
bookchar.last_modified.weekday()
bookchar.last_modified.hour
racnum = bookchar.race
battlenet.RACE
print battlenet.RACE[racnum]
battlenet.RACE_TO_FACTION
print battlenet.RACE_TO_FACTION[racnum]
print battlenet.quote
print battlenet.enums.CLASS
print bookchar.class_
print battlenet.enums.CLASS[racnum]
bokkall = bookchar.to_json()
bokkall
json.loads(bokkall)['lastModified']
import json
json.loads
json.loads(bokkall)['items']
for bequ in bookchar.equipment:
print bequ
from battlenet import Connection
connection = Connection()  # create a default API connection for realm queries
for realm in connection.get_all_realms(battlenet.UNITED_STATES):
print realm
#for realm in connection.get_all_realms(battlenet.UNITED_STATES):
# print realm
from battlenet import Guild
# If a global connection was setup
guild = Guild(battlenet.UNITED_STATES, "jubei'thos", "adventure time")
glead = guild.get_leader
guildjson = guild.to_json()
glead()
guild.realm
battlenet.connection.Connection.get_character_races
item = battlenet.connection.Connection.get_all_realms
# TODO
name = "Kiljaeden"
realm = Realm(battlenet.UNITED_STATES, name)
portchar = Character(battlenet.UNITED_STATES, "saurfang", "portishead")
portchar.to_json()
portchar.level
portmoded = portchar.last_modified
portmoded.year
portmoded.month
portmoded.day
portapc = portchar.appearance
portapc.face
portapc.skin_color
portapc.feature
portapc.hair
portchar.get_class_name()
portchar.get_race_name()
portprof = portchar.professions['primary']
for ppof in portprof:
#print ppof.recipes
for pprec in ppof.recipes:
print pprec
portales = portchar.TALENTS
portreq = portchar.equipment
portreq.average_item_level
portreq.average_item_level_equiped
realm.to_json()
bnetthing = battlenet.things
battlenet.RACE_TO_FACTION
alrac = battlenet.RACE
import random
random.choice(alrac.values())
plaqual = battlenet.QUALITY
random.choice(plaqual.values())
battlenet.Character
battlenet.connection()
alclas = battlenet.CLASS
alclas
random.choice(alclas.values())
battlenet.things
portales
Explanation: If faction is alliance change CSS to BLUE background.
If faction is horde change css to RED background.
This script displays a wow character given by input of character name and realm name.
End of explanation
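A minimal sketch of how the faction check above could drive the page styling (the colour values and variable names are illustrative assumptions, not part of the original notebook):
faction_colors = {'Alliance': '#1e90ff', 'Horde': '#b22222'}  # illustrative colour choices
bg_color = faction_colors.get(bookchar.faction, '#ffffff')
css_rule = 'body { background-color: %s; }' % bg_color
css_rule  # this rule could be written into the style.css linked by the dominate document below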
portchar.faction
portchar
galfchar = Character(battlenet.UNITED_STATES, "jubei'thos", 'galf')
galfchar
galfchar.to_json()
gquiq = galfchar.equipment
gquiq.average_item_level
gquiq.average_item_level_equiped
gdarz = galfchar.get_spec_name
gdarz()
import random
gfinz = galfchar.titles
diczid = list()
for gfin in gfinz:
print gfin
diczid.append(gfin)
len(diczid)
rantitle = random.choice(diczid)
print rantitle
if (' ') in str(rantitle):
print ('space!')
str.replace(str(rantitle), ' ', '_')
fixstrz = str.replace(str(rantitle), ' ', '_')
str(rantitle)
fixstrz
gquiq.hands.name
gquiq.to_json()
doc = dominate.document(title='garrisonwowtrack')
with doc.head:
link(rel='stylesheet', href='style.css')
script(type ='text/javascript', src='script.js')
#str(str2)
with div():
attr(cls='header')
h1(str(rantitle))
p(img('/imgs/getsdrawn-bw.png', src='/imgs/getsdrawn-bw.png'))
#p(img('imgs/15/01/02/ReptileLover82-reference.png', src= 'imgs/15/01/02/ReptileLover82-reference.png'))
#h1('Updated ', str(artes.datetime))
#p(panz)
#p(bodycom)
for bok in bokza:
p(bok)
p(bokjst[bok])
'''
with doc:
with div(id='body').add(ol()):
for flc in fulcom:
if 'http://i.imgur.com' in flc.url:
p(h1(flc.title))
p(img(imlocdir, src= imlocdir))
#p(img(flc.url, src = flc.url))
p(str(flc.author))
#res = requests.get(flc.url, stream=True)
#with open(str(flc.author) + '-' + str(artes.date()) + '-reference.png', 'wb') as outfil:
# shutil.copyfileobj(res.raw, outfil)
# del res
for flcz in flc.comments:
p(flcz.body)
#for rdz in reliz:
#h1(rdz.title)
#a(rdz.url)
#p(img(rdz, src='%s' % rdz))
#print rdz
#p(img(rdz, src = rdz))
#p(rdz)
#print rdz.url
#if '.jpg' in rdz.url:
# img(rdz.urlz)
#else:
# a(rdz.urlz)
#h1(str(rdz.author))
#li(img(i.lower(), src='%s' % i))
with div():
attr(cls='body')
p('GotDrawn is open source')
a('https://github.com/getsdrawn/getsdrawndotcom')
a('https://reddit.com/r/redditgetsdrawn')
'''
print doc
reqallre = requests.get('http://us.battle.net/api/wow/realm/status')
reqtxt = reqallre.text
jsre = json.loads(reqtxt)
lerez = len(jsre['realms'])
lerez
for jsrez in jsre['realms']:
print jsrez['name']
#realm = Realm(battlenet.UNITED_STATES, jsrez['name'])
#portchar = Character(battlenet.UNITED_STATES, jsrez['name'], "portishead")
realm = Realm(battlenet.UNITED_STATES, jsrez['name'])
#Return all realms.
Explanation: Make background red if Horde, blue if Alliance.
Depends on class.
End of explanation |
4,292 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy
NumPy is a fundamental package for scientific computing with Python. It provides a wide variety of mathematical operations, especially algebraic operations on N-dimensional data!
Step1: At its core is np.array, which returns the array object on which all of the functions are implemented
Step2: The array comes with several operators already implemented
Step3: NumPy provides many built-in mathematical operations, which can be applied to a single value or to an array of values.
NOTE: we can think of applying these functions as a transformation (map) operation
Step4: A boolean operation can be applied to every element of an array, returning an array of the same dimensions with the result of the operation
Step5: There are also predefined utility operations on an array
Step6: There are also functions to generate pre-initialized arrays
Step7: We can select slices of the array, allowing us to retrieve only a portion of it
Step8: NumPy also provides functions to save/load arrays to/from files
import numpy as np
Explanation: NumPy
Numpy é um pacote fundamental para programação científica com Python. Ele traz consigo uma variedade de operações matemáticas, principalmente referente à operações algébricas com dados N-dimensionais!
End of explanation
a = np.array([1, 2, 3])
print(repr(a), a.shape, end="\n\n")
b = np.array([(1, 2, 3), (4, 5, 6)])
print(repr(b), b.shape)
Explanation: A base de seu funcionamento é o np.array, que retorna o objeto array sobre o qual todas as funções estão implementadas
End of explanation
print(b.T, end="\n\n") # transposes a matrix
print(a + b, end="\n\n") # adds a row/column vector to every row/column of a matrix
print(b - a, end="\n\n") # subtracts a row/column vector from every row/column of a matrix
# multiplies the elements of a row/column vector
# by all the elements of the rows/columns of a matrix
print(a * b, end="\n\n")
print(a**2, end="\n\n") # squares each element
Explanation: The array comes with several operators already implemented:
End of explanation
print(10*np.sin(1)) # trigonometric sine of 1
print(10*np.sin(a)) # trigonometric sine of each element of a
Explanation: NumPy provides many built-in mathematical operations, which can be applied to a single value or to an array of values.
NOTE: we can think of applying these functions as a transformation (map) operation
End of explanation
b<35
Explanation: A boolean operation can be applied to every element of an array, returning an array of the same dimensions with the result of the operation
End of explanation
print(b, end="\n\n")
print('Axis 1: %s' % b[0], end="\n\n") # returns a vector
print(np.average(b), end="\n\n") # takes the mean of the elements
print(np.average(b, axis=1), end="\n\n") # takes the mean of the vectors along axis 1
print(b.sum(), end="\n\n") # returns the sum of the values
print(b.sum(axis=1), end="\n\n") # returns the sums of the values along axis 1
print(b.min(), end="\n\n") # returns the smallest value
print(b.max(), end="\n\n") # returns the largest value
Explanation: There are also predefined utility operations on an array
End of explanation
print(np.zeros((3, 5)), end="\n\n") # array of zeros with shape [3,5]
print(np.ones((2,3,4)), end="\n\n------------\n\n") # array of ones with shape [2,3,4]
print(np.full((2, 2), 10), end="\n\n") # array of 10s with shape [2,2]
print(np.arange(10, 30, 5), end="\n\n") # values from 10 to 30 with step 5
print(np.random.rand(2, 3), end="\n\n") # array of shape [2,3] with random values
Explanation: There are also functions to generate pre-initialized arrays
End of explanation
d = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
d
d[:, 0] # all rows (:) of the first column (0)
d[:, 1] # all rows (:) of the second column (1)
d[:, 0:2] # all rows (:) of columns 0 to 2 (exclusive)
d[:, 2] # all rows (:) of the third column (2)
Explanation: We can select slices of the array, allowing us to retrieve only a portion of it
End of explanation
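Combining the boolean operations shown earlier with indexing also lets us filter elements directly (an extra illustrative example, not part of the original tutorial):
d[d > 5] # boolean mask selection: returns a flat array with the elements greater than 5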
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
np.save('/tmp/x.npy', x)
del(x)
x = np.load('/tmp/x.npy')
print(x)
Explanation: NumPy also provides functions to save/load arrays to/from files
End of explanation |
4,293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sliders Example
This is an example of interactive iPython workbook that uses widgets to meaningfully interact with visualization.
Step4: 2D Rank Features | Python Code:
# Imports
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error as mse
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
# Data Loading
columns = OrderedDict([
("DAY", "the day of data collection"),
("Q-E", "input flow to plant"),
("ZN-E", "input Zinc to plant"),
("PH-E", "input pH to plant"),
("DBO-E", "input Biological demand of oxygen to plant"),
("DQO-E", "input chemical demand of oxygen to plant"),
("SS-E", "input suspended solids to plant"),
("SSV-E", "input volatile supended solids to plant"),
("SED-E", "input sediments to plant"),
("COND-E", "input conductivity to plant"),
("PH-P", "input pH to primary settler"),
("DBO-P", "input Biological demand of oxygen to primary settler"),
("SS-P", "input suspended solids to primary settler"),
("SSV-P", "input volatile supended solids to primary settler"),
("SED-P", "input sediments to primary settler"),
("COND-P", "input conductivity to primary settler"),
("PH-D", "input pH to secondary settler"),
("DBO-D", "input Biological demand of oxygen to secondary settler"),
("DQO-D", "input chemical demand of oxygen to secondary settler"),
("SS-D", "input suspended solids to secondary settler"),
("SSV-D", "input volatile supended solids to secondary settler"),
("SED-D", "input sediments to secondary settler"),
("COND-S", "input conductivity to secondary settler"),
("PH-S", "output pH"),
("DBO-S", "output Biological demand of oxygen"),
("DQO-S", "output chemical demand of oxygen"),
("SS-S", "output suspended solids"),
("SSV-S", "output volatile supended solids"),
("SED-S", "output sediments"),
("COND-", "output conductivity"),
("RD-DB-P", "performance input Biological demand of oxygen in primary settler"),
("RD-SSP", "performance input suspended solids to primary settler"),
("RD-SE-P", "performance input sediments to primary settler"),
("RD-DB-S", "performance input Biological demand of oxygen to secondary settler"),
("RD-DQ-S", "performance input chemical demand of oxygen to secondary settler"),
("RD-DB-G", "global performance input Biological demand of oxygen"),
("RD-DQ-G", "global performance input chemical demand of oxygen"),
("RD-SSG", "global performance input suspended solids"),
("RD-SED-G", "global performance input sediments"),
])
data = pd.read_csv("data/water-treatment.data", names=columns.keys())
data = data.replace('?', np.nan)
# Capture only the numeric columns in the data set.
numeric_columns = list(columns.keys())
numeric_columns.remove("DAY")
data = data[numeric_columns].apply(pd.to_numeric)
Explanation: Sliders Example
This is an example of interactive iPython workbook that uses widgets to meaningfully interact with visualization.
End of explanation
def apply_column_pairs(func):
"""Applies a function to a pair of columns and returns a new
dataframe that contains the result of the function as a matrix
of each pair of columns."""
def inner(df):
cols = pd.DataFrame([
[
func(df[acol], df[bcol]) for bcol in df.columns
] for acol in df.columns
])
cols.columns = df.columns
cols.index = df.columns
return cols
return inner
@apply_column_pairs
def least_square_error(cola, colb):
"""Computes the mean squared error of a linear regression
between two columns of data."""
x = cola.fillna(np.nanmean(cola))
y = colb.fillna(np.nanmean(colb))
m, b = np.polyfit(x, y, 1)
yh = (x * m) + b
return ((y-yh) ** 2).mean()
labeled_metrics = {
'Pearson': 'pearson',
'Kendall Tao': 'kendall',
'Spearman': 'spearman',
'Pairwise Covariance': 'covariance',
'Least Squares Error': 'lse',
}
@interact(metric=labeled_metrics, data=fixed(data))
def rank2d(data, metric='pearson'):
"""Creates a visualization of pairwise ranking by column in the data."""
# The different rank by 2d metrics.
metrics = {
"pearson": lambda df: df.corr('pearson'),
"kendall": lambda df: df.corr('kendall'),
"spearman": lambda df: df.corr('spearman'),
"covariance": lambda df: df.cov(),
"lse": least_square_error,
}
# Quick check to make sure a valid metric is passed in.
if metric not in metrics:
raise ValueError(
"'{}' not a valid metric, specify one of {}".format(
metric, ", ".join(metrics.keys())
)
)
# Compute the correlation matrix
corr = metrics[metric](data)
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(11, 9))
ax.set_title("{} metric across {} features".format(metric.title(), len(data.columns)))
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, vmax=.3,
square=True, xticklabels=5, yticklabels=5,
linewidths=.5, cbar_kws={"shrink": .5}, ax=ax)
Explanation: 2D Rank Features
End of explanation |
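Although the interaction above uses a dropdown for the metric, a slider can be wired up the same way. The sketch below is an illustrative addition (the vmax parameter and its range are assumptions): it exposes the heatmap's colour cap as a FloatSlider for the Pearson correlation.
@interact(vmax=widgets.FloatSlider(min=0.1, max=1.0, step=0.05, value=0.3),
          data=fixed(data))
def pearson_heatmap(data, vmax=0.3):
    # redraw the Pearson correlation heatmap with an interactively chosen colour cap
    corr = data.corr('pearson')
    mask = np.zeros_like(corr, dtype=bool)
    mask[np.triu_indices_from(mask)] = True
    f, ax = plt.subplots(figsize=(11, 9))
    sns.heatmap(corr, mask=mask, vmax=vmax, square=True,
                xticklabels=5, yticklabels=5, linewidths=.5,
                cbar_kws={"shrink": .5}, ax=ax)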
4,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
Here, the unit of observation is epochs from a specific study subject.
However, the same logic applies when the unit observation is
a number of study subject each of whom contribute their own averaged
data (i.e., an average of their epochs). This would then be considered
an analysis at the "2nd level".
See the FieldTrip tutorial for a caveat regarding
the possible interpretation of "significant" clusters.
For more information on cluster-based permutation testing in MNE-Python,
see also
Step1: Set parameters
Step2: Read epochs for the channel of interest
Step3: Find the FieldTrip neighbor definition to setup sensor adjacency
Step4: Compute permutation statistic
How does it work? We use clustering to "bind" together features which are
similar. Our features are the magnetic fields measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test-statistic, we first sum all F-values in all
clusters. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size
Step5: <div class="alert alert-info"><h4>Note</h4><p>Note how we only specified an adjacency for sensors! However,
because we used
Step6: Permutation statistic for time-frequencies
Let's do the same thing with the time-frequency decomposition of the data
(see tut-sensors-time-freq for a tutorial and
ex-tfr-comparison for a comparison of time-frequency methods) to
show how cluster permutations can be done on higher-dimensional data.
Step7: Remember the note on the adjacency matrix from above
Step8: Now we can run the cluster permutation test, but first we have to set a
threshold. This example decimates in time and uses few frequencies so we need
to increase the threshold from the default value in order to have
differentiated clusters (i.e., so that our algorithm doesn't just find one
large cluster). For a more principled method of setting this parameter,
threshold-free cluster enhancement may be used.
See disc-stats for a discussion.
Step9: Finally, we can plot our results. It is difficult to visualize clusters in
time-frequency-sensor space; plotting time-frequency spectrograms and
plotting topomaps display time-frequency and sensor space respectively
but they are difficult to combine. We will plot topomaps with the clustered
sensors colored in white adjacent to spectrograms in order to provide a
visualization of the results. This is a dimensionally limited view, however.
Each sensor has its own significant time-frequencies, but, in order to
display a single spectrogram, all the time-frequencies that are significant
for any sensor in the cluster are plotted as significant. This is a
difficulty inherent to visualizing high-dimensional data and should be taken
into consideration when interpreting results. | Python Code:
# Authors: Denis Engemann <[email protected]>
# Jona Sassenhagen <[email protected]>
# Alex Rockhill <[email protected]>
# Stefan Appelhoff <[email protected]>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
import scipy.stats
import mne
from mne.stats import spatio_temporal_cluster_test, combine_adjacency
from mne.datasets import sample
from mne.channels import find_ch_adjacency
from mne.viz import plot_compare_evokeds
from mne.time_frequency import tfr_morlet
Explanation: Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
Here, the unit of observation is epochs from a specific study subject.
However, the same logic applies when the unit observation is
a number of study subject each of whom contribute their own averaged
data (i.e., an average of their epochs). This would then be considered
an analysis at the "2nd level".
See the FieldTrip tutorial for a caveat regarding
the possible interpretation of "significant" clusters.
For more information on cluster-based permutation testing in MNE-Python,
see also: tut-cluster-one-samp-tfr
End of explanation
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
event_id = {'Aud/L': 1, 'Aud/R': 2, 'Vis/L': 3, 'Vis/R': 4}
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 30)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
picks = mne.pick_types(raw.info, meg='mag', eog=True)
reject = dict(mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
epochs.drop_channels(['EOG 061'])
epochs.equalize_event_counts(event_id)
# Obtain the data as a 3D matrix and transpose it such that
# the dimensions are as expected for the cluster permutation test:
# n_epochs × n_times × n_channels
X = [epochs[event_name].get_data() for event_name in event_id]
X = [np.transpose(x, (0, 2, 1)) for x in X]
Explanation: Read epochs for the channel of interest
End of explanation
adjacency, ch_names = find_ch_adjacency(epochs.info, ch_type='mag')
print(type(adjacency)) # it's a sparse matrix!
fig, ax = plt.subplots(figsize=(5, 4))
ax.imshow(adjacency.toarray(), cmap='gray', origin='lower',
interpolation='nearest')
ax.set_xlabel('{} Magnetometers'.format(len(ch_names)))
ax.set_ylabel('{} Magnetometers'.format(len(ch_names)))
ax.set_title('Between-sensor adjacency')
fig.tight_layout()
Explanation: Find the FieldTrip neighbor definition to setup sensor adjacency
End of explanation
# We are running an F test, so we look at the upper tail
# see also: https://stats.stackexchange.com/a/73993
tail = 1
# We want to set a critical test statistic (here: F), to determine when
# clusters are being formed. Using Scipy's percent point function of the F
# distribution, we can conveniently select a threshold that corresponds to
# some alpha level that we arbitrarily pick.
alpha_cluster_forming = 0.001
# For an F test we need the degrees of freedom for the numerator
# (number of conditions - 1) and the denominator (number of observations
# - number of conditions):
n_conditions = len(event_id)
n_observations = len(X[0])
dfn = n_conditions - 1
dfd = n_observations - n_conditions
# Note: we calculate 1 - alpha_cluster_forming to get the critical value
# on the right tail
f_thresh = scipy.stats.f.ppf(1 - alpha_cluster_forming, dfn=dfn, dfd=dfd)
# run the cluster based permutation analysis
cluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,
threshold=f_thresh, tail=tail,
n_jobs=1, buffer_size=None,
adjacency=adjacency)
F_obs, clusters, p_values, _ = cluster_stats
Explanation: Compute permutation statistic
How does it work? We use clustering to "bind" together features which are
similar. Our features are the magnetic fields measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test-statistic, we first sum all F-values in all
clusters. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size
:footcite:MarisOostenveld2007,Sassenhagen2019.
End of explanation
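Before sub-selecting anything, it can be helpful to glance at what the test returned (a small illustrative addition, not part of the original tutorial):
print('Found {} clusters; smallest cluster p-value: {:.4f}'
      .format(len(clusters), p_values.min()))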
# We subselect clusters that we consider significant at an arbitrarily
# picked alpha level: "p_accept".
# NOTE: remember the caveats with respect to "significant" clusters that
# we mentioned in the introduction of this tutorial!
p_accept = 0.01
good_cluster_inds = np.where(p_values < p_accept)[0]
# configure variables for visualization
colors = {"Aud": "crimson", "Vis": 'steelblue'}
linestyles = {"L": '-', "R": '--'}
# organize data for plotting
evokeds = {cond: epochs[cond].average() for cond in event_id}
# loop over clusters
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
time_inds, space_inds = np.squeeze(clusters[clu_idx])
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
# get topography for F stat
f_map = F_obs[time_inds, ...].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
# plot average test statistic and mark significant sensors
f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)
f_evoked.plot_topomap(times=0, mask=mask, axes=ax_topo, cmap='Reds',
vmin=np.min, vmax=np.max, show=False,
colorbar=False, mask_params=dict(markersize=10))
image = ax_topo.images[0]
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))
# add new axis for time courses and plot time courses
ax_signals = divider.append_axes('right', size='300%', pad=1.2)
title = 'Cluster #{0}, {1} sensor'.format(i_clu + 1, len(ch_inds))
if len(ch_inds) > 1:
title += "s (mean)"
plot_compare_evokeds(evokeds, title=title, picks=ch_inds, axes=ax_signals,
colors=colors, linestyles=linestyles, show=False,
split_legend=True, truncate_yaxis='auto')
# plot temporal cluster extent
ymin, ymax = ax_signals.get_ylim()
ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],
color='orange', alpha=0.3)
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Note how we only specified an adjacency for sensors! However,
because we used :func:`mne.stats.spatio_temporal_cluster_test`,
an adjacency for time points was automatically taken into
account. That is, at time point N, the time points N - 1 and
N + 1 were considered as adjacent (this is also called "lattice
adjacency"). This is only possbile because we ran the analysis on
2D data (times × channels) per observation ... for 3D data per
observation (e.g., times × frequencies × channels), we will need
to use :func:`mne.stats.combine_adjacency`, as shown further
below.</p></div>
Note also that the same functions work with source estimates.
The only differences are the origin of the data, the size,
and the adjacency definition.
It can be used for single trials or for groups of subjects.
Visualize clusters
End of explanation
decim = 4
freqs = np.arange(7, 30, 3) # define frequencies of interest
n_cycles = freqs / freqs[0]
epochs_power = list()
for condition in [epochs[k] for k in ('Aud/L', 'Vis/L')]:
this_tfr = tfr_morlet(condition, freqs, n_cycles=n_cycles,
decim=decim, average=False, return_itc=False)
this_tfr.apply_baseline(mode='ratio', baseline=(None, 0))
epochs_power.append(this_tfr.data)
# transpose again to (epochs, frequencies, times, channels)
X = [np.transpose(x, (0, 2, 3, 1)) for x in epochs_power]
Explanation: Permutation statistic for time-frequencies
Let's do the same thing with the time-frequency decomposition of the data
(see tut-sensors-time-freq for a tutorial and
ex-tfr-comparison for a comparison of time-frequency methods) to
show how cluster permutations can be done on higher-dimensional data.
End of explanation
# our data at each observation is of shape frequencies × times × channels
tfr_adjacency = combine_adjacency(
len(freqs), len(this_tfr.times), adjacency)
Explanation: Remember the note on the adjacency matrix from above: For 3D data, as here,
we must use :func:mne.stats.combine_adjacency to extend the
sensor-based adjacency to incorporate the time-frequency plane as well.
Here, the integer inputs are converted into a lattice and
combined with the sensor adjacency matrix so that data at similar
times and with similar frequencies and at close sensor locations are
clustered together.
End of explanation
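As a quick sanity check (an illustrative addition), the combined adjacency should be a square matrix whose side equals n_freqs * n_times * n_channels:
n_expected = len(freqs) * len(this_tfr.times) * len(ch_names)
print(tfr_adjacency.shape, n_expected)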
# This time we don't calculate a threshold based on the F distribution.
# We might as well select an arbitrary threshold for cluster forming
tfr_threshold = 15.0
# run cluster based permutation analysis
cluster_stats = spatio_temporal_cluster_test(
X, n_permutations=1000, threshold=tfr_threshold, tail=1, n_jobs=1,
buffer_size=None, adjacency=tfr_adjacency)
Explanation: Now we can run the cluster permutation test, but first we have to set a
threshold. This example decimates in time and uses few frequencies so we need
to increase the threshold from the default value in order to have
differentiated clusters (i.e., so that our algorithm doesn't just find one
large cluster). For a more principled method of setting this parameter,
threshold-free cluster enhancement may be used.
See disc-stats for a discussion.
End of explanation
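If you wanted the threshold-free cluster enhancement mentioned above instead of a fixed threshold, MNE accepts a dict of start/step values. The sketch below is illustrative only (the step size is an arbitrary assumption) and is left commented out because TFCE is much slower:
# tfce_threshold = dict(start=0, step=5)
# cluster_stats_tfce = spatio_temporal_cluster_test(
#     X, n_permutations=1000, threshold=tfce_threshold, tail=1,
#     n_jobs=1, buffer_size=None, adjacency=tfr_adjacency)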
F_obs, clusters, p_values, _ = cluster_stats
good_cluster_inds = np.where(p_values < p_accept)[0]
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
freq_inds, time_inds, space_inds = clusters[clu_idx]
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
freq_inds = np.unique(freq_inds)
# get topography for F stat
f_map = F_obs[freq_inds].mean(axis=0)
f_map = f_map[time_inds].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# plot average test statistic and mark significant sensors
f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)
f_evoked.plot_topomap(times=0, mask=mask, axes=ax_topo, cmap='Reds',
vmin=np.min, vmax=np.max, show=False,
colorbar=False, mask_params=dict(markersize=10))
image = ax_topo.images[0]
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))
# remove the title that would otherwise say "0.000 s"
ax_topo.set_title("")
# add new axis for spectrogram
ax_spec = divider.append_axes('right', size='300%', pad=1.2)
title = 'Cluster #{0}, {1} spectrogram'.format(i_clu + 1, len(ch_inds))
if len(ch_inds) > 1:
title += " (max over channels)"
F_obs_plot = F_obs[..., ch_inds].max(axis=-1)
F_obs_plot_sig = np.zeros(F_obs_plot.shape) * np.nan
F_obs_plot_sig[tuple(np.meshgrid(freq_inds, time_inds))] = \
F_obs_plot[tuple(np.meshgrid(freq_inds, time_inds))]
for f_image, cmap in zip([F_obs_plot, F_obs_plot_sig], ['gray', 'autumn']):
c = ax_spec.imshow(f_image, cmap=cmap, aspect='auto', origin='lower',
extent=[epochs.times[0], epochs.times[-1],
freqs[0], freqs[-1]])
ax_spec.set_xlabel('Time (ms)')
ax_spec.set_ylabel('Frequency (Hz)')
ax_spec.set_title(title)
# add another colorbar
ax_colorbar2 = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(c, cax=ax_colorbar2)
ax_colorbar2.set_ylabel('F-stat')
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
Explanation: Finally, we can plot our results. It is difficult to visualize clusters in
time-frequency-sensor space; plotting time-frequency spectrograms and
plotting topomaps display time-frequency and sensor space respectively
but they are difficult to combine. We will plot topomaps with the clustered
sensors colored in white adjacent to spectrograms in order to provide a
visualization of the results. This is a dimensionally limited view, however.
Each sensor has its own significant time-frequencies, but, in order to
display a single spectrogram, all the time-frequencies that are significant
for any sensor in the cluster are plotted as significant. This is a
difficulty inherent to visualizing high-dimensional data and should be taken
into consideration when interpreting results.
End of explanation |
4,295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Urban Networks II
Overview of today's topics
Step1: 1. Model a study site
First, we will identify a study site, model its street network, and calculate some simple indicators.
Step2: 2. Simulate commutes
We'll use a random sample of LEHD LODES data to get home/work coordinates. This is an imperfect proxy for "true" work locations from a payroll enumeration. You can read more about LODES and its limitations here. These data are processed in a separate notebook to keep the data easy on your CPU and memory for this lecture. Our trip simulation will use naive assumptions about travel time (e.g., free flow, no congestion, rough imputation of speed limits) for simplicity, but these can be enriched with effort.
Step3: 3. Network efficiency
How "efficient" are our commuter's routes? That is, how does their distance traveled compare to straight-line distances from home to work?
Step4: 4. Network perturbation
Oh no! There's been an earthquake!
The earthquake has knocked out 10% of the street network. Let's simulate that perturbation and see how routes have to change.
Step5: How many routes are now disconnected? How did trip efficiency change?
Step6: Central LA performs relatively well because it has a relatively dense and gridlike network that offers multiple redundancy options.
What if you conduct this analysis in a disconnected, dendritic suburb on the urban fringe?
What if you model a walkable network rather than a drivable one?
What if the network perturbation isn't a spatially random process?
Take these questions as prompts for self-paced exercise. For example, let's say the LA river has flooded. Use OSMnx to attach elevations to all the nodes in our street network, then knock-out the 10% at the lowest elevation (ie, around the river). How does that change network characteristics like connectivity and efficiency? Or, model a coastal town Miami Beach, then knock-out the network nodes below some sea-level rise threshold. What happens? What neighborhoods are most affected? What communities live in those vulnerable places?
Step7: 5. Compare places to each other
Here we'll model and analyze a set of sub-sites within a study area to compare their characteristics.
Step8: Let's use a custom filter to model "surface streets." You get to pick what to include and exclude, using the Overpass Query Language.
Step9: Our simplified, naive assumptions in this analysis have some shortcomings that resulting in analytical problems. How would you improve it?
1. Periphery effects?
2. Incorrect study site sizes?
3. What are we counting and not counting here?
Step10: 6. Urban accessibility
If you're interested in isochrone mapping, see the OSMnx examples for a demonstration.
Here, we'll analyze food deserts in central LA using OSMnx and Pandana. Pandana uses contraction hierarchies for imprecise but very fast shortest path calculation.
Step11: This tells us about the travel time to the nearest amenities, from each node in the network. What if we're instead interested in how many amenities we can reach within our time horizon? | Python Code:
import geopandas as gpd
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import osmnx as ox
import pandana
import pandas as pd
from shapely.geometry import Point
# consistent randomization
np.random.seed(0)
# configure OSMnx
cache_folder = '../../data/cache2'
ox.config(log_console=True, use_cache=True, cache_folder=cache_folder)
Explanation: Urban Networks II
Overview of today's topics:
- Network modeling and analysis in a study site
- Simulating commutes
- Network efficiency
- Network perturbation
- Comparative network analysis
- Urban accessibility
End of explanation
# create a study site: geocode city hall, convert coords to shapely geometry,
# project geometry to UTM, buffer by 5km, project back to lat-lng
latlng_coords = ox.geocode('Los Angeles City Hall')
latlng_point = Point(latlng_coords[1], latlng_coords[0])
latlng_point_proj, crs = ox.projection.project_geometry(latlng_point)
polygon_proj = latlng_point_proj.buffer(5000)
polygon, crs = ox.projection.project_geometry(polygon_proj, crs=crs, to_latlong=True)
polygon
# model the street network within study site
# your parameterization makes assumptions about your interests here
G = ox.graph_from_polygon(polygon, network_type='drive', truncate_by_edge=True)
fig, ax = ox.plot_graph(G, node_size=0, edge_color='w', edge_linewidth=0.3)
# add speeds and travel times
G = ox.add_edge_speeds(G)
G = ox.add_edge_travel_times(G)
# study site area in km^2
polygon_proj.area / 1e6
# how many intersections does it contain?
street_counts = pd.Series(dict(G.nodes(data='street_count')))
intersect_count = len(street_counts[street_counts > 2])
intersect_count
# what's the intersection density?
intersect_count / (polygon_proj.area / 1e6)
# now clean up the intersections and re-calculate
clean_intersects = ox.consolidate_intersections(ox.project_graph(G),
rebuild_graph=False,
tolerance=10)
clean_intersect_count = len(clean_intersects)
clean_intersect_count
# what's the cleaned intersection density?
clean_intersect_count / (polygon_proj.area / 1e6)
Explanation: 1. Model a study site
First, we will identify a study site, model its street network, and calculate some simple indicators.
End of explanation
od = pd.read_csv('../../data/od.csv').sample(1000)
od.shape
od
# get home/work network nodes
home_nodes = ox.get_nearest_nodes(G, X=od['home_lng'], Y=od['home_lat'], method='balltree')
work_nodes = ox.get_nearest_nodes(G, X=od['work_lng'], Y=od['work_lat'], method='balltree')
def calc_path(G, orig, dest, weight='travel_time'):
try:
return ox.shortest_path(G, orig, dest, weight)
except nx.exception.NetworkXNoPath:
# if path cannot be solved
return None
%%time
paths = [calc_path(G, orig, dest) for orig, dest in zip(home_nodes, work_nodes)]
len(paths)
# filter out any nulls (ie, not successfully solved)
paths = [path for path in paths if path is not None]
len(paths)
# plot 100 routes
fig, ax = ox.plot_graph_routes(G,
routes=paths[0:100],
node_size=0,
edge_linewidth=0.2,
orig_dest_size=0,
route_colors='c',
route_linewidth=2,
route_alpha=0.2)
# now it's your turn
# how do these routes change if we minimize distance traveled instead?
# what kinds of streets get more/fewer trips assigned to them?
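# one possible starting point (a sketch, not a full answer): re-solve the same
# trips while minimizing length instead of travel time, then compare route sets
paths_by_length = [calc_path(G, orig, dest, weight='length')
                   for orig, dest in zip(home_nodes, work_nodes)]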
Explanation: 2. Simulate commutes
We'll use a random sample of LEHD LODES data to get home/work coordinates. This is an imperfect proxy for "true" work locations from a payroll enumeration. You can read more about LODES and its limitations here. These data are processed in a separate notebook to keep the data easy on your CPU and memory for this lecture. Our trip simulation will use naive assumptions about travel time (e.g., free flow, no congestion, rough imputation of speed limits) for simplicity, but these can be enriched with effort.
End of explanation
def calc_efficiency(G, route, attr='length'):
# sum the edge lengths in the route
trip_distance = sum(ox.utils_graph.get_route_edge_attributes(G,
route=route,
attribute=attr))
# fast vectorized great-circle distance calculator
gc_distance = ox.distance.great_circle_vec(lat1=G.nodes[route[0]]['y'],
lng1=G.nodes[route[0]]['x'],
lat2=G.nodes[route[-1]]['y'],
lng2=G.nodes[route[-1]]['x'])
return gc_distance / trip_distance
# calculate each trip's efficiency and make a pandas series
trip_efficiency = pd.Series([calc_efficiency(G, path) for path in paths])
# the straight-line distance is what % of each network distance traveled?
trip_efficiency
trip_efficiency.describe()
# now it's your turn
# what if I were instead interested in how much longer trips are than straight-line would be?
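# one way to frame that (a sketch): the reciprocal is a "detour factor", i.e.,
# how many times longer the network trip is than the straight-line distance
trip_detour = 1 / trip_efficiency
trip_detour.describe()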
Explanation: 3. Network efficiency
How "efficient" are our commuter's routes? That is, how does their distance traveled compare to straight-line distances from home to work?
End of explanation
# randomly knock-out 10% of the network's nodes
frac = 0.10
n = int(len(G.nodes) * frac)
nodes_to_remove = pd.Series(G.nodes).sample(n).index
G_per = G.copy()
G_per.remove_nodes_from(nodes_to_remove)
# get home/work network nodes again, calculate routes, drop nulls
home_nodes_per = ox.get_nearest_nodes(G_per, X=od['home_lng'], Y=od['home_lat'], method='balltree')
work_nodes_per = ox.get_nearest_nodes(G_per, X=od['work_lng'], Y=od['work_lat'], method='balltree')
paths_per = [calc_path(G_per, orig, dest) for orig, dest in zip(home_nodes_per, work_nodes_per)]
paths_per = [path for path in paths_per if path is not None]
len(paths_per)
# calculate each trip's efficiency and make a pandas series
trip_efficiency_per = pd.Series([calc_efficiency(G_per, path) for path in paths_per])
trip_efficiency_per.describe()
Explanation: 4. Network perturbation
Oh no! There's been an earthquake!
The earthquake has knocked out 10% of the street network. Let's simulate that perturbation and see how routes have to change.
End of explanation
# what % of formerly solvable routes are now unsolvable?
1 - (len(paths_per) / len(paths))
# knocking out x% of the network made (solvable) trips what % less efficient?
1 - (trip_efficiency_per.mean() / trip_efficiency.mean())
# plot n routes apiece, before (cyan) and after (yellow) perturbation
n = 100
all_paths = paths[:n] + paths_per[:n]
colors = ['c'] * n + ['y'] * n
# shuffle the order, so you don't just plot new atop old
paths_colors = pd.DataFrame({'path': all_paths, 'color': colors}).sample(frac=1)
fig, ax = ox.plot_graph_routes(G,
routes=paths_colors['path'],
node_size=0,
edge_linewidth=0.2,
orig_dest_size=0,
route_colors=paths_colors['color'],
route_linewidth=2,
route_alpha=0.3)
Explanation: How many routes are now disconnected? How did trip efficiency change?
End of explanation
# now it's your turn
# use the prompts above to conduct a self-directed analysis of network perturbation
# either using elevation/flooding or any of the 3 prompts above
Explanation: Central LA performs relatively well because it has a relatively dense and gridlike network that offers multiple redundancy options.
What if you conduct this analysis in a disconnected, dendritic suburb on the urban fringe?
What if you model a walkable network rather than a drivable one?
What if the network perturbation isn't a spatially random process?
Take these questions as prompts for self-paced exercise. For example, let's say the LA River has flooded. Use OSMnx to attach elevations to all the nodes in our street network, then knock out the 10% of nodes at the lowest elevation (i.e., around the river). How does that change network characteristics like connectivity and efficiency? Or, model a coastal town like Miami Beach, then knock out the network nodes below some sea-level rise threshold. What happens? What neighborhoods are most affected? What communities live in those vulnerable places?
End of explanation
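A rough sketch of the flooding prompt above. It is left commented out because attaching elevations requires an elevation service (and usually an API key), and the 10% cut-off is an assumption taken from the prompt:
# G = ox.elevation.add_node_elevations_google(G, api_key='YOUR_KEY')  # exact function depends on OSMnx version
# elevs = pd.Series(dict(G.nodes(data='elevation')))
# flooded_nodes = elevs.nsmallest(int(len(elevs) * 0.10)).index
# G_flood = G.copy()
# G_flood.remove_nodes_from(flooded_nodes)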
# study area within 1/2 mile of SF Civic Center
latlng_coords = ox.geocode('Civic Center, San Francisco, CA, USA')
latlng_point = Point(latlng_coords[1], latlng_coords[0])
latlng_point_proj, crs = ox.projection.project_geometry(latlng_point)
polygon_proj = latlng_point_proj.buffer(800)
sf_polygon, crs = ox.projection.project_geometry(polygon_proj, crs=crs, to_latlong=True)
# get the tracts that intersect the study area polygon
tracts = gpd.read_file('../../data/tl_2020_06_tract/').set_index('GEOID')
mask = tracts.intersects(sf_polygon)
cols = ['ALAND', 'geometry']
sf_tracts = tracts.loc[mask, cols]
sf_tracts.head()
Explanation: 5. Compare places to each other
Here we'll model and analyze a set of sub-sites within a study area to compare their characteristics.
End of explanation
# build a custom filter
cf1 = '["highway"~"residential|living_street|tertiary|secondary|primary"]'
cf2 = '["service"!~"alley|driveway|emergency_access|parking|parking_aisle|private"]'
cf3 = '["area"!~"yes"]'
custom_filter = cf1 + cf2 + cf3
custom_filter
# model the street network across all the study sub-sites
G_all = ox.graph_from_polygon(sf_tracts.unary_union, custom_filter=custom_filter)
len(G_all.nodes)
%%time
# calculate clean intersection counts per tract
intersect_counts = {}
for label, geom in zip(sf_tracts.index, sf_tracts['geometry']):
G_tmp = ox.graph_from_polygon(geom, custom_filter=custom_filter)
clean_intersects = ox.consolidate_intersections(ox.project_graph(G_tmp),
rebuild_graph=False)
intersect_counts[label] = len(clean_intersects)
# calculate intersection density per km^2
sf_tracts['intersect_count'] = pd.Series(intersect_counts)
sf_tracts['intersect_density'] = sf_tracts['intersect_count'] / (sf_tracts['ALAND'] / 1e6)
sf_tracts['intersect_density'].describe()
# plot the tracts and the network
plt.style.use('dark_background')
fig, ax = plt.subplots(figsize=(6, 6))
ax.axis('off')
ax.set_title('Intersection density (per km2)')
ax = sf_tracts.plot(ax=ax, column='intersect_density', cmap='Reds_r',
legend=True, legend_kwds={'shrink': 0.8})
fig, ax = ox.plot_graph(G_all, ax=ax, node_size=0, edge_color='#111111')
fig.savefig('map.png', dpi=300, facecolor='#111111', bbox_inches='tight')
Explanation: Let's use a custom filter to model "surface streets." You get to pick what to include and exclude, using the Overpass Query Language.
End of explanation
# now it's your turn
# how would you improve this analysis to make it more meaningful and interpretable?
Explanation: Our simplified, naive assumptions in this analysis have some shortcomings that result in analytical problems. How would you improve it?
1. Periphery effects?
2. Incorrect study site sizes?
3. What are we counting and not counting here?
End of explanation
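One sketch of a fix for the periphery-effect prompt above (the 100 m buffer distance is an arbitrary assumption): buffer each tract before modeling it, so streets that straddle the tract boundary still inform the intersection consolidation.
label = sf_tracts.index[0]  # illustrated for a single tract; the same idea applies inside the loop above
geom = sf_tracts.loc[label, 'geometry']
geom_proj, crs_proj = ox.projection.project_geometry(geom)
geom_buff, _ = ox.projection.project_geometry(geom_proj.buffer(100),
                                              crs=crs_proj, to_latlong=True)
G_buffered = ox.graph_from_polygon(geom_buff, custom_filter=custom_filter)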
# specify some parameters for the analysis
walk_time = 20 # max walking horizon in minutes
walk_speed = 4.5 # km per hour
# model the walkable network within our original study site
G_walk = ox.graph_from_polygon(polygon, network_type='walk')
fig, ax = ox.plot_graph(G_walk, node_size=0, edge_color='w', edge_linewidth=0.3)
# set a uniform walking speed on every edge
for u, v, data in G_walk.edges(data=True):
data['speed_kph'] = walk_speed
G_walk = ox.add_edge_travel_times(G_walk)
# extract node/edge GeoDataFrames, retaining only necessary columns (for pandana)
nodes = ox.graph_to_gdfs(G_walk, edges=False)[['x', 'y']]
edges = ox.graph_to_gdfs(G_walk, nodes=False).reset_index()[['u', 'v', 'travel_time']]
# get all the "fresh food" stores on OSM within the study site
# you could load any amenities DataFrame, but we'll get ours from OSM
tags = {'shop': ['grocery', 'greengrocer', 'supermarket']}
amenities = ox.geometries_from_bbox(north=nodes['y'].max(),
south=nodes['y'].min(),
east=nodes['x'].max(),
west=nodes['x'].min(),
tags=tags)
amenities.shape
# construct the pandana network model
network = pandana.Network(node_x=nodes['x'],
node_y=nodes['y'],
edge_from=edges['u'],
edge_to=edges['v'],
edge_weights=edges[['travel_time']])
# extract (approximate, unprojected) centroids from the amenities' geometries
centroids = amenities.centroid
# specify a max travel distance for this analysis
# then set the amenities' locations on the network
maxdist = walk_time * 60 # minutes -> seconds, to match travel_time units
network.set_pois(category='grocery',
maxdist=maxdist,
maxitems=3,
x_col=centroids.x,
y_col=centroids.y)
# calculate travel time to nearest amenity from each node in network
distances = network.nearest_pois(distance=maxdist,
category='grocery',
num_pois=3)
distances.astype(int).head()
# plot distance to nearest amenity
fig, ax = ox.plot_graph(G_walk, node_size=0, edge_linewidth=0.1,
edge_color='gray', show=False, close=False)
sc = ax.scatter(x=nodes['x'],
y=nodes['y'],
c=distances[1],
s=1,
cmap='inferno_r')
ax.set_title(f'Walking time to nearest grocery store')
plt.colorbar(sc, shrink=0.7).outline.set_edgecolor('none')
Explanation: 6. Urban accessibility
If you're interested in isochrone mapping, see the OSMnx examples for a demonstration.
Here, we'll analyze food deserts in central LA using OSMnx and Pandana. Pandana uses contraction hierarchies for imprecise but very fast shortest path calculation.
End of explanation
# set a variable on the network, using the amenities' nodes
node_ids = network.get_node_ids(centroids.x, centroids.y)
network.set(node_ids, name='grocery')
# aggregate the variable to all the nodes in the network
# when counting, the decay doesn't matter (but would for summing)
access = network.aggregate(distance=maxdist,
type='count',
decay='linear',
name='grocery')
# let's cap it at 5, assuming no further utility from a larger choice set
access = access.clip(upper=5)
access.describe()
# plot amenity count within your walking horizon
fig, ax = ox.plot_graph(G_walk, node_size=0, edge_linewidth=0.1,
edge_color='gray', show=False, close=False)
sc = ax.scatter(x=nodes['x'],
y=nodes['y'],
c=access,
s=1,
cmap='inferno')
ax.set_title(f'Grocery stores within a {walk_time} minute walk')
plt.colorbar(sc, shrink=0.7).outline.set_edgecolor('none')
# now it's your turn
# map walking time to nearest school in our study site, capped at 30 minutes
# what kinds of communities have better/worse walking access to schools?
# see documentation at https://wiki.openstreetmap.org/wiki/Tag:amenity=school
Explanation: This tells us about the travel time to the nearest amenities, from each node in the network. What if we're instead interested in how many amenities we can reach within our time horizon?
End of explanation |
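A possible starting point for the school exercise is sketched below; the amenity tag and the 30-minute cap come from the prompt above, and everything else reuses the objects already built in this section.
# sketch of the school-access exercise, reusing the walk network and pandana model from above
school_tags = {'amenity': 'school'}
schools = ox.geometries_from_bbox(north=nodes['y'].max(),
                                  south=nodes['y'].min(),
                                  east=nodes['x'].max(),
                                  west=nodes['x'].min(),
                                  tags=school_tags)
school_centroids = schools.centroid
max_school = 30 * 60  # cap at 30 minutes, in seconds to match travel_time
network.set_pois(category='school',
                 maxdist=max_school,
                 maxitems=1,
                 x_col=school_centroids.x,
                 y_col=school_centroids.y)
school_times = network.nearest_pois(distance=max_school, category='school', num_pois=1)
school_times.head()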
4,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quick start guide
Installation
Stable
Fri can be installed via the Python Package Index (PyPI).
If you have pip installed just execute the command
pip install fri
to get the newest stable version.
The dependencies should be installed and checked automatically.
If you have problems installing please open an issue at our tracker.
Development
To install a bleeding edge dev version of FRI you can clone the GitHub repository using
git clone [email protected]
Step1: We want to create a small set with a few features.
Because we want to showcase the all-relevant feature selection, we generate multiple strongly and weakly relevant features.
Step2: The method also prints out the parameters again.
Step3: We created a binary classification set with 6 features of which 2 are strongly relevant and 2 weakly relevant.
Preprocess
Because our method expects mean centered data we need to standardize it first.
This centers the values around 0 and scales them to unit standard deviation
Step4: Model
Now we need to create a Model.
We use the FRI module.
Step5: fri provides a convenience class fri.FRI to create a model.
fri.FRI needs the type of problem as a first argument of type ProblemName.
Depending on the Problem you want to analyze pick from one of the available models in ProblemName.
Step6: Because we have Classification data we use the ProblemName.CLASSIFICATION to instantiate our model.
Step7: Apart from the slack parameters and the fixed random state, we keep the default settings.
Fitting to data
Now we can just fit the model to the data using scikit-learn like commands.
Step8: The resulting feature relevance bounds are saved in the interval_ variable.
Step9: If you want to print out the relevance class use the print_interval_with_class() function.
Step10: The bounds are grouped in 2d sublists for each feature.
To access the relevance bounds for feature 2 we would use
Step11: The relevance classes are saved in the corresponding variable relevance_classes_
Step12: 2 denotes strongly relevant features, 1 weakly relevant and 0 irrelevant.
Plot results
The bounds in numerical form are useful for postprocessing.
If we want a human to look at it, we recommend the plot function plot_relevance_bars.
We can also color the bars according to relevance_classes_
Step13: Setting constraints manually
Our model also allows computing relevance bounds when the user sets a given range for the features.
We use a dictionary to encode our constraints.
Step14: Example
As an example, let us constrain the third feature from our example to the minimum relevance bound.
Step15: We use the function constrained_intervals.
Note
Step16: Feature 3 is set to its minimum (at 0).
How does it look visually?
Step17: Feature 3 is reduced to its minimum (no contribution).
In turn, its correlated partner feature 4 had to take its maximum contribution.
Print internal Parameters
If we want to take a look at internal parameters, we can use the verbose flag in the model creation.
Step18: This prints out the parameters of the baseline model
One can also see the best selected hyperparameter according to gridsearch and the training score of the model in score.
Multiprocessing
To enable multiprocessing simply use the n_jobs parameter when initializing the model.
It expects an integer parameter which defines the amount of processes used.
n_jobs=-1 uses all available on the CPU. | Python Code:
import numpy as np
# fixed Seed for demonstration
STATE = np.random.RandomState(123)
from fri import genClassificationData
Explanation: Quick start guide
Installation
Stable
Fri can be installed via the Python Package Index (PyPI).
If you have pip installed just execute the command
pip install fri
to get the newest stable version.
The dependencies should be installed and checked automatically.
If you have problems installing please open an issue at our tracker.
Development
To install a bleeding edge dev version of FRI you can clone the GitHub repository using
git clone [email protected]:lpfann/fri.git
and then check out the dev branch: git checkout dev.
We use poetry for dependency management.
Run
poetry install
in the cloned repository to install fri in a virtualenv.
To check if everything works as intented you can use pytest to run the unit tests.
Just run the command
poetry run pytest
in the main project folder
Using FRI
Now we showcase the workflow of using FRI on a simple classification problem.
Data
To have something to work with, we need some data first.
fri includes a generation method for binary classification and regression data.
In our case we need some classification data.
End of explanation
n = 300
features = 6
strongly_relevant = 2
weakly_relevant = 2
X,y = genClassificationData(n_samples=n,
n_features=features,
n_strel=strongly_relevant,
n_redundant=weakly_relevant,
random_state=STATE)
Explanation: We want to create a small set with a few features.
Because we want to showcase the all-relevant feature selection, we generate multiple strongly and weakly relevant features.
End of explanation
X.shape
Explanation: The method also prints out the parameters again.
End of explanation
from sklearn.preprocessing import StandardScaler
X_scaled = StandardScaler().fit_transform(X)
Explanation: We created a binary classification set with 6 features of which 2 are strongly relevant and 2 weakly relevant.
Preprocess
Because our method expects mean centered data we need to standardize it first.
This centers the values around 0 and scales them to unit standard deviation
End of explanation
import fri
Explanation: Model
Now we need to create a Model.
We use the FRI module.
End of explanation
list(fri.ProblemName)
Explanation: fri provides a convenience class fri.FRI to create a model.
fri.FRI needs the type of problem as a first argument of type ProblemName.
Depending on the Problem you want to analyze pick from one of the available models in ProblemName.
End of explanation
fri_model = fri.FRI(fri.ProblemName.CLASSIFICATION,
loss_slack=0.2,
w_l1_slack=0.2,
random_state=STATE)
fri_model
Explanation: Because we have Classification data we use the ProblemName.CLASSIFICATION to instantiate our model.
End of explanation
fri_model.fit(X_scaled,y)
Explanation: Apart from the slack parameters and the fixed random state, we keep the default settings.
Fitting to data
Now we can just fit the model to the data using scikit-learn like commands.
End of explanation
fri_model.interval_
Explanation: The resulting feature relevance bounds are saved in the interval_ variable.
End of explanation
print(fri_model.print_interval_with_class())
Explanation: If you want to print out the relevance class use the print_interval_with_class() function.
End of explanation
fri_model.interval_[2]
Explanation: The bounds are grouped in 2d sublists for each feature.
To access the relevance bounds for feature 2 we would use
End of explanation
fri_model.relevance_classes_
Explanation: The relevance classes are saved in the corresponding variable relevance_classes_:
End of explanation
# Import plot function
from fri.plot import plot_relevance_bars
import matplotlib.pyplot as plt
%matplotlib inline
# Create new figure, where we can put an axis on
fig, ax = plt.subplots(1, 1,figsize=(6,3))
# plot the bars on the axis, colored according to fri
out = plot_relevance_bars(ax,fri_model.interval_,classes=fri_model.relevance_classes_)
Explanation: 2 denotes strongly relevant features, 1 weakly relevant and 0 irrelevant.
Plot results
The bounds in numerical form are useful for postprocessing.
If we want a human to look at it, we recommend the plot function plot_relevance_bars.
We can also color the bars according to relevance_classes_
End of explanation
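As a small convenience (not part of the original quick start), the numeric classes can also be mapped to readable labels:
# illustrative helper: print each feature's relevance class with a readable label
class_names = {2: 'strongly relevant', 1: 'weakly relevant', 0: 'irrelevant'}
for idx, cls in enumerate(fri_model.relevance_classes_):
    print(f'feature {idx}: {class_names[int(cls)]}')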
preset = {}
Explanation: Setting constraints manually
Our model also allows computing relevance bounds when the user sets a given range for the features.
We use a dictionary to encode our constraints.
End of explanation
preset[2] = fri_model.interval_[2, 0]
Explanation: Example
As an example, let us constrain the third feature from our example to the minimum relevance bound.
End of explanation
const_ints = fri_model.constrained_intervals(preset=preset)
const_ints
Explanation: We use the function constrained_intervals.
Note: we need to fit the model before we can use this function.
We already did that, so we are fine.
End of explanation
fig, ax = plt.subplots(1, 1,figsize=(6,3))
out = plot_relevance_bars(ax, const_ints)
Explanation: Feature 3 is set to its minimum (at 0).
How does it look visually?
End of explanation
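The same mechanism works in the other direction. As an illustrative variation (not in the original guide), we could instead pin the third feature to its maximum bound:
# illustrative variation: constrain feature 3 (index 2) to its *maximum* relevance bound
preset_max = {2: fri_model.interval_[2, 1]}
const_ints_max = fri_model.constrained_intervals(preset=preset_max)
const_ints_max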
fri_model = fri.FRI(fri.ProblemName.CLASSIFICATION, verbose=True, random_state=STATE)
fri_model.fit(X_scaled,y)
Explanation: Feature 3 is reduced to its minimum (no contribution).
In turn, its correlated partner feature 4 had to take its maximum contribution.
Print internal Parameters
If we want to take a look at internal parameters, we can use the verbose flag in the model creation.
End of explanation
fri_model = fri.FRI(fri.ProblemName.CLASSIFICATION,
n_jobs=-1,
verbose=1,
random_state=STATE)
fri_model.fit(X_scaled,y)
Explanation: This prints out the parameters of the baseline model
One can also see the best selected hyperparameter according to gridsearch and the training score of the model in score.
Multiprocessing
To enable multiprocessing simply use the n_jobs parameter when initializing the model.
It expects an integer parameter which defines the amount of processes used.
n_jobs=-1 uses all available on the CPU.
End of explanation |
4,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started with TensorFlow (Eager Mode)
Learning Objectives
- Understand difference between Tensorflow's two modes
Step1: Eager Execution
Step2: Adding Two Tensors
The value of the tensor, as well as its shape and data type are printed
Step3: Overloaded Operators
We can also perform a tf.add() using the + operator. The /,-,* and ** operators are similarly overloaded with the appropriate tensorflow operation.
Step4: NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
Step5: You can convert a native TF tensor to a NumPy array using .numpy()
Step6: Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
Toy Dataset
We'll model the following function
Step7: Loss Function
Using mean squared error, our loss function is
Step8: Gradient Function
To use gradient descent we need to take the partial derivative of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables. The params=[2,3] argument tells TensorFlow to only compute derivatives with respect to the 2nd and 3rd arguments to the loss function (counting from 0, so really the 3rd and 4th).
Step9: Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
Exercise 2
Complete the code to update the parameters $w_0$ and $w_1$ according to the gradients d_w0 and d_w1 and the specified learning rate.
Step10: Bonus
Try modelling a non-linear function such as | Python Code:
import tensorflow as tf
print(tf.__version__)
Explanation: Getting started with TensorFlow (Eager Mode)
Learning Objectives
- Understand difference between Tensorflow's two modes: Eager Execution and Graph Execution
- Practice defining and performing basic operations on constant Tensors
- Use Tensorflow's automatic differentiation capability
Introduction
Eager Execution
Eager mode evaluates operations and returns concrete values immediately. To enable eager mode simply place tf.enable_eager_execution() at the top of your code. We recommend using eager execution when prototyping as it is intuitive, easier to debug, and requires less boilerplate code.
Graph Execution
Graph mode is TensorFlow's default execution mode (although it will change to eager with TF 2.0). In graph mode, operations only produce a symbolic graph which doesn't get executed until run within the context of a tf.Session(). This style of coding is less intuitive and has more boilerplate; however, it can lead to performance optimizations and is particularly suited for distributing training across multiple devices. We recommend graph (deferred) execution for performance-sensitive production code.
End of explanation
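For contrast, the lines below show roughly what the same kind of computation looks like in graph mode. They are left as comments only, because tf.enable_eager_execution() (the next cell) has to run before any other TensorFlow code in this notebook.
# graph-mode contrast (illustrative only -- do not run it here):
# the add op just builds a symbolic node; nothing is computed until sess.run
#
#   c = tf.add(x = tf.constant([5, 3, 8]), y = tf.constant([3, -1, 2]))
#   with tf.Session() as sess:
#       print(sess.run(c))  # -> [ 8  2 10]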
tf.enable_eager_execution()
Explanation: Eager Execution
End of explanation
a = tf.constant(value = [5, 3, 8], dtype = tf.int32)
b = tf.constant(value = [3, -1, 2], dtype = tf.int32)
c = tf.add(x = a, y = b)
print(c)
Explanation: Adding Two Tensors
The value of the tensor, as well as its shape and data type are printed
End of explanation
c = a + b # this is equivalent to tf.add(a,b)
print(c)
Explanation: Overloaded Operators
We can also perform a tf.add() using the + operator. The /,-,* and ** operators are similarly overloaded with the appropriate tensorflow operation.
End of explanation
import numpy as np
a_py = [1,2] # native python list
b_py = [3,4] # native python list
a_np = np.array(object = [1,2]) # numpy array
b_np = np.array(object = [3,4]) # numpy array
a_tf = tf.constant(value = [1,2], dtype = tf.int32) # native TF tensor
b_tf = tf.constant(value = [3,4], dtype = tf.int32) # native TF tensor
for result in [tf.add(x = a_py, y = b_py), tf.add(x = a_np, y = b_np), tf.add(x = a_tf, y = b_tf)]:
print("Type: {}, Value: {}".format(type(result), result))
Explanation: NumPy Interoperability
In addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands.
End of explanation
a_tf.numpy()
Explanation: You can convert a native TF tensor to a NumPy array using .numpy()
End of explanation
X = tf.constant(value = [1,2,3,4,5,6,7,8,9,10], dtype = tf.float32)
Y = 2 * X + 10
print("X:{}".format(X))
print("Y:{}".format(Y))
Explanation: Linear Regression
Now let's use low level tensorflow operations to implement linear regression.
Later in the course you'll see abstracted ways to do this using high level TensorFlow.
Toy Dataset
We'll model the following function:
\begin{equation}
y= 2x + 10
\end{equation}
End of explanation
def loss_mse(X, Y, w0, w1):
# TODO: Your code goes here
pass
Explanation: Loss Function
Using mean squared error, our loss function is:
\begin{equation}
MSE = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y}_i-Y_i)^2
\end{equation}
$\hat{Y}$ represents the vector containing our model's predictions:
\begin{equation}
\hat{Y} = w_0X + w_1
\end{equation}
Exercise 1
The function loss_mse below takes four arguments: the tensors $X$, $Y$ and the weights $w_0$ and $w_1$. Complete the function below to compute the Mean Square Error (MSE). Hint: check out the tf.reduce_mean function.
End of explanation
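For reference, one possible solution is shown below under a different name, so it does not overwrite the exercise stub above (try the exercise yourself first).
# one possible solution to Exercise 1
def loss_mse_solution(X, Y, w0, w1):
    Y_hat = w0 * X + w1
    return tf.reduce_mean(input_tensor = (Y_hat - Y)**2)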
# Counting from 0, the 2nd and 3rd parameter to the loss function are our weights
grad_f = tf.contrib.eager.gradients_function(f = loss_mse, params = [2, 3])
Explanation: Gradient Function
To use gradient descent we need to take the partial derivative of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!
During gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables. The params=[2,3] argument tells TensorFlow to only compute derivatives with respect to the 2nd and 3rd arguments to the loss function (counting from 0, so really the 3rd and 4th).
End of explanation
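A quick sanity check of automatic differentiation on a simpler function (illustrative): the derivative of x squared is 2x, so the gradient at 3.0 should come out as 6.0.
# illustrative check of tf.contrib.eager.gradients_function
def square(x):
    return x**2
grad_square = tf.contrib.eager.gradients_function(f = square)
print(grad_square(3.0))  # a one-element list containing a tensor with value 6.0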
STEPS = 1000
LEARNING_RATE = .02
# Initialize weights
w0 = tf.constant(value = 0.0, dtype = tf.float32)
w1 = tf.constant(value = 0.0, dtype = tf.float32)
for step in range(STEPS):
#1. Calculate gradients
d_w0, d_w1 = grad_f(X, Y, w0, w1) # derivatives calculated by tensorflow!
#2. Update weights
w0 = # TODO: Your code goes here
w1 = # TODO: Your code goes here
#3. Periodically print MSE
if step % 100 == 0:
print("STEP: {} MSE: {}".format(step,loss_mse(X, Y, w0, w1)))
# Print final MSE and weights
print("STEP: {} MSE: {}".format(STEPS,loss_mse(X, Y, w0, w1)))
print("w0:{}".format(round(float(w0), 4)))
print("w1:{}".format(round(float(w1), 4)))
Explanation: Training Loop
Here we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.
Exercise 2
Complete the code to update the parameters $w_0$ and $w_1$ according to the gradients d_w0 and d_w1 and the specified learning rate.
End of explanation
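For reference, a plain gradient-descent step is one possible answer for the two TODOs in the loop above.
# one possible answer for the Exercise 2 placeholders (substitute inside the loop above):
#   w0 = w0 - d_w0 * LEARNING_RATE
#   w1 = w1 - d_w1 * LEARNING_RATE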
from matplotlib import pyplot as plt
%matplotlib inline
X = tf.constant(value = np.linspace(0,2,1000), dtype = tf.float32)
Y = X * np.exp(-X**2)
plt.plot(X, Y)
def make_features(X):
features = [X]
features.append(tf.ones_like(X)) # Bias.
# TODO: add new features.
return tf.stack(features, axis=1)
def make_weights(n_weights):
W = [tf.constant(value = 0.0, dtype = tf.float32) for _ in range(n_weights)]
return tf.expand_dims(tf.stack(W),-1)
def predict(X, W):
Y_hat = tf.matmul(# TODO)
return tf.squeeze(Y_hat, axis=-1)
def loss_mse(X, Y, W):
Y_hat = predict(X, W)
return tf.reduce_mean(input_tensor = (Y_hat - Y)**2)
X = tf.constant(value = np.linspace(0,2,1000), dtype = tf.float32)
Y = np.exp(-X**2) * X
grad_f = tf.contrib.eager.gradients_function(f = loss_mse, params=[2])
STEPS = 2000
LEARNING_RATE = .02
# Weights/features.
Xf = make_features(X)
# Xf = Xf[:,0:2] # Linear features only.
W = make_weights(Xf.get_shape()[1].value)
# For plotting
steps = []
losses = []
plt.figure()
for step in range(STEPS):
#1. Calculate gradients
dW = grad_f(Xf, Y, W)[0]
#2. Update weights
W -= dW * LEARNING_RATE
#3. Periodically print MSE
if step % 100 == 0:
loss = loss_mse(Xf, Y, W)
steps.append(step)
losses.append(loss)
plt.clf()
plt.plot(steps, losses)
# Print final MSE and weights
print("STEP: {} MSE: {}".format(STEPS,loss_mse(Xf, Y, W)))
# Plot results
plt.figure()
plt.plot(X, Y, label='actual')
plt.plot(X, predict(Xf, W), label='predicted')
plt.legend()
Explanation: Bonus
Try modelling a non-linear function such as: $y=xe^{-x^2}$
Hint: Creating more training data will help. Also, you will need to build non-linear features.
End of explanation |
4,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright © 2019 The TensorFlow Authors.
Step1: TensorFlow Model Analysis
An Example of a Key TFX Library
This example colab notebook illustrates how TensorFlow Model Analysis (TFMA) can be used to investigate and visualize the characteristics of a dataset and the performance of a model. We'll use a model that we trained previously, and now you get to play with the results!
The model we trained was for the Chicago Taxi Example, which uses the Taxi Trips dataset released by the City of Chicago.
Note
Step2: Import packages
We import necessary packages, including standard TFX component classes.
Step3: Load The Files
We'll download a zip file that has everything we need. That includes
Step4: Parse the Schema
Among the things we downloaded was a schema for our data that was created by TensorFlow Data Validation. Let's parse that now so that we can use it with TFMA.
Step5: Use the Schema to Create TFRecords
We need to give TFMA access to our dataset, so let's create a TFRecords file. We can use our schema to create it, since it gives us the correct type for each feature.
Step7: Run TFMA and Render Metrics
Now we're ready to create a function that we'll use to run TFMA and render metrics. It requires an EvalSavedModel, a list of SliceSpecs, and an index into the SliceSpec list. It will create an EvalResult using tfma.run_model_analysis, and use it to create a SlicingMetricsViewer using tfma.view.render_slicing_metrics, which will render a visualization of our dataset using the slice we created.
Step8: Slicing and Dicing
We previously trained a model, and now we've loaded the results. Let's take a look at our visualizations, starting with using TFMA to slice along particular features. But first we need to read in the EvalSavedModel from one of our previous training runs.
To define the slice you want to visualize you create a tfma.slicer.SingleSliceSpec
To use tfma.view.render_slicing_metrics you can either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec)
If neither is provided, the overview will be displayed
Plots are interactive
Step9: Slices Overview
The default visualization is the Slices Overview when the number of slices is small. It shows the values of metrics for each slice. Since we've selected trip_start_hour above, it's showing us metrics like accuracy and AUC for each hour, which allows us to look for issues that are specific to some hours and not others.
In the visualization above
Step10: You can create feature crosses to analyze combinations of features. Let's create a SliceSpec to look at a cross of trip_start_day and trip_start_hour
Step11: Crossing the two columns creates a lot of combinations! Let's narrow down our cross to only look at trips that start at noon. Then let's select accuracy from the visualization
Step12: Tracking Model Performance Over Time
Your training dataset will be used for training your model, and will hopefully be representative of your test dataset and the data that will be sent to your model in production. However, while the data in inference requests may remain the same as your training data, in many cases it will start to change enough so that the performance of your model will change.
That means that you need to monitor and measure your model's performance on an ongoing basis, so that you can be aware of and react to changes. Let's take a look at how TFMA can help.
Measure Performance For New Data
We downloaded the results of three different training runs above, so let's load them now
Step13: Next, let's use TFMA to see how these runs compare using render_time_series.
How does it look today?
First, we'll imagine that we've trained and deployed our model yesterday, and now we want to see how it's doing on the new data coming in today. We can specify particular slices to look at. Let's compare our training runs for trips that started at noon.
Note
Step14: Now we'll imagine that another day has passed and we want to see how it's doing on the new data coming in today, compared to the previous two days. Again add AUC and average loss by using the "Add metric series" menu | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright © 2019 The TensorFlow Authors.
End of explanation
!pip install -q -U \
tensorflow==2.0.0 \
tfx==0.15.0rc0
Explanation: TensorFlow Model Analysis
An Example of a Key TFX Library
This example colab notebook illustrates how TensorFlow Model Analysis (TFMA) can be used to investigate and visualize the characteristics of a dataset and the performance of a model. We'll use a model that we trained previously, and now you get to play with the results!
The model we trained was for the Chicago Taxi Example, which uses the Taxi Trips dataset released by the City of Chicago.
Note: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The data provided at this site is subject to change at any time. It is understood that the data provided at this site is being used at one’s own risk.
Read more about the dataset in Google BigQuery. Explore the full dataset in the BigQuery UI.
Key Point: As a modeler and developer, think about how this data is used and the potential benefits and harm a model's predictions can cause. A model like this could reinforce societal biases and disparities. Is a feature relevant to the problem you want to solve or will it introduce bias? For more information, read about <a target='_blank' href='https://developers.google.com/machine-learning/fairness-overview/'>ML fairness</a>.
Key Point: In order to understand TFMA and how it works with Apache Beam, you'll need to know a little bit about Apache Beam itself. The <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/'>Beam Programming Guide</a> is a great place to start.
The columns in the dataset are:
<table>
<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
</table>
Install Jupyter Extensions
Note: If running TFMA in a local Jupyter notebook, then these Jupyter extensions must be installed in the environment before running Jupyter.
bash
jupyter nbextension enable --py widgetsnbextension
jupyter nbextension install --py --symlink tensorflow_model_analysis
jupyter nbextension enable --py tensorflow_model_analysis
Setup
First, we install the necessary packages, download data, import modules and set up paths.
Install TensorFlow, TensorFlow Model Analysis (TFMA) and TensorFlow Data Validation (TFDV)
End of explanation
import csv
import io
import os
import requests
import tempfile
import zipfile
from google.protobuf import text_format
import tensorflow as tf
import tensorflow_data_validation as tfdv
import tensorflow_model_analysis as tfma
from tensorflow_metadata.proto.v0 import schema_pb2
tf.__version__
tfma.version.VERSION_STRING
Explanation: Import packages
We import necessary packages, including standard TFX component classes.
End of explanation
# Download the zip file from GCP and unzip it
BASE_DIR = tempfile.mkdtemp()
TFMA_DIR = os.path.join(BASE_DIR, 'eval_saved_models-2.0')
DATA_DIR = os.path.join(TFMA_DIR, 'data')
OUTPUT_DIR = os.path.join(TFMA_DIR, 'output')
SCHEMA = os.path.join(TFMA_DIR, 'schema.pbtxt')
response = requests.get('https://storage.googleapis.com/tfx-colab-datasets/eval_saved_models-2.0.zip', stream=True)
zipfile.ZipFile(io.BytesIO(response.content)).extractall(BASE_DIR)
print("Here's what we downloaded:")
!cd {TFMA_DIR} && find .
Explanation: Load The Files
We'll download a zip file that has everything we need. That includes:
Training and evaluation datasets
Data schema
Training results as EvalSavedModels
Note: We are downloading with HTTPS from a Google Cloud server.
End of explanation
schema = schema_pb2.Schema()
contents = tf.io.read_file(SCHEMA).numpy()
schema = text_format.Parse(contents, schema)
tfdv.display_schema(schema)
Explanation: Parse the Schema
Among the things we downloaded was a schema for our data that was created by TensorFlow Data Validation. Let's parse that now so that we can use it with TFMA.
End of explanation
datafile = os.path.join(DATA_DIR, 'eval', 'data.csv')
reader = csv.DictReader(open(datafile))
examples = []
for line in reader:
example = tf.train.Example()
for feature in schema.feature:
key = feature.name
if len(line[key]) > 0:
if feature.type == schema_pb2.FLOAT:
example.features.feature[key].float_list.value[:] = [float(line[key])]
elif feature.type == schema_pb2.INT:
example.features.feature[key].int64_list.value[:] = [int(line[key])]
elif feature.type == schema_pb2.BYTES:
example.features.feature[key].bytes_list.value[:] = [line[key].encode('utf8')]
else:
if feature.type == schema_pb2.FLOAT:
example.features.feature[key].float_list.value[:] = []
elif feature.type == schema_pb2.INT:
example.features.feature[key].int64_list.value[:] = []
elif feature.type == schema_pb2.BYTES:
example.features.feature[key].bytes_list.value[:] = []
examples.append(example)
TFRecord_file = os.path.join(BASE_DIR, 'train_data.rio')
with tf.io.TFRecordWriter(TFRecord_file) as writer:
for example in examples:
writer.write(example.SerializeToString())
writer.flush()
writer.close()
!ls {TFRecord_file}
Explanation: Use the Schema to Create TFRecords
We need to give TFMA access to our dataset, so let's create a TFRecords file. We can use our schema to create it, since it gives us the correct type for each feature.
End of explanation
def run_and_render(eval_model=None, slice_list=None, slice_idx=0):
Runs the model analysis and renders the slicing metrics
Args:
eval_model: An instance of tf.saved_model saved with evaluation data
slice_list: A list of tfma.slicer.SingleSliceSpec giving the slices
slice_idx: An integer index into slice_list specifying the slice to use
Returns:
A SlicingMetricsViewer object if in Jupyter notebook; None if in Colab.
eval_result = tfma.run_model_analysis(eval_shared_model=eval_model,
data_location=TFRecord_file,
file_format='tfrecords',
slice_spec=slice_list,
output_path='sample_data',
extractors=None)
return tfma.view.render_slicing_metrics(eval_result, slicing_spec=slice_list[slice_idx] if slice_list else None)
Explanation: Run TFMA and Render Metrics
Now we're ready to create a function that we'll use to run TFMA and render metrics. It requires an EvalSavedModel, a list of SliceSpecs, and an index into the SliceSpec list. It will create an EvalResult using tfma.run_model_analysis, and use it to create a SlicingMetricsViewer using tfma.view.render_slicing_metrics, which will render a visualization of our dataset using the slice we created.
End of explanation
# Load the TFMA results for the first training run
# This will take a minute
eval_model_base_dir_0 = os.path.join(TFMA_DIR, 'run_0', 'eval_model_dir')
eval_model_dir_0 = os.path.join(eval_model_base_dir_0,
max(os.listdir(eval_model_base_dir_0)))
eval_shared_model_0 = tfma.default_eval_shared_model(
eval_saved_model_path=eval_model_dir_0)
# Slice our data by the trip_start_hour feature
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])]
run_and_render(eval_model=eval_shared_model_0, slice_list=slices, slice_idx=0)
Explanation: Slicing and Dicing
We previously trained a model, and now we've loaded the results. Let's take a look at our visualizations, starting with using TFMA to slice along particular features. But first we need to read in the EvalSavedModel from one of our previous training runs.
To define the slice you want to visualize you create a tfma.slicer.SingleSliceSpec
To use tfma.view.render_slicing_metrics you can either use the name of the column (by setting slicing_column) or provide a tfma.slicer.SingleSliceSpec (by setting slicing_spec)
If neither is provided, the overview will be displayed
Plots are interactive:
Click and drag to pan
Scroll to zoom
Right click to reset the view
Simply hover over the desired data point to see more details. Select from four different types of plots using the selections at the bottom.
For example, we'll be setting slicing_column to look at the trip_start_hour feature in our SliceSpec.
End of explanation
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_hour']),
tfma.slicer.SingleSliceSpec(columns=['trip_start_day']),
tfma.slicer.SingleSliceSpec(columns=['trip_start_month'])]
run_and_render(eval_model=eval_shared_model_0, slice_list=slices, slice_idx=0)
Explanation: Slices Overview
The default visualization is the Slices Overview when the number of slices is small. It shows the values of metrics for each slice. Since we've selected trip_start_hour above, it's showing us metrics like accuracy and AUC for each hour, which allows us to look for issues that are specific to some hours and not others.
In the visualization above:
Try sorting the feature column, which is our trip_start_hour feature, by clicking on the column header
Try sorting by precision, and notice that the precision for some of the hours with examples is 0, which may indicate a problem
The chart also allows us to select and display different metrics in our slices.
Try selecting different metrics from the "Show" menu
Try selecting recall in the "Show" menu, and notice that the recall for some of the hours with examples is 0, which may indicate a problem
It is also possible to set a threshold to filter out slices with smaller numbers of examples, or "weights". You can type a minimum number of examples, or use the slider.
Metrics Histogram
This view also supports a Metrics Histogram as an alternative visualization, which is also the default view when the number of slices is large. The results will be divided into buckets and the number of slices / total weights / both can be visualized. Columns can be sorted by clicking on the column header. Slices with small weights can be filtered out by setting the threshold. Further filtering can be applied by dragging the grey band. To reset the range, double click the band. Filtering can also be used to remove outliers in the visualization and the metrics tables. Click the gear icon to switch to a logarithmic scale instead of a linear scale.
Try selecting "Metrics Histogram" in the Visualization menu
More Slices
Let's create a whole list of SliceSpecs, which will allow us to select any of the slices in the list. We'll select the trip_start_day slice (days of the week) by setting the slice_idx to 1. Try changing the slice_idx to 0 or 2 and running again to examine different slices.
End of explanation
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_day', 'trip_start_hour'])]
run_and_render(eval_shared_model_0, slices, 0)
Explanation: You can create feature crosses to analyze combinations of features. Let's create a SliceSpec to look at a cross of trip_start_day and trip_start_hour:
End of explanation
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_day'], features=[('trip_start_hour', 12)])]
run_and_render(eval_shared_model_0, slices, 0)
Explanation: Crossing the two columns creates a lot of combinations! Let's narrow down our cross to only look at trips that start at noon. Then let's select accuracy from the visualization:
End of explanation
def get_eval_result(base_dir, run_name, data_loc, slice_spec):
eval_model_base_dir = os.path.join(base_dir, run_name, "eval_model_dir")
versions = os.listdir(eval_model_base_dir)
eval_model_dir = os.path.join(eval_model_base_dir, max(versions))
output_dir = os.path.join(base_dir, "output", run_name)
eval_shared_model = tfma.default_eval_shared_model(eval_saved_model_path=eval_model_dir)
return tfma.run_model_analysis(eval_shared_model=eval_shared_model,
data_location=data_loc,
file_format='tfrecords',
slice_spec=slice_spec,
output_path=output_dir,
extractors=None)
slices = [tfma.slicer.SingleSliceSpec()]
result_ts0 = get_eval_result(TFMA_DIR, 'run_0', TFRecord_file, slices)
result_ts1 = get_eval_result(TFMA_DIR, 'run_1', TFRecord_file, slices)
result_ts2 = get_eval_result(TFMA_DIR, 'run_2', TFRecord_file, slices)
Explanation: Tracking Model Performance Over Time
Your training dataset will be used for training your model, and will hopefully be representative of your test dataset and the data that will be sent to your model in production. However, while the data in inference requests may remain the same as your training data, in many cases it will start to change enough so that the performance of your model will change.
That means that you need to monitor and measure your model's performance on an ongoing basis, so that you can be aware of and react to changes. Let's take a look at how TFMA can help.
Measure Performance For New Data
We downloaded the results of three different training runs above, so let's load them now:
End of explanation
output_dirs = [os.path.join(TFMA_DIR, "output", run_name)
for run_name in ("run_0", "run_1", "run_2")]
eval_results_from_disk = tfma.load_eval_results(
output_dirs[:2], tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, slices[0])
Explanation: Next, let's use TFMA to see how these runs compare using render_time_series.
How does it look today?
First, we'll imagine that we've trained and deployed our model yesterday, and now we want to see how it's doing on the new data coming in today. We can specify particular slices to look at. Let's compare our training runs for trips that started at noon.
Note:
* The visualization will start by displaying accuracy. Add AUC and average loss by using the "Add metric series" menu.
* Hover over the curves to see the values.
* In the metric series charts the X axis is the model ID number of the model run that you're examining. The numbers themselves are not meaningful.
End of explanation
eval_results_from_disk = tfma.load_eval_results(
output_dirs, tfma.constants.MODEL_CENTRIC_MODE)
tfma.view.render_time_series(eval_results_from_disk, slices[0])
Explanation: Now we'll imagine that another day has passed and we want to see how it's doing on the new data coming in today, compared to the previous two days. Again add AUC and average loss by using the "Add metric series" menu:
End of explanation |
4,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tabular core
Basic function to preprocess tabular data before assembling it in a DataLoaders.
Initial preprocessing
Step1: For example if we have a series of dates we can then generate features such as Year, Month, Day, Dayofweek, Is_month_start, etc as shown below
Step2: This function works by determining if a column is continuous or categorical based on the cardinality of its values. If it is above the max_card parameter (or a float datatype) then it will be added to the cont_names else cat_names. An example is below
Step3: For example we will make a sample DataFrame with int, float, bool, and object datatypes
Step4: We can then call df_shrink_dtypes to find the smallest possible datatype that can support the data
Step5: df_shrink(df) attempts to make a DataFrame use less memory, by fitting numeric columns into the smallest suitable datatypes. In addition
Step6: Let's compare the two
Step7: We can see that the datatypes changed, and even further we can look at their relative memory usages
Step8: Here's another example using the ADULT_SAMPLE dataset
Step9: We reduced the overall memory used by 79%!
Tabular -
Step10: df
Step11: These transforms are applied as soon as the data is available rather than as data is called from the DataLoader
Step12: While visually in the DataFrame you will not see a change, the classes are stored in to.procs.categorify as we can see below on a dummy DataFrame
Step13: Each column's unique values are stored in a dictionary of column
Step14: Currently, filling with the median, a constant, and the mode are supported.
Step15: TabularPandas Pipelines -
Step16: Integration example
For a more in-depth explanation, see the tabular tutorial
Step17: We can decode any set of transformed data by calling to.decode_row with our raw data
Step18: We can make new test datasets based on the training data with the to.new()
Note
Step19: We can then convert it to a DataLoader
Step20: Other target types
Multi-label categories
one-hot encoded label
Step21: Not one-hot encoded
Step22: Regression
Step24: Not being used now - for multi-modal
Step25: Export - | Python Code:
#|export
def make_date(df, date_field):
"Make sure `df[date_field]` is of the right date type."
field_dtype = df[date_field].dtype
if isinstance(field_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
field_dtype = np.datetime64
if not np.issubdtype(field_dtype, np.datetime64):
df[date_field] = pd.to_datetime(df[date_field], infer_datetime_format=True)
df = pd.DataFrame({'date': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24']})
make_date(df, 'date')
test_eq(df['date'].dtype, np.dtype('datetime64[ns]'))
#|export
def add_datepart(df, field_name, prefix=None, drop=True, time=False):
"Helper function that adds columns relevant to a date in the column `field_name` of `df`."
make_date(df, field_name)
field = df[field_name]
prefix = ifnone(prefix, re.sub('[Dd]ate$', '', field_name))
attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
if time: attr = attr + ['Hour', 'Minute', 'Second']
# Pandas removed `dt.week` in v1.1.10
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in attr: df[prefix + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
mask = ~field.isna()
df[prefix + 'Elapsed'] = np.where(mask,field.values.astype(np.int64) // 10 ** 9,np.nan)
if drop: df.drop(field_name, axis=1, inplace=True)
return df
Explanation: Tabular core
Basic function to preprocess tabular data before assembling it in a DataLoaders.
Initial preprocessing
End of explanation
df = pd.DataFrame({'date': ['2019-12-04', None, '2019-11-15', '2019-10-24']})
df = add_datepart(df, 'date')
df.head()
#|hide
test_eq(df.columns, ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start', 'Elapsed'])
test_eq(df[df.Elapsed.isna()].shape,(1, 13))
# Test that week dtype is consistent with other datepart fields
test_eq(df['Year'].dtype, df['Week'].dtype)
test_eq(pd.api.types.is_numeric_dtype(df['Elapsed']), True)
#|hide
df = pd.DataFrame({'f1': [1.],'f2': [2.],'f3': [3.],'f4': [4.],'date':['2019-12-04']})
df = add_datepart(df, 'date')
df.head()
#|hide
# Test Order of columns when date isn't in first position
test_eq(df.columns, ['f1', 'f2', 'f3', 'f4', 'Year', 'Month', 'Week', 'Day',
'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start', 'Elapsed'])
# Test that week dtype is consistent with other datepart fields
test_eq(df['Year'].dtype, df['Week'].dtype)
#|export
def _get_elapsed(df,field_names, date_field, base_field, prefix):
for f in field_names:
day1 = np.timedelta64(1, 'D')
last_date,last_base,res = np.datetime64(),None,[]
for b,v,d in zip(df[base_field].values, df[f].values, df[date_field].values):
if last_base is None or b != last_base:
last_date,last_base = np.datetime64(),b
if v: last_date = d
res.append(((d-last_date).astype('timedelta64[D]') / day1))
df[prefix + f] = res
return df
#|export
def add_elapsed_times(df, field_names, date_field, base_field):
"Add in `df` for each event in `field_names` the elapsed time according to `date_field` grouped by `base_field`"
field_names = list(L(field_names))
#Make sure date_field is a date and base_field a bool
df[field_names] = df[field_names].astype('bool')
make_date(df, date_field)
work_df = df[field_names + [date_field, base_field]]
work_df = work_df.sort_values([base_field, date_field])
work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'After')
work_df = work_df.sort_values([base_field, date_field], ascending=[True, False])
work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'Before')
for a in ['After' + f for f in field_names] + ['Before' + f for f in field_names]:
work_df[a] = work_df[a].fillna(0).astype(int)
for a,s in zip([True, False], ['_bw', '_fw']):
work_df = work_df.set_index(date_field)
tmp = (work_df[[base_field] + field_names].sort_index(ascending=a)
.groupby(base_field).rolling(7, min_periods=1).sum())
if base_field in tmp: tmp.drop(base_field, axis=1,inplace=True)
tmp.reset_index(inplace=True)
work_df.reset_index(inplace=True)
work_df = work_df.merge(tmp, 'left', [date_field, base_field], suffixes=['', s])
work_df.drop(field_names, axis=1, inplace=True)
return df.merge(work_df, 'left', [date_field, base_field])
df = pd.DataFrame({'date': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24'],
'event': [False, True, False, True], 'base': [1,1,2,2]})
df = add_elapsed_times(df, ['event'], 'date', 'base')
df.head()
#|export
def cont_cat_split(df, max_card=20, dep_var=None):
"Helper function that returns column names of cont and cat variables from given `df`."
cont_names, cat_names = [], []
for label in df:
if label in L(dep_var): continue
if ((pd.api.types.is_integer_dtype(df[label].dtype) and
df[label].unique().shape[0] > max_card) or
pd.api.types.is_float_dtype(df[label].dtype)):
cont_names.append(label)
else: cat_names.append(label)
return cont_names, cat_names
Explanation: For example if we have a series of dates we can then generate features such as Year, Month, Day, Dayofweek, Is_month_start, etc as shown below:
End of explanation
# Example with simple numpy types
df = pd.DataFrame({'cat1': [1, 2, 3, 4], 'cont1': [1., 2., 3., 2.], 'cat2': ['a', 'b', 'b', 'a'],
'i8': pd.Series([1, 2, 3, 4], dtype='int8'),
'u8': pd.Series([1, 2, 3, 4], dtype='uint8'),
'f16': pd.Series([1, 2, 3, 4], dtype='float16'),
'y1': [1, 0, 1, 0], 'y2': [2, 1, 1, 0]})
cont_names, cat_names = cont_cat_split(df)
#|hide_input
print(f'cont_names: {cont_names}\ncat_names: {cat_names}`')
#|hide
# Test all columns
cont, cat = cont_cat_split(df)
test_eq((cont, cat), (['cont1', 'f16'], ['cat1', 'cat2', 'i8', 'u8', 'y1', 'y2']))
# Test exclusion of dependent variable
cont, cat = cont_cat_split(df, dep_var='y1')
test_eq((cont, cat), (['cont1', 'f16'], ['cat1', 'cat2', 'i8', 'u8', 'y2']))
# Test exclusion of multi-label dependent variables
cont, cat = cont_cat_split(df, dep_var=['y1', 'y2'])
test_eq((cont, cat), (['cont1', 'f16'], ['cat1', 'cat2', 'i8', 'u8']))
# Test maximal cardinality bound for int variable
cont, cat = cont_cat_split(df, max_card=3)
test_eq((cont, cat), (['cat1', 'cont1', 'i8', 'u8', 'f16'], ['cat2', 'y1', 'y2']))
cont, cat = cont_cat_split(df, max_card=2)
test_eq((cont, cat), (['cat1', 'cont1', 'i8', 'u8', 'f16', 'y2'], ['cat2', 'y1']))
cont, cat = cont_cat_split(df, max_card=1)
test_eq((cont, cat), (['cat1', 'cont1', 'i8', 'u8', 'f16', 'y1', 'y2'], ['cat2']))
# Example with pandas types and generated columns
df = pd.DataFrame({'cat1': pd.Series(['l','xs','xl','s'], dtype='category'),
'ui32': pd.Series([1, 2, 3, 4], dtype='UInt32'),
'i64': pd.Series([1, 2, 3, 4], dtype='Int64'),
'f16': pd.Series([1, 2, 3, 4], dtype='Float64'),
'd1_date': ['2021-02-09', None, '2020-05-12', '2020-08-14'],
})
df = add_datepart(df, 'd1_date', drop=False)
df['cat1'].cat.set_categories(['xl','l','m','s','xs'], ordered=True, inplace=True)
cont_names, cat_names = cont_cat_split(df, max_card=0)
#|hide_input
print(f'cont_names: {cont_names}\ncat_names: {cat_names}')
#|hide
cont, cat = cont_cat_split(df, max_card=0)
test_eq((cont, cat), (
['ui32', 'i64', 'f16', 'd1_Year', 'd1_Month', 'd1_Week', 'd1_Day', 'd1_Dayofweek', 'd1_Dayofyear', 'd1_Elapsed'],
['cat1', 'd1_date', 'd1_Is_month_end', 'd1_Is_month_start', 'd1_Is_quarter_end', 'd1_Is_quarter_start', 'd1_Is_year_end', 'd1_Is_year_start']
))
#|export
def df_shrink_dtypes(df, skip=[], obj2cat=True, int2uint=False):
"Return any possible smaller data types for DataFrame columns. Allows `object`->`category`, `int`->`uint`, and exclusion."
# 1: Build column filter and typemap
excl_types, skip = {'category','datetime64[ns]','bool'}, set(skip)
typemap = {'int' : [(np.dtype(x), np.iinfo(x).min, np.iinfo(x).max) for x in (np.int8, np.int16, np.int32, np.int64)],
'uint' : [(np.dtype(x), np.iinfo(x).min, np.iinfo(x).max) for x in (np.uint8, np.uint16, np.uint32, np.uint64)],
'float' : [(np.dtype(x), np.finfo(x).min, np.finfo(x).max) for x in (np.float32, np.float64, np.longdouble)]
}
if obj2cat: typemap['object'] = 'category' # User wants to categorify dtype('Object'), which may not always save space
else: excl_types.add('object')
new_dtypes = {}
exclude = lambda dt: dt[1].name not in excl_types and dt[0] not in skip
for c, old_t in filter(exclude, df.dtypes.items()):
t = next((v for k,v in typemap.items() if old_t.name.startswith(k)), None)
if isinstance(t, list): # Find the smallest type that fits
if int2uint and t==typemap['int'] and df[c].min() >= 0: t=typemap['uint']
new_t = next((r[0] for r in t if r[1]<=df[c].min() and r[2]>=df[c].max()), None)
if new_t and new_t == old_t: new_t = None
else: new_t = t if isinstance(t, str) else None
if new_t: new_dtypes[c] = new_t
return new_dtypes
show_doc(df_shrink_dtypes, title_level=3)
Explanation: This function works by determining if a column is continuous or categorical based on the cardinality of its values. If it is above the max_card parameter (or a float datatype) then it will be added to the cont_names else cat_names. An example is below:
End of explanation
df = pd.DataFrame({'i': [-100, 0, 100], 'f': [-100.0, 0.0, 100.0], 'e': [True, False, True],
'date':['2019-12-04','2019-11-29','2019-11-15',]})
df.dtypes
Explanation: For example we will make a sample DataFrame with int, float, bool, and object datatypes:
End of explanation
dt = df_shrink_dtypes(df)
dt
#|hide
test_eq(df['i'].dtype, 'int64')
test_eq(dt['i'], 'int8')
test_eq(df['f'].dtype, 'float64')
test_eq(dt['f'], 'float32')
# Default ignore 'object' and 'boolean' columns
test_eq(df['date'].dtype, 'object')
test_eq(dt['date'], 'category')
# Test categorifying 'object' type
dt2 = df_shrink_dtypes(df, obj2cat=False)
test_eq('date' not in dt2, True)
#|export
def df_shrink(df, skip=[], obj2cat=True, int2uint=False):
"Reduce DataFrame memory usage, by casting to smaller types returned by `df_shrink_dtypes()`."
dt = df_shrink_dtypes(df, skip, obj2cat=obj2cat, int2uint=int2uint)
return df.astype(dt)
show_doc(df_shrink, title_level=3)
Explanation: We can then call df_shrink_dtypes to find the smallest possible datatype that can support the data:
End of explanation
df = pd.DataFrame({'i': [-100, 0, 100], 'f': [-100.0, 0.0, 100.0], 'u':[0, 10,254],
'date':['2019-12-04','2019-11-29','2019-11-15']})
df2 = df_shrink(df, skip=['date'])
Explanation: df_shrink(df) attempts to make a DataFrame use less memory, by fitting numeric columns into the smallest suitable datatypes. In addition:
boolean, category, datetime64[ns] dtype columns are ignored.
'object' type columns are categorified, which can save a lot of memory in large datasets. It can be turned off by obj2cat=False.
int2uint=True, to fit int types to uint types, if all data in the column is >= 0.
columns can be excluded by name using skip=['col1','col2'].
To get only new column data types without actually casting a DataFrame,
use df_shrink_dtypes() with all the same parameters for df_shrink().
End of explanation
df.dtypes
df2.dtypes
Explanation: Let's compare the two:
End of explanation
#|hide_input
print(f'Initial Dataframe: {df.memory_usage().sum()} bytes')
print(f'Reduced Dataframe: {df2.memory_usage().sum()} bytes')
#|hide
test_eq(df['i'].dtype=='int64' and df2['i'].dtype=='int8', True)
test_eq(df['f'].dtype=='float64' and df2['f'].dtype=='float32', True)
test_eq(df['u'].dtype=='int64' and df2['u'].dtype=='int16', True)
test_eq(df2['date'].dtype, 'object')
test_eq(df2.memory_usage().sum() < df.memory_usage().sum(), True)
# Test int => uint (when col.min() >= 0)
df3 = df_shrink(df, int2uint=True)
test_eq(df3['u'].dtype, 'uint8') # int64 -> uint8 instead of int16
# Test excluding columns
df4 = df_shrink(df, skip=['i','u'])
test_eq(df['i'].dtype, df4['i'].dtype)
test_eq(df4['u'].dtype, 'int64')
Explanation: We can see that the datatypes changed, and even further we can look at their relative memory usages:
End of explanation
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
new_df = df_shrink(df, int2uint=True)
#|hide_input
print(f'Initial Dataframe: {df.memory_usage().sum() / 1000000} megabytes')
print(f'Reduced Dataframe: {new_df.memory_usage().sum() / 1000000} megabytes')
Explanation: Here's another example using the ADULT_SAMPLE dataset:
End of explanation
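If you want to see where the savings come from, here is a quick per-column comparison (an optional illustrative check, not part of the original docs):
# optional: per-column memory usage before and after shrinking
comparison = pd.DataFrame({'before': df.memory_usage(), 'after': new_df.memory_usage()})
comparison['saved_pct'] = (1 - comparison['after'] / comparison['before']) * 100
comparison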
#|export
class _TabIloc:
"Get/set rows by iloc and cols by name"
def __init__(self,to): self.to = to
def __getitem__(self, idxs):
df = self.to.items
if isinstance(idxs,tuple):
rows,cols = idxs
cols = df.columns.isin(cols) if is_listy(cols) else df.columns.get_loc(cols)
else: rows,cols = idxs,slice(None)
return self.to.new(df.iloc[rows, cols])
#|export
class Tabular(CollBase, GetAttr, FilteredBase):
"A `DataFrame` wrapper that knows which cols are cont/cat/y, and returns rows in `__getitem__`"
_default,with_cont='procs',True
def __init__(self, df, procs=None, cat_names=None, cont_names=None, y_names=None, y_block=None, splits=None,
do_setup=True, device=None, inplace=False, reduce_memory=True):
if inplace and splits is not None and pd.options.mode.chained_assignment is not None:
warn("Using inplace with splits will trigger a pandas error. Set `pd.options.mode.chained_assignment=None` to avoid it.")
if not inplace: df = df.copy()
if reduce_memory: df = df_shrink(df)
if splits is not None: df = df.iloc[sum(splits, [])]
self.dataloaders = delegates(self._dl_type.__init__)(self.dataloaders)
super().__init__(df)
self.y_names,self.device = L(y_names),device
if y_block is None and self.y_names:
# Make ys categorical if they're not numeric
ys = df[self.y_names]
if len(ys.select_dtypes(include='number').columns)!=len(ys.columns): y_block = CategoryBlock()
else: y_block = RegressionBlock()
if y_block is not None and do_setup:
if callable(y_block): y_block = y_block()
procs = L(procs) + y_block.type_tfms
self.cat_names,self.cont_names,self.procs = L(cat_names),L(cont_names),Pipeline(procs)
self.split = len(df) if splits is None else len(splits[0])
if do_setup: self.setup()
def new(self, df, inplace=False):
return type(self)(df, do_setup=False, reduce_memory=False, y_block=TransformBlock(), inplace=inplace,
**attrdict(self, 'procs','cat_names','cont_names','y_names', 'device'))
def subset(self, i): return self.new(self.items[slice(0,self.split) if i==0 else slice(self.split,len(self))])
def copy(self): self.items = self.items.copy(); return self
def decode(self): return self.procs.decode(self)
def decode_row(self, row): return self.new(pd.DataFrame(row).T).decode().items.iloc[0]
def show(self, max_n=10, **kwargs): display_df(self.new(self.all_cols[:max_n]).decode().items)
def setup(self): self.procs.setup(self)
def process(self): self.procs(self)
def loc(self): return self.items.loc
def iloc(self): return _TabIloc(self)
def targ(self): return self.items[self.y_names]
def x_names (self): return self.cat_names + self.cont_names
def n_subsets(self): return 2
def y(self): return self[self.y_names[0]]
def new_empty(self): return self.new(pd.DataFrame({}, columns=self.items.columns))
def to_device(self, d=None):
self.device = d
return self
def all_col_names (self):
ys = [n for n in self.y_names if n in self.items.columns]
return self.x_names + self.y_names if len(ys) == len(self.y_names) else self.x_names
properties(Tabular,'loc','iloc','targ','all_col_names','n_subsets','x_names','y')
Explanation: We reduced the overall memory used by 79%!
Tabular -
End of explanation
#|export
class TabularPandas(Tabular):
"A `Tabular` object with transforms"
def transform(self, cols, f, all_col=True):
if not all_col: cols = [c for c in cols if c in self.items.columns]
if len(cols) > 0: self[cols] = self[cols].transform(f)
#|export
def _add_prop(cls, nm):
@property
def f(o): return o[list(getattr(o,nm+'_names'))]
@f.setter
def fset(o, v): o[getattr(o,nm+'_names')] = v
setattr(cls, nm+'s', f)
setattr(cls, nm+'s', fset)
_add_prop(Tabular, 'cat')
_add_prop(Tabular, 'cont')
_add_prop(Tabular, 'y')
_add_prop(Tabular, 'x')
_add_prop(Tabular, 'all_col')
#|hide
df = pd.DataFrame({'a':[0,1,2,0,2], 'b':[0,0,0,0,1]})
to = TabularPandas(df, cat_names='a')
t = pickle.loads(pickle.dumps(to))
test_eq(t.items,to.items)
test_eq(to.all_cols,to[['a']])
#|hide
import gc
def _count_objs(o):
"Counts number of instanes of class `o`"
objs = gc.get_objects()
return len([x for x in objs if isinstance(x, pd.DataFrame)])
df = pd.DataFrame({'a':[0,1,2,0,2], 'b':[0,0,0,0,1]})
df_b = pd.DataFrame({'a':[1,2,0,0,2], 'b':[1,0,3,0,1]})
to = TabularPandas(df, cat_names='a', inplace=True)
_init_count = _count_objs(pd.DataFrame)
to_new = to.new(df_b, inplace=True)
test_eq(_init_count, _count_objs(pd.DataFrame))
#|export
class TabularProc(InplaceTransform):
"Base class to write a non-lazy tabular processor for dataframes"
def setup(self, items=None, train_setup=False): #TODO: properly deal with train_setup
super().setup(getattr(items,'train',items), train_setup=False)
# Procs are called as soon as data is available
return self(items.items if isinstance(items,Datasets) else items)
@property
def name(self): return f"{super().name} -- {getattr(self,'__stored_args__',{})}"
Explanation: df: A DataFrame of your data
cat_names: Your categorical x variables
cont_names: Your continuous x variables
y_names: Your dependent y variables
Note: Mixed y's such as Regression and Classification together are not currently supported; however, multiple regression or multiple classification outputs are
y_block: How to sub-categorize the type of y_names (CategoryBlock or RegressionBlock)
splits: How to split your data
do_setup: Whether Tabular will run the data through the procs upon initialization
device: cuda or cpu
inplace: If True, Tabular will not keep a separate copy of your original DataFrame in memory. You should ensure pd.options.mode.chained_assignment is None before setting this
reduce_memory: fastai will attempt to reduce the overall memory usage of the input DataFrame with df_shrink
A minimal construction sketch follows below; the integration example later in this notebook does the same on a real dataset.
End of explanation
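A minimal sketch wiring these parameters together on a toy frame. The column and variable names here are made up, and the procs (Categorify, FillMissing, Normalize) are only defined and patched further below in this notebook, so this is meant as an illustration rather than something runnable at this exact point:
df_toy = pd.DataFrame({'color': ['red','blue','red','green','blue','red'],
                       'size':  [1.0, 2.5, 3.0, 0.5, 2.0, 1.5],
                       'label': ['a','b','a','b','a','b']})
splits = RandomSplitter(valid_pct=0.3, seed=42)(range_of(df_toy))
to_toy = TabularPandas(df_toy, procs=[Categorify, FillMissing, Normalize],
                       cat_names=['color'], cont_names=['size'],
                       y_names='label', splits=splits)
to_toy.xs.head(), to_toy.ys.head()  # processed inputs and targets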
#|export
def _apply_cats (voc, add, c):
if not is_categorical_dtype(c):
return pd.Categorical(c, categories=voc[c.name][add:]).codes+add
return c.cat.codes+add #if is_categorical_dtype(c) else c.map(voc[c.name].o2i)
def _decode_cats(voc, c): return c.map(dict(enumerate(voc[c.name].items)))
#|export
class Categorify(TabularProc):
"Transform the categorical variables to something similar to `pd.Categorical`"
order = 1
def setups(self, to):
store_attr(classes={n:CategoryMap(to.iloc[:,n].items, add_na=(n in to.cat_names)) for n in to.cat_names}, but='to')
def encodes(self, to): to.transform(to.cat_names, partial(_apply_cats, self.classes, 1))
def decodes(self, to): to.transform(to.cat_names, partial(_decode_cats, self.classes))
def __getitem__(self,k): return self.classes[k]
#|exporti
@Categorize
def setups(self, to:Tabular):
if len(to.y_names) > 0:
if self.vocab is None:
self.vocab = CategoryMap(getattr(to, 'train', to).iloc[:,to.y_names[0]].items, strict=True)
else:
self.vocab = CategoryMap(self.vocab, sort=False, add_na=self.add_na)
self.c = len(self.vocab)
return self(to)
@Categorize
def encodes(self, to:Tabular):
to.transform(to.y_names, partial(_apply_cats, {n: self.vocab for n in to.y_names}, 0), all_col=False)
return to
@Categorize
def decodes(self, to:Tabular):
to.transform(to.y_names, partial(_decode_cats, {n: self.vocab for n in to.y_names}), all_col=False)
return to
show_doc(Categorify, title_level=3)
Explanation: These transforms are applied as soon as the data is available, rather than lazily as the data is drawn from the DataLoader.
End of explanation
df = pd.DataFrame({'a':[0,1,2,0,2]})
to = TabularPandas(df, Categorify, 'a')
to.show()
Explanation: While you will not see a change in the displayed DataFrame, the classes are stored in to.procs.categorify, as we can see below on a dummy DataFrame:
End of explanation
cat = to.procs.categorify
cat.classes
#|hide
def test_series(a,b): return test_eq(list(a), b)
test_series(cat['a'], ['#na#',0,1,2])
test_series(to['a'], [1,2,3,1,3])
#|hide
df1 = pd.DataFrame({'a':[1,0,3,-1,2]})
to1 = to.new(df1)
to1.process()
#Values that weren't in the training df are sent to 0 (na)
test_series(to1['a'], [2,1,0,0,3])
to2 = cat.decode(to1)
test_series(to2['a'], [1,0,'#na#','#na#',2])
#|hide
#test with splits
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,2]})
to = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]])
test_series(cat['a'], ['#na#',0,1,2])
test_series(to['a'], [1,2,3,0,3])
#|hide
df = pd.DataFrame({'a':pd.Categorical(['M','H','L','M'], categories=['H','M','L'], ordered=True)})
to = TabularPandas(df, Categorify, 'a')
cat = to.procs.categorify
test_series(cat['a'], ['#na#','H','M','L'])
test_series(to.items.a, [2,1,3,2])
to2 = cat.decode(to)
test_series(to2['a'], ['M','H','L','M'])
#|hide
#test with targets
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,2], 'b': ['a', 'b', 'a', 'b', 'b']})
to = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]], y_names='b')
test_series(to.vocab, ['a', 'b'])
test_series(to['b'], [0,1,0,1,1])
to2 = to.procs.decode(to)
test_series(to2['b'], ['a', 'b', 'a', 'b', 'b'])
#|hide
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,2], 'b': ['a', 'b', 'a', 'b', 'b']})
to = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]], y_names='b')
test_series(to.vocab, ['a', 'b'])
test_series(to['b'], [0,1,0,1,1])
to2 = to.procs.decode(to)
test_series(to2['b'], ['a', 'b', 'a', 'b', 'b'])
#|hide
#test with targets and train
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,2], 'b': ['a', 'b', 'a', 'c', 'b']})
to = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]], y_names='b')
test_series(to.vocab, ['a', 'b'])
#|hide
#test to ensure no copies of the dataframe are stored
cat = Categorify()
df = pd.DataFrame({'a':[0,1,2,3,4]})
to = TabularPandas(df, cat, cont_names='a', splits=[[0,1,2],[3,4]])
test_eq(hasattr(to.categorify, 'to'), False)
#|exporti
@Normalize
def setups(self, to:Tabular):
store_attr(but='to', means=dict(getattr(to, 'train', to).conts.mean()),
stds=dict(getattr(to, 'train', to).conts.std(ddof=0)+1e-7))
return self(to)
@Normalize
def encodes(self, to:Tabular):
to.conts = (to.conts-self.means) / self.stds
return to
@Normalize
def decodes(self, to:Tabular):
to.conts = (to.conts*self.stds ) + self.means
return to
#|hide
norm = Normalize()
df = pd.DataFrame({'a':[0,1,2,3,4]})
to = TabularPandas(df, norm, cont_names='a')
x = np.array([0,1,2,3,4])
m,s = x.mean(),x.std()
test_eq(norm.means['a'], m)
test_close(norm.stds['a'], s)
test_close(to['a'].values, (x-m)/s)
#|hide
df1 = pd.DataFrame({'a':[5,6,7]})
to1 = to.new(df1)
to1.process()
test_close(to1['a'].values, (np.array([5,6,7])-m)/s)
to2 = norm.decode(to1)
test_close(to2['a'].values, [5,6,7])
#|hide
norm = Normalize()
df = pd.DataFrame({'a':[0,1,2,3,4]})
to = TabularPandas(df, norm, cont_names='a', splits=[[0,1,2],[3,4]])
x = np.array([0,1,2])
m,s = x.mean(),x.std()
test_eq(norm.means['a'], m)
test_close(norm.stds['a'], s)
test_close(to['a'].values, (np.array([0,1,2,3,4])-m)/s)
#|hide
norm = Normalize()
df = pd.DataFrame({'a':[0,1,2,3,4]})
to = TabularPandas(df, norm, cont_names='a', splits=[[0,1,2],[3,4]])
test_eq(hasattr(to.procs.normalize, 'to'), False)
#|export
class FillStrategy:
"Namespace containing the various filling strategies."
def median (c,fill): return c.median()
def constant(c,fill): return fill
def mode (c,fill): return c.dropna().value_counts().idxmax()
Explanation: Each column's unique values are stored in a dictionary of column:[values]:
End of explanation
#|export
class FillMissing(TabularProc):
"Fill the missing values in continuous columns."
def __init__(self, fill_strategy=FillStrategy.median, add_col=True, fill_vals=None):
if fill_vals is None: fill_vals = defaultdict(int)
store_attr()
def setups(self, to):
missing = pd.isnull(to.conts).any()
store_attr(but='to', na_dict={n:self.fill_strategy(to[n], self.fill_vals[n])
for n in missing[missing].keys()})
self.fill_strategy = self.fill_strategy.__name__
def encodes(self, to):
missing = pd.isnull(to.conts)
for n in missing.any()[missing.any()].keys():
assert n in self.na_dict, f"nan values in `{n}` but not in setup training set"
for n in self.na_dict.keys():
to[n].fillna(self.na_dict[n], inplace=True)
if self.add_col:
to.loc[:,n+'_na'] = missing[n]
if n+'_na' not in to.cat_names: to.cat_names.append(n+'_na')
show_doc(FillMissing, title_level=3)
#|hide
fill1,fill2,fill3 = (FillMissing(fill_strategy=s)
for s in [FillStrategy.median, FillStrategy.constant, FillStrategy.mode])
df = pd.DataFrame({'a':[0,1,np.nan,1,2,3,4]})
df1 = df.copy(); df2 = df.copy()
tos = (TabularPandas(df, fill1, cont_names='a'),
TabularPandas(df1, fill2, cont_names='a'),
TabularPandas(df2, fill3, cont_names='a'))
test_eq(fill1.na_dict, {'a': 1.5})
test_eq(fill2.na_dict, {'a': 0})
test_eq(fill3.na_dict, {'a': 1.0})
for t in tos: test_eq(t.cat_names, ['a_na'])
for to_,v in zip(tos, [1.5, 0., 1.]):
test_eq(to_['a'].values, np.array([0, 1, v, 1, 2, 3, 4]))
test_eq(to_['a_na'].values, np.array([0, 0, 1, 0, 0, 0, 0]))
#|hide
fill = FillMissing()
df = pd.DataFrame({'a':[0,1,np.nan,1,2,3,4], 'b': [0,1,2,3,4,5,6]})
to = TabularPandas(df, fill, cont_names=['a', 'b'])
test_eq(fill.na_dict, {'a': 1.5})
test_eq(to.cat_names, ['a_na'])
test_eq(to['a'].values, np.array([0, 1, 1.5, 1, 2, 3, 4]))
test_eq(to['a_na'].values, np.array([0, 0, 1, 0, 0, 0, 0]))
test_eq(to['b'].values, np.array([0,1,2,3,4,5,6]))
#|hide
fill = FillMissing()
df = pd.DataFrame({'a':[0,1,np.nan,1,2,3,4], 'b': [0,1,2,3,4,5,6]})
to = TabularPandas(df, fill, cont_names=['a', 'b'])
test_eq(hasattr(to.procs.fill_missing, 'to'), False)
Explanation: Currently, filling with the median, a constant, and the mode is supported; a short usage sketch follows below.
End of explanation
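A short sketch of picking a non-default strategy on a toy frame (the frame and names here are assumptions; fill_vals maps column name to a constant and is only consulted by FillStrategy.constant):
df_c = pd.DataFrame({'a': [0., 1., np.nan, 3.]})
fm = FillMissing(fill_strategy=FillStrategy.constant, fill_vals={'a': -1})
to_c = TabularPandas(df_c, fm, cont_names='a')
to_c.items  # the NaN in `a` becomes -1 and an `a_na` indicator column is added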
#|hide
procs = [Normalize, Categorify, FillMissing, noop]
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4]})
to = TabularPandas(df, procs, cat_names='a', cont_names='b')
#Test setup and apply on df_main
test_series(to.cat_names, ['a', 'b_na'])
test_series(to['a'], [1,2,3,2,2,3,1])
test_series(to['b_na'], [1,1,2,1,1,1,1])
x = np.array([0,1,1.5,1,2,3,4])
m,s = x.mean(),x.std()
test_close(to['b'].values, (x-m)/s)
test_eq(to.classes, {'a': ['#na#',0,1,2], 'b_na': ['#na#',False,True]})
#|hide
#Test apply on y_names
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})
to = TabularPandas(df, procs, 'a', 'b', y_names='c')
test_series(to.cat_names, ['a', 'b_na'])
test_series(to['a'], [1,2,3,2,2,3,1])
test_series(to['b_na'], [1,1,2,1,1,1,1])
test_series(to['c'], [1,0,1,0,0,1,0])
x = np.array([0,1,1.5,1,2,3,4])
m,s = x.mean(),x.std()
test_close(to['b'].values, (x-m)/s)
test_eq(to.classes, {'a': ['#na#',0,1,2], 'b_na': ['#na#',False,True]})
test_eq(to.vocab, ['a','b'])
#|hide
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})
to = TabularPandas(df, procs, 'a', 'b', y_names='c')
test_series(to.cat_names, ['a', 'b_na'])
test_series(to['a'], [1,2,3,2,2,3,1])
test_eq(df.a.dtype, np.int64 if sys.platform == "win32" else int)
test_series(to['b_na'], [1,1,2,1,1,1,1])
test_series(to['c'], [1,0,1,0,0,1,0])
#|hide
df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,np.nan,1,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})
to = TabularPandas(df, procs, cat_names='a', cont_names='b', y_names='c', splits=[[0,1,4,6], [2,3,5]])
test_series(to.cat_names, ['a', 'b_na'])
test_series(to['a'], [1,2,2,1,0,2,0])
test_eq(df.a.dtype, np.int64 if sys.platform == "win32" else int)
test_series(to['b_na'], [1,2,1,1,1,1,1])
test_series(to['c'], [1,0,0,0,1,0,1])
#|export
def _maybe_expand(o): return o[:,None] if o.ndim==1 else o
#|export
class ReadTabBatch(ItemTransform):
"Transform `TabularPandas` values into a `Tensor` with the ability to decode"
def __init__(self, to): self.to = to.new_empty()
def encodes(self, to):
if not to.with_cont: res = (tensor(to.cats).long(),)
else: res = (tensor(to.cats).long(),tensor(to.conts).float())
ys = [n for n in to.y_names if n in to.items.columns]
if len(ys) == len(to.y_names): res = res + (tensor(to.targ),)
if to.device is not None: res = to_device(res, to.device)
return res
def decodes(self, o):
o = [_maybe_expand(o_) for o_ in to_np(o) if o_.size != 0]
vals = np.concatenate(o, axis=1)
try: df = pd.DataFrame(vals, columns=self.to.all_col_names)
except: df = pd.DataFrame(vals, columns=self.to.x_names)
to = self.to.new(df)
return to
#|export
@typedispatch
def show_batch(x: Tabular, y, its, max_n=10, ctxs=None):
x.show()
#|export
@delegates()
class TabDataLoader(TfmdDL):
"A transformed `DataLoader` for Tabular data"
def __init__(self, dataset, bs=16, shuffle=False, after_batch=None, num_workers=0, **kwargs):
if after_batch is None: after_batch = L(TransformBlock().batch_tfms)+ReadTabBatch(dataset)
super().__init__(dataset, bs=bs, shuffle=shuffle, after_batch=after_batch, num_workers=num_workers, **kwargs)
def create_batch(self, b): return self.dataset.iloc[b]
def do_item(self, s): return 0 if s is None else s
TabularPandas._dl_type = TabDataLoader
Explanation: TabularPandas Pipelines -
End of explanation
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_test.drop('salary', axis=1, inplace=True)
df_main.head()
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
to = TabularPandas(df_main, procs, cat_names, cont_names, y_names="salary", splits=splits)
dls = to.dataloaders()
dls.valid.show_batch()
to.show()
Explanation: Integration example
For a more in-depth explanation, see the tabular tutorial
End of explanation
row = to.items.iloc[0]
to.decode_row(row)
Explanation: We can decode any set of transformed data by calling to.decode_row with our raw data:
End of explanation
to_tst = to.new(df_test)
to_tst.process()
to_tst.items.head()
Explanation: We can make new test datasets based on the training data with the to.new() method.
Note: Since machine learning models can't magically understand categories they were never trained on, the data should reflect this. If there are missing values in your test data that were not present in the training data, you should address this before training (a quick check is sketched below).
End of explanation
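A hedged sketch of checking for surprises before processing a new test set: columns that contain NaNs in the test data but had none at setup time will trip the assertion inside FillMissing.encodes.
seen = set(to.procs.fill_missing.na_dict)  # columns FillMissing learned to fill during setup
unexpected = {c for c in to.cont_names if df_test[c].isnull().any()} - seen
if unexpected:
    print(f"NaNs in columns unseen at setup time: {unexpected}")
    # fill these yourself (e.g. with df_test[col].fillna(...)) before calling to.new(df_test).process()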
tst_dl = dls.valid.new(to_tst)
tst_dl.show_batch()
Explanation: We can then convert it to a DataLoader:
End of explanation
def _mock_multi_label(df):
sal,sex,white = [],[],[]
for row in df.itertuples():
sal.append(row.salary == '>=50k')
sex.append(row.sex == ' Male')
white.append(row.race == ' White')
df['salary'] = np.array(sal)
df['male'] = np.array(sex)
df['white'] = np.array(white)
return df
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_main = _mock_multi_label(df_main)
df_main.head()
#|exporti
@EncodedMultiCategorize
def setups(self, to:Tabular):
self.c = len(self.vocab)
return self(to)
@EncodedMultiCategorize
def encodes(self, to:Tabular): return to
@EncodedMultiCategorize
def decodes(self, to:Tabular):
to.transform(to.y_names, lambda c: c==1)
return to
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
y_names=["salary", "male", "white"]
%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names=y_names, y_block=MultiCategoryBlock(encoded=True, vocab=y_names), splits=splits)
dls = to.dataloaders()
dls.valid.show_batch()
Explanation: Other target types
Multi-label categories
one-hot encoded label
End of explanation
def _mock_multi_label(df):
targ = []
for row in df.itertuples():
labels = []
if row.salary == '>=50k': labels.append('>50k')
if row.sex == ' Male': labels.append('male')
if row.race == ' White': labels.append('white')
targ.append(' '.join(labels))
df['target'] = np.array(targ)
return df
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_main = _mock_multi_label(df_main)
df_main.head()
@MultiCategorize
def encodes(self, to:Tabular):
#to.transform(to.y_names, partial(_apply_cats, {n: self.vocab for n in to.y_names}, 0))
return to
@MultiCategorize
def decodes(self, to:Tabular):
#to.transform(to.y_names, partial(_decode_cats, {n: self.vocab for n in to.y_names}))
return to
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['age', 'fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names="target", y_block=MultiCategoryBlock(), splits=splits)
to.procs[2].vocab
Explanation: Not one-hot encoded
End of explanation
#|exporti
@RegressionSetup
def setups(self, to:Tabular):
if self.c is not None: return
self.c = len(to.y_names)
return to
@RegressionSetup
def encodes(self, to:Tabular): return to
@RegressionSetup
def decodes(self, to:Tabular): return to
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
df_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()
df_main = _mock_multi_label(df_main)
cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']
cont_names = ['fnlwgt', 'education-num']
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter()(range_of(df_main))
%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names='age', splits=splits)
to.procs[-1].means
dls = to.dataloaders()
dls.valid.show_batch()
Explanation: Regression
End of explanation
class TensorTabular(fastuple):
def get_ctxs(self, max_n=10, **kwargs):
n_samples = min(self[0].shape[0], max_n)
df = pd.DataFrame(index = range(n_samples))
return [df.iloc[i] for i in range(n_samples)]
def display(self, ctxs): display_df(pd.DataFrame(ctxs))
class TabularLine(pd.Series):
"A line of a dataframe that knows how to show itself"
def show(self, ctx=None, **kwargs): return self if ctx is None else ctx.append(self)
class ReadTabLine(ItemTransform):
def __init__(self, proc): self.proc = proc
def encodes(self, row):
cats,conts = (o.map(row.__getitem__) for o in (self.proc.cat_names,self.proc.cont_names))
return TensorTabular(tensor(cats).long(),tensor(conts).float())
def decodes(self, o):
to = TabularPandas(o, self.proc.cat_names, self.proc.cont_names, self.proc.y_names)
to = self.proc.decode(to)
return TabularLine(pd.Series({c: v for v,c in zip(to.items[0]+to.items[1], self.proc.cat_names+self.proc.cont_names)}))
class ReadTabTarget(ItemTransform):
def __init__(self, proc): self.proc = proc
def encodes(self, row): return row[self.proc.y_names].astype(np.int64)
def decodes(self, o): return Category(self.proc.classes[self.proc.y_names][o])
# tds = TfmdDS(to.items, tfms=[[ReadTabLine(proc)], ReadTabTarget(proc)])
# enc = tds[1]
# test_eq(enc[0][0], tensor([2,1]))
# test_close(enc[0][1], tensor([-0.628828]))
# test_eq(enc[1], 1)
# dec = tds.decode(enc)
# assert isinstance(dec[0], TabularLine)
# test_close(dec[0], pd.Series({'a': 1, 'b_na': False, 'b': 1}))
# test_eq(dec[1], 'a')
# test_stdout(lambda: print(show_at(tds, 1)), """a 1
# b_na False
# b 1
# category a
# dtype: object""")
Explanation: Not being used now - for multi-modal
End of explanation
#|hide
from nbdev.export import notebook2script
notebook2script()
Explanation: Export -
End of explanation |