Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k)
---|---|---|
5,400 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras for Text Classification
Learning Objectives
Learn how to create a text classification dataset using BigQuery.
Learn how to tokenize and integerize a corpus of text for training in Keras.
Learn how to do one-hot-encodings in Keras.
Learn how to use embedding layers to represent words in Keras.
Learn about the bag-of-words representation for sentences.
Learn how to use DNN/CNN/RNN models to classify text in Keras.
Introduction
In this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded lists of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3-dimensional basis vector.
Then we will explore a few possible models to do the title classification. All models will be fed padded lists of integers, and all models will start with a Keras Embedding layer that transforms the integers representing the words into dense vectors.
The first model will be a simple bag-of-words DNN model that averages the word vectors and feeds the resulting tensor to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and third models we will keep the information about the word order using a simple RNN and a simple CNN, allowing us to achieve the same performance as with the DNN model but in many fewer epochs.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Step1: Replace the variable values in the cell below
Step2: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Lab Task 1a
Step3: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http
Step6: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
Step7: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
Step8: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
Step9: Let's make sure we have roughly the same number of articles for each of our three labels
Step10: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note
Step11: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Lab Task 1c
Step12: Let's write the sample dataset to disk.
Step13: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located
Step14: Loading the dataset
Our dataset consists of titles of articles along with a label indicating which source these articles have been taken from (GitHub, Tech-Crunch, or the New-York Times).
Step15: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that
Step16: Let's now implement a function create_sequences that will
* take as input our titles as well as the maximum sentence length and
* return a list of the integers corresponding to our tokens, padded to the maximum sentence length
Keras has the helper function pad_sequences for that, on top of the tokenizer methods.
Lab Task #2
Step17: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
Step18: Lab Task #3
Step19: Preparing the train/test splits
Let's split our data into train and test splits
Step20: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since it is the case, accuracy will be a good metric to use to measure
the performance of our models.
Step21: Using create_sequences and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded lists of integers and the labels will be one-hot-encoded 3D vectors.
Step22: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words".
Lab Tasks #4, #5, and #6
Step23: Below we train the model for 100 epochs, adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved for a number of epochs specified by PATIENCE. Note that we also give the model.fit method a TensorBoard callback so that we can later compare all the models using TensorBoard.
Step24: Building a RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6
Step25: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20 epochs)
Step26: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6
Complete the code below to create a CNN model for text classification. This model is similar to the previous models in that you should start with an embedding layer. However, the output of the embedding layer should next pass through a 1-dimensional convolution and ultimately the final fully connected, dense layer. Use the arguments of the build_cnn_model function to set up the 1D convolution layer.
Step27: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps. | Python Code:
import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
Explanation: Keras for Text Classification
Learning Objectives
Learn how to create a text classification dataset using BigQuery.
Learn how to tokenize and integerize a corpus of text for training in Keras.
Learn how to do one-hot-encodings in Keras.
Learn how to use embedding layers to represent words in Keras.
Learn about the bag-of-words representation for sentences.
Learn how to use DNN/CNN/RNN models to classify text in Keras.
Introduction
In this notebook, we will implement text models to recognize the probable source (Github, Tech-Crunch, or The New-York Times) of the titles we have in the title dataset we constructed in the first task of the lab.
In the next step, we will load and pre-process the texts and labels so that they are suitable to be fed to a Keras model. For the texts of the titles we will learn how to split them into a list of tokens, and then how to map each token to an integer using the Keras Tokenizer class. What will be fed to our Keras models will be batches of padded lists of integers representing the text. For the labels, we will learn how to one-hot-encode each of the 3 classes into a 3-dimensional basis vector.
Then we will explore a few possible models to do the title classification. All models will be fed padded lists of integers, and all models will start with a Keras Embedding layer that transforms the integers representing the words into dense vectors.
The first model will be a simple bag-of-words DNN model that averages the word vectors and feeds the resulting tensor to further dense layers. Doing so means that we forget the word order (and hence that we consider sentences as a “bag-of-words”). In the second and third models we will keep the information about the word order using a simple RNN and a simple CNN, allowing us to achieve the same performance as with the DNN model but in many fewer epochs.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
SEED = 0
Explanation: Replace the variable values in the cell below:
End of explanation
%%bigquery --project $PROJECT
SELECT
# TODO: Your code goes here.
FROM
# TODO: Your code goes here.
WHERE
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
LIMIT 10
Explanation: Create a Dataset from BigQuery
Hacker news headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Lab Task 1a:
Complete the query below to create a sample dataset containing the url, title, and score of articles from the public dataset bigquery-public-data.hacker_news.stories. Use a WHERE clause to restrict to only those articles with
* title length greater than 10 characters
* score greater than 10
* url length greater than 0 characters
End of explanation
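One possible way to fill in the TODOs above (a sketch, not the official solution; the column choices and filters simply follow the requirements listed in this task):
%%bigquery --project $PROJECT
SELECT
    url, title, score
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    LENGTH(title) > 10
    AND score > 10
    AND LENGTH(url) > 0
LIMIT 10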
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
# TODO: Your code goes here.
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
# TODO: Your code goes here.
GROUP BY
# TODO: Your code goes here.
ORDER BY num_articles DESC
LIMIT 100
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
Lab task 1b:
Complete the query below to count the number of titles within each 'source' category. Note that to grab the 'source' of the article we use a regex command on the url of the article. To count the number of articles you'll use a GROUP BY in sql, and we'll also restrict our attention to only those articles whose title has greater than 10 characters.
End of explanation
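A possible completion of the counting query above (a sketch, not the official solution; it reuses the regex already given in the skeleton):
%%bigquery --project $PROJECT
SELECT
    ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
    COUNT(title) AS num_articles
FROM
    `bigquery-public-data.hacker_news.stories`
WHERE
    REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
    AND LENGTH(title) > 10
GROUP BY
    source
ORDER BY num_articles DESC
LIMIT 100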
regex = '.*://(.[^/]+)/'
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(regex)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
Explanation: AutoML for text classification requires that
* the dataset be in csv form with
* the first column being the texts to classify or a GCS path to the text
* the last column to be the text labels
The dataset we pulled from BigQuery satisfies these requirements.
End of explanation
title_dataset.source.value_counts()
Explanation: Let's make sure we have roughly the same number of articles for each of our three labels:
End of explanation
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend using the sample dataset for the purpose of learning the tool.
End of explanation
sample_title_dataset = # TODO: Your code goes here.
# TODO: Your code goes here.
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Lab Task 1c:
Use .sample to create a sample dataset of 1,000 articles from the full dataset. Use .value_counts to see how many articles are contained in each of the three source categories.
End of explanation
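A possible completion of the two TODOs above (a sketch, not the official solution):
# Sample 1,000 articles and check the label balance of the sample
sample_title_dataset = title_dataset.sample(n=1000)
sample_title_dataset.source.value_counts()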
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset.head()
import os
import shutil
import pandas as pd
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard, EarlyStopping
from tensorflow.keras.layers import (
Embedding,
Flatten,
GRU,
Conv1D,
Lambda,
Dense,
)
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.utils import to_categorical
print(tf.__version__)
%matplotlib inline
Explanation: Let's write the sample dataset to disk.
End of explanation
LOGDIR = "./text_models"
DATA_DIR = "./data"
Explanation: Let's start by specifying where the information about the trained models will be saved as well as where our dataset is located:
End of explanation
DATASET_NAME = "titles_full.csv"
TITLE_SAMPLE_PATH = os.path.join(DATA_DIR, DATASET_NAME)
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(TITLE_SAMPLE_PATH, header=None, names=COLUMNS)
titles_df.head()
Explanation: Loading the dataset
Our dataset consists of titles of articles along with a label indicating which source these articles have been taken from (GitHub, Tech-Crunch, or the New-York Times).
End of explanation
tokenizer = Tokenizer()
tokenizer.fit_on_texts(titles_df.title)
integerized_titles = tokenizer.texts_to_sequences(titles_df.title)
integerized_titles[:3]
VOCAB_SIZE = len(tokenizer.index_word)
VOCAB_SIZE
DATASET_SIZE = tokenizer.document_count
DATASET_SIZE
MAX_LEN = max(len(sequence) for sequence in integerized_titles)
MAX_LEN
Explanation: Integerize the texts
The first thing we need to do is to find how many words we have in our dataset (VOCAB_SIZE), how many titles we have (DATASET_SIZE), and what the maximum length of the titles we have (MAX_LEN) is. Keras offers the Tokenizer class in its keras.preprocessing.text module to help us with that:
End of explanation
# TODO 1
def create_sequences(texts, max_len=MAX_LEN):
sequences = # TODO: Your code goes here.
padded_sequences = # TODO: Your code goes here.
return padded_sequences
sequences = create_sequences(titles_df.title[:3])
sequences
titles_df.source[:4]
Explanation: Let's now implement a function create_sequences that will
* take as input our titles as well as the maximum sentence length and
* return a list of the integers corresponding to our tokens, padded to the maximum sentence length
Keras has the helper function pad_sequences for that, on top of the tokenizer methods.
Lab Task #2:
Complete the code in the create_sequences function below to
* create text sequences from texts using the tokenizer we created above
* pad the end of those text sequences to have length max_len
End of explanation
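A possible completion of create_sequences (a sketch, not the official solution; it relies on the tokenizer fitted above and pads at the end of each title):
def create_sequences(texts, max_len=MAX_LEN):
    # Map each title to its list of token integers, then pad to max_len
    sequences = tokenizer.texts_to_sequences(texts)
    padded_sequences = pad_sequences(sequences, maxlen=max_len, padding='post')
    return padded_sequences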
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
Explanation: We now need to write a function that
* takes a title source and
* returns the corresponding one-hot encoded vector
Keras to_categorical is handy for that.
End of explanation
# TODO 2
def encode_labels(sources):
classes = # TODO: Your code goes here.
one_hots = # TODO: Your code goes here.
return one_hots
encode_labels(titles_df.source[:4])
Explanation: Lab Task #3:
Complete the code in the encode_labels function below to
* create a list that maps each source in sources to its corresponding numeric value using the dictionary CLASSES above
* use the Keras function to one-hot encode the variable classes
End of explanation
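A possible completion of encode_labels (a sketch, not the official solution; it uses the CLASSES dictionary and Keras to_categorical):
def encode_labels(sources):
    # Map each source string to its numeric class, then one-hot encode
    classes = [CLASSES[source] for source in sources]
    one_hots = to_categorical(classes, num_classes=N_CLASSES)
    return one_hots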
N_TRAIN = int(DATASET_SIZE * 0.80)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
Explanation: Preparing the train/test splits
Let's split our data into train and test splits:
End of explanation
sources_train.value_counts()
sources_valid.value_counts()
Explanation: To be on the safe side, we verify that the train and test splits
have roughly the same number of examples per class.
Since it is the case, accuracy will be a good metric to use to measure
the performance of our models.
End of explanation
X_train, Y_train = create_sequences(titles_train), encode_labels(sources_train)
X_valid, Y_valid = create_sequences(titles_valid), encode_labels(sources_valid)
X_train[:3]
Y_train[:3]
Explanation: Using create_sequences and encode_labels, we can now prepare the
training and validation data to feed our models.
The features will be
padded lists of integers and the labels will be one-hot-encoded 3D vectors.
End of explanation
# TODOs 4-6
def build_dnn_model(embed_dim):
model = Sequential([
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
Explanation: Building a DNN model
The build_dnn_model function below returns a compiled Keras model that implements a simple embedding layer transforming the word integers into dense vectors, followed by a Dense softmax layer that returns the probabilities for each class.
Note that we need to put a custom Keras Lambda layer in between the Embedding layer and the Dense softmax layer to do an average of the word vectors returned by the embedding layer. This is the average that's fed to the dense softmax layer. By doing so, we create a model that is simple but that loses information about the word order, creating a model that sees sentences as "bag-of-words".
Lab Tasks #4, #5, and #6:
Create a Keras Sequential model with three layers:
* The first layer should be an embedding layer with output dimension equal to embed_dim.
* The second layer should use a Lambda layer to create a bag-of-words representation of the sentences by computing the mean.
* The last layer should use a Dense layer to predict which class the example belongs to.
End of explanation
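One possible set of layers for build_dnn_model (a sketch, not the official solution; the Lambda layer averages the word vectors, which is what makes this a bag-of-words model):
def build_dnn_model(embed_dim):
    model = Sequential([
        Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN]),  # word integers -> dense vectors
        Lambda(lambda x: tf.reduce_mean(x, axis=1)),                  # average over the word axis
        Dense(N_CLASSES, activation='softmax'),                       # class probabilities
    ])
    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )
    return model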
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'dnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
BATCH_SIZE = 300
EPOCHS = 100
EMBED_DIM = 10
PATIENCE = 0
dnn_model = build_dnn_model(embed_dim=EMBED_DIM)
dnn_history = dnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(dnn_history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(dnn_history.history)[['accuracy', 'val_accuracy']].plot()
dnn_model.summary()
Explanation: Below we train the model for 100 epochs, adding an EarlyStopping callback that will stop the training as soon as the validation loss has not improved for a number of epochs specified by PATIENCE. Note that we also give the model.fit method a TensorBoard callback so that we can later compare all the models using TensorBoard.
End of explanation
def build_rnn_model(embed_dim, units):
model = Sequential([
# TODO: Your code goes here.
# TODO: Your code goes here.
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
Explanation: Building a RNN model
The build_rnn_model function below returns a compiled Keras model that implements a simple RNN model with a single GRU layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6:
Complete the code below to build an RNN model which predicts the article class. The code below is similar to the DNN you created above; however, here we do not need to use a bag-of-words representation of the sentence. Instead, you can pass the embedding layer directly to an RNN/LSTM/GRU layer.
End of explanation
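A possible layer stack for build_rnn_model (a sketch, not the official solution; mask_zero=True lets the GRU ignore the padded zeros):
def build_rnn_model(embed_dim, units):
    model = Sequential([
        Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True),
        GRU(units),                              # processes the sequence, respecting word order
        Dense(N_CLASSES, activation='softmax')
    ])
    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )
    return model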
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'rnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 10
UNITS = 16
PATIENCE = 0
rnn_model = build_rnn_model(embed_dim=EMBED_DIM, units=UNITS)
history = rnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
rnn_model.summary()
Explanation: Let's train the model with early stopping as above.
Observe that we obtain the same type of accuracy as with the DNN model, but in fewer epochs (~3 vs. ~20 epochs):
End of explanation
def build_cnn_model(embed_dim, filters, ksize, strides):
model = Sequential([
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
Dense(N_CLASSES, activation='softmax')
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
Explanation: Build a CNN model
The build_cnn_model function below returns a compiled Keras model that implements a simple CNN model with a single Conv1D layer, which now takes into account the word order in the sentence.
The first and last layers are the same as for the simple DNN model, but we need to add a Flatten layer between the convolution and the softmax layer.
Note that we set mask_zero=True in the Embedding layer so that the padded words (represented by a zero) are ignored by this and the subsequent layers.
Lab Task #4 and #6
Complete the code below to create a CNN model for text classification. This model is similar to the previous models in that you should start with an embedding layer. However, the output of the embedding layer should next pass through a 1-dimensional convolution and ultimately the final fully connected, dense layer. Use the arguments of the build_cnn_model function to set up the 1D convolution layer.
End of explanation
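A possible layer stack for build_cnn_model (a sketch, not the official solution; if your TF version complains about the mask reaching Conv1D, drop mask_zero=True):
def build_cnn_model(embed_dim, filters, ksize, strides):
    model = Sequential([
        Embedding(VOCAB_SIZE + 1, embed_dim, input_shape=[MAX_LEN], mask_zero=True),
        Conv1D(filters=filters, kernel_size=ksize, strides=strides, activation='relu'),
        Flatten(),                               # flatten the feature maps before the softmax
        Dense(N_CLASSES, activation='softmax')
    ])
    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )
    return model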
%%time
tf.random.set_seed(33)
MODEL_DIR = os.path.join(LOGDIR, 'cnn')
shutil.rmtree(MODEL_DIR, ignore_errors=True)
EPOCHS = 100
BATCH_SIZE = 300
EMBED_DIM = 5
FILTERS = 200
STRIDES = 2
KSIZE = 3
PATIENCE = 0
cnn_model = build_cnn_model(
embed_dim=EMBED_DIM,
filters=FILTERS,
strides=STRIDES,
ksize=KSIZE,
)
cnn_history = cnn_model.fit(
X_train, Y_train,
epochs=EPOCHS,
batch_size=BATCH_SIZE,
validation_data=(X_valid, Y_valid),
callbacks=[EarlyStopping(patience=PATIENCE), TensorBoard(MODEL_DIR)],
)
pd.DataFrame(cnn_history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(cnn_history.history)[['accuracy', 'val_accuracy']].plot()
cnn_model.summary()
Explanation: Let's train the model.
Again we observe that we get the same kind of accuracy as with the DNN model but in many fewer steps.
End of explanation |
5,401 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
Step2: Improving Reading Ability
From DASL(http
Step3: Exercise
Step9: Paintball
Step10: Exercise
Step11: Exercise | Python Code:
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
Explanation: Think Bayes: Chapter 9
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
import pandas as pd
df = pd.read_csv('drp_scores.csv', skiprows=21, delimiter='\t')
df.head()
grouped = df.groupby('Treatment')
for name, group in grouped:
print(name, group.Response.mean())
from scipy.stats import norm
class Normal(Suite, Joint):
def Likelihood(self, data, hypo):
"""data: sequence of test scores
hypo: mu, sigma
"""
mu, sigma = hypo
likes = norm.pdf(data, mu, sigma)
return np.prod(likes)
from itertools import product
mus = np.linspace(20, 80, 101)
sigmas = np.linspace(5, 30, 101)
control = Normal(product(mus, sigmas))
data = df[df.Treatment=='Control'].Response
control.Update(data)
thinkplot.Contour(control, pcolor=True)
pmf_mu0 = control.Marginal(0)
thinkplot.Pdf(pmf_mu0)
pmf_sigma0 = control.Marginal(1)
thinkplot.Pdf(pmf_sigma0)
Explanation: Improving Reading Ability
From DASL(http://lib.stat.cmu.edu/DASL/Stories/ImprovingReadingAbility.html)
An educator conducted an experiment to test whether new directed reading activities in the classroom will help elementary school pupils improve some aspects of their reading ability. She arranged for a third grade class of 21 students to follow these activities for an 8-week period. A control classroom of 23 third graders followed the same curriculum without the activities. At the end of the 8 weeks, all students took a Degree of Reading Power (DRP) test, which measures the aspects of reading ability that the treatment is designed to improve.
Summary statistics on the two groups of children show that the average score of the treatment class was almost ten points higher than the average of the control class. A two-sample t-test is appropriate for testing whether this difference is statistically significant. The t-statistic is 2.31, which is significant at the .05 level.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: Run this analysis again for the treated group. What is the distribution of the difference between the groups? What is the probability that the average "reading power" for the treatment group is higher? What is the probability that the variance of the treatment group is higher?
End of explanation
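One possible approach to the first part of this exercise (a sketch, not the official solution; it assumes the treated rows are labeled 'Treated' in the Treatment column):
# Repeat the update for the treated group
treated = Normal(product(mus, sigmas))
data = df[df.Treatment=='Treated'].Response
treated.Update(data)

# Compare the marginal posteriors of the two groups
pmf_mu1 = treated.Marginal(0)
pmf_sigma1 = treated.Marginal(1)
pmf_diff = pmf_mu1 - pmf_mu0                 # distribution of the difference in means
print(pmf_diff.Mean())
print(pmf_mu1.ProbGreater(pmf_mu0))          # P(treated mean > control mean)
print(pmf_sigma1.ProbGreater(pmf_sigma0))    # P(treated sigma > control sigma)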
class Paintball(Suite, Joint):
"""Represents hypotheses about the location of an opponent."""
def __init__(self, alphas, betas, locations):
"""Makes a joint suite of parameters alpha and beta.
Enumerates all pairs of alpha and beta.
Stores locations for use in Likelihood.
alphas: possible values for alpha
betas: possible values for beta
locations: possible locations along the wall
"""
self.locations = locations
pairs = [(alpha, beta)
for alpha in alphas
for beta in betas]
Suite.__init__(self, pairs)
def Likelihood(self, data, hypo):
"""Computes the likelihood of the data under the hypothesis.
hypo: pair of alpha, beta
data: location of a hit
Returns: float likelihood
"""
alpha, beta = hypo
x = data
pmf = MakeLocationPmf(alpha, beta, self.locations)
like = pmf.Prob(x)
return like
def MakeLocationPmf(alpha, beta, locations):
"""Computes the Pmf of the locations, given alpha and beta.
Given that the shooter is at coordinates (alpha, beta),
the probability of hitting any spot is inversely proportionate
to the strafe speed.
alpha: x position
beta: y position
locations: x locations where the pmf is evaluated
Returns: Pmf object
"""
pmf = Pmf()
for x in locations:
prob = 1.0 / StrafingSpeed(alpha, beta, x)
pmf.Set(x, prob)
pmf.Normalize()
return pmf
def StrafingSpeed(alpha, beta, x):
"""Computes strafing speed, given location of shooter and impact.
alpha: x location of shooter
beta: y location of shooter
x: location of impact
Returns: derivative of x with respect to theta
"""
theta = math.atan2(x - alpha, beta)
speed = beta / math.cos(theta)**2
return speed
alphas = range(0, 31)
betas = range(1, 51)
locations = range(0, 31)
suite = Paintball(alphas, betas, locations)
suite.UpdateSet([15, 16, 18, 21])
locations = range(0, 31)
alpha = 10
betas = [10, 20, 40]
thinkplot.PrePlot(num=len(betas))
for beta in betas:
pmf = MakeLocationPmf(alpha, beta, locations)
pmf.label = 'beta = %d' % beta
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
marginal_alpha = suite.Marginal(0, label='alpha')
marginal_beta = suite.Marginal(1, label='beta')
print('alpha CI', marginal_alpha.CredibleInterval(50))
print('beta CI', marginal_beta.CredibleInterval(50))
thinkplot.PrePlot(num=2)
thinkplot.Cdf(Cdf(marginal_alpha))
thinkplot.Cdf(Cdf(marginal_beta))
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
betas = [10, 20, 40]
thinkplot.PrePlot(num=len(betas))
for beta in betas:
cond = suite.Conditional(0, 1, beta)
cond.label = 'beta = %d' % beta
thinkplot.Pdf(cond)
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
thinkplot.Contour(suite.GetDict(), contour=False, pcolor=True)
thinkplot.Config(xlabel='alpha',
ylabel='beta',
axis=[0, 30, 0, 20])
d = dict((pair, 0) for pair in suite.Values())
percentages = [75, 50, 25]
for p in percentages:
interval = suite.MaxLikeInterval(p)
for pair in interval:
d[pair] += 1
thinkplot.Contour(d, contour=False, pcolor=True)
thinkplot.Text(17, 4, '25', color='white')
thinkplot.Text(17, 15, '50', color='white')
thinkplot.Text(17, 30, '75')
thinkplot.Config(xlabel='alpha',
ylabel='beta',
legend=False)
Explanation: Paintball
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: From John D. Cook
"Suppose you have a tester who finds 20 bugs in your program. You want to estimate how many bugs are really in the program. You know there are at least 20 bugs, and if you have supreme confidence in your tester, you may suppose there are around 20 bugs. But maybe your tester isn't very good. Maybe there are hundreds of bugs. How can you have any idea how many bugs there are? There’s no way to know with one tester. But if you have two testers, you can get a good idea, even if you don’t know how skilled the testers are.
Suppose two testers independently search for bugs. Let k1 be the number of errors the first tester finds and k2 the number of errors the second tester finds. Let c be the number of errors both testers find. The Lincoln Index estimates the total number of errors as k1 k2 / c [I changed his notation to be consistent with mine]."
So if the first tester finds 20 bugs, the second finds 15, and they find 3 in common, we estimate that there are about 100 bugs. What is the Bayesian estimate of the number of errors based on this data?
End of explanation
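One way to set up this problem (a sketch, not the official solution): treat the total number of bugs n and the two testers' detection probabilities p1 and p2 as unknowns, with independent binomial likelihoods; the grid below is deliberately coarse to keep it fast.
from scipy.stats import binom

class Lincoln(Suite, Joint):
    def Likelihood(self, data, hypo):
        # hypo: total number of bugs and the two detection probabilities
        n, p1, p2 = hypo
        k1, k2, c = data
        part1 = binom.pmf(k1, n, p1)             # tester 1 finds k1 of n bugs
        part2 = binom.pmf(c, k1, p2)             # tester 2 re-finds c of those k1
        part3 = binom.pmf(k2 - c, n - k1, p2)    # and k2-c of the remaining n-k1
        return part1 * part2 * part3

data = 20, 15, 3
probs = np.linspace(0.1, 1, 19)
hypos = [(n, p1, p2) for n in range(32, 350, 5) for p1 in probs for p2 in probs]
suite = Lincoln(hypos)
suite.Update(data)
print(suite.Marginal(0).Mean())                  # posterior mean of the number of bugs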
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: The GPS problem. According to Wikipedia

GPS included a (currently disabled) feature called Selective Availability (SA) that adds intentional, time varying errors of up to 100 meters (328 ft) to the publicly available navigation signals. This was intended to deny an enemy the use of civilian GPS receivers for precision weapon guidance.
[...]
Before it was turned off on May 2, 2000, typical SA errors were about 50 m (164 ft) horizontally and about 100 m (328 ft) vertically.[10] Because SA affects every GPS receiver in a given area almost equally, a fixed station with an accurately known position can measure the SA error values and transmit them to the local GPS receivers so they may correct their position fixes. This is called Differential GPS or DGPS. DGPS also corrects for several other important sources of GPS errors, particularly ionospheric delay, so it continues to be widely used even though SA has been turned off. The ineffectiveness of SA in the face of widely available DGPS was a common argument for turning off SA, and this was finally done by order of President Clinton in 2000.
Suppose it is 1 May 2000, and you are standing in a field that is 200m square. You are holding a GPS unit that indicates that your location is 51m north and 15m west of a known reference point in the middle of the field.
However, you know that each of these coordinates has been perturbed by a "feature" that adds random errors with mean 0 and standard deviation 30m.
1) After taking one measurement, what should you believe about your position?
Note: Since the intentional errors are independent, you could solve this problem independently for X and Y. But we'll treat it as a two-dimensional problem, partly for practice and partly to see how we could extend the solution to handle dependent errors.
You can start with the code in gps.py.
2) Suppose that after one second the GPS updates your position and reports coordinates (48, 90). What should you believe now?
3) Suppose you take 8 more measurements and get:
(11.903060613102866, 19.79168669735705)
(77.10743601503178, 39.87062906535289)
(80.16596823095534, -12.797927542984425)
(67.38157493119053, 83.52841028148538)
(89.43965206875271, 20.52141889230797)
(58.794021026248245, 30.23054016065644)
(2.5844401241265302, 51.012041625783766)
(45.58108994142448, 3.5718287379754585)
At this point, how certain are you about your location?
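A possible starting point is sketched below (not the official solution; it multiplies independent Gaussian likelihoods for the x and y errors with standard deviation 30 m, and treats the first reading as x=51, y=-15):
class Gps(Suite, Joint):
    def Likelihood(self, data, hypo):
        # hypo: true (x, y) position; data: one noisy GPS reading (x, y)
        std = 30
        meanx, meany = hypo
        x, y = data
        return norm.pdf(x, meanx, std) * norm.pdf(y, meany, std)

coords = np.linspace(-100, 100, 101)
joint = Gps(product(coords, coords))
joint.Update((51, -15))          # first reading
joint.Update((48, 90))           # second reading
thinkplot.Contour(joint, pcolor=True)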
End of explanation |
5,402 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of quality metrics for binary classification
Programming Assignment
In this assignment we will figure out what the difference between various quality metrics is. We will focus on the binary classification problem (with responses 0 and 1), but treat it as the problem of predicting the probability that an object belongs to class 1. Thus, we will work with a real-valued rather than a binary target variable.
The assignment is structured as a demonstration with Programming Assignment elements. You need to run the code that is already written and examine the proposed plots, as well as implement several functions of your own. For grading, write the results of these functions on the specified input data to separate files; this can be done with the write_answer_N functions provided in the tasks, where N is the task number. Upload these files to the system.
To build the plots, we need to import the corresponding modules.
The seaborn library makes the plots look nicer. If you do not want to use it, comment out the third line.
Moreover, the matplotlib and seaborn modules are not needed to complete the Programming Assignment (you can skip the cells that build the plots and look at the pre-built pictures instead).
Step1: What the algorithms predict
To compute quality metrics in supervised learning, you only need to know two vectors
Step2: The ideal situation
Step3: The probability intervals for the two classes are perfectly separated by the threshold T = 0.5.
Most often the intervals overlap - then the threshold has to be chosen carefully.
The most wrong-headed algorithm does everything the other way around
Step4: An algorithm can be cautious and try not to push probabilities far from 0.5, or it can take risks - making predictions close to zero or one.
Step5: The intervals can also be shifted. If an algorithm is afraid of false positive errors, it will more often make predictions close to zero.
Similarly, to avoid false negative errors, it is natural to predict larger probabilities more often.
Step6: We have described different characters of probability vectors. Next we will look at how the metrics evaluate different prediction vectors, so be sure to run the cells that create the vectors for visualization.
Metrics that evaluate binary prediction vectors
There are two typical situations in which machine learning specialists start studying the characteristics of quality metrics
Step7: All three metrics easily distinguish the simple cases of good and bad algorithms. Note that the metrics take values in [0, 1], which makes them easy to interpret.
The metrics do not care about the magnitudes of the probabilities; they only care about how many objects ended up on the wrong side of the chosen boundary (in this case T = 0.5).
The accuracy metric gives equal weight to false positive and false negative errors, whereas the precision-recall pair unambiguously identifies this difference. In fact, that is exactly what they are used for: to control FP and FN errors.
We measured the three metrics with the threshold fixed at T = 0.5, because it looks optimal for almost all of the pictures. Let's look at the last (and most interesting for these metrics) group of vectors to see how precision and recall change as the threshold increases.
Step8: As the threshold increases, we make fewer FP errors and more FN errors, so one of the curves rises while the other falls. From such a plot you can pick an optimal threshold at which precision and recall are both acceptable. If no such threshold exists, you need to train a different algorithm.
Note that acceptable precision and recall values are determined by the application domain. For example, in the problem of determining whether a patient has a certain disease (0 - healthy, 1 - sick), false negative errors are avoided by requiring a recall of about 0.9. It is acceptable to tell a person that they are sick and then discover the mistake during further diagnostics; missing the disease is much worse.
<font color="green" size=5>Programming assignment
Step9: F1-score
The obvious drawback of the precision-recall pair of metrics is that there are two of them
Step10: The F1 metric in the last two cases, where one of the paired metrics equals 1, is significantly smaller than in the first, balanced case.
<font color="green" size=5>Programming assignment
Step11: Metrics that evaluate vectors of class-1 probabilities
The metrics considered so far are easy to interpret, but when using them we ignore most of the information received from the algorithm. In some problems the probabilities themselves are needed, for example, if we predict whether a team will win a football match and the magnitude of the probability affects the size of the bet on that team. Even if in the end we binarize the prediction anyway, we would like to keep track of the character of the probability vector.
Log_loss
Log_loss computes the likelihood of the labels in actual under the probabilities in predicted, taken with the opposite sign
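For reference, the standard binary log-loss being described here can be written as
$$\text{log\_loss} = -\frac{1}{n}\sum_{i=1}^{n}\bigl(y_i\log p_i + (1 - y_i)\log(1 - p_i)\bigr),$$
where $y_i$ is the label from actual and $p_i$ is the predicted probability of class 1 (this formula is added for reference; it is not part of the truncated text above).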
Step12: Like the previous metrics, log_loss clearly distinguishes the ideal, typical and awful cases. But note that the value itself is rather hard to interpret
Step13: Note the difference in weighted_log_loss between the Avoids FP and Avoids FN cases.
ROC and AUC
When building a ROC curve (receiver operating characteristic), the binarization threshold of the probability vector is varied, and quantities that depend on the numbers of FP and FN errors are computed. These quantities are defined so that, if there is a threshold that perfectly separates the classes, the ROC curve passes through a particular point - the upper left corner of the square [0, 1] x [0, 1]. In addition, it always passes through the lower left and upper right corners. The result is a clear visualization of the algorithm's quality. To characterize this visualization numerically, the notion of AUC - the area under the ROC curve - was introduced.
There is a simple and efficient algorithm that computes the ROC curve and AUC in a single pass over the sample, but we will not go into the details here.
Let's build ROC curves for our problems
Step14: The more objects in the sample, the smoother the curve looks (although it is in fact still a step function).
As expected, the curves of all the ideal algorithms pass through the upper left corner. The first plot also shows a typical ROC curve (in practice they usually do not reach the "ideal" corner).
The AUC of the risky algorithm is considerably smaller than that of the cautious one, although the cautious and risky ideal algorithms are indistinguishable by ROC or AUC. So there is no point in trying to increase the gap between the probability intervals of the classes.
The curve becomes skewed when the algorithm is prone to FP or FN errors. However, this cannot be detected from the AUC value alone (the curves can be symmetric about the diagonal from (0, 1) to (1, 0)).
Once the curve has been built, it is convenient to choose a binarization threshold that strikes a compromise between FP and FN. The threshold corresponds to a point on the curve. If we want to avoid FP errors, we should pick a point on the left side of the square (as high as possible); to avoid FN errors - a point on the top side of the square (as far left as possible). All intermediate points correspond to different proportions of FP and FN.
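A minimal sketch of how such curves can be drawn with sklearn (for illustration only; it assumes the actual_1/predicted_1 vectors constructed earlier in the notebook):
from sklearn.metrics import roc_curve, roc_auc_score
fpr, tpr, thresholds = roc_curve(actual_1, predicted_1)
plt.plot(fpr, tpr, label='ROC, AUC = %.3f' % roc_auc_score(actual_1, predicted_1))
plt.plot([0, 1], [0, 1], '--')   # diagonal of a random classifier
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()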
<font color="green" size=5>Programming assignment | Python Code:
import numpy as np
from matplotlib import pyplot as plt
import seaborn
%matplotlib inline
Explanation: Comparison of quality metrics for binary classification
Programming Assignment
In this assignment we will figure out what the difference between various quality metrics is. We will focus on the binary classification problem (with responses 0 and 1), but treat it as the problem of predicting the probability that an object belongs to class 1. Thus, we will work with a real-valued rather than a binary target variable.
The assignment is structured as a demonstration with Programming Assignment elements. You need to run the code that is already written and examine the proposed plots, as well as implement several functions of your own. For grading, write the results of these functions on the specified input data to separate files; this can be done with the write_answer_N functions provided in the tasks, where N is the task number. Upload these files to the system.
To build the plots, we need to import the corresponding modules.
The seaborn library makes the plots look nicer. If you do not want to use it, comment out the third line.
Moreover, the matplotlib and seaborn modules are not needed to complete the Programming Assignment (you can skip the cells that build the plots and look at the pre-built pictures instead).
End of explanation
# draws a single scatter plot
def scatter(actual, predicted, T):
plt.scatter(actual, predicted)
plt.xlabel("Labels")
plt.ylabel("Predicted probabilities")
plt.plot([-0.2, 1.2], [T, T])
plt.axis([-0.1, 1.1, -0.1, 1.1])
# draws several scatter plots in a grid with dimensions shape
def many_scatters(actuals, predicteds, Ts, titles, shape):
plt.figure(figsize=(shape[1]*5, shape[0]*5))
i = 1
for actual, predicted, T, title in zip(actuals, predicteds, Ts, titles):
ax = plt.subplot(shape[0], shape[1], i)
ax.set_title(title)
i += 1
scatter(actual, predicted, T)
Explanation: What the algorithms predict
To compute quality metrics in supervised learning, you only need to know two vectors: the vector of correct answers and the vector of predicted values; we will call them actual and predicted. The actual vector is known from the training sample, and the predicted vector is returned by the prediction algorithm. Today we will not use any classification algorithms; we will simply look at different prediction vectors.
In our formulation, actual consists of zeros and ones, and predicted consists of values from the interval [0, 1] (probabilities of class 1). Such vectors are convenient to show on a scatter plot.
To make the final (already binary) prediction, we need to set a threshold T: all objects with predictions above the threshold are assigned to class 1, and the rest to class 0.
End of explanation
actual_0 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_0 = np.array([ 0.19015288, 0.23872404, 0.42707312, 0.15308362, 0.2951875 ,
0.23475641, 0.17882447, 0.36320878, 0.33505476, 0.202608 ,
0.82044786, 0.69750253, 0.60272784, 0.9032949 , 0.86949819,
0.97368264, 0.97289232, 0.75356512, 0.65189193, 0.95237033,
0.91529693, 0.8458463 ])
plt.figure(figsize=(5, 5))
scatter(actual_0, predicted_0, 0.5)
Explanation: The ideal situation: there exists a threshold T that correctly separates the probabilities corresponding to the two classes. An example of such a situation:
End of explanation
actual_1 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1.])
predicted_1 = np.array([ 0.41310733, 0.43739138, 0.22346525, 0.46746017, 0.58251177,
0.38989541, 0.43634826, 0.32329726, 0.01114812, 0.41623557,
0.54875741, 0.48526472, 0.21747683, 0.05069586, 0.16438548,
0.68721238, 0.72062154, 0.90268312, 0.46486043, 0.99656541,
0.59919345, 0.53818659, 0.8037637 , 0.272277 , 0.87428626,
0.79721372, 0.62506539, 0.63010277, 0.35276217, 0.56775664])
actual_2 = np.array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
predicted_2 = np.array([ 0.07058193, 0.57877375, 0.42453249, 0.56562439, 0.13372737,
0.18696826, 0.09037209, 0.12609756, 0.14047683, 0.06210359,
0.36812596, 0.22277266, 0.79974381, 0.94843878, 0.4742684 ,
0.80825366, 0.83569563, 0.45621915, 0.79364286, 0.82181152,
0.44531285, 0.65245348, 0.69884206, 0.69455127])
many_scatters([actual_0, actual_1, actual_2], [predicted_0, predicted_1, predicted_2],
[0.5, 0.5, 0.5], ["Perfect", "Typical", "Awful algorithm"], (1, 3))
Explanation: The probability intervals for the two classes are perfectly separated by the threshold T = 0.5.
Most often the intervals overlap - then the threshold has to be chosen carefully.
The most wrong-headed algorithm does everything the other way around: it pushes the probabilities of class 0 above the probabilities of class 1. If this happens, it is worth checking whether the labels 0 and 1 were mixed up when the target vector was created from the raw data.
Examples:
End of explanation
# risky ideal algorithm
actual_0r = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_0r = np.array([ 0.23563765, 0.16685597, 0.13718058, 0.35905335, 0.18498365,
0.20730027, 0.14833803, 0.18841647, 0.01205882, 0.0101424 ,
0.10170538, 0.94552901, 0.72007506, 0.75186747, 0.85893269,
0.90517219, 0.97667347, 0.86346504, 0.72267683, 0.9130444 ,
0.8319242 , 0.9578879 , 0.89448939, 0.76379055])
# risky good algorithm
actual_1r = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_1r = np.array([ 0.13832748, 0.0814398 , 0.16136633, 0.11766141, 0.31784942,
0.14886991, 0.22664977, 0.07735617, 0.07071879, 0.92146468,
0.87579938, 0.97561838, 0.75638872, 0.89900957, 0.93760969,
0.92708013, 0.82003675, 0.85833438, 0.67371118, 0.82115125,
0.87560984, 0.77832734, 0.7593189, 0.81615662, 0.11906964,
0.18857729])
many_scatters([actual_0, actual_1, actual_0r, actual_1r],
[predicted_0, predicted_1, predicted_0r, predicted_1r],
[0.5, 0.5, 0.5, 0.5],
["Perfect careful", "Typical careful", "Perfect risky", "Typical risky"],
(2, 2))
Explanation: An algorithm can be cautious and try not to push probabilities far from 0.5, or it can take risks - making predictions close to zero or one.
End of explanation
actual_10 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1.])
predicted_10 = np.array([ 0.29340574, 0.47340035, 0.1580356 , 0.29996772, 0.24115457, 0.16177793,
0.35552878, 0.18867804, 0.38141962, 0.20367392, 0.26418924, 0.16289102,
0.27774892, 0.32013135, 0.13453541, 0.39478755, 0.96625033, 0.47683139,
0.51221325, 0.48938235, 0.57092593, 0.21856972, 0.62773859, 0.90454639, 0.19406537,
0.32063043, 0.4545493 , 0.57574841, 0.55847795 ])
actual_11 = np.array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
predicted_11 = np.array([ 0.35929566, 0.61562123, 0.71974688, 0.24893298, 0.19056711, 0.89308488,
0.71155538, 0.00903258, 0.51950535, 0.72153302, 0.45936068, 0.20197229, 0.67092724,
0.81111343, 0.65359427, 0.70044585, 0.61983513, 0.84716577, 0.8512387 ,
0.86023125, 0.7659328 , 0.70362246, 0.70127618, 0.8578749 , 0.83641841,
0.62959491, 0.90445368])
many_scatters([actual_1, actual_10, actual_11], [predicted_1, predicted_10, predicted_11],
[0.5, 0.5, 0.5], ["Typical", "Avoids FP", "Avoids FN"], (1, 3))
Explanation: The intervals can also be shifted. If an algorithm is afraid of false positive errors, it will more often make predictions close to zero.
Similarly, to avoid false negative errors, it is natural to predict larger probabilities more often.
End of explanation
from sklearn.metrics import precision_score, recall_score, accuracy_score
T = 0.5
print "Алгоритмы, разные по качеству:"
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
[predicted_0 > T, predicted_1 > T, predicted_2 > T],
["Perfect:", "Typical:", "Awful:"]):
print descr, "precision =", precision_score(actual, predicted), "recall =", \
recall_score(actual, predicted), ";",\
"accuracy =", accuracy_score(actual, predicted)
print
print "Осторожный и рискующий алгоритмы:"
for actual, predicted, descr in zip([actual_1, actual_1r],
[predicted_1 > T, predicted_1r > T],
["Typical careful:", "Typical risky:"]):
print descr, "precision =", precision_score(actual, predicted), "recall =", \
recall_score(actual, predicted), ";",\
"accuracy =", accuracy_score(actual, predicted)
print
print "Разные склонности алгоритмов к ошибкам FP и FN:"
for actual, predicted, descr in zip([actual_10, actual_11],
[predicted_10 > T, predicted_11 > T],
["Avoids FP:", "Avoids FN:"]):
print descr, "precision =", precision_score(actual, predicted), "recall =", \
recall_score(actual, predicted), ";",\
"accuracy =", accuracy_score(actual, predicted)
Explanation: We have described different characters of probability vectors. Next we will look at how the metrics evaluate different prediction vectors, so be sure to run the cells that create the vectors for visualization.
Metrics that evaluate binary prediction vectors
There are two typical situations in which machine learning specialists start studying the characteristics of quality metrics:
1. when participating in a competition or solving an applied problem, when the prediction vector is evaluated with a specific metric and one needs to build an algorithm that maximizes this metric.
1. at the stage of formalizing a machine learning problem, when there are requirements from the application domain and one needs to propose a mathematical metric that matches these requirements.
Next we will briefly consider each metric from these two points of view.
Precision and recall; accuracy
Let's start with the metrics that evaluate quality after binarization with the threshold T, that is, metrics that compare two binary vectors: actual and predicted.
Two popular metrics are precision and recall. The first shows how often the algorithm predicts class 1 and turns out to be right, and the second shows how many of the class 1 objects the algorithm has found.
We will also consider the simplest and best-known metric - accuracy; it shows the fraction of correct answers.
Let's find out the advantages and disadvantages of these metrics by trying them on different probability vectors.
End of explanation
from sklearn.metrics import precision_recall_curve
precs = []
recs = []
threshs = []
labels = ["Typical", "Avoids FP", "Avoids FN"]
for actual, predicted in zip([actual_1, actual_10, actual_11],
[predicted_1, predicted_10, predicted_11]):
prec, rec, thresh = precision_recall_curve(actual, predicted)
precs.append(prec)
recs.append(rec)
threshs.append(thresh)
plt.figure(figsize=(15, 5))
for i in range(3):
ax = plt.subplot(1, 3, i+1)
plt.plot(threshs[i], precs[i][:-1], label="precision")
plt.plot(threshs[i], recs[i][:-1], label="recall")
plt.xlabel("threshold")
ax.set_title(labels[i])
plt.legend()
Explanation: All three metrics easily distinguish the simple cases of good and bad algorithms. Note that the metrics take values in [0, 1], which makes them easy to interpret.
The metrics do not care about the magnitudes of the probabilities; they only care about how many objects ended up on the wrong side of the chosen boundary (in this case T = 0.5).
The accuracy metric gives equal weight to false positive and false negative errors, whereas the precision-recall pair unambiguously identifies this difference. In fact, that is exactly what they are used for: to control FP and FN errors.
We measured the three metrics with the threshold fixed at T = 0.5, because it looks optimal for almost all of the pictures. Let's look at the last (and most interesting for these metrics) group of vectors to see how precision and recall change as the threshold increases.
End of explanation
############### Programming assignment: problem 1 ###############
T = 0.65
precision_1 = precision_score(actual_1, predicted_1 > T)
recall_1 = recall_score(actual_1, predicted_1 > T)
precision_10 = precision_score(actual_10, predicted_10 > T)
recall_10 = recall_score(actual_10, predicted_10 > T)
precision_11 = precision_score(actual_11, predicted_11 > T)
recall_11 = recall_score(actual_11, predicted_11 > T)
def write_answer_1(precision_1, recall_1, precision_10, recall_10, precision_11, recall_11):
answers = [precision_1, recall_1, precision_10, recall_10, precision_11, recall_11]
with open("pa_metrics_problem1.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
write_answer_1(precision_1, recall_1, precision_10, recall_10, precision_11, recall_11)
Explanation: As the threshold increases, we make fewer FP errors and more FN errors, so one of the curves rises while the other falls. From such a plot you can pick an optimal threshold at which precision and recall are both acceptable. If no such threshold exists, you need to train a different algorithm.
Note that acceptable precision and recall values are determined by the application domain. For example, in the problem of determining whether a patient has a certain disease (0 - healthy, 1 - sick), false negative errors are avoided by requiring a recall of about 0.9. It is acceptable to tell a person that they are sick and then discover the mistake during further diagnostics; missing the disease is much worse.
<font color="green" size=5>Programming assignment: problem 1. </font> Fix the threshold at T = 0.65; from the plots you can roughly tell what the metrics are on the three selected pairs of vectors (actual, predicted). Compute the exact precision and recall for these three pairs of vectors.
Write the 6 resulting numbers to a text file in the following order:
precision_1 recall_1 precision_10 recall_10 precision_11 recall_11
The digits XXX correspond to the same digits in the variable names actual_XXX and predicted_XXX.
Pass the answer to the write_answer_1 function. Upload the resulting file to the form.
End of explanation
from sklearn.metrics import f1_score
T = 0.5
print "Разные склонности алгоритмов к ошибкам FP и FN:"
for actual, predicted, descr in zip([actual_1, actual_10, actual_11],
[predicted_1 > T, predicted_10 > T, predicted_11 > T],
["Typical:", "Avoids FP:", "Avoids FN:"]):
print descr, "f1 =", f1_score(actual, predicted)
Explanation: F1-score
An obvious drawback of the precision/recall pair is that there are two of them, so it is unclear how to rank algorithms. To avoid this, the F1 metric is used, which is the harmonic mean of precision and recall.
The F1 metric equals 1 if and only if precision = 1 and recall = 1 (an ideal algorithm).
F1 is hard to fool: if one of the two values is small while the other is close to 1 (the plots show that such a combination is sometimes easy to obtain), F1 stays far from 1. The F1 metric is also hard to optimize, because it requires achieving high precision and high recall at the same time.
For example, let's compute F1 for the same set of vectors we plotted above (recall that there one of the curves quickly reaches 1).
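As a quick sanity check (a small sketch, not part of the original assignment), the same value can be computed directly from the harmonic-mean definition and compared with sklearn:
from sklearn.metrics import precision_score, recall_score, f1_score
T = 0.5
binarized = predicted_1 > T                      # binarize the probability vector at threshold T
p = precision_score(actual_1, binarized)
r = recall_score(actual_1, binarized)
f1_manual = 2 * p * r / (p + r)                  # harmonic mean of precision and recall
print("manual F1 = %.4f, sklearn F1 = %.4f" % (f1_manual, f1_score(actual_1, binarized)))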
End of explanation
############### Programming assignment: problem 2 ###############
def f1_scores(a, p):
rv = []
for i in np.arange(1, 11):
t = 0.1 * float(i)
rv.append(f1_score(a, p > t))
return np.argmax(rv) + 1
k_1 = f1_scores(actual_1, predicted_1)
k_10 = f1_scores(actual_10, predicted_10)
k_11 = f1_scores(actual_11, predicted_11)
ks = [k_1, k_10, k_11]
many_scatters([actual_1, actual_10, actual_11], [predicted_1, predicted_10, predicted_11],
np.array(ks)*0.1, ["Typical", "Avoids FP", "Avoids FN"], (1, 3))
def write_answer_2(k_1, k_10, k_11):
answers = [k_1, k_10, k_11]
with open("pa_metrics_problem2.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
write_answer_2(k_1, k_10, k_11)
Explanation: In the two last cases, where one of the paired metrics equals 1, the F1 metric is noticeably smaller than in the first, balanced case.
<font color="green" size=5>Programming assignment: problem 2. </font> Precision and recall are affected both by the character of the probability vector and by the chosen threshold.
For the same pairs (actual, predicted) as in the previous problem, find the optimal thresholds that maximize f1_score. Consider only thresholds of the form T = 0.1 * k with integer k; accordingly, you need to find three values of k. If f1 is maximized at several values of k, report the smallest of them.
Write the values of k you found in the following order:
k_1, k_10, k_11
The digits XXX correspond to the same digits in the variable names actual_XXX and predicted_XXX.
Pass the answer to the write_answer_2 function. Upload the file to the form.
If you store the three k values you found (in the same order) in the variable ks, the code below can be used to visualize the corresponding thresholds:
End of explanation
from sklearn.metrics import log_loss
print "Алгоритмы, разные по качеству:"
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
[predicted_0, predicted_1, predicted_2],
["Perfect:", "Typical:", "Awful:"]):
print descr, log_loss(actual, predicted)
print
print "Осторожный и рискующий алгоритмы:"
for actual, predicted, descr in zip([actual_0, actual_0r, actual_1, actual_1r],
[predicted_0, predicted_0r, predicted_1, predicted_1r],
["Ideal careful", "Ideal risky", "Typical careful:", "Typical risky:"]):
print descr, log_loss(actual, predicted)
print
print "Разные склонности алгоритмов к ошибкам FP и FN:"
for actual, predicted, descr in zip([actual_10, actual_11],
[predicted_10, predicted_11],
["Avoids FP:", "Avoids FN:"]):
print descr, log_loss(actual, predicted)
Explanation: Metrics that evaluate the vectors of class-1 probabilities
The metrics considered so far are convenient to interpret, but when using them we ignore most of the information provided by the algorithm. In some problems the probabilities are needed as such, for example when we predict whether a team will win a football match and the magnitude of the probability determines the size of the bet on that team. Even if we binarize the prediction in the end anyway, it is useful to keep track of the character of the probability vector.
Log_loss
Log_loss computes the likelihood of the labels in actual under the probabilities in predicted, taken with the opposite sign:
$\text{log\_loss}(actual, predicted) = -\frac{1}{n}\sum_{i=1}^{n}\bigl(actual_i \cdot \log(predicted_i) + (1 - actual_i) \cdot \log(1 - predicted_i)\bigr)$, where $n$ is the length of the vectors.
Accordingly, this metric has to be minimized.
Let's compute it on our vectors:
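For reference, the formula above transcribes directly into NumPy; this is only a sketch (sklearn.metrics.log_loss additionally clips probabilities away from exactly 0 and 1):
import numpy as np
def log_loss_manual(act, pred):
    # direct transcription of the formula above; assumes 0 < pred < 1
    act = np.asarray(act, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return -np.mean(act * np.log(pred) + (1.0 - act) * np.log(1.0 - pred))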
End of explanation
############### Programming assignment: problem 3 ###############
def wll_func(act, pred):
n = float(len(act))
return -(1.0 / n) * np.sum(0.3 * act * np.log(pred) + 0.7 * (1.0 - act) * np.log(1.0 - pred))
wll_0 = wll_func(actual_0, predicted_0)
wll_1 = wll_func(actual_1, predicted_1)
wll_2 = wll_func(actual_2, predicted_2)
wll_0r = wll_func(actual_0r, predicted_0r)
wll_1r = wll_func(actual_1r, predicted_1r)
wll_10 = wll_func(actual_10, predicted_10)
wll_11 = wll_func(actual_11, predicted_11)
wll_0, wll_1, wll_2, wll_0r, wll_1r, wll_10, wll_11
def write_answer_3(wll_0, wll_1, wll_2, wll_0r, wll_1r, wll_10, wll_11):
answers = [wll_0, wll_1, wll_2, wll_0r, wll_1r, wll_10, wll_11]
with open("pa_metrics_problem3.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
write_answer_3(wll_0, wll_1, wll_2, wll_0r, wll_1r, wll_10, wll_11)
Explanation: Like the previous metrics, log_loss clearly separates the ideal, typical and bad cases. Note, however, that the value itself is rather hard to interpret: the metric never reaches zero and has no upper bound, so even for an ideal algorithm a single log_loss value alone does not tell you that it is ideal.
On the other hand, this metric does distinguish careful and risky algorithms. As we saw above, in the Typical careful and Typical risky cases the number of errors after binarization at T = 0.5 is roughly the same, and in the Ideal cases there are no errors at all. However, for the classes it guesses wrong in the Typical case, the risky algorithm pays with a larger increase of log_loss than the careful one. Conversely, for correctly guessed classes the risky ideal algorithm gets a smaller log_loss than the careful ideal algorithm.
Thus log_loss is sensitive both to probabilities close to 0 and 1 and to probabilities close to 0.5.
Plain log_loss cannot distinguish FP and FN errors.
However, it is easy to generalize log_loss to the case where FP or FN errors should be penalized more: it is enough to attach a convex combination of coefficients (non-negative and summing to one) to the two likelihood terms. For example, let's penalize false positives:
$\text{weighted\_log\_loss}(actual, predicted) = -\frac{1}{n}\sum_{i=1}^{n}\bigl(0.3 \cdot actual_i \cdot \log(predicted_i) + 0.7 \cdot (1 - actual_i) \cdot \log(1 - predicted_i)\bigr)$
If the algorithm wrongly predicts a high probability of class 1, i.e. the object actually belongs to class 0, then the first term in the parentheses is zero and the second term is counted with the larger weight.
<font color="green" size=5>Programming assignment: problem 3. </font> Write a function that takes the vectors actual and predicted as input and returns the modified log-loss computed by the formula above. Compute its value (call it wll) on the same vectors on which we computed the plain log_loss, and write the results to a file in the following order:
wll_0 wll_1 wll_2 wll_0r wll_1r wll_10 wll_11
The digits XXX correspond to the same digits in the variable names actual_XXX and predicted_XXX.
Pass the answer to the write_answer_3 function. Upload the file to the form.
End of explanation
from sklearn.metrics import roc_curve, roc_auc_score
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
aucs = ""
for actual, predicted, descr in zip([actual_0, actual_1, actual_2],
[predicted_0, predicted_1, predicted_2],
["Perfect", "Typical", "Awful"]):
fpr, tpr, thr = roc_curve(actual, predicted)
plt.plot(fpr, tpr, label=descr)
aucs += descr + ":%3f"%roc_auc_score(actual, predicted) + " "
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplot(1, 3, 2)
for actual, predicted, descr in zip([actual_0, actual_0r, actual_1, actual_1r],
[predicted_0, predicted_0r, predicted_1, predicted_1r],
["Ideal careful", "Ideal Risky", "Typical careful", "Typical risky"]):
fpr, tpr, thr = roc_curve(actual, predicted)
aucs += descr + ":%3f"%roc_auc_score(actual, predicted) + " "
plt.plot(fpr, tpr, label=descr)
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplot(1, 3, 3)
for actual, predicted, descr in zip([actual_1, actual_10, actual_11],
[predicted_1, predicted_10, predicted_11],
["Typical", "Avoids FP", "Avoids FN"]):
fpr, tpr, thr = roc_curve(actual, predicted)
aucs += descr + ":%3f"%roc_auc_score(actual, predicted) + " "
plt.plot(fpr, tpr, label=descr)
plt.xlabel("false positive rate")
plt.ylabel("true positive rate")
plt.legend(loc=4)
plt.axis([-0.1, 1.1, -0.1, 1.1])
print aucs
Explanation: Note the difference in weighted_log_loss between the Avoids FP and Avoids FN cases.
ROC and AUC
To build a ROC curve (receiver operating characteristic), the binarization threshold of the probability vector is varied, and quantities depending on the numbers of FP and FN errors are computed. These quantities are defined so that, when a threshold exists that separates the classes perfectly, the ROC curve passes through a specific point, the top-left corner of the unit square [0, 1] x [0, 1]. In addition, the curve always passes through the bottom-left and top-right corners. The result is an intuitive visualization of the algorithm's quality. To characterize this visualization by a single number, the AUC, the area under the ROC curve, was introduced.
There is a simple and efficient algorithm that computes the ROC curve and the AUC in a single pass over the sample, but we will not go into the details here.
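As an aside (a sketch, not used in the assignment below), the AUC can also be obtained from rank statistics of the predicted scores, which is one way to compute it after a single sort:
import numpy as np
from scipy.stats import rankdata
def auc_from_ranks(act, pred):
    # AUC equals the normalized Mann-Whitney U statistic of the scores
    act = np.asarray(act)
    ranks = rankdata(pred)                        # average ranks handle ties
    n_pos = act.sum()
    n_neg = len(act) - n_pos
    u = ranks[act == 1].sum() - n_pos * (n_pos + 1) / 2.0
    return u / float(n_pos * n_neg)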
Let's build ROC curves for our problems:
End of explanation
############### Programming assignment: problem 4 ###############
def find_t(act, pred):
best = np.array([0.0, 1.0])
fpr, tpr, thr = roc_curve(act, pred)
#print [ (x,y) for x,y in zip(fpr, tpr)], thr
tmp = [np.linalg.norm(np.array([x, y]) - best) for x,y in zip(fpr, tpr)]
return thr[np.argmin(tmp)]
T_0 = find_t(actual_0, predicted_0)
T_1 = find_t(actual_1, predicted_1)
T_2 = find_t(actual_2, predicted_2)
T_0r = find_t(actual_0r, predicted_0r)
T_1r = find_t(actual_1r, predicted_1r)
T_10 = find_t(actual_10, predicted_10)
T_11 = find_t(actual_11, predicted_11)
def write_answer_4(T_0, T_1, T_2, T_0r, T_1r, T_10, T_11):
answers = [T_0, T_1, T_2, T_0r, T_1r, T_10, T_11]
with open("pa_metrics_problem4.txt", "w") as fout:
fout.write(" ".join([str(num) for num in answers]))
write_answer_4(T_0, T_1, T_2, T_0r, T_1r, T_10, T_11)
Explanation: The more objects there are in the sample, the smoother the curve looks (although in fact it is still a step function).
As expected, the curves of all ideal algorithms pass through the top-left corner. The first plot also shows a typical ROC curve (in practice the curves usually do not reach the "ideal" corner).
The AUC of the risky algorithm is noticeably smaller than that of the careful one, even though the careful and risky ideal algorithms are indistinguishable by ROC or AUC. So there is no point in trying to widen the gap between the probability intervals of the two classes.
The curve is skewed when the algorithm tends to make FP or FN errors. However, this cannot be detected from the AUC value alone (the curves may be symmetric with respect to the diagonal from (0, 1) to (1, 0)).
Once the curve is built, it is convenient to choose a binarization threshold that strikes a compromise between FP and FN. A threshold corresponds to a point on the curve. If we want to avoid FP errors, we should pick a point on the left side of the square (as high as possible); to avoid FN errors, a point on the top side of the square (as far to the left as possible). All intermediate points correspond to different proportions of FP and FN.
<font color="green" size=5>Programming assignment: problem 4. </font> On each curve, find the point closest to the top-left corner (in the usual Euclidean distance); this point corresponds to some binarization threshold. Write the thresholds to the output file in the following order:
T_0 T_1 T_2 T_0r T_1r T_10 T_11
The digits XXX correspond to the same digits in the variable names actual_XXX and predicted_XXX.
If several thresholds minimize the distance, choose the largest one.
Pass the answer to the write_answer_4 function. Upload the file to the form.
Note: the roc_curve function returns three values: FPR (the abscissas of the ROC curve points), TPR (the ordinates of the ROC curve points) and thresholds (the thresholds corresponding to those points).
We recommend plotting the point you found with plt.scatter.
End of explanation |
5,403 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
waLBerla Tutorial 01
Step1: We have created an empty playing field consisting of 8-bit integer values, all initialized with zeros.
Now we write a function iterating over all cells, applying the update rule on each one of them. In waLBerla these update functions are called sweeps, since they iterate (sweep) over the complete domain and update each cell separately. A crucial point here is that at each cell, we only have to access neighboring values, and, since we create a new temporary copy, all of these cell updates could in principle happen in parallel.
Step2: This code snippet takes a grid and returns the grid in the next timestep.
For the GameOfLife we use a D2Q9 neighborhood, meaning 2 dimensional ( D2 ) with in total 9 cells (Q9). Since we leave out the center (0,0) we strictly speaking have only 8 cells.
Step3: Lets first initialize a so-called blinker and run a few timesteps
Step4: Our implementation seems to work
Step5: Now lets create an animation of this blinker
Step6: Now lets load some more interesting starting configuration. Here we choose the 'Gosper Gliding Gun' scenario, taken from Wikipedia.
Step7: Step 2
Step8: Having a distributed domain, we can now run the stencil algorithm locally on each block. We can safely do this, since the simulation domain was extended with one ghost layer. After each sweep over the domain the ghost layers have to be synchronized again. | Python Code:
import numpy as np
def makeGrid(shape):
return np.zeros(shape, dtype=np.int8)
print(makeGrid( [5,5] ))
Explanation: waLBerla Tutorial 01: Basic data structures
Preface
This is an interactive Python notebook. The grey cells contain runnable Python code which can be executed with Ctrl+Enter. Make sure to execute all of them in the right order, since they built upon each other. You can also execute them all in the beginning using "Cell->Run all". To get back to a clean state use "Kernel->Restart".
Step 1: Game of Life in Python with numpy
This first tutorial introduces waLBerla's basic data structures,
by implementing a version of <a href="http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life" target="_blank">Conway's Game of Life</a> cellular automaton.
The Game of Life algorithm is formulated on a regular grid of cells. Each cell can be in one of two states: dead or alive. The time evolution of this game is a simple rule how to get to the next cell state by only using cell states of neighboring cells. For details see <a href="http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life" target="_blank">the Wikipedia page</a>.
Before working with waLBerla we first start with a pure Python implementation of this algorithm using the popular <a href="http://www.numpy.org/">numpy package</a>. If you are not familiar with that package, now is a good time to read up on it.
End of explanation
ALIVE = 1
DEAD = 0
neighborhoodD2Q9 = [ (i,j) for i in [-1,0,1]
for j in [-1,0,1]
if i != 0 or j!=0 ]
def gameOfLifeSweep(grid):
temporaryGrid = np.copy(grid)
for i in range(1, grid.shape[0]-1):
for j in range(1, grid.shape[1]-1):
numberOfAliveNeighbors = 0
for neighborCellOffset in neighborhoodD2Q9:
ni = i + neighborCellOffset[0]
nj = j + neighborCellOffset[1]
if temporaryGrid[ni,nj] == ALIVE:
numberOfAliveNeighbors += 1
if numberOfAliveNeighbors < 2 or numberOfAliveNeighbors > 3:
grid[i,j] = DEAD
if numberOfAliveNeighbors == 3:
grid[i,j] = ALIVE
return grid
Explanation: We have created an empty playing field consisting of 8-bit integer values, all initialized with zeros.
Now we write a function iterating over all cells, applying the update rule on each one of them. In waLBerla these update functions are called sweeps, since they iterate (sweep) over the complete domain and update each cell separately. A crucial point here is that at each cell, we only have to access neighboring values, and, since we create a new temporary copy, all of these cell updates could in principle happen in parallel.
End of explanation
print(neighborhoodD2Q9)
Explanation: This code snippet takes a grid and returns the grid in the next timestep.
For the GameOfLife we use a D2Q9 neighborhood, meaning 2 dimensional ( D2 ) with in total 9 cells (Q9). Since we leave out the center (0,0) we strictly speaking have only 8 cells.
End of explanation
grid = makeGrid( [5,5] )
grid[2,1:4] = ALIVE
print ( "Initial Setup:" )
print ( grid )
for t in range(2):
grid = gameOfLifeSweep(grid)
print("After timestep %d: " % (t+1,) )
print(grid)
Explanation: Let's first initialize a so-called blinker and run a few timesteps
End of explanation
from material.matplotlib_setup import * # import matplotlib and configures it to play nicely with iPython notebook
matplotlib.rcParams['image.cmap'] = 'Blues' # switch default colormap
im = plt.imshow(grid, interpolation='none')
Explanation: Our implementation seems to work: the blinker is oscillating (blinking) nicely between these two configurations.
Looking at these configurations in text representation is not a good idea for bigger grids, so let's display our grid using matplotlib. The setup code for matplotlib was put into a separate file called matplotlib_setup.py, which also contains code for creating animations.
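The helpers from that file (e.g. makeImshowAnimation and displayAsHtmlVideo) are used below but not shown in this notebook. Purely as a sketch of what such an animation helper could look like (the name and details here are assumptions, not the actual file contents):
import matplotlib.pyplot as plt
import matplotlib.animation as animation
def makeImshowAnimationSketch(grid, sweepFunction, frames=10):
    # hypothetical helper: advance the cellular automaton once per frame and redraw
    fig = plt.figure()
    im = plt.imshow(grid, interpolation='none', animated=True)
    def updateFigure(frameIndex):
        sweepFunction(grid)          # gameOfLifeSweep modifies the grid in place
        im.set_array(grid)
        return im,
    return animation.FuncAnimation(fig, updateFigure, frames=frames, interval=200, blit=True)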
End of explanation
ani = makeImshowAnimation(grid, gameOfLifeSweep, frames=6)
displayAsHtmlVideo(ani, fps=2)
Explanation: Now let's create an animation of this blinker:
End of explanation
from imageio import imread
grid = imread('material/GosperGliderGun.png', as_gray=True).astype(int)
grid[grid>0] = ALIVE # values are from 0 to 255 - set everything nonzero to ALIVE
ani = makeImshowAnimation(grid, gameOfLifeSweep, frames=6*15)
displayAsHtmlVideo(ani, fps=15)
Explanation: Now let's load a more interesting starting configuration. Here we choose the 'Gosper Gliding Gun' scenario, taken from Wikipedia.
End of explanation
from imageio import imread
import os
# Read the initial scenario
initialConfig = np.rot90( imread('material/GosperGliderGun.png',as_gray=True).astype(int), 3 )
initialConfig[initialConfig>0] = ALIVE # values are from 0 to 255 - set everything nonzero to ALIVE
# %%px
import sys
import waLBerla as wlb
import numpy as np
import os
# For this tutorial we use only one process. The code can easily run with more processes by launching it with mpirun as a Python script
numberOfProcesses = 1
domainSize = (initialConfig.shape[0], initialConfig.shape[1], 1)
# We can either specify the detailed domain partitioning ...
blocks = wlb.createUniformBlockGrid(blocks=(numberOfProcesses, 1, 1),
cellsPerBlock=(domainSize[0]//numberOfProcesses, domainSize[1], domainSize[2]),
periodic=(1,1,1))
# Now put one field (i.e. grid) on each block
wlb.field.addToStorage(blocks, name='PlayingField', dtype=np.int, ghostLayers=1)
# Iterate over local blocks - in our setup we have exactly one block per process - but lets be general
for block in blocks:
offsetInGlobalDomain = blocks.transformLocalToGlobal(block, wlb.Cell(0,0,0))
myRank = wlb.mpi.rank()
print("Block on rank %d: with offset %s" % (myRank, offsetInGlobalDomain[:] ))
Explanation: Step 2: Demonstration with waLBerla's Python bindings
waLBerla is parallelized using MPI (message passing interface). That means that multiple processes are started, possibly on different machines which all execute the same program and communicate by sending messages. In a typical Python environment, we would start multiple Python interpreters all executing the same script using mpirun. The following command would run a script using four processes:
mpirun -np 4 python3 my_waLBerla_script.py
Using multiple processes is not very convenient in an IPython environment. Thus we only demonstrate here with one process. However, this tutorial can easily be extended to multiple processes.
Now let's implement the Game of Life algorithm using waLBerla. waLBerla divides the complete domain into blocks. These blocks can then be distributed to the participating processes. We will set up a simple example here where each process gets one block. waLBerla can also put multiple blocks on one process. This makes sense if the computational load varies between blocks: a process then receives either a few expensive blocks or many cheap ones.
While blocks are the basic unit of load balancing, they also act as containers for distributed data. In this Game of Life example we have to distribute our grid over the blocks, i.e. each block internally stores only part of the complete domain.
The local grids are extended by one ghost layer, which is a shadow copy of the outermost layer of the neighboring block.
All these details are handled by waLBerla.
<img src="material/blocks.png" width="900px"></img>
We only have to specify the number of blocks (which equals the number of processes in our case) and how many cells we want on each block. Each of these sizes has to be a 3-tuple, since waLBerla is inherently a 3D framework. However, we can mimic our 2D setup by choosing a size of 1 for the z coordinate.
End of explanation
import waLBerla.plot as wlbPlt
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from material.matplotlib_setup import *
matplotlib.rcParams['image.cmap'] = 'Blues' # switch default colormap
# Initialize our grid
for block in blocks:
grid = wlb.field.toArray( block['PlayingField'] ).squeeze()
offsetInGlobalDomain = blocks.transformLocalToGlobal(block, wlb.Cell(0,0,0))
blockSize = grid.shape[:2]
xBegin, xEnd = offsetInGlobalDomain[0], offsetInGlobalDomain[0] + grid.shape[0]
yBegin, yEnd = offsetInGlobalDomain[1], offsetInGlobalDomain[1] + grid.shape[1]
grid[:,:] = initialConfig[xBegin:xEnd, yBegin:yEnd]
wlbPlt.scalar_field( blocks, 'PlayingField', wlb.makeSlice[:,:,0] )
communication = wlb.createUniformBufferedScheme( blocks, 'D2Q9')
communication.addDataToCommunicate( wlb.field.createPackInfo( blocks, 'PlayingField') )
def runTimestep():
communication()
for block in blocks:
grid = wlb.field.toArray( block['PlayingField'], with_ghost_layers=True )[:, :, 1]
gameOfLifeSweep( grid )
ani = wlbPlt.scalar_field_animation( blocks, 'PlayingField', wlb.makeSlice[:,:,0], run_function=runTimestep, frames=100 )
displayAsHtmlVideo( ani, fps=30, show=(wlb.mpi.rank()==0) )
Explanation: Having a distributed domain, we can now run the stencil algorithm locally on each block. We can safely do this, since the simulation domain was extended with one ghost layer. After each sweep over the domain the ghost layers have to be synchronized again.
End of explanation |
5,404 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sensitivity analysis with SALib
We have got the single parts now for the sensitivity analysis. We are now using the global sensitivity analysis methods of the Python package SALib, available on
Step1: Model set-up
We use the two-fault model from previous examples and assign parameter ranges with a dictionary
Step2: Define sampling lines
As before, we need to define points in the model (or lines) which we want to evaluate the sensitivity for
Step3: And, again, we "freeze" the base state for later comparison and distance caluclations
Step4: Setting-up the parameter set
For use with SALib, we have to define a parameter set as a text file (maybe there is a different way directly in Python - something to figure out for the future). The sensitivity object has a method to do that automatically
Step5: We now invoke the methods of the SALib library to generate parameter data sets that are required for the type of sensitivity analysis that we want to perform
Step6: The object 'param_values' is a list of samples for the parameters that are defined in the model, in the order of appearance in param_file, e.g.
Step7: Calculating distances for all parameter sets
We now need to create a model realisation for each of these parameter sets and calculate the distance between the realisation and the base model at the position of the defined sampling lines. As we are not (always) interested in keeping the results of all realisations, those steps are combined and only the calculated distance is retained (per default)
Step8: Sensitivity analysis
We can now analyse the sensitivity of the modelled stratigraphy along the defined vertical lines ("drillholes") with respect to the model parameters
Step9: Reading parameter ranges from file
So, now that we have all the required ingredients for the sensitivity analysis, we can make life a bit easier with more automation. First, instead of defining parameters in a dictionary as above, we can actually read them in from a csv file (e.g. saved from Excel as Windows-csv file).
In order to be read in correctly, the header should contain the labels
Step10: The only further aspect we need to define are the sampling lines
Step11: And then we know everything to perform the sensitivity analysis. The relevant steps are combined in one method | Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
import sys, os
import matplotlib.pyplot as plt
import numpy as np
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths correctly below
os.chdir(r'../../../pynoddy/docs/notebooks/')
repo_path = os.path.realpath('../..')
import pynoddy.history
import pynoddy.experiment
rcParams.update({'font.size': 20})
Explanation: Sensitivity analysis with SALib
We now have all the individual parts for the sensitivity analysis in place. We are now using the global sensitivity analysis methods of the Python package SALib, available on:
https://github.com/jdherman/SALib
As a start, we will test the sensitivity of the model at each drillhole position separately. As parameters, we will use the parameters of the fault events: dip, dip direction, and slip.
End of explanation
reload(pynoddy.history)
import pynoddy.experiment.sensitivity_analysis
reload(pynoddy.experiment.sensitivity_analysis)
# Start again with the original model
his_filename = "two_faults_sensi.his"
sa = pynoddy.experiment.sensitivity_analysis.SensitivityAnalysis(history = his_filename)
# Initialise list
param_stats = []
# Add one entry as dictionary with relevant properties:
# for event 2:
param_stats.append({'event' : 2, 'parameter' : 'Dip', 'min' : 55., 'max' : 65.,
'type' : 'normal', 'stdev' : 10., 'mean' : 60., 'initial' : 60.})
param_stats.append({'event' : 2, 'parameter' : 'Dip Direction', 'min' : 85., 'max' : 95.,
'type' : 'normal', 'stdev' : 10., 'mean' : 90., 'initial' : 90.})
param_stats.append({'event' : 2, 'parameter' : 'Slip', 'min' : 900., 'max' : 1100.,
'type' : 'normal', 'stdev' : 500, 'mean' : 1000., 'initial' : 1000.})
# for event 3:
param_stats.append({'event' : 3, 'parameter' : 'Dip', 'min' : 55., 'max' : 65.,
'type' : 'normal', 'stdev' : 10., 'mean' : 60., 'initial' : 60.})
param_stats.append({'event' : 3, 'parameter' : 'Dip Direction', 'min' : 265., 'max' : 275.,
'type' : 'normal', 'stdev' : 10., 'mean' : 270., 'initial' : 270.})
param_stats.append({'event' : 3, 'parameter' : 'Slip', 'min' : 900., 'max' : 1100.,
'type' : 'normal', 'stdev' : 500, 'mean' : 1000., 'initial' : 1000.})
sa.set_parameter_statistics(param_stats)
Explanation: Model set-up
We use the two-fault model from previous examples and assign parameter ranges with a dictionary:
End of explanation
# sa.add_sampling_line(5000, 3500, label = 'centre')
sa.add_sampling_line(2500, 3500, label = 'left')
# sa.add_sampling_line(7500, 3500, label = 'right')
# sa.add_sampling_line(4000, 3500, label = 'compare')
Explanation: Define sampling lines
As before, we need to define points in the model (or lines) which we want to evaluate the sensitivity for:
End of explanation
sa.freeze()
Explanation: And, again, we "freeze" the base state for later comparison and distance calculations:
End of explanation
param_file = "params_file_tmp.txt"
sa.create_params_file(filename = param_file)
Explanation: Setting-up the parameter set
For use with SALib, we have to define a parameter set as a text file (maybe there is a different way directly in Python - something to figure out for the future). The sensitivity object has a method to do that automatically:
End of explanation
# import SALib method
from SALib.sample import saltelli
param_values = saltelli.sample(10, param_file, calc_second_order = True)
Explanation: We now invoke the methods of the SALib library to generate parameter data sets that are required for the type of sensitivity analysis that we want to perform:
End of explanation
param_values[0]
Explanation: The object 'param_values' is a list of samples for the parameters that are defined in the model, in the order of appearance in param_file, e.g.:
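As a quick check (a sketch), the number of generated samples follows SALib's Saltelli scheme, N * (2D + 2) rows for D parameters when second-order indices are requested (here D = 6 and N = 10):
print(np.asarray(param_values).shape)   # expected: (N * (2*D + 2), D)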
End of explanation
distances = sa.determine_distances(param_values = param_values)
# plot(sa.get_model_lines(model_type = 'base'))
plt.plot(sa.get_model_lines(model_type = 'current'))
# Just to check if we actually did get different models:
plt.plot(distances, '.-k')
plt.title("Model distances")
plt.xlabel("Sensitivity step")
plt.ylabel("Distance")
Explanation: Calculating distances for all parameter sets
We now need to create a model realisation for each of these parameter sets and calculate the distance between the realisation and the base model at the position of the defined sampling lines. As we are not (always) interested in keeping the results of all realisations, those steps are combined and only the calculated distance is retained (per default):
End of explanation
# save results
results_file = 'dist_tmp.txt'
np.savetxt(results_file, distances, delimiter=' ')
from SALib.analyze import sobol
Si = sobol.analyze(param_file, results_file,
column = 0,
conf_level = 0.95,
calc_second_order = True,
print_to_console=False)
# create composite matrix for sensitivities
n_params = 6
comp_matrix = np.ndarray(shape = (n_params,n_params))
for j in range(n_params):
for i in range(n_params):
if i == j:
comp_matrix[i,j] = Si['S1'][i]
else:
comp_matrix[i,j] = Si['S2'][i,j]
comp_matrix[j,i] = Si['S2'][i,j]
# print comp_matrix
# define labels for figure: phi = dip, d = dip direction, s = slip, subscript = fault event
label_names = ["","$\phi_1$", "$d_1$", "$s_1$", "$\phi_2$", "$d_2$", "$s_2$"]
# Create figure
fig = plt.figure()
ax = fig.add_subplot(111)
im = ax.imshow(comp_matrix, interpolation='nearest', cmap='RdBu_r',
vmax = np.max(np.abs(comp_matrix)),
vmin = -np.max(np.abs(comp_matrix)),
)
ax.yaxis.set_ticks_position("both")
ax.xaxis.set_ticks_position("top")
ax.set_xticklabels(label_names)
ax.set_yticklabels(label_names)
# ax.set_title("Sensitivities")
ax.set_xlabel("Parameter Sensitivities")
fig.colorbar(im)
plt.tight_layout()
# plt.savefig("two_fault_sensi.png")
Explanation: Sensitivity analysis
We can now analyse the sensitivity of the modelled stratigraphy along the defined vertical lines ("drillholes") with respect to the model parameters:
End of explanation
reload(pynoddy.history)
reload(pynoddy.experiment)
# Start again with the original model
his_filename = "two_faults_sensi.his"
sa = pynoddy.experiment.SensitivityAnalysis(history = his_filename)
sa.load_parameter_file("params_fault_model.csv")
Explanation: Reading parameter ranges from file
So, now that we have all the required ingredients for the sensitivity analysis, we can make life a bit easier with more automation. First, instead of defining parameters in a dictionary as above, we can actually read them in from a csv file (e.g. saved from Excel as Windows-csv file).
In order to be read in correctly, the header should contain the labels:
'event' : event id
'parameter' : Noddy parameter ('Dip', 'Dip Direction', etc.)
'min' : minimum value
'max' : maximum value
'initial' : initial value
In addition, it is possible to define PDF type and parameters. For now, the following settings are supported:
'type' = 'normal'
'stdev' : standard deviation
'mean' : mean value (default: 'initial' value)
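For illustration, a file with exactly these column labels could be created as follows; the rows below are made-up values that simply mirror some of the dictionary entries used earlier, and the real params_fault_model.csv is not shown here:
example_csv = """event,parameter,min,max,initial,type,stdev,mean
2,Dip,55.,65.,60.,normal,10.,60.
2,Dip Direction,85.,95.,90.,normal,10.,90.
3,Slip,900.,1100.,1000.,normal,500.,1000.
"""
with open("params_example.csv", "w") as f:
    f.write(example_csv)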
We can read in the parameters simply with:
End of explanation
# sa.add_sampling_line(5000, 3500, label = 'centre')
sa.add_sampling_line(2500, 3500, label = 'left')
# sa.add_sampling_line(7500, 3500, label = 'right')
# sa.add_sampling_line(4000, 3500, label = 'compare')
Explanation: The only further aspect we need to define are the sampling lines:
End of explanation
sa.perform_analsis(10)
sa.plot_distances()
sa.plot_sensitivity_matrix()
# for event 2:
param_stats.append({'event' : 2, 'parameter' : 'Dip', 'min' : 55., 'max' : 65.,
'type' : 'normal', 'stdev' : 10., 'mean' : 60., 'initial' : 60.})
param_stats.append({'event' : 2, 'parameter' : 'Dip Direction', 'min' : 85., 'max' : 95.,
'type' : 'normal', 'stdev' : 10., 'mean' : 90., 'initial' : 90.})
param_stats.append({'event' : 2, 'parameter' : 'Slip', 'min' : 900., 'max' : 1100.,
'type' : 'normal', 'stdev' : 500, 'mean' : 1000., 'initial' : 1000.})
# for event 3:
param_stats.append({'event' : 3, 'parameter' : 'Dip', 'min' : 55., 'max' : 65.,
'type' : 'normal', 'stdev' : 10., 'mean' : 60., 'initial' : 60.})
param_stats.append({'event' : 3, 'parameter' : 'Dip Direction', 'min' : 265., 'max' : 275.,
'type' : 'normal', 'stdev' : 10., 'mean' : 270., 'initial' : 270.})
param_stats.append({'event' : 3, 'parameter' : 'Slip', 'min' : 900., 'max' : 1100.,
'type' : 'normal', 'stdev' : 500, 'mean' : 1000., 'initial' : 1000.})
sa.param_stats
sa.plot_section(model_type = "base")
plt.plot(sa.get_drillhole_data(4000, 3500))
plt.plot(sa.get_model_lines())
reload(pynoddy.history)
reload(pynoddy.experiment)
sa2 = pynoddy.experiment.Experiment(history = "two_faults_sensi.his")
sa2.write_history("test5.his")
nm = pynoddy.history.NoddyHistory(history = "two_faults_sensi.his")
# nm.determine_events()
nm.write_history("test6.his")
param_values[0]
reload(pynoddy.history)
reload(pynoddy.experiment)
# Start again with the original model
his_filename = "two_faults_sensi.his"
sa = pynoddy.experiment.SensitivityAnalysis(history = his_filename)
sa.freeze()
# sa.change_event_params({3 : {'Slip' : 500.}})
sa.change_event_params({3 : {'Dip' : 15.}})
fig = plt.figure(figsize = (12,6))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
sa.plot_section(ax = ax1, colorbar = False, title = "")
sa.plot_section(ax = ax2, model_type = "base", colorbar = False, title = "")
sa.change_event_params({3 : {'Slip' : 100.}})
sa.plot_section()
# sa.add_sampling_line(5000, 3500, label = 'centre')
# sa.add_sampling_line(2500, 3500, label = 'left')
sa.add_sampling_line(7500, 3500, label = 'right')
# sa.add_sampling_line(4000, 3500, label = 'compare')
plt.plot(sa.get_model_lines(), 'k')
plt.plot(sa.get_model_lines(model_type = "base"), 'b')
pwd
Explanation: And then we know everything to perform the sensitivity analysis. The relevant steps are combined in one method:
End of explanation |
5,405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: Create a function to download and save SRTM images using BMI_topography.
Step2: Make function to plot DEMs and drainage accumulation with shaded relief.
Step3: Compare default Landlab flow accumulator with priority flood flow accumulator
For small DEMs (small buffer size, in degrees), the default flow accumulator is slightly faster than the priority flood flow accumulator. For large DEMs, the priority flood flow accumulator outperforms the default flow accumulator by several orders of magnitude. To test the performance for larger DEM's increase the buffer size (e.g. with 1 degree = 111 km).
Default flow director/accumulator
Step4: Priority flood flow director/accumulator
Calculate flow directions/flow accumulation using the first instance of the flow accumulator
Step5: Priority flood flow director/accumulator
Calculate flow directions/flow accumulation using the second instance of the flow accumulator | Python Code:
import sys, time, os
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
from landlab.components import FlowAccumulator, PriorityFloodFlowRouter, ChannelProfiler
from landlab.io.netcdf import read_netcdf
from landlab.utils import get_watershed_mask
from landlab import imshowhs_grid, imshow_grid
from landlab.io import read_esri_ascii, write_esri_ascii
from bmi_topography import Topography
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Introduction to priority flood component
<hr>
The priority flood flow director is designed to calculate flow properties over large scale grids.
In the following notebook we illustrate how flow accumulation can be calculated for a real DEM downloaded with the BMI_topography data component. Moreover, we demonstrate how shaded relief can be plotted using the imshowhs_grid function.
First we will import all the modules we need.
End of explanation
def get_topo(buffer, north=40.16, south=40.14, east=-105.49, west=-105.51):
params = Topography.DEFAULT.copy()
params["south"] = south - buffer
params["north"] = north + buffer
params["west"] = -105.51 - buffer
params["east"] = -105.49 + buffer
params["output_format"] = "AAIGrid"
params["cache_dir"] = Path.cwd()
dem = Topography(**params)
name = dem.fetch()
props = dem.load()
dim_x = props.sizes["x"]
dim_y = props.sizes["y"]
cells = props.sizes["x"] * props.sizes["y"]
grid, z = read_esri_ascii(name, name="topographic__elevation")
return dim_x, dim_y, cells, grid, z, dem
Explanation: Create a function to download and save SRTM images using BMI_topography.
End of explanation
def plotting(
grid, topo=True, DA=True, hill_DA=False, flow_metric="D8", hill_flow_metric="Quinn"
):
if topo:
azdeg = 200
altdeg = 20
ve = 1
plt.figure()
plot_type = "DEM"
ax = imshowhs_grid(
grid,
"topographic__elevation",
grid_units=("deg", "deg"),
var_name="Topo, m",
cmap="terrain",
plot_type=plot_type,
vertical_exa=ve,
azdeg=azdeg,
altdeg=altdeg,
default_fontsize=12,
cbar_tick_size=10,
cbar_width="100%",
cbar_or="vertical",
bbox_to_anchor=[1.03, 0.3, 0.075, 14],
colorbar_label_y=-15,
colorbar_label_x=0.5,
ticks_km=False,
)
if DA:
# %% Plot first instance of drainage_area
grid.at_node["drainage_area"][grid.at_node["drainage_area"] == 0] = (
grid.dx * grid.dx
)
plot_DA = np.log10(grid.at_node["drainage_area"] * 111e3 * 111e3)
plt.figure()
plot_type = "Drape1"
drape1 = plot_DA
thres_drape1 = None
alpha = 0.5
myfile1 = "temperature.cpt"
cmap1 = "terrain"
ax = imshowhs_grid(
grid,
"topographic__elevation",
grid_units=("deg", "deg"),
cmap=cmap1,
plot_type=plot_type,
drape1=drape1,
vertical_exa=ve,
azdeg=azdeg,
altdeg=altdeg,
thres_drape1=thres_drape1,
alpha=alpha,
default_fontsize=12,
cbar_tick_size=10,
var_name="$log^{10}DA, m^2$",
cbar_width="100%",
cbar_or="vertical",
bbox_to_anchor=[1.03, 0.3, 0.075, 14],
colorbar_label_y=-15,
colorbar_label_x=0.5,
ticks_km=False,
)
props = dict(boxstyle="round", facecolor="white", alpha=0.6)
textstr = flow_metric
ax.text(
0.05,
0.95,
textstr,
transform=ax.transAxes,
fontsize=10,
verticalalignment="top",
bbox=props,
)
if hill_DA:
# Plot second instance of drainage_area (hill_drainage_area)
grid.at_node["hill_drainage_area"][grid.at_node["hill_drainage_area"] == 0] = (
grid.dx * grid.dx
)
plotDA = np.log10(grid.at_node["hill_drainage_area"] * 111e3 * 111e3)
# plt.figure()
# imshow_grid(grid, plotDA,grid_units=("m", "m"), var_name="Elevation (m)", cmap='terrain')
plt.figure()
plot_type = "Drape1"
# plot_type='Drape2'
drape1 = np.log10(grid.at_node["hill_drainage_area"])
thres_drape1 = None
alpha = 0.5
myfile1 = "temperature.cpt"
cmap1 = "terrain"
ax = imshowhs_grid(
grid,
"topographic__elevation",
grid_units=("deg", "deg"),
cmap=cmap1,
plot_type=plot_type,
drape1=drape1,
vertical_exa=ve,
azdeg=azdeg,
altdeg=altdeg,
thres_drape1=thres_drape1,
alpha=alpha,
default_fontsize=10,
cbar_tick_size=10,
var_name="$log^{10}DA, m^2$",
cbar_width="100%",
cbar_or="vertical",
bbox_to_anchor=[1.03, 0.3, 0.075, 14],
colorbar_label_y=-15,
colorbar_label_x=0.5,
ticks_km=False,
)
props = dict(boxstyle="round", facecolor="white", alpha=0.6)
textstr = hill_flow_metric
ax.text(
0.05,
0.95,
textstr,
transform=ax.transAxes,
fontsize=10,
verticalalignment="top",
bbox=props,
)
Explanation: Make function to plot DEMs and drainage accumulation with shaded relief.
End of explanation
# Download or reload topo data with given buffer
dim_x, dim_y, cells, grid_LL, z_LL, dem = get_topo(0.05)
fa_LL = FlowAccumulator(
grid_LL, flow_director="D8", depression_finder="DepressionFinderAndRouter"
)
fa_LL.run_one_step()
# Plot output products
plotting(grid_LL)
Explanation: Compare default Landlab flow accumulator with priority flood flow accumulator
For small DEMs (small buffer size, in degrees), the default flow accumulator is slightly faster than the priority flood flow accumulator. For large DEMs, the priority flood flow accumulator outperforms the default flow accumulator by several orders of magnitude. To test the performance for larger DEMs, increase the buffer size (e.g. 1 degree is about 111 km).
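A crude way to make that comparison yourself (a sketch; absolute timings depend on DEM size and hardware) is to wrap each run_one_step() call in a wall-clock timer:
import time
def time_one_step(component):
    # rough wall-clock timing of a single flow-accumulation step
    start = time.time()
    component.run_one_step()
    return time.time() - start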
Default flow director/accumulator
End of explanation
# Download or reload topo data with given buffer
dim_x, dim_y, cells, grid_PF, z_PF, dem = get_topo(0.05)
# Here, we only calculate flow directions using the first instance of the flow accumulator
flow_metric = "D8"
fa_PF = PriorityFloodFlowRouter(
grid_PF,
surface="topographic__elevation",
flow_metric=flow_metric,
suppress_out=False,
depression_handler="fill",
accumulate_flow=True,
)
fa_PF.run_one_step()
# Plot output products
plotting(grid_PF)
Explanation: Priority flood flow director/accumulator
Calculate flow directions/flow accumulation using the first instance of the flow accumulator
End of explanation
# 3. Priority flood flow director/accumulator
# Download or reload topo data with given buffer
dim_x, dim_y, cells, grid_PF, z_PF, dem = get_topo(0.05)
# For timing compare only single flow
flow_metric = "D8"
hill_flow_metric = "Quinn"
fa_PF = PriorityFloodFlowRouter(
grid_PF,
surface="topographic__elevation",
flow_metric=flow_metric,
suppress_out=False,
depression_handler="fill",
accumulate_flow=True,
separate_hill_flow=True,
accumulate_flow_hill=True,
update_hill_flow_instantaneous=False,
hill_flow_metric=hill_flow_metric,
)
fa_PF.run_one_step()
fa_PF.update_hill_fdfa()
# 4. Plot output products
plotting(grid_PF, hill_DA=True, flow_metric="D8", hill_flow_metric="Quinn")
# Remove downloaded DEM. Uncomment to remove DEM.
# os.remove(dem.fetch())
Explanation: Priority flood flow director/accumulator
Calculate flow directions/flow accumulation using the second instance of the flow accumulator
End of explanation |
5,406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
02. Fetching binary data with urllib and unzipping files with zipfile
If you can get the web address, or URL, of a specific binary file that you found on some website, you can usually download it fairly easily using Python's native urllib library, which is a simple interface for interacting with network resources.
Here, we demonstrate how to use the urllib module to send a request to a server and handle the repsonse. While urllib can do much more than just download files, we'll keep ourselves to its urllib.urlretrieve function, which is designed to fetch remote files, saving them as local files..
We'll use the urllib.urlretrive to download a Census tract shapefile located on the US Census's web server
Step1: Now take a look at the contents of your folder. You'll see a local copy of the zip file!
Step2: This works for photos too...
Step3: So now, let's look at how the zipfile module can unzip it | Python Code:
#Import the two modules
import urllib
import zipfile
#Specify the URL of the resource
theURL = 'https://www2.census.gov/geo/tiger/TIGER2017/TRACT/tl_2017_38_tract.zip'
#Set a local filename to save the file as
localFile = 'tl_2017_38_tract.zip'
#The urlretrieve function downloads a file, saving it as the file name we specify
urllib.urlretrieve(url=theURL,filename=localFile)
Explanation: 02. Fetching binary data with urllib and unzipping files with zipfile
If you can get the web address, or URL, of a specific binary file that you found on some website, you can usually download it fairly easily using Python's native urllib library, which is a simple interface for interacting with network resources.
Here, we demonstrate how to use the urllib module to send a request to a server and handle the response. While urllib can do much more than just download files, we'll keep ourselves to its urllib.urlretrieve function, which is designed to fetch remote files, saving them as local files.
We'll use urllib.urlretrieve to download a Census tract shapefile located on the US Census's web server: https://www2.census.gov/geo/tiger/TIGER2017/TRACT. The one file we'll get is tracts for North Dakota (because it's a fairly small file): tl_2017_38_tract.zip.
We'll also take this opportunity to examine how Python can unzip files with the zipfile library.
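One aside for readers on Python 3, where the same function lives in urllib.request (the rest of this notebook uses the Python 2 style urllib interface as written):
# Python 3 equivalent of the download step below
from urllib.request import urlretrieve
urlretrieve(url='https://www2.census.gov/geo/tiger/TIGER2017/TRACT/tl_2017_38_tract.zip',
            filename='tl_2017_38_tract.zip')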
End of explanation
!dir *.zip
Explanation: Now take a look at the contents of your folder. You'll see a local copy of the zip file!
End of explanation
imgURL = 'https://imgs.xkcd.com/comics/state_borders.png'
urllib.urlretrieve(url=imgURL,filename="map.jpg",);
#Display the file in our notebook
from IPython.display import Image
Image("map.jpg")
Explanation: This works for photos too...
End of explanation
#First open the local zip file as a zipFile object
zipObject = zipfile.ZipFile(localFile)
#Create a folder to hold the file
#Name the folder we'll create the same as the file, without the extension
outFolderName = localFile[:-4]
#We'll use the os module to check if the folder exists, and create it if not
import os
if not os.path.exists(outFolderName):
outFolder = os.mkdir(localFile[:-4])
zipObject.extractall(path=outFolderName)
zipObject.close()
Explanation: So now, let's look at how the zipfile module can unzip it
End of explanation |
5,407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this recipe, we're going to build taxonomic classifiers for amplicon sequencing. We'll do this for 16S using some scikit-learn classifiers.
Step1: We're going to work with the qiime-default-reference so we have easy access to some sequences. For reasons we'll look at below, we're going to load the unaligned reference sequences (which are 97% OTUs) and the aligned reference sequences (which are 85% OTUs). If you want to adapt this recipe to train and test a classifier on other files, just set the variable names below to the file paths that you'd like to use for training.
Step2: Several recent studies of amplicon taxonomic assignment methods (Mizrahi-Man et al. 2013, Werner et al. 2012) have suggested that training Naive Bayes taxonomic classifiers against only the region of a sequence that was amplified, rather than a full length sequence, will give better taxonomic assignment results. So, lets start by slicing our reference sequences by finding some commonly used 16S primers so we train only on the fragment of the 16S that we would amplify in an amplicon survey.
We'll define the forward and reverse primers as skbio.DNA objects. The primers that we're using here are pulled from Supplementary File 1 of Caporaso et al. 2012. Note that we're reverse complementing the reverse primer when we load it here so that it's in the same orientation as our reference sequences.
Step4: The typical way to approach the problem of finding the boundaries of a short sequence in a longer sequence would be to use pairwise alignment. But, we're going to try a different approach here since pairwise alignment is inherently slow (it scales quadratically). Because these are sequencing primers, they're designed to be unique (so there shouldn't be multiple matches of a primer to a sequence), and they're designed to match as many sequences as possible. So let's try using regular expressions to match our sequencing primers in the reference database. Regular expression matching scales lineaerly, so is much faster to apply to many sequences.
First, we'll define a function to generate a regular expression from a Sequence object. This functionality will be in scikit-bio's next official release (it was recently added as part of issue #1005).
Step5: We can then apply this to define a regular expression that will match our forward primer, the following sequence, and then the reverse primer. We can use the resulting matches then to find the region of our sequences that is bound by our forward and reverse primer.
Step6: Next, let's apply this to all of our unaligned sequence and find out how many reference sequences our pattern matches.
Step7: So we're matching only about 80% of our reference sequences with this pattern. The implication for this application is that we'd only know how to slice 80% of our sequences, and as a result, we'd only have 80% of our sequence to train on. In addition to this being a problem because we want to train on as many sequences possible, it's very likely that there are certain taxonomic groups are left out all together. So, using regular expressions this way won't work.
However... this is exactly what multiple sequence alignments are good for. If we could match our primers against aligned reference sequences, then finding matches in 80% of our sequences would give us an idea of how to slice all of our sequences, since the purpose of a multiple sequence alignment is to normalize the position numbers across all of the sequences in a sequence collection. The problem is that the gaps in the alignment would make it harder to match our regular expression, as gaps would show up that disrupt our matches. We can get around this using the ignore parameter to DNA.find_with_regex, which takes a boolean vector (a fancy name for an array or list of boolean values) indicating positions that should be ignore in the regular expression match. Let's try applying our regular expression to the aligned reference sequences and keeping track of where each match starts and stops.
Step8: If we now look at the distribution of the start and stop positions of each regular expression match, we see that each distribution is narrowly focused around certain positions. We can use those to define the region that we want to slice from our reference alignment, and then remove the gaps from all sequences to train our classifiers. | Python Code:
%pylab inline
from __future__ import division
import numpy as np
import pandas as pd
import skbio
import qiime_default_reference
Explanation: In this recipe, we're going to build taxonomic classifiers for amplicon sequencing. We'll do this for 16S using some scikit-learn classifiers.
End of explanation
###
## UPDATE THIS CELL TO USE THE DEFAULT REFERENCE AGAIN!!
###
unaligned_ref_fp = qiime_default_reference.get_reference_sequences()
aligned_ref_fp = "/Users/caporaso/data/gg_13_8_otus/rep_set_aligned/97_otus.fasta" #qiime_default_reference.get_template_alignment()
tax_ref_fp = "/Users/caporaso/data/gg_13_8_otus/taxonomy/97_otu_taxonomy.txt" #qiime_default_reference.get_reference_taxonomy()
Explanation: We're going to work with the qiime-default-reference so we have easy access to some sequences. For reasons we'll look at below, we're going to load the unaligned reference sequences (which are 97% OTUs) and the aligned reference sequences (which are 85% OTUs). If you want to adapt this recipe to train and test a classifier on other files, just set the variable names below to the file paths that you'd like to use for training.
End of explanation
fwd_primer = skbio.DNA("GTGCCAGCMGCCGCGGTAA", {'id':'fwd-primer'})
rev_primer = skbio.DNA("GGACTACHVGGGTWTCTAAT", {'id':'rev-primer'}).reverse_complement()
Explanation: Several recent studies of amplicon taxonomic assignment methods (Mizrahi-Man et al. 2013, Werner et al. 2012) have suggested that training Naive Bayes taxonomic classifiers against only the region of a sequence that was amplified, rather than a full length sequence, will give better taxonomic assignment results. So, lets start by slicing our reference sequences by finding some commonly used 16S primers so we train only on the fragment of the 16S that we would amplify in an amplicon survey.
We'll define the forward and reverse primers as skbio.DNA objects. The primers that we're using here are pulled from Supplementary File 1 of Caporaso et al. 2012. Note that we're reverse complementing the reverse primer when we load it here so that it's in the same orientation as our reference sequences.
End of explanation
def seq_to_regex(seq):
"""Convert a sequence to a regular expression"""
result = []
sequence_class = seq.__class__
for base in str(seq):
if base in sequence_class.degenerate_chars:
result.append('[{0}]'.format(
''.join(sequence_class.degenerate_map[base])))
else:
result.append(base)
return ''.join(result)
Explanation: The typical way to approach the problem of finding the boundaries of a short sequence in a longer sequence would be to use pairwise alignment. But, we're going to try a different approach here since pairwise alignment is inherently slow (it scales quadratically). Because these are sequencing primers, they're designed to be unique (so there shouldn't be multiple matches of a primer to a sequence), and they're designed to match as many sequences as possible. So let's try using regular expressions to match our sequencing primers in the reference database. Regular expression matching scales lineaerly, so is much faster to apply to many sequences.
First, we'll define a function to generate a regular expression from a Sequence object. This functionality will be in scikit-bio's next official release (it was recently added as part of issue #1005).
End of explanation
regex = '({0}.*{1})'.format(seq_to_regex(fwd_primer),
seq_to_regex(rev_primer))
regex
Explanation: We can then apply this to define a regular expression that will match our forward primer, the following sequence, and then the reverse primer. We can use the resulting matches then to find the region of our sequences that is bound by our forward and reverse primer.
End of explanation
seq_count = 0
match_count = 0
for seq in skbio.io.read(unaligned_ref_fp, format='fasta',
constructor=skbio.DNA):
seq_count += 1
for match in seq.find_with_regex(regex):
match_count += 1
match_percentage = (match_count / seq_count) * 100
print('{0} of {1} ({2:.2f}%) sequences have exact matches to the regular expression.'.format(match_count, seq_count, match_percentage))
Explanation: Next, let's apply this to all of our unaligned sequences and find out how many reference sequences our pattern matches.
End of explanation
starts = []
stops = []
for seq in skbio.io.read(aligned_ref_fp, format='fasta',
constructor=skbio.DNA):
for match in seq.find_with_regex(regex, ignore=seq.gaps()):
starts.append(match.start)
stops.append(match.stop)
Explanation: So we're matching only about 80% of our reference sequences with this pattern. The implication for this application is that we'd only know how to slice 80% of our sequences, and as a result, we'd only have 80% of our sequence to train on. In addition to this being a problem because we want to train on as many sequences possible, it's very likely that there are certain taxonomic groups are left out all together. So, using regular expressions this way won't work.
However... this is exactly what multiple sequence alignments are good for. If we could match our primers against aligned reference sequences, then finding matches in 80% of our sequences would give us an idea of how to slice all of our sequences, since the purpose of a multiple sequence alignment is to normalize the position numbers across all of the sequences in a sequence collection. The problem is that the gaps in the alignment would make it harder to match our regular expression, as gaps would show up that disrupt our matches. We can get around this using the ignore parameter to DNA.find_with_regex, which takes a boolean vector (a fancy name for an array or list of boolean values) indicating positions that should be ignored in the regular expression match. Let's try applying our regular expression to the aligned reference sequences and keeping track of where each match starts and stops.
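To make that boolean vector concrete, here is a tiny illustration (a sketch) of what the gaps() method returns:
# gap positions are marked True and will be skipped by find_with_regex
demo = skbio.DNA("AC-G-T")
print(demo.gaps())   # [False False  True False  True False]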
End of explanation
pd.Series(starts).describe()
pd.Series(stops).describe()
locus = slice(int(np.median(starts)), int(np.median(stops)))
locus
subset_fraction = 1.0
kmer_counts = []
seq_ids = []
for seq in skbio.io.read(aligned_ref_fp, format='fasta',
constructor=skbio.DNA):
if np.random.random() > subset_fraction: continue
seq_ids.append(seq.metadata['id'])
sliced_seq = seq[locus].degap()
kmer_counts.append(sliced_seq.kmer_frequencies(8))
from sklearn.feature_extraction import DictVectorizer
X = DictVectorizer().fit_transform(kmer_counts)
taxonomy_level = 7  # number of semicolon-delimited taxonomic ranks to retain
id_to_taxon = {}
with open(tax_ref_fp) as f:
for line in f:
id_, taxon = line.strip().split('\t')
id_to_taxon[id_] = '; '.join(taxon.split('; ')[:taxonomy_level])
y = [id_to_taxon[seq_id] for seq_id in seq_ids]
from sklearn.feature_selection import SelectPercentile
X = SelectPercentile().fit_transform(X, y)
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in later scikit-learn releases
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state=0)
%matplotlib inline
import matplotlib.pyplot as plt
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.figure()
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
plt.ylabel('Known taxonomy')
plt.xlabel('Predicted taxonomy')
plt.tight_layout()
plt.show()
from sklearn.svm import SVC
y_pred = SVC(C=10, kernel='rbf', degree=3,
gamma=0.001).fit(X_train, y_train).predict(X_test)
from sklearn.metrics import confusion_matrix, f1_score
cm = confusion_matrix(y_test, y_pred)
cm_normalized = cm / cm.sum(axis=1)[:, np.newaxis]
plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix')
print("F-score: %1.3f" % f1_score(y_test, y_pred, average='micro'))
from sklearn.naive_bayes import MultinomialNB
y_pred = MultinomialNB().fit(X_train, y_train).predict(X_test)
cm = confusion_matrix(y_test, y_pred)
cm_normalized = cm / cm.sum(axis=1)[:, np.newaxis]
plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix')
print("F-score: %1.3f" % f1_score(y_test, y_pred, average='micro'))
Explanation: If we now look at the distribution of the start and stop positions of each regular expression match, we see that each distribution is narrowly focused around certain positions. We can use those to define the region that we want to slice from our reference alignment, and then remove the gaps from all sequences to train our classifiers.
End of explanation |
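Finally, a hedged sketch (not from the original notebook) of how the trained classifier could be applied to a new sequencing read. It assumes the fitted DictVectorizer, SelectPercentile and classifier objects are kept under the hypothetical names dv, sp and nb, rather than being used anonymously via fit_transform as above:
# Hypothetical sketch, assuming the fitted objects were retained, e.g.
#   dv = DictVectorizer();   X = dv.fit_transform(kmer_counts)
#   sp = SelectPercentile(); X = sp.fit_transform(X, y)
#   nb = MultinomialNB().fit(X_train, y_train)
def classify_read(read, dv, sp, nb):
    # a sequencing read is already the amplified region, so just count its 8-mers
    kmers = read.kmer_frequencies(8)
    features = sp.transform(dv.transform([kmers]))
    return nb.predict(features)[0]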
5,408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Sequential model
Author
Step1: When to use a Sequential model
A Sequential model is appropriate for a plain stack of layers
where each layer has exactly one input tensor and one output tensor.
Schematically, the following Sequential model
Step2: is equivalent to this function
Step3: A Sequential model is not appropriate when
Step4: Its layers are accessible via the layers attribute
Step5: You can also create a Sequential model incrementally via the add() method
Step6: Note that there's also a corresponding pop() method to remove layers
Step7: Also note that the Sequential constructor accepts a name argument, just like
any layer or model in Keras. This is useful to annotate TensorBoard graphs
with semantically meaningful names.
Step8: Specifying the input shape in advance
Generally, all layers in Keras need to know the shape of their inputs
in order to be able to create their weights. So when you create a layer like
this, initially, it has no weights
Step9: It creates its weights the first time it is called on an input, since the shape
of the weights depends on the shape of the inputs
Step10: Naturally, this also applies to Sequential models. When you instantiate a
Sequential model without an input shape, it isn't "built"
Step11: Once a model is "built", you can call its summary() method to display its
contents
Step12: However, it can be very useful when building a Sequential model incrementally
to be able to display the summary of the model so far, including the current
output shape. In this case, you should start your model by passing an Input
object to your model, so that it knows its input shape from the start
Step13: Note that the Input object is not displayed as part of model.layers, since
it isn't a layer
Step14: A simple alternative is to just pass an input_shape argument to your first
layer
Step15: Models built with a predefined input shape like this always have weights (even
before seeing any data) and always have a defined output shape.
In general, it's a recommended best practice to always specify the input shape
of a Sequential model in advance if you know what it is.
A common debugging workflow
Step16: Very practical, right?
What to do once you have a model
Once your model architecture is ready, you will want to
Step17: Here's a similar example that only extract features from one layer | Python Code:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: The Sequential model
Author: fchollet<br>
Date created: 2020/04/12<br>
Last modified: 2020/04/12<br>
Description: Complete guide to the Sequential model.
Setup
End of explanation
# Define Sequential model with 3 layers
model = keras.Sequential(
[
layers.Dense(2, activation="relu", name="layer1"),
layers.Dense(3, activation="relu", name="layer2"),
layers.Dense(4, name="layer3"),
]
)
# Call model on a test input
x = tf.ones((3, 3))
y = model(x)
Explanation: When to use a Sequential model
A Sequential model is appropriate for a plain stack of layers
where each layer has exactly one input tensor and one output tensor.
Schematically, the following Sequential model:
End of explanation
# Create 3 layers
layer1 = layers.Dense(2, activation="relu", name="layer1")
layer2 = layers.Dense(3, activation="relu", name="layer2")
layer3 = layers.Dense(4, name="layer3")
# Call layers on a test input
x = tf.ones((3, 3))
y = layer3(layer2(layer1(x)))
Explanation: is equivalent to this function:
End of explanation
model = keras.Sequential(
[
layers.Dense(2, activation="relu"),
layers.Dense(3, activation="relu"),
layers.Dense(4),
]
)
Explanation: A Sequential model is not appropriate when:
Your model has multiple inputs or multiple outputs
Any of your layers has multiple inputs or multiple outputs
You need to do layer sharing
You want non-linear topology (e.g. a residual connection, a multi-branch
model)
Creating a Sequential model
You can create a Sequential model by passing a list of layers to the Sequential
constructor:
End of explanation
model.layers
Explanation: Its layers are accessible via the layers attribute:
End of explanation
model = keras.Sequential()
model.add(layers.Dense(2, activation="relu"))
model.add(layers.Dense(3, activation="relu"))
model.add(layers.Dense(4))
Explanation: You can also create a Sequential model incrementally via the add() method:
End of explanation
model.pop()
print(len(model.layers)) # 2
Explanation: Note that there's also a corresponding pop() method to remove layers:
a Sequential model behaves very much like a list of layers.
End of explanation
model = keras.Sequential(name="my_sequential")
model.add(layers.Dense(2, activation="relu", name="layer1"))
model.add(layers.Dense(3, activation="relu", name="layer2"))
model.add(layers.Dense(4, name="layer3"))
Explanation: Also note that the Sequential constructor accepts a name argument, just like
any layer or model in Keras. This is useful to annotate TensorBoard graphs
with semantically meaningful names.
End of explanation
layer = layers.Dense(3)
layer.weights # Empty
Explanation: Specifying the input shape in advance
Generally, all layers in Keras need to know the shape of their inputs
in order to be able to create their weights. So when you create a layer like
this, initially, it has no weights:
End of explanation
# Call layer on a test input
x = tf.ones((1, 4))
y = layer(x)
layer.weights # Now it has weights, of shape (4, 3) and (3,)
Explanation: It creates its weights the first time it is called on an input, since the shape
of the weights depends on the shape of the inputs:
End of explanation
model = keras.Sequential(
[
layers.Dense(2, activation="relu"),
layers.Dense(3, activation="relu"),
layers.Dense(4),
]
) # No weights at this stage!
# At this point, you can't do this:
# model.weights
# You also can't do this:
# model.summary()
# Call the model on a test input
x = tf.ones((1, 4))
y = model(x)
print("Number of weights after calling the model:", len(model.weights)) # 6
Explanation: Naturally, this also applies to Sequential models. When you instantiate a
Sequential model without an input shape, it isn't "built": it has no weights
(and calling
model.weights results in an error stating just this). The weights are created
when the model first sees some input data:
End of explanation
model.summary()
Explanation: Once a model is "built", you can call its summary() method to display its
contents:
End of explanation
model = keras.Sequential()
model.add(keras.Input(shape=(4,)))
model.add(layers.Dense(2, activation="relu"))
model.summary()
Explanation: However, it can be very useful when building a Sequential model incrementally
to be able to display the summary of the model so far, including the current
output shape. In this case, you should start your model by passing an Input
object to your model, so that it knows its input shape from the start:
End of explanation
model.layers
Explanation: Note that the Input object is not displayed as part of model.layers, since
it isn't a layer:
End of explanation
model = keras.Sequential()
model.add(layers.Dense(2, activation="relu", input_shape=(4,)))
model.summary()
Explanation: A simple alternative is to just pass an input_shape argument to your first
layer:
End of explanation
model = keras.Sequential()
model.add(keras.Input(shape=(250, 250, 3))) # 250x250 RGB images
model.add(layers.Conv2D(32, 5, strides=2, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(3))
# Can you guess what the current output shape is at this point? Probably not.
# Let's just print it:
model.summary()
# The answer was: (40, 40, 32), so we can keep downsampling...
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(3))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.Conv2D(32, 3, activation="relu"))
model.add(layers.MaxPooling2D(2))
# And now?
model.summary()
# Now that we have 4x4 feature maps, time to apply global max pooling.
model.add(layers.GlobalMaxPooling2D())
# Finally, we add a classification layer.
model.add(layers.Dense(10))
Explanation: Models built with a predefined input shape like this always have weights (even
before seeing any data) and always have a defined output shape.
In general, it's a recommended best practice to always specify the input shape
of a Sequential model in advance if you know what it is.
A common debugging workflow: add() + summary()
When building a new Sequential architecture, it's useful to incrementally stack
layers with add() and frequently print model summaries. For instance, this
enables you to monitor how a stack of Conv2D and MaxPooling2D layers is
downsampling image feature maps:
End of explanation
initial_model = keras.Sequential(
[
keras.Input(shape=(250, 250, 3)),
layers.Conv2D(32, 5, strides=2, activation="relu"),
layers.Conv2D(32, 3, activation="relu"),
layers.Conv2D(32, 3, activation="relu"),
]
)
feature_extractor = keras.Model(
inputs=initial_model.inputs,
outputs=[layer.output for layer in initial_model.layers],
)
# Call feature extractor on test input.
x = tf.ones((1, 250, 250, 3))
features = feature_extractor(x)
Explanation: Very practical, right?
What to do once you have a model
Once your model architecture is ready, you will want to:
Train your model, evaluate it, and run inference. See our
guide to training & evaluation with the built-in loops
Save your model to disk and restore it. See our
guide to serialization & saving.
Speed up model training by leveraging multiple GPUs. See our
guide to multi-GPU and distributed training.
Feature extraction with a Sequential model
Once a Sequential model has been built, it behaves like a Functional API
model. This means that every layer has an input
and output attribute. These attributes can be used to do neat things, like
quickly
creating a model that extracts the outputs of all intermediate layers in a
Sequential model:
End of explanation
initial_model = keras.Sequential(
[
keras.Input(shape=(250, 250, 3)),
layers.Conv2D(32, 5, strides=2, activation="relu"),
layers.Conv2D(32, 3, activation="relu", name="my_intermediate_layer"),
layers.Conv2D(32, 3, activation="relu"),
]
)
feature_extractor = keras.Model(
inputs=initial_model.inputs,
outputs=initial_model.get_layer(name="my_intermediate_layer").output,
)
# Call feature extractor on test input.
x = tf.ones((1, 250, 250, 3))
features = feature_extractor(x)
Explanation: Here's a similar example that only extracts features from one layer:
End of explanation |
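The guide above defers training, evaluation and saving to separate guides; purely as a minimal illustrative sketch (random data and hypothetical shapes, not from the original guide), the typical next step looks like this:
# Minimal sketch of the compile / fit / predict workflow referenced above.
# The data here is random and purely illustrative.
model = keras.Sequential(
    [
        keras.Input(shape=(4,)),
        layers.Dense(8, activation="relu"),
        layers.Dense(3),
    ]
)
model.compile(
    optimizer="adam",
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
x = tf.random.normal((32, 4))
y = tf.random.uniform((32,), maxval=3, dtype=tf.int32)
model.fit(x, y, epochs=2, batch_size=8, verbose=0)
preds = model.predict(x)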
5,409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Supplementary tables
Setup
Step1: Table of non-synonymous variants
One row per alternate allele.
Step2: Table of haplotype tracking variants
One row per variant. Only biallelic variants are included, since these are haplotype-tagging SNPs.
Columns
Step3: Table of haplotypes panel
Step4: Table of haplotype data
Step5: All variation, for PCR flanks | Python Code:
%run setup.ipynb
%matplotlib inline
# load haplotypes
callset_haps = np.load('../data/haps_phase1.npz')
haps = allel.HaplotypeArray(callset_haps['haplotypes'])
pos = allel.SortedIndex(callset_haps['POS'])
n_variants = haps.shape[0]
n_haps = haps.shape[1]
n_variants, n_haps
list(callset_haps)
callset_haps['ANN']
callset_phased = phase1_ar31.callset_phased
sorted(callset_phased['2L/variants'])
# load up haplotype group assignments from hierarchical clustering
hierarchical_group_membership = np.load('../data/hierarchical_cluster_membership.npy')
np.unique(hierarchical_group_membership)
# load up haplotype group assignments from network analysis
network_group_membership = np.load('../data/median_joining_network_membership.npy')
network_group_membership[network_group_membership == ''] = 'WT'
np.unique(network_group_membership)
# load up core haplotypes
with open('../data/core_haps.pkl', mode='rb') as f:
core_haps = pickle.load(f)
Explanation: Supplementary tables
Setup
End of explanation
tbl_variants = etl.frompickle('../data/tbl_variants_phase1.pkl')
tbl_variants.head()
transcript_ids = [
'AGAP004707-RA',
'AGAP004707-RB',
'AGAP004707-RC',
'Davies-C1N2',
'Davies-C3N2',
'Davies-C5N2',
'Davies-C7N2',
'Davies-C8N2',
'Davies-C10N2',
'Davies-C11N2',
'Davies-C1N9',
'Davies-C8N9',
'Davies-C1N9ck'
]
#load the codon map from the blog post (with the header info removed)
md_tbl = etl.fromtsv('../data/domestica_gambiae_map.txt')
md_tbl
dom_fs = pyfasta.Fasta('../data/domestica_gambiae_PROT_MEGA.fas')
#grab the right sample
dom = dom_fs.get('domestica_vgsc')
#remove the '-' from the aligned fasta so the numbering makes sense
dom_fix = [p for p in dom if p != '-']
#check
dom_fix[261-1],dom_fix[1945-1]
# keep only the missense variants
def simplify_missense_effect(v):
if v and v[0] == 'NON_SYNONYMOUS_CODING':
return v[1]
else:
return ''
tbl_variants_missense = (
tbl_variants
.select(lambda row: any(row[t] and row[t][0] == 'NON_SYNONYMOUS_CODING' for t in transcript_ids))
.convert(transcript_ids, simplify_missense_effect)
.capture('AGAP004707-RA', pattern="([0-9]+)", newfields=['gambiae_codon'], include_original=True, fill=[''])
.hashleftjoin(md_tbl, key='gambiae_codon', missing='')
.replace('domestica_codon', '.', '')
.convert('domestica_codon', lambda v: dom_fix[int(v) - 1] + v if v else v)
.cut('CHROM', 'POS', 'REF', 'ALT', 'AC', 'exon', 'domestica_codon',
'AGAP004707-RA', 'AGAP004707-RB', 'AGAP004707-RC', 'Davies-C1N2', 'Davies-C3N2', 'Davies-C5N2',
'Davies-C7N2', 'Davies-C8N2', 'Davies-C10N2', 'Davies-C11N2', 'Davies-C1N9', 'Davies-C8N9', 'Davies-C1N9ck',
'AF_AOM', 'AF_BFM', 'AF_GWA', 'AF_GNS', 'AF_BFS', 'AF_CMS', 'AF_GAS', 'AF_UGS', 'AF_KES',
'FILTER_PASS', 'NoCoverage', 'LowCoverage', 'HighCoverage', 'LowMQ', 'HighMQ0', 'RepeatDUST', 'RepeatMasker', 'RepeatTRF', 'FS', 'HRun', 'QD', 'ReadPosRankSum',
)
)
tbl_variants_missense.displayall()
tbl_variants_missense.nrows()
tbl_variants_missense.tocsv('../data/supp_table_variants_missense.csv')
Explanation: Table of non-synonymous variants
One row per alternate allele.
End of explanation
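As a quick, hypothetical summary of the missense table built above (assuming the etl alias refers to petl, as method names like displayall() and tocsv() suggest), we could count alternate alleles per exon:
# Hypothetical check: number of missense alternate alleles per exon.
tbl_variants_missense.aggregate('exon', len).displayall()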
list(callset_phased['2L/variants'])
region_vgsc
pos = allel.SortedIndex(callset_phased['2L/variants/POS'])
pos
# "_tr" means tracking region, i.e., genome region we'll report data for tracking SNPs
loc_tr = pos.locate_range(region_vgsc.start - 10000, region_vgsc.end + 10000)
loc_tr
pos_tr = pos[loc_tr]
pos_tr
haps_tr = allel.GenotypeArray(callset_phased['2L/calldata/genotype'][loc_tr, :-8]).to_haplotypes()
haps_tr
ac_tr = haps_tr.count_alleles(max_allele=1)
ac_tr
af_tr = ac_tr.to_frequencies()
af_tr
subpops = dict()
for p in 'F1 F2 F3 F4 F5 S1 S2 S3 S4 S5 L1 L2 WT'.split():
subpops[p] = np.nonzero(network_group_membership == p)[0]
sorted((k, len(v)) for k, v in subpops.items())
ac_subpops_tr = haps_tr.count_alleles_subpops(subpops, max_allele=1)
af_subpops_tr = {k: ac.to_frequencies()[:, 1] for k, ac in ac_subpops_tr.items()}
columns = [
('CHROM', np.asarray(['2L'] * haps_tr.shape[0], dtype=object)),
('POS', callset_phased['2L/variants/POS'][loc_tr]),
('REF', callset_phased['2L/variants/REF'][loc_tr].astype('U')),
('ALT', callset_phased['2L/variants/ALT'][loc_tr].astype('U')),
('AC', ac_tr[:, 1]),
('AF', af_tr[:, 1]),
('MAC', ac_tr.min(axis=1)),
('MAF', af_tr.min(axis=1)),
] + sorted(('AF_' + k, v) for k, v in af_subpops_tr.items())
df_tr = pandas.DataFrame.from_items(columns)
df_tr.head()
df_tr_mac = df_tr[df_tr.MAC > 14].reset_index(drop=True)
df_tr_mac.head()
# check how many potentially diagnostic SNPs separate each pair of F haplotype groups
for x, y in itertools.combinations('F1 F2 F3 F4 F5'.split(), 2):
print(x, y, np.count_nonzero(np.abs(df_tr_mac['AF_' + x] - df_tr_mac['AF_' + y]) > 0.98))
# check how many potentially diagnostic SNPs separate each pair of S haplotype groups
for x, y in itertools.combinations('S1 S2 S3 S4 S5'.split(), 2):
print(x, y, np.count_nonzero(np.abs(df_tr_mac['AF_' + x] - df_tr_mac['AF_' + y]) > 0.98))
df_tr_mac.to_csv('../data/supp_table_variants_tracking.csv', index=False, float_format='%.3f')
Explanation: Table of haplotype tracking variants
One row per variant. Only biallelic variants are included, since these are haplotype-tagging SNPs.
Columns:
* CHROM
* POS
* REF
* ALT
* AC
* AF
* AF_F1
* AF_F2
* AF_F3
* AF_F4
* AF_F5
* AF_S1
* AF_S2
* AF_S3
* AF_S4
* AF_S5
* AF_L1
* AF_L2
* AF_WT
End of explanation
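A hypothetical follow-up to the counts above (not in the original notebook): pull out the candidate diagnostic positions themselves for one pair of groups, here F1 versus F2:
# Hypothetical example: near-fixed differences between the F1 and F2 haplotype groups.
diag_f1_f2 = df_tr_mac[np.abs(df_tr_mac.AF_F1 - df_tr_mac.AF_F2) > 0.98]
diag_f1_f2[['POS', 'REF', 'ALT', 'AF_F1', 'AF_F2']].head()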
df_haps = pandas.read_csv('../data/ag1000g.phase1.AR3.1.haplotypes.meta.txt', sep='\t', index_col='index')[:-16]
df_haps.head()
n_haps
core_hap_col = np.empty(n_haps, dtype=object)
for k, s in core_haps.items():
core_hap_col[sorted(s)] = k
core_hap_col
df_haps_out = (
df_haps[['label', 'ox_code', 'population', 'country', 'region', 'sex']]
.rename(columns={'label': 'haplotype_id', 'ox_code': 'sample_id'})
.copy()
)
df_haps_out['core_haplotype'] = core_hap_col
df_haps_out['network_haplotype_group'] = network_group_membership
df_haps_out['hierarchy_haplotype_group'] = hierarchical_group_membership.astype('U')
df_haps_out
df_haps_out[df_haps_out.core_haplotype == 'L1']
df_haps_out[df_haps_out.core_haplotype == 'L2']
pandas.set_option('display.max_rows', 9999)
df_haps_out.groupby(by=('population', 'network_haplotype_group')).count()[['haplotype_id']]
df_haps_out.to_csv('../data/supp_table_haplotype_panel.csv', index=False)
Explanation: Table of haplotypes panel
End of explanation
haps_tr_mac = haps_tr[df_tr.MAC > 14]
haps_tr_mac
df_hap_data = df_tr_mac[['CHROM', 'POS', 'REF', 'ALT']].merge(
pandas.DataFrame(np.asarray(haps_tr_mac), columns=df_haps.label),
left_index=True, right_index=True)
df_hap_data.head()
df_hap_data.to_csv('../data/supp_table_haplotypes_tracking.csv', index=False)
Explanation: Table of haplotype data
End of explanation
callset = phase1_ar31.callset
callset
pos = allel.SortedIndex(callset['2L/variants/POS'])
pos
loc_cs_tr = pos.locate_range(region_vgsc.start - 10000, region_vgsc.end + 10000)
pos_cs_tr = pos[loc_cs_tr]
pos_cs_tr
gt = allel.GenotypeArray(callset['2L/calldata/genotype'][loc_cs_tr])
gt
df_samples = phase1_ar3.df_samples
df_samples.head()
subpops = {p: sorted(df_samples[df_samples.population == p].index.values)
for p in df_samples.population.unique()}
acs = gt.count_alleles_subpops(subpops, max_allele=3)
acs['BFS']
afs = {p: acs[p].to_frequencies()[:, 1:].sum(axis=1) for p in df_samples.population.unique()}
afs
columns = ([
('CHROM', callset['2L/variants/CHROM'][loc_cs_tr].astype('U')),
('POS', callset['2L/variants/POS'][loc_cs_tr]),
('REF', callset['2L/variants/REF'][loc_cs_tr].astype('U')),
('ALT_1', callset['2L/variants/ALT'][loc_cs_tr, 0].astype('U')),
('ALT_2', callset['2L/variants/ALT'][loc_cs_tr, 1].astype('U')),
('ALT_3', callset['2L/variants/ALT'][loc_cs_tr, 2].astype('U')),
('AC_1', callset['2L/variants/AC'][loc_cs_tr, 0]),
('AC_2', callset['2L/variants/AC'][loc_cs_tr, 1]),
('AC_3', callset['2L/variants/AC'][loc_cs_tr, 2]),
('AF_1', callset['2L/variants/AF'][loc_cs_tr, 0]),
('AF_2', callset['2L/variants/AF'][loc_cs_tr, 1]),
('AF_3', callset['2L/variants/AF'][loc_cs_tr, 2]),
('AF_total', callset['2L/variants/AF'][loc_cs_tr, :].sum(axis=1)),
('AN', callset['2L/variants/AN'][loc_cs_tr]),
('FILTER_PASS', callset['2L/variants/FILTER_PASS'][loc_cs_tr]),
] + [('AF_' + p, afs[p]) for p in df_samples.population.unique()] +
[('MAX_POP_AF', np.column_stack([afs[p] for p in df_samples.population.unique()]).max(axis=1))]
)
df_cs_tr = pandas.DataFrame.from_items(columns)
df_cs_tr.head()
df_cs_tr.FILTER_PASS.describe()
df_cs_tr.to_csv('../data/supp_table_variants_all.csv')
Explanation: All variation, for PCR flanks
End of explanation |
5,410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VARLiNGAM
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
Step1: Test data
We create test data consisting of 5 variables.
Step2: Causal Discovery
To run causal discovery, we create a VARLiNGAM object and call the fit method.
Step3: Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery.
Step4: Also, using the adjacency_matrices_ properties, we can see the adjacency matrix as a result of the causal discovery.
Step5: Applying DirectLiNGAM to the residuals_ property, we can estimate the B0 matrix.
Step6: We can draw a causal graph using the utility function.
Step7: Independence between error variables
To check if the LiNGAM assumption is broken, we can get p-values of independence between error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of the independence of the error variables $e_i$ and $e_j$.
Step8: Bootstrap
Bootstrapping
We call bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap sampling.
Step9: Causal Directions
Since BootstrapResult object is returned, we can get the ranking of the causal directions extracted by get_causal_direction_counts() method. In the following sample code, n_directions option is limited to the causal directions of the top 8 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.3 or more.
Step10: We can check the result by utility function.
Step11: Directed Acyclic Graphs
Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the DAGs extracted. In the following sample code, n_dags option is limited to the dags of the top 3 rankings, and min_causal_effect option is limited to causal directions with a coefficient of 0.2 or more.
Step12: We can check the result by utility function.
Step13: Probability
Using the get_probabilities() method, we can get the probability of bootstrapping.
Step14: Total Causal Effects
Using the get_total_causal_effects() method, we can get the list of total causal effects. The total causal effects are returned as a dictionary.
We can display the list nicely by assigning it to pandas.DataFrame. Also, we have replaced the variable index with a label below.
Step15: We can easily perform sorting operations with pandas.DataFrame.
Step16: And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards x1(t).
Step17: Because it holds the raw data of the total causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below. | Python Code:
import numpy as np
import pandas as pd
import graphviz
import lingam
from lingam.utils import make_dot, print_causal_directions, print_dagc
print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__])
np.set_printoptions(precision=3, suppress=True)
np.random.seed(0)
Explanation: VARLiNGAM
Import and settings
In this example, we need to import numpy, pandas, and graphviz in addition to lingam.
End of explanation
B0 = [
[0,-0.12,0,0,0],
[0,0,0,0,0],
[-0.41,0.01,0,-0.02,0],
[0.04,-0.22,0,0,0],
[0.15,0,-0.03,0,0],
]
B1 = [
[-0.32,0,0.12,0.32,0],
[0,-0.35,-0.1,-0.46,0.4],
[0,0,0.37,0,0.46],
[-0.38,-0.1,-0.24,0,-0.13],
[0,0,0,0,0],
]
causal_order = [1, 0, 3, 2, 4]
# data generated from B0 and B1
X = pd.read_csv('data/sample_data_var_lingam.csv')
Explanation: Test data
We create test data consisting of 5 variables.
End of explanation
model = lingam.VARLiNGAM()
model.fit(X)
Explanation: Causal Discovery
To run causal discovery, we create a VARLiNGAM object and call the fit method.
End of explanation
model.causal_order_
Explanation: Using the causal_order_ properties, we can see the causal ordering as a result of the causal discovery.
End of explanation
# B0
model.adjacency_matrices_[0]
# B1
model.adjacency_matrices_[1]
model.residuals_
Explanation: Also, using the adjacency_matrices_ properties, we can see the adjacency matrix as a result of the causal discovery.
End of explanation
dlingam = lingam.DirectLiNGAM()
dlingam.fit(model.residuals_)
dlingam.adjacency_matrix_
Explanation: Applying DirectLiNGAM to the residuals_ property, we can estimate the B0 matrix.
End of explanation
labels = ['x0(t)', 'x1(t)', 'x2(t)', 'x3(t)', 'x4(t)', 'x0(t-1)', 'x1(t-1)', 'x2(t-1)', 'x3(t-1)', 'x4(t-1)']
make_dot(np.hstack(model.adjacency_matrices_), ignore_shape=True, lower_limit=0.05, labels=labels)
Explanation: We can draw a causal graph using the utility function.
End of explanation
p_values = model.get_error_independence_p_values()
print(p_values)
Explanation: Independence between error variables
To check if the LiNGAM assumption is broken, we can get p-values of independence between error variables. The value in the i-th row and j-th column of the obtained matrix shows the p-value of the independence of the error variables $e_i$ and $e_j$.
End of explanation
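As a small hedged follow-up (not part of the original example), one way to flag pairs whose independence is rejected at the 5% level:
# Off-diagonal True entries would suggest the independence assumption is questionable
# for that pair of error variables (no multiple-testing correction is applied here).
suspect = (p_values < 0.05) & ~np.eye(p_values.shape[0], dtype=bool)
print(suspect)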
model = lingam.VARLiNGAM()
result = model.bootstrap(X, n_sampling=100)
Explanation: Bootstrap
Bootstrapping
We call the bootstrap() method instead of fit(). Here, the second argument specifies the number of bootstrap samples.
End of explanation
cdc = result.get_causal_direction_counts(n_directions=8, min_causal_effect=0.3, split_by_causal_effect_sign=True)
Explanation: Causal Directions
Since a BootstrapResult object is returned, we can get the ranking of the extracted causal directions with the get_causal_direction_counts() method. In the following sample code, the n_directions option limits the output to the top 8 causal directions, and the min_causal_effect option restricts it to causal directions with a coefficient of 0.3 or more.
End of explanation
print_causal_directions(cdc, 100, labels=labels)
Explanation: We can check the result by utility function.
End of explanation
dagc = result.get_directed_acyclic_graph_counts(n_dags=3, min_causal_effect=0.2, split_by_causal_effect_sign=True)
Explanation: Directed Acyclic Graphs
Also, using the get_directed_acyclic_graph_counts() method, we can get the ranking of the extracted DAGs. In the following sample code, the n_dags option limits the output to the top 3 DAGs, and the min_causal_effect option restricts it to causal directions with a coefficient of 0.2 or more.
End of explanation
print_dagc(dagc, 100, labels=labels)
Explanation: We can check the result by utility function.
End of explanation
prob = result.get_probabilities(min_causal_effect=0.1)
print('Probability of B0:\n', prob[0])
print('Probability of B1:\n', prob[1])
Explanation: Probability
Using the get_probabilities() method, we can get the bootstrap probability of each causal relation.
End of explanation
causal_effects = result.get_total_causal_effects(min_causal_effect=0.01)
df = pd.DataFrame(causal_effects)
df['from'] = df['from'].apply(lambda x : labels[x])
df['to'] = df['to'].apply(lambda x : labels[x])
df
Explanation: Total Causal Effects
Using the get_total_causal_effects() method, we can get the list of total causal effects. The total causal effects are returned as a dictionary.
We can display the list nicely by assigning it to pandas.DataFrame. Also, we have replaced the variable index with a label below.
End of explanation
df.sort_values('effect', ascending=False).head()
Explanation: We can easily perform sorting operations with pandas.DataFrame.
End of explanation
df[df['to']=='x1(t)'].head()
Explanation: And with pandas.DataFrame, we can easily filter by keywords. The following code extracts the causal direction towards x1(t).
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
from_index = 7 # index of x2(t-1). (index:2)+(n_features:5)*(lag:1) = 7
to_index = 2 # index of x2(t). (index:2)+(n_features:5)*(lag:0) = 2
plt.hist(result.total_effects_[:, to_index, from_index])
Explanation: Because it holds the raw data of the total causal effect (the original data for calculating the median), it is possible to draw a histogram of the values of the causal effect, as shown below.
End of explanation |
5,411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyze a large dataset with Google BigQuery
Learning Objectives
Access an ecommerce dataset
Look at the dataset metadata
Remove duplicate entries
Write and execute queries
Introduction
BigQuery is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without having any infrastructure to manage or needing a database administrator. BigQuery uses SQL and can take advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
We have a publicly available ecommerce dataset that has millions of Google Analytics records for the Google Merchandise Store loaded into a table in BigQuery. In this lab, you use a copy of that dataset. Sample scenarios are provided, from which you look at the data and ways to remove duplicate information. The lab then steps you through further analysis of the data.
BigQuery can be accessed by its own browser-based interface, Google Data Studio, and many third-party tools. In this lab you will use BigQuery directly in notebook cells using the IPython magic command %%bigquery.
The steps you will follow in the lab are analogous to what you would do to prepare data for use in advanced ML operations. You will follow the notebook to experiment with the BigQuery queries provided to analyze the data.
Set up the notebook environment
VERY IMPORTANT
Step1: Explore eCommerce data and identify duplicate records
Scenario
Step2: Next examine how many rows are in the table.
Step3: Now take a quick look at a few rows of data in the table.
Step4: Identify duplicate rows
Seeing a sample amount of data may give you greater intuition for what is included in the dataset. But since the table is quite large, a preview is not likely to render meaningful results. As you scan and scroll through the sample rows you see there is no singular field that uniquely identifies a row, so you need advanced logic to identify duplicate rows.
The query below uses the SQL GROUP BY function on every field and counts (COUNT) where there are rows that have the same values across every field.
If every field is unique, the COUNT will return 1 as there are no other groupings of rows with the exact same value for all fields.
If there is a row with the same values for all fields, they will be grouped together and the COUNT will be greater than 1. The last part of the query is an aggregation filter using HAVING to only show the results that have a COUNT of duplicates greater than 1.
Run the following query to find duplicate records across all columns.
Step5: As you can see there are quite a few "duplicate" records (615) when analyzed with these parameters.
In your own datasets, even if you have a unique key, it is still beneficial to confirm the uniqueness of the rows with COUNT, GROUP BY, and HAVING before you begin your analysis.
Analyze the new all_sessions table
In this section you use a deduplicated table called all_sessions.
Scenario
Step6: The query returns zero records indicating no duplicates exist.
Write basic SQL against the eCommerce data
In this section, you query for insights on the ecommerce dataset.
A good first path of analysis is to find the total unique visitors
The query below determines the total views by counting product_views and the number of unique visitors by counting fullVisitorID.
Step7: The next query shows total unique visitors(fullVisitorID) by the referring site (channelGrouping)
Step8: To find deeper insights in the data, the next query lists the five products with the most views (product_views) from unique visitors. The query counts number of times a product (v2ProductName) was viewed (product_views), puts the list in descending order, and lists the top 5 entries
Step9: Now expand your previous query to include the total number of distinct products ordered and the total number of total units ordered (productQuantity)
Step10: Lastly, expand the query to include the average amount of product per order (total number of units ordered/total number of orders, or SUM(productQuantity)/COUNT(productQuantity)). | Python Code:
import os
import pandas as pd
PROJECT = "<YOUR PROJECT>" #TODO Replace with your project id
os.environ["PROJECT"] = PROJECT
pd.options.display.max_columns = 50
Explanation: Analyze a large dataset with Google BigQuery
Learning Objectives
Access an ecommerce dataset
Look at the dataset metadata
Remove duplicate entries
Write and execute queries
Introduction
BigQuery is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without having any infrastructure to manage or needing a database administrator. BigQuery uses SQL and can take advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
We have a publicly available ecommerce dataset that has millions of Google Analytics records for the Google Merchandise Store loaded into a table in BigQuery. In this lab, you use a copy of that dataset. Sample scenarios are provided, from which you look at the data and ways to remove duplicate information. The lab then steps you through further analysis of the data.
BigQuery can be accessed by its own browser-based interface, Google Data Studio, and many third-party tools. In this lab you will use BigQuery directly in notebook cells using the IPython magic command %%bigquery.
The steps you will follow in the lab are analogous to what you would do to prepare data for use in advanced ML operations. You will follow the notebook to experiment with the BigQuery queries provided to analyze the data.
Set up the notebook environment
VERY IMPORTANT: In the cell below you must replace the text <YOUR PROJECT> with your GCP project ID.
End of explanation
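As a hedged aside (not part of the original lab), the same queries can also be issued through the BigQuery Python client library instead of the %%bigquery cell magic; a minimal sketch, assuming google-cloud-bigquery is installed and application default credentials are configured:
# Minimal client-library equivalent of the %%bigquery magic.
from google.cloud import bigquery

client = bigquery.Client(project=PROJECT)
df = client.query(
    "SELECT COUNT(*) AS n FROM `data-to-insights.ecommerce.all_sessions_raw`"
).to_dataframe()
print(df)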
%%bigquery --project $PROJECT
#standardsql
SELECT *
EXCEPT
(table_catalog, table_schema, is_generated, generation_expression, is_stored,
is_updatable, is_hidden, is_system_defined, is_partitioning_column, clustering_ordinal_position)
FROM `data-to-insights.ecommerce.INFORMATION_SCHEMA.COLUMNS`
WHERE table_name="all_sessions_raw"
Explanation: Explore eCommerce data and identify duplicate records
Scenario: You were provided with Google Analytics logs for an eCommerce website in a BigQuery dataset. The data analyst team created a new BigQuery table of all the raw eCommerce visitor session data. This data tracks user interactions, location, device types, time on page, and details of any transaction. Your ultimate plan is to use this data in an ML capacity to create a model that delivers highly accurate predictions of user behavior to support tailored marketing campaigns.
First, a few notes on BigQuery within a python notebook context. Any cell that starts with %%bigquery (the BigQuery Magic) will be interpreted as a SQL query that is executed on BigQuery, and the result is printed to our notebook.
BigQuery supports two flavors of SQL syntax: legacy SQL and standard SQL. The preferred is standard SQL because it complies with the official SQL:2011 standard. To instruct BigQuery to interpret our syntax as such we start the query with #standardSQL.
Our first query is accessing the BigQuery Information Schema which stores all object-related metadata. In this case we want to see metadata details for the "all_sessions_raw" table.
Tip: To run the current cell you can click the cell and hit shift enter
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*)
FROM `data-to-insights.ecommerce.all_sessions_raw`
Explanation: Next examine how many rows are in the table.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT *
FROM `data-to-insights.ecommerce.all_sessions_raw`
LIMIT 7
Explanation: Now take a quick look at a few rows of data in the table.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*) AS num_duplicate_rows,
*
FROM `data-to-insights.ecommerce.all_sessions_raw`
GROUP BY fullvisitorid,
channelgrouping,
time,
country,
city,
totaltransactionrevenue,
transactions,
timeonsite,
pageviews,
sessionqualitydim,
date,
visitid,
type,
productrefundamount,
productquantity,
productprice,
productrevenue,
productsku,
v2productname,
v2productcategory,
productvariant,
currencycode,
itemquantity,
itemrevenue,
transactionrevenue,
transactionid,
pagetitle,
searchkeyword,
pagepathlevel1,
ecommerceaction_type,
ecommerceaction_step,
ecommerceaction_option
HAVING num_duplicate_rows > 1;
Explanation: Identify duplicate rows
Seeing a sample amount of data may give you greater intuition for what is included in the dataset. But since the table is quite large, a preview is not likely to render meaningful results. As you scan and scroll through the sample rows you see there is no singular field that uniquely identifies a row, so you need advanced logic to identify duplicate rows.
The query below uses the SQL GROUP BY function on every field and counts (COUNT) where there are rows that have the same values across every field.
If every field is unique, the COUNT will return 1 as there are no other groupings of rows with the exact same value for all fields.
If there is a row with the same values for all fields, they will be grouped together and the COUNT will be greater than 1. The last part of the query is an aggregation filter using HAVING to only show the results that have a COUNT of duplicates greater than 1.
Run the following query to find duplicate records across all columns.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT fullvisitorid, # the unique visitor ID
visitid, # a visitor can have multiple visits
date, # session date stored as string YYYYMMDD
time, # time of the individual site hit (can be 0 or more)
v2productname, # not unique since a product can have variants like Color
productsku, # unique for each product
type, # visit and/or event trigger
ecommerceaction_type, # maps to ‘add to cart', ‘completed checkout'
ecommerceaction_step,
ecommerceaction_option,
transactionrevenue, # revenue of the order
transactionid, # unique identifier for revenue bearing transaction
count(*) AS row_count
FROM `data-to-insights.ecommerce.all_sessions`
GROUP BY 1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12
HAVING row_count > 1 # find duplicates
Explanation: As you can see there are quite a few "duplicate" records (615) when analyzed with these parameters.
In your own datasets, even if you have a unique key, it is still beneficial to confirm the uniqueness of the rows with COUNT, GROUP BY, and HAVING before you begin your analysis.
Analyze the new all_sessions table
In this section you use a deduplicated table called all_sessions.
Scenario: Your data analyst team has provided you with a relevant query, and your schema experts have identified the key fields that must be unique for each record per your schema.
Run the query to confirm that no duplicates exist, this time against the "all_sessions" table:
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*) AS product_views,
count(DISTINCT fullvisitorid) AS unique_visitors
FROM `data-to-insights.ecommerce.all_sessions`;
Explanation: The query returns zero records indicating no duplicates exist.
Write basic SQL against the eCommerce data
In this section, you query for insights on the ecommerce dataset.
A good first path of analysis is to find the total unique visitors
The query below determines the total views by counting product_views and the number of unique visitors by counting fullVisitorID.
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT count(DISTINCT fullvisitorid) AS unique_visitors,
channelgrouping
FROM `data-to-insights.ecommerce.all_sessions`
GROUP BY 2
ORDER BY 2 DESC;
Explanation: The next query shows total unique visitors(fullVisitorID) by the referring site (channelGrouping):
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*) AS product_views,
( v2productname ) AS ProductName
FROM `data-to-insights.ecommerce.all_sessions`
WHERE type = 'PAGE'
GROUP BY v2productname
ORDER BY product_views DESC
LIMIT 5;
Explanation: To find deeper insights in the data, the next query lists the five products with the most views (product_views) from unique visitors. The query counts number of times a product (v2ProductName) was viewed (product_views), puts the list in descending order, and lists the top 5 entries:
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*) AS product_views,
count(productquantity) AS orders,
sum(productquantity) AS quantity_product_ordered,
v2productname
FROM `data-to-insights.ecommerce.all_sessions`
WHERE type = 'PAGE'
GROUP BY v2productname
ORDER BY product_views DESC
LIMIT 5;
Explanation: Now expand your previous query to include the total number of distinct products ordered and the total number of total units ordered (productQuantity):
End of explanation
%%bigquery --project $PROJECT
#standardSQL
SELECT count(*) AS product_views,
count(productquantity) AS orders,
sum(productquantity) AS quantity_product_ordered,
sum(productquantity) / Count(productquantity) AS avg_per_order,
v2productname AS productName
FROM `data-to-insights.ecommerce.all_sessions`
WHERE type = 'PAGE'
GROUP BY v2productname
ORDER BY product_views DESC
LIMIT 5;
Explanation: Lastly, expand the query to include the average amount of product per order (total number of units ordered/total number of orders, or SUM(productQuantity)/COUNT(productQuantity)).
End of explanation |
5,412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Plot dynamics functions
Step2: Sample data from the ARHMM
Step3: Below, we visualize each component of the observation variable as a time series. The colors correspond to the latent state. The dotted lines represent the stationary point of the corresponding AR state while the solid lines are the actual observations sampled from the HMM.
Step4: Fit an ARHMM | Python Code:
!pip install -qq git+git://github.com/lindermanlab/ssm-jax-refactor.git
try:
import ssm
except ModuleNotFoundError:
%pip install -qq ssm
import ssm
import copy
import jax.numpy as np
import jax.random as jr
try:
from tensorflow_probability.substrates import jax as tfp
except ModuleNotFoundError:
%pip install -qq tensorflow-probability
from tensorflow_probability.substrates import jax as tfp
try:
from ssm.distributions.linreg import GaussianLinearRegression
except ModuleNotFoundError:
%pip install -qq ssm
from ssm.distributions.linreg import GaussianLinearRegression
from ssm.arhmm import GaussianARHMM
from ssm.utils import find_permutation, random_rotation
from ssm.plots import gradient_cmap # , white_to_color_cmap
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set_style("white")
sns.set_context("talk")
color_names = ["windows blue", "red", "amber", "faded green", "dusty purple", "orange", "brown", "pink"]
colors = sns.xkcd_palette(color_names)
cmap = gradient_cmap(colors)
# Make a transition matrix
num_states = 5
transition_probs = (np.arange(num_states) ** 10).astype(float)
transition_probs /= transition_probs.sum()
transition_matrix = np.zeros((num_states, num_states))
for k, p in enumerate(transition_probs[::-1]):
transition_matrix += np.roll(p * np.eye(num_states), k, axis=1)
plt.imshow(transition_matrix, vmin=0, vmax=1, cmap="Greys")
plt.xlabel("next state")
plt.ylabel("current state")
plt.title("transition matrix")
plt.colorbar()
plt.savefig("arhmm-transmat.pdf")
# Make observation distributions
data_dim = 2
num_lags = 1
keys = jr.split(jr.PRNGKey(0), num_states)
angles = np.linspace(0, 2 * np.pi, num_states, endpoint=False)
theta = np.pi / 25 # rotational frequency
weights = np.array([0.8 * random_rotation(key, data_dim, theta=theta) for key in keys])
biases = np.column_stack([np.cos(angles), np.sin(angles), np.zeros((num_states, data_dim - 2))])
covariances = np.tile(0.001 * np.eye(data_dim), (num_states, 1, 1))
# Compute the stationary points
stationary_points = np.linalg.solve(np.eye(data_dim) - weights, biases)
print(theta / (2 * np.pi) * 360)
print(360 / 5)
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/arhmm_example.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Autoregressive (AR) HMM Demo
Modified from
https://github.com/lindermanlab/ssm-jax-refactor/blob/main/notebooks/arhmm-example.ipynb
This notebook illustrates the use of the auto_regression observation model.
Let $x_t$ denote the observation at time $t$. Let $z_t$ denote the corresponding discrete latent state.
The autoregressive hidden Markov model has the following likelihood,
$$
\begin{align}
x_t \mid x_{t-1}, z_t &\sim
\mathcal{N}\left(A_{z_t} x_{t-1} + b_{z_t}, Q_{z_t} \right).
\end{align}
$$
(Technically, higher-order autoregressive processes with extra linear terms from inputs are also implemented.)
End of explanation
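One brief clarification of the stationary-point computation above (added here for exposition; the derivation is standard): the stationary point of state $k$ is the fixed point of its mean dynamics,
$$
x^* = A_k x^* + b_k \quad\Longrightarrow\quad x^* = (I - A_k)^{-1} b_k,
$$
which is what the np.linalg.solve(np.eye(data_dim) - weights, biases) line computes for all states at once.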
if data_dim == 2:
lim = 5
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(1, num_states, figsize=(3 * num_states, 6))
for k in range(num_states):
A, b = weights[k], biases[k]
dxydt_m = xy.dot(A.T) + b - xy
axs[k].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[k % len(colors)])
axs[k].set_xlabel("$y_1$")
# axs[k].set_xticks([])
if k == 0:
axs[k].set_ylabel("$y_2$")
# axs[k].set_yticks([])
axs[k].set_aspect("equal")
plt.tight_layout()
plt.savefig("arhmm-flow-matrices.pdf")
colors
print(stationary_points)
Explanation: Plot dynamics functions
End of explanation
# Make an Autoregressive (AR) HMM
true_initial_distribution = tfp.distributions.Categorical(logits=np.zeros(num_states))
true_transition_distribution = tfp.distributions.Categorical(probs=transition_matrix)
true_arhmm = GaussianARHMM(
num_states,
transition_matrix=transition_matrix,
emission_weights=weights,
emission_biases=biases,
emission_covariances=covariances,
)
time_bins = 10000
true_states, data = true_arhmm.sample(jr.PRNGKey(0), time_bins)
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
plt.plot(*data[true_states == k].T, "o", color=colors[k], alpha=0.75, markersize=3)
plt.plot(*data[:1000].T, "-k", lw=0.5, alpha=0.2)
plt.xlabel("$y_1$")
plt.ylabel("$y_2$")
# plt.gca().set_aspect("equal")
plt.savefig("arhmm-samples-2d.pdf")
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
ndx = true_states == k
data_k = data[ndx]
T = 12
data_k = data_k[:T, :]
plt.plot(data_k[:, 0], data_k[:, 1], "o", color=colors[k], alpha=0.75, markersize=3)
for t in range(T):
plt.text(data_k[t, 0], data_k[t, 1], t, color=colors[k], fontsize=12)
# plt.plot(*data[:1000].T, '-k', lw=0.5, alpha=0.2)
plt.xlabel("$y_1$")
plt.ylabel("$y_2$")
# plt.gca().set_aspect("equal")
plt.savefig("arhmm-samples-2d-temporal.pdf")
print(biases)
print(stationary_points)
colors
Explanation: Sample data from the ARHMM
End of explanation
lim
# Plot the data and the smoothed data
plot_slice = (0, 200)
lim = 1.05 * abs(data).max()
plt.figure(figsize=(8, 6))
plt.imshow(
true_states[None, :],
aspect="auto",
cmap=cmap,
vmin=0,
vmax=len(colors) - 1,
extent=(0, time_bins, -lim, (data_dim) * lim),
)
Ey = np.array(stationary_points)[true_states]
for d in range(data_dim):
plt.plot(data[:, d] + lim * d, "-k")
plt.plot(Ey[:, d] + lim * d, ":k")
plt.xlim(plot_slice)
plt.xlabel("time")
# plt.yticks(lim * np.arange(data_dim), ["$y_{{{}}}$".format(d+1) for d in range(data_dim)])
plt.ylabel("observations")
plt.tight_layout()
plt.savefig("arhmm-samples-1d.pdf")
data.shape
data[:10, :]
Explanation: Below, we visualize each component of the observation variable as a time series. The colors correspond to the latent state. The dotted lines represent the stationary point of the corresponding AR state while the solid lines are the actual observations sampled from the HMM.
End of explanation
# Now fit an HMM to the data
key1, key2 = jr.split(jr.PRNGKey(0), 2)
test_num_states = num_states
initial_distribution = tfp.distributions.Categorical(logits=np.zeros(test_num_states))
transition_distribution = tfp.distributions.Categorical(logits=np.zeros((test_num_states, test_num_states)))
emission_distribution = GaussianLinearRegression(
weights=np.tile(0.99 * np.eye(data_dim), (test_num_states, 1, 1)),
bias=0.01 * jr.normal(key2, (test_num_states, data_dim)),
scale_tril=np.tile(np.eye(data_dim), (test_num_states, 1, 1)),
)
arhmm = GaussianARHMM(test_num_states, data_dim, num_lags, seed=jr.PRNGKey(0))
lps, arhmm, posterior = arhmm.fit(data, method="em")
# Plot the log likelihoods against the true likelihood, for comparison
true_lp = true_arhmm.marginal_likelihood(data)
plt.plot(lps, label="EM")
plt.plot(true_lp * np.ones(len(lps)), ":k", label="True")
plt.xlabel("EM Iteration")
plt.ylabel("Log Probability")
plt.legend(loc="lower right")
plt.show()
# # Find a permutation of the states that best matches the true and inferred states
# most_likely_states = posterior.most_likely_states()
# arhmm.permute(find_permutation(true_states[num_lags:], most_likely_states))
# posterior.update()
# most_likely_states = posterior.most_likely_states()
if data_dim == 2:
lim = abs(data).max()
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(2, max(num_states, test_num_states), figsize=(3 * num_states, 6))
for i, model in enumerate([true_arhmm, arhmm]):
for j in range(model.num_states):
dist = model._emissions._distribution[j]
A, b = dist.weights, dist.bias
dxydt_m = xy.dot(A.T) + b - xy
axs[i, j].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[j % len(colors)])
axs[i, j].set_xlabel("$x_1$")
axs[i, j].set_xticks([])
if j == 0:
axs[i, j].set_ylabel("$x_2$")
axs[i, j].set_yticks([])
axs[i, j].set_aspect("equal")
plt.tight_layout()
plt.savefig("argmm-flow-matrices-true-and-estimated.pdf")
if data_dim == 2:
lim = abs(data).max()
x = np.linspace(-lim, lim, 10)
y = np.linspace(-lim, lim, 10)
X, Y = np.meshgrid(x, y)
xy = np.column_stack((X.ravel(), Y.ravel()))
fig, axs = plt.subplots(1, max(num_states, test_num_states), figsize=(3 * num_states, 6))
for i, model in enumerate([arhmm]):
for j in range(model.num_states):
dist = model._emissions._distribution[j]
A, b = dist.weights, dist.bias
dxydt_m = xy.dot(A.T) + b - xy
axs[j].quiver(xy[:, 0], xy[:, 1], dxydt_m[:, 0], dxydt_m[:, 1], color=colors[j % len(colors)])
axs[j].set_xlabel("$y_1$")
axs[j].set_xticks([])
if j == 0:
axs[j].set_ylabel("$y_2$")
axs[j].set_yticks([])
axs[j].set_aspect("equal")
plt.tight_layout()
plt.savefig("arhmm-flow-matrices-estimated.pdf")
# Plot the true and inferred discrete states
plot_slice = (0, 1000)
plt.figure(figsize=(8, 4))
plt.subplot(211)
plt.imshow(true_states[None, num_lags:], aspect="auto", interpolation="none", cmap=cmap, vmin=0, vmax=len(colors) - 1)
plt.xlim(plot_slice)
plt.ylabel("$z_{\\mathrm{true}}$")
plt.yticks([])
plt.subplot(212)
# plt.imshow(most_likely_states[None,: :], aspect="auto", cmap=cmap, vmin=0, vmax=len(colors)-1)
plt.imshow(posterior.expected_states[0].T, aspect="auto", interpolation="none", cmap="Greys", vmin=0, vmax=1)
plt.xlim(plot_slice)
plt.ylabel("$z_{\\mathrm{inferred}}$")
plt.yticks([])
plt.xlabel("time")
plt.tight_layout()
plt.savefig("arhmm-state-est.pdf")
# Sample the fitted model
sampled_states, sampled_data = arhmm.sample(jr.PRNGKey(0), time_bins)
fig = plt.figure(figsize=(8, 8))
for k in range(num_states):
plt.plot(*sampled_data[sampled_states == k].T, "o", color=colors[k], alpha=0.75, markersize=3)
# plt.plot(*sampled_data.T, '-k', lw=0.5, alpha=0.2)
plt.plot(*sampled_data[:1000].T, "-k", lw=0.5, alpha=0.2)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
# plt.gca().set_aspect("equal")
plt.savefig("arhmm-samples-2d-estimated.pdf")
Explanation: Fit an ARHMM
End of explanation |
5,413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Agents
Agents are objects having a strategy, a vocabulary, and an ID (this last attribute is not important for the moment).
Step1: Let's create an agent. Vocabulary and strategy are created at the same time.
Step2: We can get visuals of agent objects from strategy and vocabulary visuals, with same syntax. | Python Code:
import lib.ngagent as ngagent
Explanation: Agents
Agents are objects having a strategy, a vocabulary, and an ID (this last attribute is not important for the moment).
End of explanation
ag_cfg = {
'agent_id':'test',
'voc_cfg':{
'voc_type':'sparse_matrix',
'M':5,
'W':10
},
'strat_cfg':{
'strat_type':'naive',
'voc_update':'Minimal'
}
}
testagent=ngagent.Agent(**ag_cfg)
testagent
print(testagent)
import random
M=ag_cfg['voc_cfg']['M']
W=ag_cfg['voc_cfg']['W']
for i in range(0,15):
k=random.randint(0,M-1)
l=random.randint(0,W-1)
testagent._vocabulary.add(k,l,1)
print(testagent)
Explanation: Let's create an agent. Vocabulary and strategy are created at the same time.
End of explanation
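Before moving on to the visualisations below, here is a small hypothetical variation (not in the original notebook) showing how the same configuration structure can be reused for a second agent with a larger word space:
# Hypothetical second agent: same strategy, wider word space.
ag_cfg2 = dict(ag_cfg, agent_id='test2',
               voc_cfg={'voc_type': 'sparse_matrix', 'M': 5, 'W': 20})
other_agent = ngagent.Agent(**ag_cfg2)
print(other_agent)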
testagent.visual()
testagent.visual("hom")
testagent.visual()
testagent.visual("syn")
testagent.visual()
testagent.visual("pick_mw",iterr=500)
testagent.visual()
testagent.visual("guess_m",iterr=500)
testagent.visual()
testagent.visual("pick_w",iterr=500)
Explanation: We can get visuals of agent objects via the strategy and vocabulary visuals, using the same syntax.
End of explanation |
5,414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
classify reviews
This notebook describes the binary classification of Yelp hotel reviews on whether or not they are dog related.
Step1: Connect to DB
Step2: Restore BF Reviews
Step3: Restore Yelp Reviews
Step4: Add a New Column Stating the Review Type
Step6: Update the yelp_reviews SQL Table with the Dog Friendly Data
Step7: test updating the ta review category
This section tests updating the review_category column without deleting the entire table. | Python Code:
import numpy as np
from time import time
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import RidgeClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import Perceptron
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.naive_bayes import BernoulliNB, MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neighbors import NearestCentroid
from sklearn.utils.extmath import density
from sklearn import metrics
import pandas as pd
import connect_aws_db as cadb
%matplotlib inline
Explanation: classify reviews
This notebook describes the binary classification of Yelp hotel reviews on whether or not they are dog related.
End of explanation
engine = cadb.connect_aws_db(write_unicode=True)
Explanation: Connect to DB
End of explanation
cmd = "SELECT review_id, review_rating, review_text FROM bf_reviews"
bfdf = pd.read_sql_query(cmd, engine)
print(len(bfdf))
bfdf.head(5)
len(bfdf[bfdf['review_text'].str.len() > 500])
num_cities = 'all'
if num_cities == 'all':  # compare string values with ==, not identity (`is`)
print('hello')
Explanation: Restore BF Reviews
End of explanation
cmd = "SELECT * FROM yelp_reviews"
yelpdf = pd.read_sql_query(cmd, engine)
print(len(yelpdf))
yelpdf.head(5)
yelp_review_data = yelpdf['review_text'].values
train_data = np.hstack((bfdf['review_text'].values[:1500],
yelpdf['review_text'].values[:1500]))
len(train_data)
labels = ['dog'] * 1500
labels.extend(['general'] * 1500)
y_train = labels
t0 = time()
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,
stop_words='english')
X_train = vectorizer.fit_transform(train_data)
duration = time() - t0
print('vectorized in {:.2f} seconds.'.format(duration))
print(X_train.shape)
feature_names = np.asarray(vectorizer.get_feature_names())
len(feature_names)
penalty = 'l2'
clf = LinearSVC(loss='l2', penalty=penalty, dual=False, tol=1e-3)
print(clf)
clf.fit(X_train, y_train)
#yelp_review_data[:10]
X_yrevs = vectorizer.transform(yelp_review_data)
pred = clf.predict(X_yrevs)
pred.shape
# print the number of yelp hotel reviews that are identified as dog reviews:
len(np.where(pred == 'dog')[0])
ydogrevs = np.where(pred == 'dog')[0]
yelp_review_data[ydogrevs[4]]
yelp_review_data[ydogrevs[5]]
ygenrevs = np.where(pred == "general")[0]
ygenrevs
yelp_review_data[ygenrevs[4]]
print(len(pred))
print(len(yelpdf))
pred[:10]
Explanation: Restore Yelp Reviews
End of explanation
yelpdf['review_category'] = pred
Explanation: Add a New Column Stating the Review Type
End of explanation
# conn = engine.connect()
# cmd = "ALTER TABLE yelp_reviews "
# cmd += "ADD review_category VARCHAR(56)"
# print(cmd)
# result = conn.execute(cmd)
# cmd = "UPDATE TABLE yelp_reviews "
# cmd += "SET review_category = ('"
# cmd += "','".join(pred)+"') "
# cmd += "WHERE yelp_review_id = ('"
# cmd += "','".join(yelpdf['yelp_review_id'].values)+"')"
# print(cmd[:500])
# print(cmd[-50:])
#result = conn.execute(cmd)
cmd = "DROP TABLE yelp_reviews"
result = conn.execute(cmd)
cmd = """
CREATE TABLE yelp_reviews
(
rev_id MEDIUMINT AUTO_INCREMENT,
business_id VARCHAR(256),
yelp_review_date DATE,
yelp_review_id VARCHAR(256),
review_rating INT,
review_text VARCHAR(5000),
user_id VARCHAR(256),
review_category VARCHAR(56),
PRIMARY KEY (rev_id)
)
"""
result = conn.execute(cmd)
yelpdf.to_sql('yelp_reviews', engine, if_exists='append', index=False)
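# Hedged alternative (a sketch, not the approach used above): rather than dropping
# and re-writing the whole table, the predicted categories could be written in place
# with one UPDATE per review id, in the same string-formatted SQL style:
# for rid, category in zip(yelpdf['yelp_review_id'].values, pred):
#     conn.execute("UPDATE yelp_reviews SET review_category = '{}' "
#                  "WHERE yelp_review_id = '{}'".format(category, rid))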
Explanation: Update the yelp_reviews SQL Table with the Dog Friendly Data
End of explanation
conn = engine.connect()
cmd = "SELECT biz_review_id, review_text FROM ta_reviews limit 3"
res = conn.execute(cmd)
dat = res.fetchall()
dat
for row in dat:  # iterate the rows fetched above, not the stale `result` handle
print(row)
bizids = [str(el[0]) for el in dat]
len(bizids)
cats = ['doggies', 'giraffes', 'random']
cmd = 'UPDATE ta_reviews SET review_category = NULL '
cmd += 'WHERE biz_review_id in ('+(',').join(bizids)+')'
cmd
res = conn.execute(cmd)
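# Hedged sketch completing the test: assign one of the sample categories defined
# above (`cats`) to each of the three sample rows (`bizids`), again without
# rebuilding the table. `biz_review_id` is numeric, so its value is not quoted.
for biz_id, cat in zip(bizids, cats):
    conn.execute("UPDATE ta_reviews SET review_category = '{}' "
                 "WHERE biz_review_id = {}".format(cat, biz_id))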
len(bfdf)
dids = bfdf[bfdf['review_rating'] == 3]['review_id'].values
dids[:5]
Explanation: test updating the ta review category
This section tests updating the review_category column without deleting the entire table.
End of explanation |
5,415 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data cleaning
Because the Domestic % and International % columns end with %, and their data type is object, they need to be converted to float.
Step1: For the score columns, RT Score and Metacritic Score use a 100-point scale, but IMDB Score uses a 10-point scale, so IMDB Score is converted to a 100-point scale.
Step2: Data visualization
How do the Pixar films fare across each of the major review sites?
Step3: How are the average ratings from each review site across all the movies distributed?
Step4: How has the ratio of where the revenue comes from changed since the first movie? Now that Pixar is better known internationally, is more revenue being made internationally for newer movies?
Step5: Is there any correlation between the number of Oscars a movie was nominated for and the number it actually won | Python Code:
pixar_movies['Domestic %'] = pixar_movies['Domestic %'].str.rstrip('%').astype('float')
pixar_movies['International %'] = pixar_movies['International %'].str.rstrip('%').astype('float')
Explanation: Data cleaning
Because the Domestic % and International % columns end with %, and their data type is object, they need to be converted to float.
End of explanation
pixar_movies['IMDB Score'] = pixar_movies['IMDB Score'] * 10
filtered_pixar = pixar_movies.dropna()
pixar_movies.set_index('Movie', inplace=True)
filtered_pixar.set_index('Movie', inplace=True)
pixar_movies
Explanation: For the score columns, RT Score and Metacritic Score use a 100-point scale, but IMDB Score uses a 10-point scale, so IMDB Score is converted to a 100-point scale.
End of explanation
critics_reviews = pixar_movies[['RT Score', 'IMDB Score', 'Metacritic Score']]
critics_reviews.plot(figsize=(10,6))
plt.show()
Explanation: Data visualization
How do the Pixar films fare across each of the major review sites?
End of explanation
critics_reviews.plot(kind='box', figsize=(9,5))
plt.show()
Explanation: How are the average ratings from each review site across all the movies distributed?
End of explanation
revenue_proportions = filtered_pixar[['Domestic %', 'International %']]
revenue_proportions.plot(kind='bar', stacked=True, figsize=(12,6))
#sns.plt.show()
plt.show()
Explanation: How has the ratio of where the revenue comes from changed since the first movie? Now that Pixar is better known internationally, is more revenue being made internationally for newer movies?
End of explanation
filtered_pixar[['Oscars Nominated', 'Oscars Won']].plot(kind='bar', figsize=(12,6))
plt.show()
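# Hedged follow-up (an addition, not part of the original flow): put a number on the
# relationship the bar chart suggests, using pandas' default Pearson correlation.
print(filtered_pixar['Oscars Nominated'].corr(filtered_pixar['Oscars Won']))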
Explanation: Is there any correlation between the number of Oscars a movie was nominated for and the number it actually won
End of explanation |
5,416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In Codice Ratio Convolutional Neural Network - Test on words
In this notebook we are going to define 4 pipelines and test them on words. This is just a preliminary test on 3 words with 2 groups of cuts each. After this one we will perform another test on a full page.
Step1: Loading test sets
Step2: Constants
Step3: Helper functions for model definition and load
Step4: Model load
Step5: Helper functions for prediction
Step6: Experiment 1 ("asseras")
Step7: Bad cuts
Step8: Pipeline 1 (22 networks as segmentator and classifier)
Step9: Possible word
Step10: Possible word
Step11: Possible word
Step12: Possible word
Step13: Pipeline 1 (22 networks as segmentator and classifier)
Step14: Possible word
Step15: Possible word
Step16: Possible word
Step17: Possible word
Step18: Bad cuts
Step19: Pipeline 1 (22 networks as segmentator and classifier)
Step20: Possible word
Step21: Possible word
Step22: Possible word
Step23: Possible word
Step24: Pipeline 1 (22 networks as segmentator and classifier)
Step25: Possible word
Step26: Possible word
Step27: Possible word
Step28: Possible word
Step29: Bad cuts
Step30: Pipeline 1 (22 networks as segmentator and classifier)
Step31: Possible word
Step32: Possible word
Step33: Possible word
Step34: Possible word
Step35: Pipeline 1 (22 networks as segmentator and classifier)
Step36: Possible word
Step37: Possible word
Step38: Possible word | Python Code:
import os.path
from IPython.display import Image
import time
from util import Util
u = Util()
import image_utils as iu
import keras_image_utils as kiu
import numpy as np
# Explicit random seed for reproducibility
np.random.seed(1337)
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.layers import Merge
import dataset_generator as dataset
Explanation: In Codice Ratio Convolutional Neural Network - Test on words
In this notebook we are going to define 4 pipelines and test them on words. This is just a preliminary test on 3 words with 2 groups of cuts each. After this one we will perform another test on a full page.
End of explanation
# letter list
ALPHABET_ALL = dataset.ALPHABET_ALL
(_, _, X_test_22, y_test_22, _) = dataset.generate_all_chars_with_class(verbose=0, plot=False)
(input_shape, X_test_22, Y_test_22) = kiu.adjust_input_output(X_test_22, y_test_22, 22)
print ("Loaded test set for all the characters")
(_, _, X_test_seg, y_test_seg) = dataset.generate_dataset_for_segmentator(verbose=0, plot=False)
(_, X_test_seg, Y_test_seg) = kiu.adjust_input_output(X_test_seg, y_test_seg, 2)
print ("Loaded test set for good and bad segments")
X_test_char = {}
y_test_char = {}
Y_test_char = {}
for char in ALPHABET_ALL:
(_, _, X_test_char[char], y_test_char[char]) = dataset.generate_positive_and_negative_labeled(char, verbose=0)
(_, X_test_char[char], Y_test_char[char]) = kiu.adjust_input_output(X_test_char[char], y_test_char[char], 2)
print ("Loaded test set for char '" + char + "'")
Explanation: Loading test sets
End of explanation
# input image dimensions
img_rows, img_cols = 34, 56
# number of networks for ensamble learning
number_of_models = 5
# checkpoints dir
checkpoints_dir = "checkpoints"
# size of pooling area for max pooling
pool_size1 = (2, 2)
pool_size2 = (3, 3)
# convolution kernel size
kernel_size1 = (4, 4)
kernel_size2 = (5, 5)
# dropout rate
dropout = 0.15
# activation
activation = 'relu'
Explanation: Constants
End of explanation
def initialize_network_single_column(model, nb_classes, nb_filters1, nb_filters2, dense_layer_size1):
model.add(Convolution2D(nb_filters1, kernel_size1[0], kernel_size1[1],
border_mode='valid',
input_shape=input_shape, name='covolution_1_' + str(nb_filters1) + '_filters'))
model.add(Activation(activation, name='activation_1_' + activation))
model.add(MaxPooling2D(pool_size=pool_size1, name='max_pooling_1_' + str(pool_size1) + '_pool_size'))
model.add(Convolution2D(nb_filters2, kernel_size2[0], kernel_size2[1]))
model.add(Activation(activation, name='activation_2_' + activation))
model.add(MaxPooling2D(pool_size=pool_size2, name='max_pooling_1_' + str(pool_size2) + '_pool_size'))
model.add(Dropout(dropout))
model.add(Flatten())
model.add(Dense(dense_layer_size1, name='fully_connected_1_' + str(dense_layer_size1) + '_neurons'))
model.add(Activation(activation, name='activation_3_' + activation))
model.add(Dropout(dropout))
model.add(Dense(nb_classes, name='output_' + str(nb_classes) + '_neurons'))
model.add(Activation('softmax', name='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy', 'precision', 'recall'])
def try_load_checkpoints(model, checkpoints_filepath, warn=True):
# loading weights from checkpoints
if os.path.exists(checkpoints_filepath):
model.load_weights(checkpoints_filepath)
elif warn:
print('Warning: ' + checkpoints_filepath + ' could not be loaded')
def initialize_network_multi_column(merged_model, models):
merged_model.add(Merge(models, mode='ave'))
merged_model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy', 'precision', 'recall'])
def create_and_load_network(number_of_models, checkpoint_paths, nb_classes,
nb_filters1, nb_filters2, dense_layer_size1):
# pseudo random generation of seeds
seeds = np.random.randint(10000, size=number_of_models)
# initializing all the models
models = [None] * number_of_models
for i in range(number_of_models):
models[i] = Sequential()
initialize_network_single_column(models[i], nb_classes, nb_filters1, nb_filters2, dense_layer_size1)
try_load_checkpoints(models[i], checkpoint_paths[i])
# initializing merged model
merged_model = Sequential()
initialize_network_multi_column(merged_model, models)
return (merged_model, models)
Explanation: Helper functions for model definition and load
End of explanation
# 22 classes ocr
ocr_weigts_dir = os.path.join(checkpoints_dir, "09_22-classes")
ocr_weights = [os.path.join(ocr_weigts_dir, "09_ICR_weights.best_0.hdf5"),
os.path.join(ocr_weigts_dir, "09_ICR_weights.best_1.hdf5"),
os.path.join(ocr_weigts_dir, "09_ICR_weights.best_2.hdf5"),
os.path.join(ocr_weigts_dir, "09_ICR_weights.best_3.hdf5"),
os.path.join(ocr_weigts_dir, "09_ICR_weights.best_4.hdf5")]
(ocr_model, _) = create_and_load_network(5, ocr_weights, 22, 50, 100, 250)
score = ocr_model.evaluate([np.asarray(X_test_22)] * number_of_models, Y_test_22, verbose=0)
print ("Loaded 22 classes orc model with test error of ", (1-score[2])*100, '%')
# segmentator network (good cut / bad cut)
segmentator_weigts_dir = os.path.join(checkpoints_dir, "letter_not_letter")
segmentator_weights = [os.path.join(segmentator_weigts_dir, "10_ICR_weights.best_0.hdf5"),
os.path.join(segmentator_weigts_dir, "10_ICR_weights.best_1.hdf5"),
os.path.join(segmentator_weigts_dir, "10_ICR_weights.best_2.hdf5"),
os.path.join(segmentator_weigts_dir, "10_ICR_weights.best_3.hdf5"),
os.path.join(segmentator_weigts_dir, "10_ICR_weights.best_4.hdf5")]
(segmentator_model, _) = create_and_load_network(5, segmentator_weights, 2, 50, 100, 250)
score = segmentator_model.evaluate([np.asarray(X_test_seg)] * number_of_models, Y_test_seg, verbose=0)
print ("Loaded binary segmentator model with test error of ", (1-score[2])*100, '%')
print ("---")
# single letter segmentator / ocr
single_letter_models = {}
single_letter_weights_dir = {}
single_letter_weights = {}
errors = []
for char in ALPHABET_ALL:
single_letter_weights_dir[char] = os.path.join(checkpoints_dir, char)
single_letter_weights[char] = [os.path.join(single_letter_weights_dir[char], "0.hdf5"),
os.path.join(single_letter_weights_dir[char], "1.hdf5"),
os.path.join(single_letter_weights_dir[char], "2.hdf5"),
os.path.join(single_letter_weights_dir[char], "3.hdf5"),
os.path.join(single_letter_weights_dir[char], "4.hdf5")]
(single_letter_models[char], _) = create_and_load_network(5, single_letter_weights[char], 2, 20, 40, 150)
score = single_letter_models[char].evaluate([np.asarray(X_test_char[char])] * number_of_models, Y_test_char[char], verbose=0)
print ("Loaded binary model for '" + char + "', with test error of ", (1-score[2])*100, '%')
errors.append(1-score[2])
print("Average test error: ", sum(errors) / float(len(errors)) * 100, "%")
Explanation: Model load
End of explanation
def predict_pipeline1(data, count_letter=True):
start_time = time.time()
count = 0
for bad_cut in data:
flag = False
count += 1
bad_cuts = np.asarray([bad_cut])
if count_letter:
print ("Predictions for the supposed letter number " + str(count))
for char in ALPHABET_ALL:
predictions = single_letter_models[char].predict([bad_cuts] * number_of_models)
if (predictions[0][1] > predictions[0][0]):
print ("Cut " + str(count) + " has been classified as good corresponding to char '" +\
char + "' with a confidence of " + str(predictions[0][1] * 100) + "%")
flag = True
if not flag:
print ("Bad cut")
print ("---")
elapsed_time = time.time() - start_time
print("Elapsed time:", elapsed_time)
def predict_pipeline2(data, count_letter=True):
start_time = time.time()
count = 0
for bad_cut in data:
count += 1
bad_cuts = np.asarray([bad_cut])
if count_letter:
print ("Predictions for the supposed letter number " + str(count))
predictions = segmentator_model.predict([bad_cuts] * number_of_models)
if (predictions[0][1] > predictions[0][0]):
predictions = ocr_model.predict([bad_cuts] * number_of_models)
ind = (-predictions[0]).argsort()[:3]
for i in ind:
print ("Good cut corresponding to letter '" + ALPHABET_ALL[i] + \
"' with a confidence of " + str(predictions[0][i] * 100) + "%")
else:
print ("Bad cut with a confidence of " + str(predictions[0][0] * 100) + "%")
print ("---")
elapsed_time = time.time() - start_time
print("Elapsed time:", elapsed_time)
def predict_pipeline3(data, count_letter=True):
start_time = time.time()
count = 0
for bad_cut in data:
flag = False
count += 1
bad_cuts = np.asarray([bad_cut])
if count_letter:
print ("Predictions for the supposed letter number " + str(count))
for char in ALPHABET_ALL:
predictions = single_letter_models[char].predict([bad_cuts] * number_of_models)
if (predictions[0][1] > predictions[0][0]):
print ("Good cut with a confidence of " + str(predictions[0][1] * 100) + "% by letter '" + char + "'")
flag = True
if flag:
predictions = ocr_model.predict([bad_cuts] * number_of_models)
ind = (-predictions[0]).argsort()[:3]
for i in ind:
print ("Good cut corresponding to letter '" + ALPHABET_ALL[i] + \
"' with a confidence of " + str(predictions[0][i] * 100) + "%")
else:
print ("Bad cut")
print ("---")
elapsed_time = time.time() - start_time
print("Elapsed time:", elapsed_time)
def predict_pipeline4(data, count_letter=True):
start_time = time.time()
count = 0
for bad_cut in data:
count += 1
bad_cuts = np.asarray([bad_cut])
if count_letter:
print ("Predictions for the supposed letter number " + str(count))
predictions = segmentator_model.predict([bad_cuts] * number_of_models)
if (predictions[0][1] > predictions[0][0]):
for char in ALPHABET_ALL:
predictions = single_letter_models[char].predict([bad_cuts] * number_of_models)
if (predictions[0][1] > predictions[0][0]):
print ("Good cut with a confidence of " + str(predictions[0][1] * 100) + "% by letter '" + char + "'")
else:
print ("Bad cut with a confidence of " + str(predictions[0][0] * 100) + "%")
print ("---")
elapsed_time = time.time() - start_time
print("Elapsed time:", elapsed_time)
Explanation: Helper functions for prediction
End of explanation
u.plot_image(iu.load_sample("not_code/words/asseras.png"), (40, 106))
Explanation: Experiment 1 ("asseras")
End of explanation
asseras_bad_cuts = iu.open_many_samples( \
["not_code/words/bad_cuts/asseras/1.png",
"not_code/words/bad_cuts/asseras/2.png",
"not_code/words/bad_cuts/asseras/3.png",
"not_code/words/bad_cuts/asseras/4.png",
"not_code/words/bad_cuts/asseras/5.png"])
(asseras_bad_cuts, _) = kiu.adjust_input(np.asarray(asseras_bad_cuts))
u.plot_some_images(asseras_bad_cuts, (img_cols, img_rows), grid_x=5, grid_y=1)
Explanation: Bad cuts
End of explanation
predict_pipeline1(asseras_bad_cuts)
Explanation: Pipeline 1 (22 networks as segmentator and classifier)
End of explanation
predict_pipeline2(asseras_bad_cuts)
Explanation: Possible word: -ls-s
Pipeline 2 (segmentator + classifier)
End of explanation
predict_pipeline3(asseras_bad_cuts)
Explanation: Possible word: ----s
Pipeline 3 (22 networks as segmentator + classifier)
End of explanation
predict_pipeline4(asseras_bad_cuts)
Explanation: Possible word: -ld-s
Pipeline 4 (segmentator + 22 networks as classifier)
End of explanation
asseras_good_cuts = iu.open_many_samples( \
["not_code/words/good_cuts/asseras/a1.png",
"not_code/words/good_cuts/asseras/f1.png",
"not_code/words/good_cuts/asseras/f2.png",
"not_code/words/good_cuts/asseras/e.png",
"not_code/words/good_cuts/asseras/r.png",
"not_code/words/good_cuts/asseras/a2.png",
"not_code/words/good_cuts/asseras/s.png"])
(asseras_good_cuts, _) = kiu.adjust_input(np.asarray(asseras_good_cuts))
u.plot_some_images(asseras_good_cuts, (img_cols, img_rows), grid_x=7, grid_y=1)
Explanation: Possible word: ----s
Good cuts
End of explanation
predict_pipeline1(asseras_good_cuts)
Explanation: Pipeline 1 (22 networks as segmentator and classifier)
End of explanation
predict_pipeline2(asseras_good_cuts)
Explanation: Possible word: asseras
Pipeline 2 (segmentator + classifier)
End of explanation
predict_pipeline3(asseras_good_cuts)
Explanation: Possible word: asseras
Pipeline 3 (22 networks as segmentator + classifier)
End of explanation
predict_pipeline4(asseras_good_cuts)
Explanation: Possible word: asseras
Pipeline 4 (segmentator + 22 networks as classifier)
End of explanation
u.plot_image(iu.load_sample("not_code/words/unicu2.png"), (61, 98))
Explanation: Possible word: asseras
Experiment 2 ("unicu")
End of explanation
unicu_bad_cuts = iu.open_many_samples( \
["not_code/words/bad_cuts/unicu/1.png",
"not_code/words/bad_cuts/unicu/2.png",
"not_code/words/bad_cuts/unicu/3.png",
"not_code/words/bad_cuts/unicu/4.png",
"not_code/words/bad_cuts/unicu/5.png"])
(unicu_bad_cuts, _) = kiu.adjust_input(np.asarray(unicu_bad_cuts))
u.plot_some_images(unicu_bad_cuts, (img_cols, img_rows), grid_x=5, grid_y=1)
Explanation: Bad cuts
End of explanation
predict_pipeline1(unicu_bad_cuts)
Explanation: Pipeline 1 (22 networks as segmentator and classifier)
End of explanation
predict_pipeline2(unicu_bad_cuts)
Explanation: Possible word: iuuci
Pipeline 2 (segmentator + classifier)
End of explanation
predict_pipeline3(unicu_bad_cuts)
Explanation: Possible word: -uu--
Pipeline 3 (22 networks as segmentator + classifier)
End of explanation
predict_pipeline4(unicu_bad_cuts)
Explanation: Possible word: iuuoi
Pipeline 4 (segmentator + 22 networks as classifier)
End of explanation
unicu_good_cuts = iu.open_many_samples( \
["not_code/words/good_cuts/unicu/u1.png",
"not_code/words/good_cuts/unicu/n.png",
"not_code/words/good_cuts/unicu/i.png",
"not_code/words/good_cuts/unicu/c.png",
"not_code/words/good_cuts/unicu/u2.png"])
(unicu_good_cuts, _) = kiu.adjust_input(np.asarray(unicu_good_cuts))
u.plot_some_images(unicu_good_cuts, (img_cols, img_rows), grid_x=5, grid_y=1)
Explanation: Possible word: -uu--
Good cuts
End of explanation
predict_pipeline1(unicu_good_cuts)
Explanation: Pipeline 1 (22 networks as segmentator and classifier)
End of explanation
predict_pipeline2(unicu_good_cuts)
Explanation: Possible word: unicu
Pipeline 2 (segmentator + classifier)
End of explanation
predict_pipeline3(unicu_good_cuts)
Explanation: Possible word: unicu
Pipeline 3 (22 networks as segmentator + classifier)
End of explanation
predict_pipeline4(unicu_good_cuts)
Explanation: Possible word: unicu
Pipeline 4 (segmentator + 22 networks as classifier)
End of explanation
u.plot_image(iu.load_sample("not_code/words/beneficiu.png"), (61, 153))
Explanation: Possible word: unicu
Experiment 3 ("beneficiu")
End of explanation
beneficiu_bad_cuts = iu.open_many_samples( \
["not_code/words/bad_cuts/beneficiu/1.png",
"not_code/words/bad_cuts/beneficiu/2.png",
"not_code/words/bad_cuts/beneficiu/3.png",
"not_code/words/bad_cuts/beneficiu/4.png",
"not_code/words/bad_cuts/beneficiu/5.png",
"not_code/words/bad_cuts/beneficiu/6.png",
"not_code/words/bad_cuts/beneficiu/7.png",
"not_code/words/bad_cuts/beneficiu/8.png"])
(beneficiu_bad_cuts, _) = kiu.adjust_input(np.asarray(beneficiu_bad_cuts))
u.plot_some_images(beneficiu_bad_cuts, (img_cols, img_rows), grid_x=4, grid_y=2)
Explanation: Bad cuts
End of explanation
predict_pipeline1(beneficiu_bad_cuts)
Explanation: Pipeline 1 (22 networks as segmentator and classifier)
End of explanation
predict_pipeline2(beneficiu_bad_cuts)
Explanation: Possible word: siiescii
Pipeline 2 (segmentator + classifier)
End of explanation
predict_pipeline3(beneficiu_bad_cuts)
Explanation: Possible word: ---ef--i
Pipeline 3 (22 networks as segmentator + classifier)
End of explanation
predict_pipeline4(beneficiu_bad_cuts)
Explanation: Possible word: biiefoii
Pipeline 4 (segmentator + 22 networks as classifier)
End of explanation
beneficiu_good_cuts = iu.open_many_samples( \
["not_code/words/good_cuts/beneficiu/b.png",
"not_code/words/good_cuts/beneficiu/e1.png",
"not_code/words/good_cuts/beneficiu/n.png",
"not_code/words/good_cuts/beneficiu/e2.png",
"not_code/words/good_cuts/beneficiu/f.png",
"not_code/words/good_cuts/beneficiu/i1.png",
"not_code/words/good_cuts/beneficiu/c.png",
"not_code/words/good_cuts/beneficiu/i2.png",
"not_code/words/good_cuts/beneficiu/u.png"])
(beneficiu_good_cuts, _) = kiu.adjust_input(np.asarray(beneficiu_good_cuts))
u.plot_some_images(beneficiu_good_cuts, (img_cols, img_rows), grid_x=3, grid_y=3)
Explanation: Possible word: ---ef--i
Good cuts
End of explanation
predict_pipeline1(beneficiu_good_cuts)
Explanation: Pipeline 1 (22 networks as segmentator and classifier)
End of explanation
predict_pipeline2(beneficiu_good_cuts)
Explanation: Possible word: beuessciu
Pipeline 2 (segmentator + classifier)
End of explanation
predict_pipeline3(beneficiu_good_cuts)
Explanation: Possible word: benes-ciu or benef-ciu with a lower chance
Pipeline 3 (22 networks as segmentator + classifier)
End of explanation
predict_pipeline4(beneficiu_good_cuts)
Explanation: Possible word: benesiciu or beneficiu with a lower chance
Pipeline 4 (segmentator + 22 networks as classifier)
End of explanation |
5,417 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading Data with DLTK
To build a reader you have to define a read function. This is a function with a signature read_fn(file_references, mode, params=None), where
- file_references is an array_like variable used to read files, but it can also be None if not used at all.
- mode is a mode key from tf.estimator.ModeKeys and
- params is a dictionary or None to pass additional parameters you might need when interfacing with your inputs.
Creating a custom python generator read_fn
In the following cell we define an example reader to read from the IXI dataset you can download with
the included script. Let's start with some plain Python before we go into TensorFlow
Step1: The read_fn above can be used as generator in python but we wrap it for you with our Reader class.
For debugging, you can visualise the examples as follows
Step2: Using a custom read_fn with TensorFlow
In order to use the read_fn in a tensorflow graph, we wrap the generator to feed a Tensorflow Dataset. You can generate this queue using dltk/io/abstract_reader or do it manually
Step3: Additional information on dltk.io.abstract_reader | Python Code:
import SimpleITK as sitk
import os
from dltk.io.augmentation import *
from dltk.io.preprocessing import *
import tensorflow as tf
def read_fn(file_references, mode, params=None):
# We define a `read_fn` and iterate through the `file_references`, which
# can contain information about the data to be read (e.g. a file path):
for meta_data in file_references:
# Here, we parse the `subject_id` to construct a file path to read
# an image from.
subject_id = meta_data[0]
data_path = '../../data/IXI_HH/1mm'
t1_fn = os.path.join(data_path, '{}/T1_1mm.nii.gz'.format(subject_id))
# Read the .nii image containing a brain volume with SimpleITK and get
# the numpy array:
sitk_t1 = sitk.ReadImage(t1_fn)
t1 = sitk.GetArrayFromImage(sitk_t1)
# Normalise the image to zero mean/unit std dev:
t1 = whitening(t1)
# Create a 4D Tensor with a dummy dimension for channels
t1 = t1[..., np.newaxis]
# If in PREDICT mode, yield the image (because there will be no label
# present). Additionally, yield the sitk.Image pointer (including all
# the header information) and some metadata (e.g. the subject id),
# to facilitate post-processing (e.g. reslicing) and saving.
# This can be useful when you want to use the same read function as
# python generator for deployment. Note: Data are not passed to
# tensorflow if we do not specify a data type for them
# (c.f. `dltk/io/abstract_reader`):
if mode == tf.estimator.ModeKeys.PREDICT:
yield {'features': {'x': t1},
'metadata': {
'subject_id': subject_id,
'sitk': sitk_t1}}
# Labels: Here, we parse the class *sex* from the file_references
# \in [1,2] and shift them to \in [0,1] for training:
sex = np.int32(meta_data[1]) - 1
y = sex
# If in TRAIN mode, we want to augment the data to generalise better
# and avoid overfitting to the training dataset:
if mode == tf.estimator.ModeKeys.TRAIN:
# Insert augmentation function here (see `dltk/io/augmentation`)
pass
# If training should be done on image patches for improved mixing,
# memory limitations or class balancing, call a patch extractor
# (see `dltk/io/augmentation`):
if params['extract_examples']:
images = extract_random_example_array(
t1,
example_size=params['example_size'],
n_examples=params['n_examples'])
# Loop the extracted image patches and yield
for e in range(params['n_examples']):
yield {'features': {'x': images[e].astype(np.float32)},
'labels': {'y': y.astype(np.int32)},
'metadata': {
'subject_id': subject_id,
'sitk': sitk_t1}}
# If desired (i.e. for evaluation, etc.), return the full images
else:
yield {'features': {'x': images},
'labels': {'y': y.astype(np.int32)},
'metadata': {
'subject_id': subject_id,
'sitk': sitk_t1}}
return
Explanation: Reading Data with DLTK
To build a reader you have to define a read function. This is a function with a signature read_fn(file_references, mode, params=None), where
- file_references is an array_like variable used to read files, but it can also be None if not used at all.
- mode is a mode key from tf.estimator.ModeKeys and
- params is a dictionary or None to pass additional parameters you might need when interfacing with your inputs.
Creating a custom python generator read_fn
In the following cell we define an example reader to read from the IXI dataset you can download with
the included script. Let's start with some plain Python before we go into TensorFlow:
End of explanation
# Use pandas to read csvs that hold meta information to read the files from disk
import pandas as pd
import tensorflow as tf
all_filenames = pd.read_csv(
'../../data/IXI_HH/demographic_HH.csv',
dtype=object,
keep_default_na=False,
na_values=[]).as_matrix()
# Set up some parameters as required in the `read_fn`:
reader_params = {'n_examples': 1,
'example_size': [128, 224, 224],
'extract_examples': True}
# Create a generator with the read file_references `all_filenames` and
# `reader_params` in PREDICT mode:
it = read_fn(file_references=all_filenames,
mode=tf.estimator.ModeKeys.PREDICT,
params=reader_params)
# If you call `next`, the `read_fn` will yield an output dictionary as designed
# by you:
ex_dict = next(it)
# Print that output dict to debug
np.set_printoptions(edgeitems=1)
print(ex_dict)
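# Hedged addition: actually visualise the yielded volume, as suggested in the text
# (assumes matplotlib is available; in PREDICT mode `read_fn` yields the full T1
# volume under ex_dict['features']['x'] with a trailing channel axis).
import matplotlib.pyplot as plt
t1_vol = ex_dict['features']['x']
plt.imshow(t1_vol[t1_vol.shape[0] // 2, :, :, 0], 'gray')
plt.title('subject {}'.format(ex_dict['metadata']['subject_id']))
plt.show()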
Explanation: The read_fn above can be used as generator in python but we wrap it for you with our Reader class.
For debugging, you can visualise the examples as follows:
End of explanation
# As before, define the desired shapes of the examples and parameters to
# pass to your `read_fn`:
reader_example_shapes = {'features': {'x': reader_params['example_size'] + [1,]},
'labels': {'y': []}}
reader_params = {'n_examples': 1,
'example_size': [128, 224, 224],
'extract_examples': True}
# If data_types are set for output dictionary entries, the `dltk/io/abstract_reader`
# creates a tensorflow queue and enqueues the respective outputs for training.
# Here, we would like to train our features and use labels as targets:
reader_example_dtypes = {'features': {'x': tf.float32},
'labels': {'y': tf.int32}}
# Import and create a dltk reader
from dltk.io.abstract_reader import Reader
reader = Reader(read_fn=read_fn,
dtypes=reader_example_dtypes)
# Now, get the input function and queue initialisation hook to use in a `tf.Session` or
# with `tf.Estimator`. `shuffle_cache_size` defines the capacity of the queue.
input_fn, qinit_hook = reader.get_inputs(all_filenames,
tf.estimator.ModeKeys.TRAIN,
example_shapes=reader_example_shapes,
batch_size=4,
shuffle_cache_size=10,
params=reader_params)
# The input function splits the dictionary of `read_fn` into `features` and `labels` to
# match the `tf.Estimator` input requirements. However, both are standard dictionaries.
features, labels = input_fn()
# Let's create a `tf.Session` and get a batch of features and corresponding labels:
s = tf.train.MonitoredTrainingSession(hooks=[qinit_hook])
batch_features, batch_labels = s.run([features, labels])
# We can visualise the `batch_features` using matplotlib.
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(batch_features['x'][0, 0, :, :, 0], 'gray')
plt.show()
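# Hedged sketch of the "do it manually" route mentioned in the text: wrap `read_fn`
# yourself with tf.data.Dataset.from_generator instead of dltk's Reader. The dtypes
# and shapes mirror `reader_example_dtypes` / `reader_example_shapes` above.
def manual_input_fn():
    def gen():
        for ex in read_fn(all_filenames, tf.estimator.ModeKeys.TRAIN, reader_params):
            yield ex['features']['x'], ex['labels']['y']
    ds = tf.data.Dataset.from_generator(
        gen,
        output_types=(tf.float32, tf.int32),
        output_shapes=(tf.TensorShape(reader_params['example_size'] + [1]),
                       tf.TensorShape([])))
    ds = ds.shuffle(10).batch(4).prefetch(1)
    x, y = ds.make_one_shot_iterator().get_next()
    return {'x': x}, {'y': y}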
Explanation: Using a custom read_fn with TensorFlow
In order to use the read_fn in a tensorflow graph, we wrap the generator to feed a Tensorflow Dataset. You can generate this queue using dltk/io/abstract_reader or do it manually:
End of explanation
help(Reader)
Explanation: Additional information on dltk.io.abstract_reader:
DLTK uses Tensorflow's queueing options to efficiently pass data to the computational graph. Our setup makes use of the tf.data API that enables us to use TF's wrappers with tf.data.Dataset.from_generator. We still wrap this function for better stack traces and to provide input functions suitable for tf.Estimator.
End of explanation |
5,418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-mmh', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NERC
Source ID: UKESM1-0-MMH
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
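For multi-valued ENUM properties (cardinality 0.N or 1.N) such as this one, the PROPERTY VALUE(S) comment suggests that DOC.set_value may be called once per selected choice; the sketch below assumes that convention and uses placeholder choices only.
DOC.set_value("vegetation type")  # hypothetical selection, for illustration
DOC.set_value("soil humidity")    # add or remove calls to match your model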
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
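A single-valued ENUM (cardinality 1.1) takes exactly one of the quoted strings listed under Valid Choices; for illustration only, with a placeholder choice:
DOC.set_value("Explicit diffusion")  # hypothetical placeholder -- substitute the scheme your model actually uses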
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
5,419 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring precision and recall
The goal of this second notebook is to understand precision-recall in the context of classifiers.
Use Amazon review data in its entirety.
Train a logistic regression model.
Explore various evaluation metrics
Step1: Load amazon review dataset
Step2: Extract word counts and sentiments
As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following
Step3: Now, let's remember what the dataset looks like by taking a quick peek
Step4: Split data into training and test sets
We split the data into a 80-20 split where 80% is in the training set and 20% is in the test set.
Step5: Train a logistic regression classifier
We will now train a logistic regression classifier with sentiment as the target and word_count as the features. We will set validation_set=None to make sure everyone gets exactly the same results.
Remember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall.
Step6: Model Evaluation
We will explore the advanced model evaluation concepts that were discussed in the lectures.
Accuracy
One performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
To obtain the accuracy of our trained models using GraphLab Create, simply pass the option metric='accuracy' to the evaluate function. We compute the accuracy of our logistic regression model on the test_data as follows
Step7: Baseline
Step8: Quiz Question
Step9: Quiz Question
Step10: Computing the cost of mistakes
Put yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, false positives cost more than false negatives. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.)
Suppose you know the costs involved in each kind of mistake
Step11: Precision and Recall
You may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. This is where precision comes in
Step12: Quiz Question
Step13: Quiz Question
Step14: Quiz Question
Step15: Run prediction with output_type='probability' to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.
Step16: Quiz Question
Step17: Quiz Question (variant 1)
Step18: For each of the values of threshold, we compute the precision and recall scores.
Step19: Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.
Step20: Quiz Question
Step21: Quiz Question
Step22: This is the number of false negatives (i.e the number of reviews to look at when not needed) that we have to deal with using this classifier.
Evaluating specific search terms
So far, we looked at the number of false positives for the entire test set. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon.
Precision-Recall on all baby related items
From the test set, select all the reviews for all products with the word 'baby' in them.
Step23: Now, let's predict the probability of classifying these reviews as positive
Step24: Let's plot the precision-recall curve for the baby_reviews dataset.
First, let's consider the following threshold_values ranging from 0.5 to 1
Step25: Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below.
Step26: Quiz Question
Step27: Quiz Question | Python Code:
import graphlab
from __future__ import division
import numpy as np
graphlab.canvas.set_target('ipynb')
Explanation: Exploring precision and recall
The goal of this second notebook is to understand precision-recall in the context of classifiers.
Use Amazon review data in its entirety.
Train a logistic regression model.
Explore various evaluation metrics: accuracy, confusion matrix, precision, recall.
Explore how various metrics can be combined to produce a cost of making an error.
Explore precision and recall curves.
Because we are using the full Amazon review dataset (not a subset of words or reviews), in this assignment we return to using GraphLab Create for its efficiency. As usual, let's start by firing up GraphLab Create.
Make sure you have the latest version of GraphLab Create (1.8.3 or later). If you don't find the decision tree module, then you would need to upgrade graphlab-create using
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby.gl/')
Explanation: Load amazon review dataset
End of explanation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
review_clean = products['review'].apply(remove_punctuation)
# Count words
products['word_count'] = graphlab.text_analytics.count_words(review_clean)
# Drop neutral sentiment reviews.
products = products[products['rating'] != 3]
# Positive sentiment to +1 and negative sentiment to -1
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
Explanation: Extract word counts and sentiments
As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following:
Remove punctuation.
Remove reviews with "neutral" sentiment (rating 3).
Set reviews with rating 4 or more to be positive and those with 2 or less to be negative.
End of explanation
products
Explanation: Now, let's remember what the dataset looks like by taking a quick peek:
End of explanation
train_data, test_data = products.random_split(.8, seed=1)
Explanation: Split data into training and test sets
We split the data into a 80-20 split where 80% is in the training set and 20% is in the test set.
End of explanation
model = graphlab.logistic_classifier.create(train_data, target='sentiment',
features=['word_count'],
validation_set=None)
Explanation: Train a logistic regression classifier
We will now train a logistic regression classifier with sentiment as the target and word_count as the features. We will set validation_set=None to make sure everyone gets exactly the same results.
Remember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall.
End of explanation
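Before evaluating, it can help to peek at what was learned. The sketch below is optional and assumes, as in earlier assignments of this course, that the trained GraphLab model exposes its learned weights as a 'coefficients' SFrame; the sort simply surfaces the most positive words.
coefficients = model['coefficients']
print "Number of coefficients: %s" % len(coefficients)
coefficients.sort('value', ascending=False).print_rows(num_rows=10)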
accuracy= model.evaluate(test_data, metric='accuracy')['accuracy']
print "Test Accuracy: %s" % accuracy
Explanation: Model Evaluation
We will explore the advanced model evaluation concepts that were discussed in the lectures.
Accuracy
One performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
To obtain the accuracy of our trained models using GraphLab Create, simply pass the option metric='accuracy' to the evaluate function. We compute the accuracy of our logistic regression model on the test_data as follows:
End of explanation
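To connect the accuracy formula to the number reported above, here is a minimal sketch that recomputes it directly from class predictions; it assumes model.predict returns the predicted +1/-1 labels, and the variable names are illustrative.
class_predictions = model.predict(test_data)
num_correct = (class_predictions == test_data['sentiment']).sum()
print "Manually computed accuracy: %s" % (num_correct / len(test_data))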
baseline = len(test_data[test_data['sentiment'] == 1])/len(test_data)
print "Baseline accuracy (majority class classifier): %s" % baseline
Explanation: Baseline: Majority class prediction
Recall from an earlier assignment that we used the majority class classifier as a baseline (i.e reference) model for a point of comparison with a more sophisticated classifier. The majority classifier model predicts the majority class for all data points.
Typically, a good model should beat the majority class classifier. Since the majority class in this dataset is the positive class (i.e., there are more positive than negative reviews), the accuracy of the majority class classifier can be computed as follows:
End of explanation
confusion_matrix = model.evaluate(test_data, metric='confusion_matrix')['confusion_matrix']
confusion_matrix
Explanation: Quiz Question: Using accuracy as the evaluation metric, was our logistic regression model better than the baseline (majority class classifier)?
Confusion Matrix
The accuracy, while convenient, does not tell the whole story. For a fuller picture, we turn to the confusion matrix. In the case of binary classification, the confusion matrix is a 2-by-2 matrix laying out correct and incorrect predictions made in each label as follows:
+---------------------------------------------+
| Predicted label |
+----------------------+----------------------+
| (+1) | (-1) |
+-------+-----+----------------------+----------------------+
| True |(+1) | # of true positives | # of false negatives |
| label +-----+----------------------+----------------------+
| |(-1) | # of false positives | # of true negatives |
+-------+-----+----------------------+----------------------+
To print out the confusion matrix for a classifier, use metric='confusion_matrix':
End of explanation
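The confusion_matrix object above is an SFrame with 'target_label', 'predicted_label' and 'count' columns (the same columns used further below). As an optional convenience, a small illustrative helper for reading individual cells:
def confusion_count(cm, true_label, predicted_label):
    # Return the count for one cell of the confusion matrix, or 0 if that cell is absent.
    cell = cm[(cm['target_label'] == true_label) & (cm['predicted_label'] == predicted_label)]
    return cell['count'][0] if len(cell) > 0 else 0
print "True positives : %s" % confusion_count(confusion_matrix, +1, +1)
print "False positives: %s" % confusion_count(confusion_matrix, -1, +1)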
print '1443'
Explanation: Quiz Question: How many predicted values in the test set are false positives?
End of explanation
false_positive = confusion_matrix[(confusion_matrix['target_label'] == -1) & (confusion_matrix['predicted_label'] == 1) ]['count'][0]
false_negative = confusion_matrix[(confusion_matrix['target_label'] == 1) & (confusion_matrix['predicted_label'] == -1) ]['count'][0]
print 100 * false_positive + 1 * false_negative
Explanation: Computing the cost of mistakes
Put yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, false positives cost more than false negatives. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.)
Suppose you know the costs involved in each kind of mistake:
1. \$100 for each false positive.
2. \$1 for each false negative.
3. Correctly classified reviews incur no cost.
Quiz Question: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the test set?
End of explanation
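The cell above hard-codes the \$100 and \$1 prices; a small illustrative generalisation, assuming false_positive and false_negative still hold the counts extracted from the confusion matrix:
def cost_of_mistakes(num_false_positives, num_false_negatives, fp_cost=100, fn_cost=1):
    # Total dollar cost given a price per false positive and per false negative.
    return fp_cost * num_false_positives + fn_cost * num_false_negatives
print "Total cost: $%s" % cost_of_mistakes(false_positive, false_negative)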
precision = model.evaluate(test_data, metric='precision')['precision']
print "Precision on test data: %s" % precision
Explanation: Precision and Recall
You may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. This is where precision comes in:
$$
[\text{precision}] = \frac{[\text{# positive data points with positive predictions}]}{[\text{# all data points with positive predictions}]} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false positives}]}
$$
So to keep the percentage of false positives below 3.5% of positive predictions, we must raise the precision to 96.5% or higher.
First, let us compute the precision of the logistic regression classifier on the test_data.
End of explanation
1 - precision
Explanation: Quiz Question: Out of all reviews in the test set that are predicted to be positive, what fraction of them are false positives? (Round to the second decimal place e.g. 0.25)
End of explanation
recall = model.evaluate(test_data, metric='recall')['recall']
print "Recall on test data: %s" % recall
Explanation: Quiz Question: Based on what we learned in lecture, if we wanted to reduce this fraction of false positives to be below 3.5%, we would: (see the quiz)
A complementary metric is recall, which measures the ratio between the number of true positives and that of (ground-truth) positive reviews:
$$
[\text{recall}] = \frac{[\text{# positive data points with positive predictions}]}{[\text{# all positive data points}]} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false negatives}]}
$$
Let us compute the recall on the test_data.
End of explanation
from graphlab import SArray
def apply_threshold(probabilities, threshold):
### YOUR CODE GOES HERE
# +1 if >= threshold and -1 otherwise.
array = map(lambda probability: +1 if probability >= threshold else -1, probabilities)
return SArray(array)
Explanation: Quiz Question: What fraction of the positive reviews in the test_set were correctly predicted as positive by the classifier?
Quiz Question: What is the recall value for a classifier that predicts +1 for all data points in the test_data?
Precision-recall tradeoff
In this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve.
Varying the threshold
False positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold.
Write a function called apply_threshold that accepts two things
* probabilities (an SArray of probability values)
* threshold (a float between 0 and 1).
The function should return an SArray, where each element is set to +1 or -1 depending on whether the corresponding probability is at least threshold.
End of explanation
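For the second quiz question above, a quick sketch (hedged, assuming graphlab is imported as in the surrounding cells): a classifier that predicts +1 for every data point makes no false negatives, so its recall should come out as 1.
all_positive = SArray([+1] * len(test_data))
print "Recall (always predict +1): %s" % graphlab.evaluation.recall(test_data['sentiment'], all_positive)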
probabilities = model.predict(test_data, output_type='probability')
predictions_with_default_threshold = apply_threshold(probabilities, 0.5)
predictions_with_high_threshold = apply_threshold(probabilities, 0.9)
print "Number of positive predicted reviews (threshold = 0.5): %s" % (predictions_with_default_threshold == 1).sum()
print "Number of positive predicted reviews (threshold = 0.9): %s" % (predictions_with_high_threshold == 1).sum()
Explanation: Run prediction with output_type='probability' to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.
End of explanation
# Threshold = 0.5
precision_with_default_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_default_threshold)
recall_with_default_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_default_threshold)
# Threshold = 0.9
precision_with_high_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_high_threshold)
recall_with_high_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_high_threshold)
print "Precision (threshold = 0.5): %s" % precision_with_default_threshold
print "Recall (threshold = 0.5) : %s" % recall_with_default_threshold
print "Precision (threshold = 0.9): %s" % precision_with_high_threshold
print "Recall (threshold = 0.9) : %s" % recall_with_high_threshold
Explanation: Quiz Question: What happens to the number of positive predicted reviews as the threshold increased from 0.5 to 0.9?
Exploring the associated precision and recall as the threshold varies
By changing the probability threshold, it is possible to influence precision and recall. We can explore this as follows:
End of explanation
threshold_values = np.linspace(0.5, 1, num=100)
print threshold_values
Explanation: Quiz Question (variant 1): Does the precision increase with a higher threshold?
Quiz Question (variant 2): Does the recall increase with a higher threshold?
Precision-recall curve
Now, we will explore a range of threshold values, compute the precision and recall scores, and then plot the precision-recall curve.
End of explanation
precision_all = []
recall_all = []
best_threshold = None
probabilities = model.predict(test_data, output_type='probability')
for threshold in threshold_values:
predictions = apply_threshold(probabilities, threshold)
precision = graphlab.evaluation.precision(test_data['sentiment'], predictions)
recall = graphlab.evaluation.recall(test_data['sentiment'], predictions)
precision_all.append(precision)
recall_all.append(recall)
if(best_threshold is None and precision >= 0.965):
best_threshold = threshold
print best_threshold
Explanation: For each of the values of threshold, we compute the precision and recall scores.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
def plot_pr_curve(precision, recall, title):
plt.rcParams['figure.figsize'] = 7, 5
plt.locator_params(axis = 'x', nbins = 5)
plt.plot(precision, recall, 'b-', linewidth=4.0, color = '#B0017F')
plt.title(title)
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.rcParams.update({'font.size': 16})
plot_pr_curve(precision_all, recall_all, 'Precision recall curve (all)')
Explanation: Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.
End of explanation
0.838
Explanation: Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places.
End of explanation
threshold = 0.98
probabilities = model.predict(test_data, output_type='probability')
predictions = apply_threshold(probabilities, threshold)
graphlab.evaluation.confusion_matrix(test_data['sentiment'], predictions)
Explanation: Quiz Question: Using threshold = 0.98, how many false negatives do we get on the test_data? (Hint: You may use the graphlab.evaluation.confusion_matrix function implemented in GraphLab Create.)
End of explanation
baby_reviews = test_data[test_data['name'].apply(lambda x: 'baby' in x.lower())]
Explanation: This is the number of false negatives (i.e the number of reviews to look at when not needed) that we have to deal with using this classifier.
Evaluating specific search terms
So far, we looked at the number of false positives for the entire test set. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon.
Precision-Recall on all baby related items
From the test set, select all the reviews for all products with the word 'baby' in them.
End of explanation
probabilities = model.predict(baby_reviews, output_type='probability')
Explanation: Now, let's predict the probability of classifying these reviews as positive:
End of explanation
threshold_values = np.linspace(0.5, 1, num=100)
Explanation: Let's plot the precision-recall curve for the baby_reviews dataset.
First, let's consider the following threshold_values ranging from 0.5 to 1:
End of explanation
precision_all = []
recall_all = []
best_threshold = None
for threshold in threshold_values:
# Make predictions. Use the `apply_threshold` function
## YOUR CODE HERE
predictions = apply_threshold(probabilities, threshold)
# Calculate the precision.
# YOUR CODE HERE
precision = graphlab.evaluation.precision(baby_reviews['sentiment'], predictions)
# YOUR CODE HERE
recall = graphlab.evaluation.recall(baby_reviews['sentiment'], predictions)
# Append the precision and recall scores.
precision_all.append(precision)
recall_all.append(recall)
if(best_threshold is None and precision >= 0.965):
best_threshold = threshold
print best_threshold
Explanation: Second, as we did above, let's compute precision and recall for each value in threshold_values on the baby_reviews dataset. Complete the code block below.
End of explanation
best_threshold
Explanation: Quiz Question: Among all the threshold values tried, what is the smallest threshold value that achieves a precision of 96.5% or better for the reviews of data in baby_reviews? Round your answer to 3 decimal places.
End of explanation
plot_pr_curve(precision_all, recall_all, "Precision-Recall (Baby)")
Explanation: Quiz Question: Is this threshold value smaller or larger than the threshold used for the entire dataset to achieve the same specified precision of 96.5%?
Finally, let's plot the precision recall curve.
End of explanation |
5,420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = [
[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split('\n')]
target_id_text = [
[target_vocab_to_int[word] for word in sentence.split()] + [target_vocab_to_int['<EOS>']]
for sentence in target_text.split('\n')]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
return None, None, None, None, None, None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
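One possible way to fill in the model_inputs stub above, written as a hedged sketch rather than the project's reference solution (the _sketch suffix marks it as illustrative):
def model_inputs_sketch():
    # Rank-2 placeholders for the integer-encoded source and target sentences.
    inputs = tf.placeholder(tf.int32, [None, None], name='input')
    targets = tf.placeholder(tf.int32, [None, None], name='targets')
    # Scalar hyperparameter placeholders.
    learning_rate = tf.placeholder(tf.float32, name='learning_rate')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    # Per-example sequence lengths, plus the longest target length in the batch.
    target_sequence_length = tf.placeholder(tf.int32, [None], name='target_sequence_length')
    max_target_len = tf.reduce_max(target_sequence_length, name='max_target_len')
    source_sequence_length = tf.placeholder(tf.int32, [None], name='source_sequence_length')
    return (inputs, targets, learning_rate, keep_prob,
            target_sequence_length, max_target_len, source_sequence_length)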
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
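A possible sketch for the stub above (it assumes the helper's vocabulary contains a '<GO>' token; illustrative only):
def process_decoder_input_sketch(target_data, target_vocab_to_int, batch_size):
    # Drop the last word id of every target sequence in the batch ...
    ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
    # ... and prepend the <GO> id so the decoder knows where to start.
    return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)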
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
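One possible sketch following the three bullets above (illustrative, not the reference solution):
def encoding_layer_sketch(rnn_inputs, rnn_size, num_layers, keep_prob,
                          source_sequence_length, source_vocab_size,
                          encoding_embedding_size):
    # Embed the integer-encoded source sentences.
    embed = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size,
                                             encoding_embedding_size)
    # Stack LSTM cells, each wrapped with dropout.
    def make_cell():
        cell = tf.contrib.rnn.LSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
    stacked_cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(num_layers)])
    # Run the RNN over the embedded inputs, masking by the true sequence lengths;
    # returns (RNN output, RNN state).
    return tf.nn.dynamic_rnn(stacked_cell, embed,
                             sequence_length=source_sequence_length,
                             dtype=tf.float32)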
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
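A possible sketch of the training decoder described above (dynamic_decode's return arity differs between TensorFlow versions, so only the first element is taken):
def decoding_layer_train_sketch(encoder_state, dec_cell, dec_embed_input,
                                target_sequence_length, max_summary_length,
                                output_layer, keep_prob):
    # Teacher forcing: feed the ground-truth target embeddings at every step.
    # (Dropout is assumed to be applied inside dec_cell, so keep_prob is unused here.)
    helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    return tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True,
                                             maximum_iterations=max_summary_length)[0]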
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
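A possible sketch of the inference decoder (illustrative; the start token is tiled to batch_size):
def decoding_layer_infer_sketch(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                                end_of_sequence_id, max_target_sequence_length,
                                vocab_size, output_layer, batch_size, keep_prob):
    # At inference time the decoder greedily feeds back its own predictions.
    start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32),
                           [batch_size], name='start_tokens')
    helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens,
                                                      end_of_sequence_id)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
    return tf.contrib.seq2seq.dynamic_decode(decoder, impute_finished=True,
                                             maximum_iterations=max_target_sequence_length)[0]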
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
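One possible sketch tying the pieces together. It assumes the two decoding stubs above have been filled in along the lines of the earlier sketches and that the vocabulary contains '<GO>' and '<EOS>' tokens; it is illustrative, not the reference solution:
def decoding_layer_sketch(dec_input, encoder_state,
                          target_sequence_length, max_target_sequence_length,
                          rnn_size, num_layers, target_vocab_to_int, target_vocab_size,
                          batch_size, keep_prob, decoding_embedding_size):
    # 1. Embed the (already <GO>-prefixed) decoder input.
    dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
    dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
    # 2. Decoder cell, mirroring the encoder construction.
    def make_cell():
        cell = tf.contrib.rnn.LSTMCell(rnn_size)
        return tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
    dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell() for _ in range(num_layers)])
    # 3. Dense projection from RNN outputs to vocabulary logits.
    output_layer = Dense(target_vocab_size)
    # 4. Training and inference decoders share weights via the variable scope.
    with tf.variable_scope('decode'):
        train_output = decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
                                            target_sequence_length, max_target_sequence_length,
                                            output_layer, keep_prob)
    with tf.variable_scope('decode', reuse=True):
        infer_output = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings,
                                            target_vocab_to_int['<GO>'],
                                            target_vocab_to_int['<EOS>'],
                                            max_target_sequence_length, target_vocab_size,
                                            output_layer, batch_size, keep_prob)
    return train_output, infer_output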
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
return None, None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
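A possible sketch of the full model, assuming the earlier stubs (encoding_layer, process_decoder_input, decoding_layer) are implemented; illustrative only:
def seq2seq_model_sketch(input_data, target_data, keep_prob, batch_size,
                         source_sequence_length, target_sequence_length,
                         max_target_sentence_length,
                         source_vocab_size, target_vocab_size,
                         enc_embedding_size, dec_embedding_size,
                         rnn_size, num_layers, target_vocab_to_int):
    # Encode; only the final encoder state is passed on to the decoder.
    _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob,
                                  source_sequence_length, source_vocab_size,
                                  enc_embedding_size)
    # Shift the targets right and prepend <GO>.
    dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
    # Decode, returning (training output, inference output).
    return decoding_layer(dec_input, enc_state,
                          target_sequence_length, max_target_sentence_length,
                          rnn_size, num_layers, target_vocab_to_int, target_vocab_size,
                          batch_size, keep_prob, dec_embedding_size)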
# Number of Epochs
epochs = None
# Batch Size
batch_size = None
# RNN Size
rnn_size = None
# Number of Layers
num_layers = None
# Embedding Size
encoding_embedding_size = None
decoding_embedding_size = None
# Learning Rate
learning_rate = None
# Dropout Keep Probability
keep_probability = None
display_step = None
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
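For reference, one set of illustrative starting values; these specific numbers are assumptions chosen for this small corpus, not values prescribed by the project:
# epochs = 4
# batch_size = 256
# rnn_size = 256
# num_layers = 2
# encoding_embedding_size = 200
# decoding_embedding_size = 200
# learning_rate = 0.001
# keep_probability = 0.75
# display_step = 20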
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
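A possible sketch for the stub above (it assumes the vocabulary contains an '<UNK>' token, as stated):
def sentence_to_seq_sketch(sentence, vocab_to_int):
    # Lowercase, split on whitespace, and map unknown words to <UNK>.
    unk_id = vocab_to_int['<UNK>']
    return [vocab_to_int.get(word, unk_id) for word in sentence.lower().split()]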
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
5,421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
Step1: The Pmf class
I'll start by making a Pmf that represents the outcome of a six-sided die. Initially there are 6 values with equal probability.
Step2: To be true probabilities, they have to add up to 1. So we can normalize the Pmf
Step3: The return value from Normalize is the sum of the probabilities before normalizing.
Step4: A faster way to make a Pmf is to provide a sequence of values. The constructor adds the values to the Pmf and then normalizes
Step5: To extract a value from a Pmf, you can use Prob
Step6: Or you can use the bracket operator. Either way, if you ask for the probability of something that's not in the Pmf, the result is 0.
Step7: The cookie problem
Here's a Pmf that represents the prior distribution.
Step8: And we can update it using Mult
Step9: Or here's the shorter way to construct the prior.
Step10: And we can use *= for the update.
Step11: Either way, we have to normalize the posterior distribution.
Step16: The Bayesian framework
Here's the same computation encapsulated in a class.
Step17: We can confirm that we get the same result.
Step18: But this implementation is more general; it can handle any sequence of data.
Step23: The Monty Hall problem
The Monty Hall problem might be the most contentious question in
the history of probability. The scenario is simple, but the correct
answer is so counterintuitive that many people just can't accept
it, and many smart people have embarrassed themselves not just by
getting it wrong but by arguing the wrong side, aggressively,
in public.
Monty Hall was the original host of the game show Let's Make a
Deal. The Monty Hall problem is based on one of the regular
games on the show. If you are on the show, here's what happens
Step24: And here's how we use it.
Step25: The Suite class
Most Bayesian updates look pretty much the same, especially the Update method. So we can encapsulate the framework in a class, Suite, and create new classes that extend it.
Child classes of Suite inherit Update and provide Likelihood. So here's the short version of Monty
Step26: And it works.
Step29: The M&M problem
M&Ms are small candy-coated chocolates that come in a variety of
colors. Mars, Inc., which makes M&Ms, changes the mixture of
colors from time to time.
In 1995, they introduced blue M&Ms. Before then, the color mix in
a bag of plain M&Ms was 30% Brown, 20% Yellow, 20% Red, 10%
Green, 10% Orange, 10% Tan. Afterward it was 24% Blue, 20%
Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown.
Suppose a friend of mine has two bags of M&Ms, and he tells me
that one is from 1994 and one from 1996. He won't tell me which is
which, but he gives me one M&M from each bag. One is yellow and
one is green. What is the probability that the yellow one came
from the 1994 bag?
Here's a solution
Step30: And here's an update
Step31: Exercise
Step32: Exercise
Step33: Exercises
Exercise
Step34: Exercise
Step35: Exercise
Step36: Exercise In Section 2.3 I said that the solution to the cookie problem generalizes to the case where we draw multiple cookies with replacement.
But in the more likely scenario where we eat the cookies we draw, the likelihood of each draw depends on the previous draws.
Modify the solution in this chapter to handle selection without replacement. Hint | Python Code:
from __future__ import print_function, division
% matplotlib inline
from thinkbayes2 import Hist, Pmf, Suite
Explanation: Think Bayes: Chapter 2
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
pmf = Pmf()
for x in [1,2,3,4,5,6]:
pmf[x] = 1
pmf.Print()
Explanation: The Pmf class
I'll start by making a Pmf that represents the outcome of a six-sided die. Initially there are 6 values with equal probability.
End of explanation
pmf.Normalize()
Explanation: To be true probabilities, they have to add up to 1. So we can normalize the Pmf:
End of explanation
pmf.Print()
Explanation: The return value from Normalize is the sum of the probabilities before normalizing.
End of explanation
pmf = Pmf([1,2,3,4,5,6])
pmf.Print()
Explanation: A faster way to make a Pmf is to provide a sequence of values. The constructor adds the values to the Pmf and then normalizes:
End of explanation
pmf.Prob(1)
Explanation: To extract a value from a Pmf, you can use Prob
End of explanation
pmf[1]
Explanation: Or you can use the bracket operator. Either way, if you ask for the probability of something that's not in the Pmf, the result is 0.
End of explanation
pmf = Pmf()
pmf['Bowl 1'] = 0.5
pmf['Bowl 2'] = 0.5
pmf.Print()
Explanation: The cookie problem
Here's a Pmf that represents the prior distribution.
End of explanation
pmf.Mult('Bowl 1', 0.75)
pmf.Mult('Bowl 2', 0.5)
pmf.Print()
Explanation: And we can update it using Mult
End of explanation
pmf = Pmf(['Bowl 1', 'Bowl 2'])
pmf.Print()
Explanation: Or here's the shorter way to construct the prior.
End of explanation
pmf['Bowl 1'] *= 0.75
pmf['Bowl 2'] *= 0.5
pmf.Print()
Explanation: And we can use *= for the update.
End of explanation
pmf.Normalize()
pmf.Print()
Explanation: Either way, we have to normalize the posterior distribution.
End of explanation
class Cookie(Pmf):
A map from string bowl ID to probablity.
def __init__(self, hypos):
Initialize self.
hypos: sequence of string bowl IDs
Pmf.__init__(self)
for hypo in hypos:
self.Set(hypo, 1)
self.Normalize()
def Update(self, data):
Updates the PMF with new data.
data: string cookie type
for hypo in self.Values():
like = self.Likelihood(data, hypo)
self.Mult(hypo, like)
self.Normalize()
mixes = {
'Bowl 1':dict(vanilla=0.75, chocolate=0.25),
'Bowl 2':dict(vanilla=0.5, chocolate=0.5),
}
def Likelihood(self, data, hypo):
The likelihood of the data under the hypothesis.
data: string cookie type
hypo: string bowl ID
mix = self.mixes[hypo]
like = mix[data]
return like
Explanation: The Bayesian framework
Here's the same computation encapsulated in a class.
End of explanation
pmf = Cookie(['Bowl 1', 'Bowl 2'])
pmf.Update('vanilla')
pmf.Print()
Explanation: We can confirm that we get the same result.
End of explanation
dataset = ['vanilla', 'chocolate', 'vanilla']
for data in dataset:
pmf.Update(data)
pmf.Print()
Explanation: But this implementation is more general; it can handle any sequence of data.
End of explanation
class Monty(Pmf):
Map from string location of car to probability
def __init__(self, hypos):
Initialize the distribution.
hypos: sequence of hypotheses
Pmf.__init__(self)
for hypo in hypos:
self.Set(hypo, 1)
self.Normalize()
def Update(self, data):
Updates each hypothesis based on the data.
data: any representation of the data
for hypo in self.Values():
like = self.Likelihood(data, hypo)
self.Mult(hypo, like)
self.Normalize()
def Likelihood(self, data, hypo):
Compute the likelihood of the data under the hypothesis.
hypo: string name of the door where the prize is
data: string name of the door Monty opened
if hypo == data:
return 0
elif hypo == 'A':
return 0.5
else:
return 1
Explanation: The Monty Hall problem
The Monty Hall problem might be the most contentious question in
the history of probability. The scenario is simple, but the correct
answer is so counterintuitive that many people just can't accept
it, and many smart people have embarrassed themselves not just by
getting it wrong but by arguing the wrong side, aggressively,
in public.
Monty Hall was the original host of the game show Let's Make a
Deal. The Monty Hall problem is based on one of the regular
games on the show. If you are on the show, here's what happens:
Monty shows you three closed doors and tells you that there is a
prize behind each door: one prize is a car, the other two are less
valuable prizes like peanut butter and fake finger nails. The
prizes are arranged at random.
The object of the game is to guess which door has the car. If
you guess right, you get to keep the car.
You pick a door, which we will call Door A. We'll call the
other doors B and C.
Before opening the door you chose, Monty increases the
suspense by opening either Door B or C, whichever does not
have the car. (If the car is actually behind Door A, Monty can
safely open B or C, so he chooses one at random.)
Then Monty offers you the option to stick with your original
choice or switch to the one remaining unopened door.
The question is, should you "stick" or "switch" or does it
make no difference?
Most people have the strong intuition that it makes no difference.
There are two doors left, they reason, so the chance that the car
is behind Door A is 50%.
But that is wrong. In fact, the chance of winning if you stick
with Door A is only 1/3; if you switch, your chances are 2/3.
Here's a class that solves the Monty Hall problem.
End of explanation
pmf = Monty('ABC')
pmf.Update('B')
pmf.Print()
Explanation: And here's how we use it.
End of explanation
class Monty(Suite):
def Likelihood(self, data, hypo):
if hypo == data:
return 0
elif hypo == 'A':
return 0.5
else:
return 1
Explanation: The Suite class
Most Bayesian updates look pretty much the same, especially the Update method. So we can encapsulate the framework in a class, Suite, and create new classes that extend it.
Child classes of Suite inherit Update and provide Likelihood. So here's the short version of Monty
End of explanation
pmf = Monty('ABC')
pmf.Update('B')
pmf.Print()
Explanation: And it works.
End of explanation
class M_and_M(Suite):
Map from hypothesis (A or B) to probability.
mix94 = dict(brown=30,
yellow=20,
red=20,
green=10,
orange=10,
tan=10,
blue=0)
mix96 = dict(blue=24,
green=20,
orange=16,
yellow=14,
red=13,
brown=13,
tan=0)
hypoA = dict(bag1=mix94, bag2=mix96)
hypoB = dict(bag1=mix96, bag2=mix94)
hypotheses = dict(A=hypoA, B=hypoB)
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: string hypothesis (A or B)
data: tuple of string bag, string color
bag, color = data
mix = self.hypotheses[hypo][bag]
like = mix[color]
return like
Explanation: The M&M problem
M&Ms are small candy-coated chocolates that come in a variety of
colors. Mars, Inc., which makes M&Ms, changes the mixture of
colors from time to time.
In 1995, they introduced blue M&Ms. Before then, the color mix in
a bag of plain M&Ms was 30% Brown, 20% Yellow, 20% Red, 10%
Green, 10% Orange, 10% Tan. Afterward it was 24% Blue, 20%
Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown.
Suppose a friend of mine has two bags of M&Ms, and he tells me
that one is from 1994 and one from 1996. He won't tell me which is
which, but he gives me one M&M from each bag. One is yellow and
one is green. What is the probability that the yellow one came
from the 1994 bag?
Here's a solution:
End of explanation
suite = M_and_M('AB')
suite.Update(('bag1', 'yellow'))
suite.Update(('bag2', 'green'))
suite.Print()
Explanation: And here's an update:
End of explanation
suite.Update(('bag1', 'blue'))
suite.Print()
Explanation: Exercise: Suppose you draw another M&M from bag1 and it's blue. What can you conclude? Run the update to confirm your intuition.
End of explanation
# Solution
# suite.Update(('bag2', 'blue'))
# throws ValueError: Normalize: total probability is zero.
Explanation: Exercise: Now suppose you draw an M&M from bag2 and it's blue. What does that mean? Run the update to see what happens.
End of explanation
# Solution
# Here's a Pmf with the prior probability that Elvis
# was an identical twin (taking the fact that he was a
# twin as background information)
pmf = Pmf(dict(fraternal=0.92, identical=0.08))
# Solution
# And here's the update. The data is that the other twin
# was also male, which has likelihood 1 if they were identical
# and only 0.5 if they were fraternal.
pmf['fraternal'] *= 0.5
pmf['identical'] *= 1
pmf.Normalize()
pmf.Print()
Explanation: Exercises
Exercise: This one is from one of my favorite books, David MacKay's "Information Theory, Inference, and Learning Algorithms":
Elvis Presley had a twin brother who died at birth. What is the probability that Elvis was an identical twin?"
To answer this one, you need some background information: According to the Wikipedia article on twins: ``Twins are estimated to be approximately 1.9% of the world population, with monozygotic twins making up 0.2% of the total---and 8% of all twins.''
End of explanation
from sympy import symbols
p = symbols('p')
# Solution
# Here's the solution if Monty opens B.
pmf = Pmf('ABC')
pmf['A'] *= p
pmf['B'] *= 0
pmf['C'] *= 1
pmf.Normalize()
pmf['A'].simplify()
# Solution
# When p=0.5, the result is what we saw before
pmf['A'].evalf(subs={p:0.5})
# Solution
# When p=0.0, we know for sure that the prize is behind C
pmf['C'].evalf(subs={p:0.0})
# Solution
# And here's the solution if Monty opens C.
pmf = Pmf('ABC')
pmf['A'] *= 1-p
pmf['B'] *= 1
pmf['C'] *= 0
pmf.Normalize()
pmf['A'].simplify()
Explanation: Exercise: Let's consider a more general version of the Monty Hall problem where Monty is more unpredictable. As before, Monty never opens the door you chose (let's call it A) and never opens the door with the prize. So if you choose the door with the prize, Monty has to decide which door to open. Suppose he opens B with probability p and C with probability 1-p. If you choose A and Monty opens B, what is the probability that the car is behind A, in terms of p? What if Monty opens C?
Hint: you might want to use SymPy to do the algebra for you.
End of explanation
# Solution
# In this case, we can't compute the likelihoods individually;
# we only know the ratio of one to the other. But that's enough.
# Two ways to proceed: we could include a variable in the computation,
# and we would see it drop out.
# Or we can use "unnormalized likelihoods", for want of a better term.
# Here's my solution.
pmf = Pmf(dict(smoker=15, nonsmoker=85))
pmf['smoker'] *= 13
pmf['nonsmoker'] *= 1
pmf.Normalize()
pmf.Print()
Explanation: Exercise: According to the CDC, ``Compared to nonsmokers, men who smoke are about 23 times more likely to develop lung cancer and women who smoke are about 13 times more likely.'' Also, among adults in the U.S. in 2014:
Nearly 19 of every 100 adult men (18.8%)
Nearly 15 of every 100 adult women (14.8%)
If you learn that a woman has been diagnosed with lung cancer, and you know nothing else about her, what is the probability that she is a smoker?
End of explanation
# Solution
# We'll need an object to keep track of the number of cookies in each bowl.
# I use a Hist object, defined in thinkbayes2:
bowl1 = Hist(dict(vanilla=30, chocolate=10))
bowl2 = Hist(dict(vanilla=20, chocolate=20))
bowl1.Print()
# Solution
# Now I'll make a Pmf that contains the two bowls, giving them equal probability.
pmf = Pmf([bowl1, bowl2])
pmf.Print()
# Solution
# Here's a likelihood function that takes `hypo`, which is one of
# the Hist objects that represents a bowl, and `data`, which is either
# 'vanilla' or 'chocolate'.
# `likelihood` computes the likelihood of the data under the hypothesis,
# and as a side effect, it removes one of the cookies from `hypo`
def likelihood(hypo, data):
like = hypo[data] / hypo.Total()
if like:
hypo[data] -= 1
return like
# Solution
# Now for the update. We have to loop through the hypotheses and
# compute the likelihood of the data under each hypothesis.
def update(pmf, data):
for hypo in pmf:
pmf[hypo] *= likelihood(hypo, data)
return pmf.Normalize()
# Solution
# Here's the first update. The posterior probabilities are the
# same as what we got before, but notice that the number of cookies
# in each Hist has been updated.
update(pmf, 'vanilla')
pmf.Print()
# Solution
# So when we update again with a chocolate cookies, we get different
# likelihoods, and different posteriors.
update(pmf, 'chocolate')
pmf.Print()
# Solution
# If we get 10 more chocolate cookies, that eliminates Bowl 1 completely
for i in range(10):
update(pmf, 'chocolate')
print(pmf[bowl1])
Explanation: Exercise In Section 2.3 I said that the solution to the cookie problem generalizes to the case where we draw multiple cookies with replacement.
But in the more likely scenario where we eat the cookies we draw, the likelihood of each draw depends on the previous draws.
Modify the solution in this chapter to handle selection without replacement. Hint: add instance variables to Cookie to represent the hypothetical state of the bowls, and modify Likelihood accordingly. You might want to define a Bowl object.
End of explanation |
5,422 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trajectory Simulation
The simulation is done using an expected differentiation pattern along a timeline t. The differentiation pattern is generated by making randomly angled linear splits (like a tree) in two dimensions. This generates the different classes of samples (e.g. cells) in the two-dimensional splitting pattern.
The linear splits and angles are generated randomly, and thus sometimes lead to overlapping tree branches or other unexpected behaviour in a differentiation process. We selected five seeds, which generate the following patterns
Step1: The labels for the cellstage go from 1 to 64 (as we simulate the early developmental stages in an embryo).
The labels also include the branch number for each stage
Step2: First we run the other dimensionality reduction methods on the simulated data.
Step3: Waddington's landscape
First we plot the probabilistic interpretation of Waddington's landscape as a three dimensional plot, where the z direction of the plot shows the magnification factor. We always want to be in the ravines of the plot, and not cross over hills.
Step4: Then we inspect the landscape a little closer (distances and graph embeddings). This will also reveal the pseudotime and allow comparisons to the simulated times.
Step5: The model tells us that we only need the two dimensions shown above.
The significance of dimensions of the underlying function (defined by the kernel) can be plotted by plotting the ARD parameters. If the sensitivity for a dimension is close to 0 it is not used by the BGPLVM. The higher it is, the more non-linear the fit is for that dimension (actually the more 'wiggly' it gets). | Python Code:
seeds = [8971, 3551, 3279, 5001, 5081]
from topslam.simulation import qpcr_simulation
fig = plt.figure(figsize=(15,3), tight_layout=True)
gs = plt.GridSpec(6, 5)
axit = iter([fig.add_subplot(gs[1:, i]) for i in range(5)])
for seed in seeds:
Xsim, simulate_new, t, c, labels, seed = qpcr_simulation(seed=seed)
ax = next(axit)
# take only stage labels:
labels = np.asarray([lab.split(' ')[0] for lab in labels])
prevlab = None
for lab in labels:
if lab != prevlab:
color = plt.cm.hot(c[lab==labels])
ax.scatter(*Xsim[lab==labels].T, c=color, alpha=.7, lw=.1, label=lab)
prevlab = lab
ax.set_xlabel("SLS{}".format(seed))
ax.set_frame_on(False)
ax.xaxis.set_ticks([])
ax.yaxis.set_ticks([])
leg_hand = ax.get_legend_handles_labels()
ax = fig.add_subplot(gs[0, :])
ax.legend(*leg_hand, ncol=7, mode='expand')
ax.set_frame_on(False)
ax.xaxis.set_ticks([])
ax.yaxis.set_ticks([])
fig.subplots_adjust(wspace=0, hspace=0)
fig.tight_layout()
Explanation: Trajectory Simulation
The simulation is done using an expected differentiation pattern along a timeline t. The differentiation pattern is generated by making randomly angled linear splits (like a tree) in two dimensions. This generates the different classes of samples (e.g. cells) in the two-dimensional splitting pattern.
The linear splits and angles are generated randomly, and thus sometimes lead to overlapping tree branches or other unexpected behaviour in a differentiation process. We selected five seeds, which generate the following patterns:
End of explanation
seed = 5001
y_seed = 0
Xsim, simulate_new, t, c, labels, seed = qpcr_simulation(seed=seed)
np.random.seed(y_seed)
Y = simulate_new()
Explanation: The labels for the cellstage go from 1 to 64 (as we simulate the early developmental stages in an embryo).
The labels also include the branch number for each stage: "<cellstage> <branch number>"
cellstage is the differentiation stage and the branch number is the number of the branch, starting from 1 and going up to the number of splits in this branch. If there is no split at this stage, it is just 1.
Comparison
We will use SLS5001 to compare to the other methods in order to show how Manifold works.
We also set the seeds for the simulation (y_seed) in order to get consistent results.
End of explanation
from topslam.optimization import run_methods, methods
X_init, dims = run_methods(Y, methods)
m = GPy.models.BayesianGPLVM(Y, 10, X=X_init, num_inducing=10)
m.likelihood.fix(.1)
m.kern.lengthscale.fix()
m.optimize(max_iters=500, messages=True, clear_after_finish=True)
m.likelihood.unfix()
m.kern.unfix()
m.optimize(max_iters=5e3, messages=True, clear_after_finish=False)
fig, axes = plt.subplots(2,4,figsize=(10,6))
axit = axes.flat
cols = plt.cm.hot(c)
ax = next(axit)
ax.scatter(*Xsim.T, c=cols, cmap='hot', lw=.1)
ax.set_title('Simulated')
ax.set_xticks([])
ax.set_yticks([])
ax = next(axit)
msi = m.get_most_significant_input_dimensions()[:2]
#ax.scatter(*m.X.mean.values[:,msi].T, c=t, cmap='hot')
#m.plot_inducing(ax=ax, color='w')
m.plot_magnification(resolution=20, scatter_kwargs=dict(color=cols, cmap='hot', s=20), marker='o', ax=ax)
ax.set_title('BGPLVM')
ax.set_xlabel('')
ax.set_ylabel('')
ax.set_xticks([])
ax.set_yticks([])
for name in methods:
ax = next(axit)
ax.scatter(*X_init[:,dims[name]].T, c=cols, cmap='hot', lw=.1)
ax.set_title(name)
ax.set_xticks([])
ax.set_yticks([])
plt.tight_layout()
print(seed)
ax = next(axit)
ax.set_visible(False)
#plt.savefig('../diagrams/simulation/{}_comparison.pdf'.format(seed), transparent=True, bbox_inches='tight')
Explanation: First we run the other dimensionality reduction methods on the simulated data.
End of explanation
from manifold import waddington_landscape, plot_waddington_landscape
res = 120
Xgrid, wadXgrid, X, wadX = waddington_landscape(m, resolution=res)
ax = plot_waddington_landscape(Xgrid, wadXgrid, X, wadX, np.unique(labels), labels, resolution=res)
ax.view_init(elev=56, azim=-75)
Explanation: Waddington's landscape
First we plot the probabilistic interpretation of Waddington's landscape as a three dimensional plot, where the z direction of the plot shows the magnification factor. We always want to be in the ravines of the plot, and not cross over hills.
End of explanation
from manifold import ManifoldCorrectionTree, ManifoldCorrectionKNN
import networkx as nx
msi = m.get_most_significant_input_dimensions()[:2]
X = m.X.mean[:,msi]
pos = dict([(i, x) for i, x in zip(range(X.shape[0]), X)])
mc = ManifoldCorrectionTree(m)
start = 6
pt = mc.distances_along_graph
pt_graph = mc.get_time_graph(start)
G = nx.Graph(pt_graph)
fig, ax = plt.subplots(figsize=(4,4))
m.plot_magnification(ax=ax, plot_scatter=False)
prevlab = None
for lab in labels:
if lab != prevlab:
color = plt.cm.hot(c[lab==labels])
ax.scatter(*X[lab==labels].T, c=color, alpha=.9, lw=.1, label=lab)
prevlab = lab
ecols = [e[2]['weight'] for e in G.edges(data=True)]
cmap = sns.cubehelix_palette(as_cmap=True, reverse=True, start=0, rot=0, dark=.2, light=.8, )
edges = nx.draw_networkx_edges(G, pos=pos, ax=ax, edge_color=ecols, edge_cmap=cmap, lw=2)
cbar = fig.colorbar(edges, ax=ax)
#cbar.set_ticks([1,13/2.,12])
#ax.set_xlim(-3,2)
#ax.set_ylim(-3,2.2)
ax.set_xlabel('')
ax.set_ylabel('')
ax.set_xticks([])
ax.set_yticks([])
ax.set_frame_on(False)
ax.scatter(*X[start].T, edgecolor='red', lw=1.5, facecolor='none', s=50, label='start')
ax.legend(bbox_to_anchor=(0., 1.02, 1.2, .102), loc=3,
ncol=4, mode="expand", borderaxespad=0.)
fig.tight_layout(rect=(0,0,1,.9))
#fig.savefig('../diagrams/simulation/BGPLVMtree_{}_{}.pdf'.format(seed, y_seed), transparent=True, bbox_inches='tight')
ax = sns.jointplot(pt[start], t[:,0], kind="reg", size=4)
ax.ax_joint.set_xlabel('BGPLVM Extracted Time')
ax.ax_joint.set_ylabel('Simulated Time')
#ax.ax_joint.figure.savefig('../diagrams/simulation/BGPLVM_time_scatter_{}_{}.pdf'.format(seed, y_seed), transparent=True, bbox_inches='tight')
#fig, ax = plt.subplots(figsize=(4,4))
#msi = m.get_most_significant_input_dimensions()[:2]
#ax.scatter(*m.X.mean.values[:,msi].T, c=t, cmap='hot')
#m.plot_inducing(ax=ax, color='w')
#m.plot_magnification(resolution=20, scatter_kwargs=dict(color=cols, cmap='hot', s=20), marker='o', ax=ax)
#ax.set_title('BGPLVM')
#ax.set_xlabel('')
#ax.set_ylabel('')
#ax.set_xticks([])
#ax.set_yticks([])
#ax.figure.savefig('../diagrams/simulation/BGPLVM_magnificaton_{}_{}.pdf'.format(seed, y_seed), transparent=True, bbox_inches='tight')
print("MST spanning through the data")
Explanation: Then we inspect the landscape more closely (distances and graph embeddings). This will also reveal the pseudotime and allow a comparison to the simulated times.
End of explanation
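The joint plot above gives a visual impression of the agreement; a quick numeric check of ours, using the pt, t and start variables from the cells above:
from scipy.stats import pearsonr, spearmanr
# Correlate the pseudotime extracted from the chosen start cell with the simulated time.
rho, p_rho = spearmanr(pt[start], t[:, 0])
r, p_r = pearsonr(pt[start], t[:, 0])
print("Spearman rho = {:.3f} (p = {:.2g})".format(rho, p_rho))
print("Pearson r = {:.3f} (p = {:.2g})".format(r, p_r))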
fig, ax = plt.subplots(figsize=(4,4))
m.kern.plot_ARD(ax=ax)
#fig.savefig('../diagrams/simulation/BGPLVM_ARD_{}_{}.pdf'.format(seed, y_seed), transparent=True, bbox_inches='tight')
msi = m.get_most_significant_input_dimensions()[:2]
X = m.X.mean[:,msi]
pos = dict([(i, x) for i, x in zip(range(X.shape[0]), X)])
mc = ManifoldCorrectionKNN(m, 4)
start = 6
pt = mc.distances_along_graph
pt_graph = mc.get_time_graph(start)
G = nx.Graph(pt_graph)
fig, ax = plt.subplots(figsize=(4,4))
m.plot_magnification(ax=ax, plot_scatter=False)
prevlab = None
for lab in labels:
if lab != prevlab:
color = plt.cm.hot(c[lab==labels])
ax.scatter(*X[lab==labels].T, c=color, alpha=.9, lw=.1, label=lab)
prevlab = lab
ecols = [e[2]['weight'] for e in G.edges(data=True)]
cmap = sns.cubehelix_palette(as_cmap=True, reverse=True, start=0, rot=0, dark=.2, light=.8, )
edges = nx.draw_networkx_edges(G, pos=pos, ax=ax, edge_color=ecols, edge_cmap=cmap, lw=1.5)
cbar = fig.colorbar(edges, ax=ax)
#cbar.set_ticks([1,13/2.,12])
#ax.set_xlim(-3,2)
#ax.set_ylim(-3,2.2)
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel('')
ax.set_ylabel('')
ax.set_frame_on(False)
ax.scatter(*X[start].T, edgecolor='red', lw=1.5, facecolor='none', s=50, label='start')
ax.legend(bbox_to_anchor=(0., 1.02, 1.2, .102), loc=3,
ncol=4, mode="expand", borderaxespad=0.)
fig.tight_layout(rect=(0,0,1,.9))
#fig.savefig('../diagrams/simulation/BGPLVMknn_{}_{}.pdf'.format(seed, y_seed), transparent=True, bbox_inches='tight')
ax = sns.jointplot(pt[start], t[:,0], kind="reg", size=4)
ax.ax_joint.set_xlabel('BGPLVM Extracted Time')
ax.ax_joint.set_ylabel('Simulated Time')
#ax.ax_joint.figure.savefig('../diagrams/simulation/BGPLVM_knn_time_scatter_{}_{}.pdf'.format(seed, y_seed), transparent=True, bbox_inches='tight')
print ("3 Nearest Neighbor embedding and extracted time along it for the same Manifold embedding")
i = 0
for method in methods:
print(method, '{}:{}'.format(dims[method].start, dims[method].stop))
i+=2
from scipy.spatial.distance import squareform, pdist
%run graph_extraction.py
# Monocle:
X = X_init[:,dims['ICA']].copy()
pos = dict([(i, x) for i, x in zip(range(X.shape[0]), X)])
start = 6
pt, mst = extract_manifold_distances_mst(squareform(pdist(X)))
pt_graph = extract_distance_graph(pt, mst, start)
G = nx.Graph(pt_graph)
fig, ax = plt.subplots(figsize=(4,4))
prevlab = None
for lab in labels:
if lab != prevlab:
color = plt.cm.hot(c[lab==labels])
ax.scatter(*X[lab==labels].T, c=color, alpha=.9, lw=.1, label=lab)
prevlab = lab
ecols = [e[2]['weight'] for e in G.edges(data=True)]
cmap = sns.cubehelix_palette(as_cmap=True, reverse=True, start=0, rot=0, dark=.2, light=.8, )
edges = nx.draw_networkx_edges(G, pos=pos, ax=ax, edge_color=ecols, edge_cmap=cmap, lw=1.5)
cbar = fig.colorbar(edges, ax=ax)
#cbar.set_ticks([1,13/2.,12])
#ax.set_xlim(-3,2)
#ax.set_ylim(-3,2.2)
ax.set_xticks([])
ax.set_yticks([])
ax.set_frame_on(False)
ax.scatter(*X[start].T, edgecolor='red', lw=1.5, facecolor='none', s=50, label='start')
ax.legend(bbox_to_anchor=(0., 1.02, 1.2, .102), loc=3,
ncol=4, mode="expand", borderaxespad=0.)
fig.tight_layout(rect=(0,0,1,.9))
#fig.savefig('../diagrams/simulation/ICA_{}_{}.pdf'.format(seed, y_seed), transparent=True, bbox_inches='tight')
ax = sns.jointplot(pt[start], t[:,0], kind="reg", size=4)
ax.ax_joint.set_xlabel('Monocle Extracted Time')
ax.ax_joint.set_ylabel('Simulated Time')
#ax.ax_joint.figure.savefig('../diagrams/simulation/ICA_time_scatter_{}_{}.pdf'.format(seed, y_seed), transparent=True, bbox_inches='tight')
print("Monocle (MST on ICA embedding)")
from scipy.sparse import lil_matrix, find
# Wanderlust (without smoothing)
# take out tsne:
X = X_init[:,dims['t-SNE']].copy()
pos = dict([(i, x) for i, x in zip(range(X.shape[0]), X)])
k = 4
start = 6
_, mst = extract_manifold_distances_mst(squareform(pdist(X)))
pt, knn = next(extract_manifold_distances_knn(squareform(pdist(X)), knn=[k], add_mst=mst))
pt_graph = extract_distance_graph(pt, knn, start)
G = nx.Graph(pt_graph)
fig, ax = plt.subplots(figsize=(4,4))
prevlab = None
for lab in labels:
if lab != prevlab:
color = plt.cm.hot(c[lab==labels])
ax.scatter(*X[lab==labels].T, c=color, alpha=.9, lw=.1, label=lab)
prevlab = lab
ecols = [e[2]['weight'] for e in G.edges(data=True)]
cmap = sns.cubehelix_palette(as_cmap=True, reverse=True, start=0, rot=0, dark=.2, light=.8, )
edges = nx.draw_networkx_edges(G, pos=pos, ax=ax, edge_color=ecols, edge_cmap=cmap, lw=1.5)
cbar = fig.colorbar(edges, ax=ax)
#cbar.set_ticks([1,13/2.,12])
#ax.set_xlim(-3,2)
#ax.set_ylim(-3,2.2)
ax.set_xticks([])
ax.set_yticks([])
ax.set_frame_on(False)
ax.scatter(*X[start].T, edgecolor='red', lw=1.5, facecolor='none', s=50, label='start')
ax.legend(bbox_to_anchor=(0., 1.02, 1.2, .102), loc=3,
ncol=4, mode="expand", borderaxespad=0.)
fig.tight_layout(rect=(0,0,1,.9))
#fig.savefig('../diagrams/simulation/TSNE_knn_{}_{}.pdf'.format(seed, y_seed), transparent=True, bbox_inches='tight')
ax = sns.jointplot(pt[start], t[:,0], kind="reg", size=4)
ax.ax_joint.set_xlabel('t-SNE Extracted Time')
ax.ax_joint.set_ylabel('Simulated Time')
print("Wanderlust (KNN on t-SNE)")
Explanation: The model tells us that we only need the two dimensions shown above.
The significance of dimensions of the underlying function (defined by the kernel) can be plotted by plotting the ARD parameters. If the sensitivity for a dimension is close to 0 it is not used by the BGPLVM. The higher it is, the more non-linear the fit is for that dimension (actually the more 'wiggly' it gets).
End of explanation |
5,423 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Querying portia - Data fetching with Python
Making HTTP requests using Python - Checking credentials
Unsuccessful request
Step1: Successful request
Step2: Obtaining data from a specific time frame
Now that we have learned how to authenticate with the service, let's see how to get the data
Step3: Obtaining the latest data
For the next example, we are requesting only the last data sent by the devices
Last dimension
Step4: Last three dimensions | Python Code:
# Library for HTTP requests
import requests
# Portia service URL for token authorization checking
url = "http://io.portia.supe.solutions/api/v1/accesstoken/check"
# Makes the request
response = requests.get(url)
# Shows response
if response.status_code == 200:
print("Success accessing Portia Service - Status Code: {0}\n{1}".format(response.status_code, response.text))
else:
print("Couldn't access Portia service - Status Code: {0}".format(response.status_code))
Explanation: Querying portia - Data fetching with Python
Making HTTP requests using Python - Checking credentials
Unsuccessful request
End of explanation
# Library for HTTP requests
import requests
# Portia service URL for token authorization checking
url = "http://io.portia.supe.solutions/api/v1/accesstoken/check"
# Setting the header with a token for successful authorization
header = {"Authorization": "Bearer bdb6e780b43011e7af0b67cba486057b"}
# Makes the request
response = requests.get(url, headers=header)
# Shows response
if response.status_code == 200:
print("Success accessing Portia Service - Status Code: {0}\n{1}".format(response.status_code, response.text))
else:
print("Couldn't access Portia service - Status Code: {0}".format(response.status_code))
Explanation: Successful request
End of explanation
import requests # Library for HTTP requests
import time as epoch # Library for timing functions
import json # Library for JSON usage
# Example for getting the last 5 minutes of data
fiveMinutes = 1000 * 60 * 5
toTimestamp = int(epoch.time()) * 1000 # The time lib only gives us the UTC time as seconds since January 1, 1970, so, we multiply by 1000 to get the milliseconds
fromTimestamp = toTimestamp - fiveMinutes
# Portia service URL for specific time frame
url = "http://io.portia.supe.solutions/api/v1/device/HytTDwUp-j8yrsh8e/port/2/sensor/1"
# Adding the calculated timestamps as GET parameters
url += "?from_timestamp={0}&?to_timestamp={1}".format(fromTimestamp, toTimestamp) # If no parameters, the service default response is for the last 24 hours
# Setting the header with a token for successful authorization
header = {"Authorization": "Bearer bdb6e780b43011e7af0b67cba486057b"}
# Makes the request
response = requests.get(url, headers=header)
# Shows response
if response.status_code == 200:
# Parses dimensions
dimensions = json.loads(response.text)
print("Success! For each received dimension:")
for dimension in dimensions:
print("Accessing dimension package:")
print("\tDimension Code: {0}".format(dimension["dimension_code"]))
print("\tUnity Code: {0}".format(dimension["dimension_unity_code"]))
print("\tThing Code: {0}".format(dimension["dimension_thing_code"]))
print("\tDimension Value: {0}".format(dimension["dimension_value"]))
print("\tServer Timestamp: {0}\n".format(dimension["server_timestamp"]))
else:
print("Couldn't access Portia service - Status Code: {0}".format(response.status_code))
Explanation: Obtaining data from a specific time frame
Now that we have learned how to authenticate with the service, let's see how to get the data
End of explanation
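The server_timestamp values returned above appear to be millisecond Unix timestamps (the request window was built the same way), so they can be converted back to readable UTC datetimes; a minimal sketch with a hypothetical value:
from datetime import datetime, timezone
example_ms = 1508427600000  # hypothetical server_timestamp taken from a response
print(datetime.fromtimestamp(example_ms / 1000.0, tz=timezone.utc))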
import requests # Library for HTTP requests
import json # Library for JSON usage
# Portia service URL for getting the latest data
url = "http://io.portia.supe.solutions/api/v1/device/HytTDwUp-j8yrsh8e/port/2/sensor/1/last"
# Setting the header with a token for successful authorization
header = {"Authorization": "Bearer bdb6e780b43011e7af0b67cba486057b"}
# Makes the request
response = requests.get(url, headers=header)
# Shows response
if response.status_code == 200:
# Parses dimension
dimension = json.loads(response.text)[0]
print("Success! Accessing dimension package:")
print("\tDimension Code: {0}".format(dimension["dimension_code"]))
print("\tUnity Code: {0}".format(dimension["dimension_unity_code"]))
print("\tThing Code: {0}".format(dimension["dimension_thing_code"]))
print("\tDimension Value: {0}".format(dimension["dimension_value"]))
print("\tServer Timestamp: {0}\n".format(dimension["server_timestamp"]))
else:
print("Couldn't access Portia service - Status Code: {0}".format(response.status_code))
Explanation: Obtaining the latest data
For the next example, we are requesting only the last data sent by the devices
Last dimension
End of explanation
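A side note of ours: instead of concatenating the query string by hand, the params argument of requests builds it for us, assuming the service reads standard query parameters:
import requests
url = "http://io.portia.supe.solutions/api/v1/device/HytTDwUp-j8yrsh8e/port/2/sensor/1/last"
header = {"Authorization": "Bearer bdb6e780b43011e7af0b67cba486057b"}
response = requests.get(url, headers=header, params={"limit": 3})  # encoded as ?limit=3
print(response.status_code)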
import requests # Library for HTTP requests
import json # Library for JSON usage
# Portia service URL for getting the latest data
url = "http://io.portia.supe.solutions/api/v1/device/HytTDwUp-j8yrsh8e/port/2/sensor/1/last"
# Adding GET parameter for specifying that we want the last 3 dimension packages
url += "?limit={0}".format(3)
# Setting the header with a token for successful authorization
header = {"Authorization": "Bearer bdb6e780b43011e7af0b67cba486057b"}
# Makes the request
response = requests.get(url, headers=header)
# Shows response
if response.status_code == 200:
# Parses dimensions
dimensions = json.loads(response.text)
print("Success! For each received dimension:")
for dimension in dimensions:
print("Accessing dimension package:")
print("\tDimension Code: {0}".format(dimension["dimension_code"]))
print("\tUnity Code: {0}".format(dimension["dimension_unity_code"]))
print("\tThing Code: {0}".format(dimension["dimension_thing_code"]))
print("\tDimension Value: {0}".format(dimension["dimension_value"]))
print("\tServer Timestamp: {0}\n".format(dimension["server_timestamp"]))
else:
print("Couldn't access Portia service - Status Code: {0}".format(response.status_code))
Explanation: Last three dimensions
End of explanation |
5,424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<a href="https
Step1: Then, loading the dataset is a one-liner
Step2: This function returns a dictionary we call iris, which contains a bunch of different fields
Step3: Here, all the data points are contained in 'data'. There are 150 data points, each of which
have four feature values
Step4: These four features correspond to the sepal and petal dimensions mentioned earlier
Step5: For every data point, we have a class label stored in target
Step6: We can also inspect the class labels, and find that there is a total of three classes
Step7: Making it a binary classification problem
For the sake of simplicity, we want to focus on a binary classification problem for now,
where we only have two classes. The easiest way to do this is to discard all data points
belonging to a certain class, such as class label 2, by selecting all the rows that do not belong
to class 2
Step8: Inspecting the data
Before you get started with setting up a model, it is always a good idea to have a look at the data. We did this above for the town map example, so let's continue our streak. Using Matplotlib, we create a scatter plot where the color of each data point corresponds to the class label.
To make plotting easier, we limit ourselves to the first two features
(iris.feature_names[0] being the sepal length and iris.feature_names[1] being
the sepal width). We can see a nice separation of classes in the following figure
Step9: Splitting the data into training and test sets
We learned in the previous chapter that it is essential to keep training and test data
separate. We can easily split the data using one of scikit-learn's many helper functions
Step10: Here we want to split the data into 90 percent training data and 10 percent test data, which
we specify with test_size=0.1. By inspecting the return arguments, we note that we
ended up with exactly 90 training data points and 10 test data points
Step11: Training the classifier
Creating a logistic regression classifier involves pretty much the same steps as setting up $k$-NN
Step12: We then have to specify the desired training method. Here, we can choose
cv2.ml.LogisticRegression_BATCH or cv2.ml.LogisticRegression_MINI_BATCH.
For now, all we need to know is that we want to update the model after every data point,
which can be achieved with the following code
Step13: We also want to specify the number of iterations the algorithm should run before it
terminates
Step14: We can then call the train method of the object (in the exact same way as we did earlier),
which will return True upon success
Step15: Retrieve the learned weights
Step16: Testing the classifier
Let's see for ourselves by calculating the accuracy score on the training set
Step17: Perfect score! However, this only means that the model was able to perfectly memorize the
training dataset. This does not mean that the model would be able to classify a new, unseen
data point. For this, we need to check the test dataset | Python Code:
import numpy as np
import cv2
from sklearn import datasets
from sklearn import model_selection
from sklearn import metrics
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Applying Lasso and Ridge Regression | Contents | Representing Data and Engineering Features >
Classifying Iris Species Using Logistic Regression
Another famous dataset in the world of machine learning is called the Iris dataset.
The Iris
dataset contains measurements of 150 iris flowers from three different species: setosa,
versicolor, and virginica. These measurements include the length and width of the petals,
and the length and width of the sepals, all measured in centimeters.
Our goal is to build a machine learning model that can learn the measurements of these iris
flowers, whose species are known, so that we can predict the species for a new iris flower.
Understanding logistic regression
Despite its name, logistic regression can actually be used as a model for classification. It
uses a logistic function (or sigmoid) to convert any real-valued input $x$ into a predicted
output value $ŷ$ that takes values between 0 and 1. Rounding $ŷ$ to the nearest integer effectively classifies the input as belonging either to class
0 or 1.
Of course, most often, our problems have more than one input or feature value, x. For
example, the Iris dataset provides a total of four features.
To find out how logistic regression works in these cases, please refer to the book.
Logistic Regression in OpenCV
Loading the dataset
The Iris dataset is included with scikit-learn. We first load all the necessary modules, as we
did in our earlier examples:
End of explanation
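As a quick numeric illustration of the sigmoid described above (our own addition, not from the book), the logistic function maps any real value into the interval (0, 1):
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))
x = np.linspace(-6, 6, 5)
print(np.round(sigmoid(x), 3))  # rises from near 0, through 0.5 at x = 0, towards 1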
iris = datasets.load_iris()
Explanation: Then, loading the dataset is a one-liner:
End of explanation
dir(iris)
Explanation: This function returns a dictionary we call iris, which contains a bunch of different fields:
- DESCR: Get a description of the data
- data: The actual data, <num_samples x num_features>
- feature_names: The names of the features
- target: The class labels, <num_samples x 1>
- target_names: The names of the class labels
End of explanation
iris.data.shape
Explanation: Here, all the data points are contained in 'data'. There are 150 data points, each of which
have four feature values:
End of explanation
iris.feature_names
Explanation: These four features correspond to the sepal and petal dimensions mentioned earlier:
End of explanation
iris.target.shape
Explanation: For every data point, we have a class label stored in target:
End of explanation
np.unique(iris.target)
Explanation: We can also inspect the class labels, and find that there is a total of three classes:
End of explanation
idx = iris.target != 2
data = iris.data[idx].astype(np.float32)
target = iris.target[idx].astype(np.float32)
Explanation: Making it a binary classification problem
For the sake of simplicity, we want to focus on a binary classification problem for now,
where we only have two classes. The easiest way to do this is to discard all data points
belonging to a certain class, such as class label 2, by selecting all the rows that do not belong
to class 2:
End of explanation
plt.figure(figsize=(10, 6))
plt.scatter(data[:, 0], data[:, 1], c=target, cmap=plt.cm.Paired, s=100)
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1]);
Explanation: Inspecting the data
Before you get started with setting up a model, it is always a good idea to have a look at the data. We did this above for the town map example, so let's continue our streak. Using Matplotlib, we create a scatter plot where the color of each data point corresponds to the class label.
To make plotting easier, we limit ourselves to the first two features
(iris.feature_names[0] being the sepal length and iris.feature_names[1] being
the sepal width). We can see a nice separation of classes in the following figure:
End of explanation
X_train, X_test, y_train, y_test = model_selection.train_test_split(
data, target, test_size=0.1, random_state=42
)
Explanation: Splitting the data into training and test sets
We learned in the previous chapter that it is essential to keep training and test data
separate. We can easily split the data using one of scikit-learn's many helper functions:
End of explanation
X_train.shape, y_train.shape
X_test.shape, y_test.shape
Explanation: Here we want to split the data into 90 percent training data and 10 percent test data, which
we specify with test_size=0.1. By inspecting the return arguments, we note that we
ended up with exactly 90 training data points and 10 test data points:
End of explanation
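A one-line sanity check of ours confirms the 90/10 proportions quoted above:
print(len(X_train), len(X_test), len(X_test) / len(data))  # 90 10 0.1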
lr = cv2.ml.LogisticRegression_create()
Explanation: Training the classifier
Creating a logistic regression classifier involves pretty much the same steps as setting up $k$-NN:
End of explanation
lr.setTrainMethod(cv2.ml.LogisticRegression_MINI_BATCH)
lr.setMiniBatchSize(1)
Explanation: We then have to specify the desired training method. Here, we can choose
cv2.ml.LogisticRegression_BATCH or cv2.ml.LogisticRegression_MINI_BATCH.
For now, all we need to know is that we want to update the model after every data point,
which can be achieved with the following code:
End of explanation
lr.setIterations(100)
Explanation: We also want to specify the number of iterations the algorithm should run before it
terminates:
End of explanation
lr.train(X_train, cv2.ml.ROW_SAMPLE, y_train);
Explanation: We can then call the train method of the object (in the exact same way as we did earlier),
which will return True upon success:
End of explanation
lr.get_learnt_thetas()
Explanation: Retrieve the learned weights:
End of explanation
ret, y_pred = lr.predict(X_train)
metrics.accuracy_score(y_train, y_pred)
Explanation: Testing the classifier
Let's see for ourselves by calculating the accuracy score on the training set:
End of explanation
ret, y_pred = lr.predict(X_test)
metrics.accuracy_score(y_test, y_pred)
Explanation: Perfect score! However, this only means that the model was able to perfectly memorize the
training dataset. This does not mean that the model would be able to classify a new, unseen
data point. For this, we need to check the test dataset:
End of explanation |
5,425 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Instrumental-Variables Estimation to Recover the Treatment Effect in Quasi-Experiments
This section is taken from Chapter 11 of Methods Matter by Richard Murnane and John Willett.
In Chapter 10, Murnane and Willett introduce instrumental variables estimation(IVE) as a method for carving out causal claims from observational data (chapter summary) (example code).
In Chapter 11, the authors explain how IVE can be used to "recover" the treatment effect in cases where random assignment is applied to an offer to participate, where not everyone takes the offer, and where other people participate through some other means. They use the example of research on the effectiveness of a financial aid offer on the likelihood that a student finishes 8th grade, using a subset of data from Bogotá from a study on "Vouchers for Private Schooling in Colombia" (2002) by Joshua Angrist, Eric Bettinger, Erik Bloom, Elizabeth King, and Michael Kremer (full data here, subset data here).
The dataset includes the following variables
Step1: Acquire Dataset from Methods Matter
Step2: Summary Statistics
Step3: Two-stage Least Squares Logistic Regression
If you're interested to learn more on the rationale and process for doing this kind of analysis, Murnane and Willett introduce instrumental variables estimation(IVE) as a method for carving out causal claims from observational data (chapter summary) (example code). | Python Code:
# THINGS TO IMPORT
# This is a baseline set of libraries I import by default if I'm rushed for time.
import codecs # load UTF-8 Content
import json # load JSON files
import pandas as pd # Pandas handles dataframes
import numpy as np # Numpy handles lots of basic maths operations
import matplotlib.pyplot as plt # Matplotlib for plotting
import seaborn as sns # Seaborn for beautiful plots
from dateutil import * # I prefer dateutil for parsing dates
import math # transformations
import statsmodels.formula.api as smf # for doing statistical regression
import statsmodels.api as sm # access to the wider statsmodels library, including R datasets
from collections import Counter # Counter is useful for grouping and counting
import scipy
Explanation: Using Instrumental-Variables Estimation to Recover the Treatment Effect in Quasi-Experiments
This section is taken from Chapter 11 of Methods Matter by Richard Murnane and John Willett.
In Chapter 10, Murnane and Willett introduce instrumental variables estimation(IVE) as a method for carving out causal claims from observational data (chapter summary) (example code).
In Chapter 11, the authors explain how IVE can be used to "recover" the treatment effect in cases where random assignment is applied to an offer to participate, where not everyone takes the offer, and where other people participate through some other means. They use the example of research on the effectiveness of a financial aid offer on the likelihood that a student finishes 8th grade, using a subset of data from Bogotá from a study on "Vouchers for Private Schooling in Colombia" (2002) by Joshua Angrist, Eric Bettinger, Erik Bloom, Elizabeth King, and Michael Kremer (full data here, subset data here).
The dataset includes the following variables:
* finish8th: did the student finish 8th grade or not (outcome variable)
* won_lottry: won the lottery to receive offer of financial aid
* use_fin_aid: did the student use financial aid of any kind (not exclusive to the lottery) or not
* base_age: student age
* male: is the student male or not
End of explanation
import os.path
from urllib.request import urlopen
if not os.path.isfile("colombia_voucher.dta"):
    response = urlopen("http://www.ats.ucla.edu/stat/stata/examples/methods_matter/chapter11/colombia_voucher.dta")
    if response.getcode() == 200:
        # The Stata .dta file is binary, so write it in binary mode
        with open("colombia_voucher.dta", "wb") as f:
            f.write(response.read())
voucher_df = pd.read_stata("colombia_voucher.dta")
Explanation: Acquire Dataset from Methods Matter
End of explanation
print "=============================================================================="
print " OVERALL SUMMARY"
print "=============================================================================="
print voucher_df.describe()
for i in range(2):
print "=============================================================================="
print " LOTTERY = %(i)d" % {"i":i}
print "=============================================================================="
print voucher_df[voucher_df['won_lottry']==i].describe()
Explanation: Summary Statistics
End of explanation
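Before fitting the two-stage model, a quick look of ours at take-up by lottery status shows whether the instrument actually shifts financial-aid use (assuming use_fin_aid is coded 0/1):
print(voucher_df.groupby("won_lottry")["use_fin_aid"].mean())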
print "=============================================================================="
print " FIRST STAGE"
print "=============================================================================="
result = smf.glm(formula = "use_fin_aid ~ won_lottry + male + base_age",
data=voucher_df,
family=sm.families.Binomial()).fit()
voucher_df['use_fin_aid_fitted']= result.predict()
print result.summary()
print
print
print "=============================================================================="
print " SECOND STAGE"
print "=============================================================================="#
result = smf.glm(formula = " finish8th ~ use_fin_aid_fitted + male + base_age",
data=voucher_df,
family=sm.families.Binomial()).fit()
print result.summary()
Explanation: Two-stage Least Squares Logistic Regression
If you're interested to learn more on the rationale and process for doing this kind of analysis, Murnane and Willett introduce instrumental variables estimation(IVE) as a method for carving out causal claims from observational data (chapter summary) (example code).
End of explanation |
5,426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
There are three main plot kinds; in addition to histograms and kernel density estimates (KDEs), you can also draw empirical cumulative distribution functions (ECDFs)
Step1: While in histogram mode, it is also possible to add a KDE curve
Step2: To draw a bivariate plot, assign both x and y
Step3: Currently, bivariate plots are available only for histograms and KDEs
Step4: For each kind of plot, you can also show individual observations with a marginal "rug"
Step5: Additional keyword arguments are passed to the appropriate underlying plotting function, allowing for further customization | Python Code:
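The calls below assume a penguins dataframe has already been created; a minimal sketch of ours that loads seaborn's built-in example dataset:
import seaborn as sns
penguins = sns.load_dataset("penguins")
print(penguins.columns.tolist())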
sns.displot(data=penguins, x="flipper_length_mm", kind="ecdf")
Explanation: There are three main plot kinds; in addition to histograms and kernel density estimates (KDEs), you can also draw empirical cumulative distribution functions (ECDFs):
End of explanation
sns.displot(data=penguins, x="flipper_length_mm", kde=True)
Explanation: While in histogram mode, it is also possible to add a KDE curve:
End of explanation
sns.displot(data=penguins, x="flipper_length_mm", y="bill_length_mm")
Explanation: To draw a bivariate plot, assign both x and y:
End of explanation
sns.displot(data=penguins, x="flipper_length_mm", y="bill_length_mm", kind="kde")
Explanation: Currently, bivariate plots are available only for histograms and KDEs:
End of explanation
g = sns.displot(data=penguins, x="flipper_length_mm", y="bill_length_mm", kind="kde", rug=True)
sns.displot(data=penguins, x="flipper_length_mm", hue="species", kind="kde")
Explanation: For each kind of plot, you can also show individual observations with a marginal "rug":
End of explanation
sns.displot(data=penguins, x="flipper_length_mm", hue="species", multiple="stack")
sns.displot(data=penguins, x="flipper_length_mm", hue="species", col="sex", kind="kde")
sns.displot(
data=penguins, y="flipper_length_mm", hue="sex", col="species",
kind="ecdf", height=4, aspect=.7,
)
g = sns.displot(
data=penguins, y="flipper_length_mm", hue="sex", col="species",
kind="kde", height=4, aspect=.7,
)
g.set_axis_labels("Density (a.u.)", "Flipper length (mm)")
g.set_titles("{col_name} penguins")
Explanation: Additional keyword arguments are passed to the appropriate underlying plotting function, allowing for further customization:
End of explanation |
5,427 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build Adjacency Matrix
Step1: Queries
Step2: Step through every interaction.
If geneids1 not in matrix - insert it as dict.
If geneids2 not in matrix[geneids1] - insert it as []
If probability not in matrix[geneids1][geneids2] - insert it.
Perform the reverse. | Python Code:
import sqlite3
import json
DATABASE = "data.sqlite"
conn = sqlite3.connect(DATABASE)
cursor = conn.cursor()
Explanation: Build Adjacency Matrix
End of explanation
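The loop further down assumes an interactions table with id, geneids1, geneids2 and probability columns; that assumption can be checked up front (our own addition):
print(cursor.execute("PRAGMA table_info(interactions)").fetchall())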
# For getting the maximum row id
QUERY_MAX_ID = "SELECT id FROM interactions ORDER BY id DESC LIMIT 1"
# Get interaction data
QUERY_INTERACTION = "SELECT geneids1, geneids2, probability FROM interactions WHERE id = {}"
max_id = cursor.execute(QUERY_MAX_ID).fetchone()[0]
Explanation: Queries
End of explanation
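As an alternative to issuing one query per row id, the same matrix could be built from a single streamed query; a minimal sketch of the equivalent loop:
# Equivalent single-pass alternative: iterate over every interaction at once.
for geneids1, geneids2, probability in cursor.execute(
        "SELECT geneids1, geneids2, probability FROM interactions"):
    pass  # build the nested dictionary exactly as in the loop below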
matrix = {}
row_id = 0
while row_id <= max_id:
row_id+= 1
row = cursor.execute(QUERY_INTERACTION.format(row_id))
row = row.fetchone()
if row is None:
continue
id1 = row[0]
id2 = row[1]
prob = int(round(row[2],2) * 1000)
# Forward
if id1 not in matrix:
matrix[id1] = {}
if id2 not in matrix[id1]:
matrix[id1][id2] = []
if prob not in matrix[id1][id2]:
matrix[id1][id2].append(prob)
# Backwards
if id2 not in matrix:
matrix[id2] = {}
if id1 not in matrix[id2]:
matrix[id2][id1] = []
if prob not in matrix[id2][id1]:
matrix[id2][id1].append(prob)
with open("matrix.json", "w+") as file:
file.write(json.dumps( matrix ))
Explanation: Step through every interaction.
If geneids1 not in matrix - insert it as dict.
If geneids2 not in matrix[geneids1] - insert it as []
If probability not in matrix[geneids1][geneids2] - insert it.
Perform the reverse.
End of explanation |
5,428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aula 05 - Reading oceanographic data in several formats (netCDF, OPeNDAP, ERDDAP, etc.) and dimensions (AKA beyond tables)
Objectives
Display data in several dimensions (satellites, models, etc.)
Read data from several binary sources (NetCDF, HDF4/5, and online protocols)
Introduce basic concepts of CDM (Common Data Models)
Recalling slices
One way to remember how slicing works in Python is to think of the points between
the indices rather than of the index cell itself.
+---+---+---+---+---+---+
| P | y | t | h | o | n |
+---+---+---+---+---+---+
0 1 2 3 4 5 6
-6 -5 -4 -3 -2 -1
Step1: <img src='./files/2dbase2.png', width="300">
Step2: <img src='./files/2dbase1.png', width="300">
Step3: <img src='./files/3darray.png' width="300">
<img src='./files/3dbase1.png', width="300">
Step4: <img src='./files/3dbase2.png', width="300">
Step5: <img src='./files/3dbase5.png', width="300">
Step6: More than 3 dimensions
<img src='./files/4dbase.png', width="300">
<img src='./files/4dshape.png', width="300">
<img src='./files/5dbase.png', width="300">
<img src='./files/5shape.png', width="300">
A real example
<img src='./files/netcdf-diagram.png', width="300"> | Python Code:
t = 'Python'
t[0:2]
t[::2]
t[::-1]
import numpy as np
arr = np.array([[3, 6, 2, 1, 7],
[4, 1, 3, 2, 8],
[7, 9, 2, 1, 8],
[8, 6, 9, 6, 7],
[9, 1, 9, 2, 6],
[9, 8, 1, 5, 6],
[0, 4, 2, 0, 6],
[0, 3, 1, 4, 7]])
Explanation: Aula 05 - Reading oceanographic data in several formats (netCDF, OPeNDAP, ERDDAP, etc.) and dimensions (AKA beyond tables)
Objectives
Display data in several dimensions (satellites, models, etc.)
Read data from several binary sources (NetCDF, HDF4/5, and online protocols)
Introduce basic concepts of CDM (Common Data Models)
Recalling slices
One way to remember how slicing works in Python is to think of the points between
the indices rather than of the index cell itself.
+---+---+---+---+---+---+
| P | y | t | h | o | n |
+---+---+---+---+---+---+
0 1 2 3 4 5 6
-6 -5 -4 -3 -2 -1
End of explanation
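The same picture works with the negative indices shown in the diagram (a small extra example of ours):
t = 'Python'
print(t[-6:-3], t[-3:])  # 'Pyt' 'hon'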
arr[:, 0]
Explanation: <img src='./files/2dbase2.png', width="300">
End of explanation
arr[2:6, 1:4]
arr = np.array([[[3, 6, 2, 1, 7],
[4, 1, 3, 2, 8],
[7, 9, 2, 1, 8],
[8, 6, 9, 6, 7],
[9, 1, 9, 2, 6],
[9, 8, 1, 5, 6],
[0, 4, 2, 0, 6],
[0, 3, 1, 4, 7]],
[[5, 5, 3, 9, 3],
[8, 3, 5, 1, 1],
[3, 4, 3, 0, 9],
[1, 4, 1, 0, 2],
[7, 1, 2, 0, 1],
[5, 1, 3, 7, 8],
[8, 0, 9, 6, 0],
[7, 7, 4, 4, 4]],
[[1, 0, 8, 9, 1],
[7, 4, 8, 8, 2],
[9, 1, 8, 3, 6],
[5, 6, 2, 0, 1],
[7, 4, 2, 5, 7],
[9, 5, 6, 8, 6],
[7, 4, 4, 7, 1],
[8, 4, 4, 9, 1]]])
arr.shape
Explanation: <img src='./files/2dbase1.png', width="300">
End of explanation
arr[0:2, 0:4, -1]
Explanation: <img src='./files/3darray.png' width="300">
<img src='./files/3dbase1.png', width="300">
End of explanation
arr[:, 0:2, 0:3]
Explanation: <img src='./files/3dbase2.png', width="300">
End of explanation
arr[:, 0, 2]
Explanation: <img src='./files/3dbase5.png', width="300">
End of explanation
from netCDF4 import Dataset
nc = Dataset('./data/mdt_cnes_cls2009_global_v1.1.nc')
nc
u = nc.variables['Grid_0002']
u
v = nc.variables['Grid_0003']
v
u, v = u[:], v[:]
lon = nc.variables['NbLongitudes'][:]
lat = nc.variables['NbLatitudes'][:]
import numpy as np
lon, lat = np.meshgrid(lon, lat)
lon.shape, lat.shape, u.shape, v.shape
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
sub = 5
ax.quiver(lon[::sub, ::sub], lat[::sub, ::sub], u.T[::sub, ::sub], v.T[::sub, ::sub])
from oceans import wrap_lon180
lon = wrap_lon180(lon)
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from cartopy.io import shapereader
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
def make_map(projection=ccrs.PlateCarree()):
fig, ax = plt.subplots(figsize=(9, 13),
subplot_kw=dict(projection=projection))
gl = ax.gridlines(draw_labels=True)
gl.xlabels_top = gl.ylabels_right = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
return fig, ax
mask_x = np.logical_and(lon > -40, lon < -36)
mask_y = np.logical_and(lat > -15, lat < -12)
mask = np.logical_and(mask_x, mask_y)
import cartopy.feature as cfeature
land_10m = cfeature.NaturalEarthFeature('physical', 'land', '10m',
edgecolor='face',
facecolor=cfeature.COLORS['land'])
fig, ax = make_map()
ax.quiver(lon[mask], lat[mask], u.T[mask], v.T[mask])
ax.add_feature(land_10m)
ax.coastlines('10m')
ax.set_extent([-40, -36, -15, -12])
Explanation: More than 3 dimensions
<img src='./files/4dbase.png', width="300">
<img src='./files/4dshape.png', width="300">
<img src='./files/5dbase.png', width="300">
<img src='./files/5shape.png', width="300">
A real example
<img src='./files/netcdf-diagram.png', width="300">
End of explanation |
5,429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing functions in epydemiology
Import epydemiology
(All other packages will be imported or reported missing.)
Step1: Some background details
Step2: FILE
Step3: FILE
Step4: FUNCTION
Step5: Example 2 – Query entered directly in function call
Step6: FILE
Step7: FUNCTION
Step8: phjRegexPreCompile parameter set to True
Step9: FUNCTION
Step10: FUNCTION
Step11: Example 2 – regex
Step12: FUNCTION
Step13: Multiple dataframes of data
Step14: FUNCTION
Step15: Testing phjUpdateLUT() function with dataframe with multiple columns
Step16: FUNCTION
Step17: FILE
Step18: Output a Pandas dataframe
Step19: FUNCTION
Step20: FILE
Step21: FUNCTION
Step22: FUNCTION
Step23: FUNCTION
Step24: FUNCTION
Step25: FILE
Step26: Clean postcodes based on real postcode and identify closest matches
Step27: FUNCTION
Step28: Matched controls
Step29: FUNCTION
Step30: Categorise using quantile breaks and using 1 and 0 as binary outcome
Step31: FUNCTION
Step32: Return dataframe and list of breaks
Step33: FILE
Step34: FUNCTION | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import epydemiology as epy
Explanation: Testing functions in epydemiology
Import epydemiology
(All other packages will be imported or reported missing.)
End of explanation
help(epy)
print(dir(epy))
Explanation: Some background details
End of explanation
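It can also help to record which release the checks below were run against; a minimal sketch of ours that does not assume the attribute exists:
print(getattr(epy, '__version__', 'no __version__ attribute'))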
phjPath = "/Users/philipjones/Documents/git_repositories/epydemiology"
phjFileName = "Test data.xlsx"
import pandas as pd
import openpyxl
import epydemiology as epy
print("RANGE: some_test_data")
print("=====================")
myDF = epy.phjReadDataFromExcelNamedCellRange(phjExcelPathAndFileName = '/'.join([phjPath,phjFileName]),
phjExcelCellRangeName = 'some_test_data',
phjDatetimeFormat = "%d%b%Y",
phjMissingValue = "missing",
phjHeaderRow = True,
phjPrintResults = True)
print(myDF.dtypes)
print('\n')
print("RANGE: some_more_test_data")
print("==========================")
myDF2 = epy.phjReadDataFromExcelNamedCellRange(phjExcelPathAndFileName = '/'.join([phjPath,phjFileName]),
phjExcelCellRangeName = 'some_more_test_data',
phjDatetimeFormat = "%Y-%m-%d",
phjMissingValue = "missing",
phjHeaderRow = True,
phjPrintResults = True)
print(myDF.dtypes)
Explanation: FILE: phjGetData.py
FUNCTION: phjReadDataFromExcelNamedCellRange()
End of explanation
import pymysql
import pymssql
import epydemiology as epy
tempConn = epy.phjConnectToDatabase('mysql')
print(tempConn)
Explanation: FILE: phjGetDBData.py
FUNCTION: phjConnectToDatabase()
End of explanation
# The following external libraries are imported automatically but are incuded here for completeness.
import pandas as pd
import pymysql
import pymssql
import epydemiology as epy
myDF = epy.phjGetDataFromDatabase(phjQueryPathAndFile = '/Users/username/Desktop/theSQLQueryFile.mssql',
phjPrintResults = True)
Explanation: FUNCTION: phjGetDataFromDatabase()
Example 1 – Query stored in file
End of explanation
# The following external libraries are imported automatically but are incuded here for completeness.
import pandas as pd
import pymysql
import pymssql
import epydemiology as epy
myDF = epy.phjGetDataFromDatabase(phjQueryStr = 'SELECT * FROM Table1',
phjPrintResults = True)
Explanation: Example 2 – Query entered directly in function call
End of explanation
myStr = epy.phjReadTextFromFile(phjFilePathAndName = '/Users/username/Desktop/myTextFile.txt',
phjMaxAttempts = 3,
phjPrintResults = False)
Explanation: FILE: phjMiscFuncs.py
FUNCTION: phjGetStrFromArgOrFile()
FUNCTION: phjReadTextFromFile()
End of explanation
import numpy as np
import pandas as pd
import re
import epydemiology as epy
df = pd.DataFrame({'id':[2,2,2,1,1],
'group':['dog','dog','dog','cat','cat'],
'regex':['(?:dog)','(?:canine)','(?:k9)','(?:cat)','(?:feline)']})
print("Dataframe\n---------")
print(df)
regexStr = epy.phjCreateNamedGroupRegex(phjDF = df,
phjGroupVarName = 'group',
phjRegexVarName = 'regex',
phjIDVarName = 'id',
phjRegexPreCompile = False,
phjPrintResults = False)
print("\nCombined Regex string\n---------------------")
print(regexStr)
Explanation: FUNCTION: phjCreateNamedGroupRegex()
phjRegexPreCompile parameter set to False
End of explanation
df = pd.DataFrame({'id':[2,2,2,1,1],
'group':['dog','dog','dog','cat','cat'],
'regex':['(?:dog)','(?:canine)','(?:k9)','(?:cat)','(?:feline)']})
print("Dataframe\n---------")
print(df)
myCompiledRegexObj = epy.phjCreateNamedGroupRegex(phjDF = df,
phjGroupVarName = 'group',
phjRegexVarName = 'regex',
phjIDVarName = 'id',
phjRegexPreCompile = True,
phjPrintResults = False)
print("\nCompiled Regex object\n---------------------")
print(myCompiledRegexObj)
Explanation: phjRegexPreCompile parameter set to True
End of explanation
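As a short illustration of ours (not one of the package's documented examples), a compiled named-group regex like the one above can tag free text, assuming each category is represented by one named group so that lastgroup reports the category that matched:
for text in ['canine', 'k9', 'feline', 'hamster']:
    m = myCompiledRegexObj.search(text)
    print(text, '->', m.lastgroup if m else 'no match')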
import numpy as np
import pandas as pd
import collections
myOrderedDict = collections.OrderedDict()
myOrderedDict['Descriptor'] = ['dog','ferret','cat','rabbit','horse','primate','rodent','gerbil','guinea pig','rat','mammal','lizard','snake','common basilisk','turtle','tortoise','spur-thighed tortoise']
myOrderedDict['Phylum'] = ['Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata','Chordata']
myOrderedDict['Class'] = ['Mammalia','Mammalia','Mammalia','Mammalia','Mammalia','Mammalia','Mammalia','Mammalia','Mammalia','Mammalia','Mammalia','Reptilia','Reptilia','Reptilia','Reptilia','Reptilia','Reptilia']
myOrderedDict['Order'] = ['Carnivora','Carnivora','Carnivora','Lagomorpha','Perissodactyla','Primates','Rodentia','Rodentia','Rodentia','Rodentia','','Squamata','Squamata','Squmata','Testudines','Testudines','Testudines']
myOrderedDict['Suborder'] = ['','','Feliformia','','','','','','','','','Lacertilia','Serpentes','Iguania','','Cryptodira','Cryptodira']
myOrderedDict['Superfamily'] = ['','','','','','','','','','','','','','','','','']
myOrderedDict['Family'] = ['Canidae','Mustelidae','Felidae','Leporidae','Equidae','','','Muridae','Caviidae','Muridae','','','','Corytophanidae','','Testudinidae','Testudinidae']
myOrderedDict['Subfamily'] = ['','','','','','','','Gerbillinae','','Murinae','','','','','','','']
myOrderedDict['Genus'] = ['Canis','Mustela','Felis','Oryctolagus','Equus','','','','Cavia','Rattus','','','','Basiliscus','','','Testudo']
myOrderedDict['Species'] = ['lupus','putorius','silvestris','cuniculus','ferus','','','','porcellus','norvegicus','','','','basiliscus','','','graeca']
myOrderedDict['Subspecies'] = ['familiaris','furo','catus','','caballus','','','','','domestica','','','','','','','']
df = pd.DataFrame(myOrderedDict)
df = epy.phjMaxLevelOfTaxonomicDetail(phjDF = df,
phjFirstCol = 'Phylum',
phjLastCol = 'Subspecies',
phjNewColName = 'max_tax_details',
phjDropPreExisting = False,
phjCleanup = True,
phjPrintResults = False)
Explanation: FUNCTION: phjFindRegexNamedGroup()
FUNCTION: phjMaxLevelOfTaxonomicDetail()
End of explanation
myDF = pd.DataFrame({'id':[1,2,3,4,5,6,7],
'var':['dogg','canine','cannine','catt','felin','cot','feline'],
'dog':[1,2,3,4,5,6,7]})
print(myDF)
d = {'dog':['dogg','canine','cannine'],
'cat':['catt','felin','feline']}
myDF = epy.phjReverseMap(phjDF = myDF,
phjMappingDict = d,
phjCategoryVarName = 'var',
phjMappedVarName = 'spp',
phjUnmapped = 'missing',
phjTreatAsRegex = False,
phjDropPreExisting = True,
phjPrintResults = True)
Explanation: FUNCTION: phjReverseMap()
Example 1 – exact string matches
End of explanation
myDF = pd.DataFrame({'id':[1,2,3,4,5,6,7],
'var':['dogg','canine','cannine','catt','felin','cot','feline'],
'dog':[1,2,3,4,5,6,7]})
print(myDF)
print('\n')
d = {'dog':['(?:dog+)','(?:can*ine)'],
'cat':['(?:cat+)','(?:fel+ine?)']}
print(d)
myDF = epy.phjReverseMap(phjDF = myDF,
phjMappingDict = d,
phjCategoryVarName = 'var',
phjMappedVarName = 'new',
phjUnmapped = 'missing',
phjTreatAsRegex = True,
phjDropPreExisting = True,
phjPrintResults = True)
Explanation: Example 2 – regex
End of explanation
phjTempDF = pd.DataFrame({'a':[1,2,3,4,5,6,1,2,3,4,5,6],
'b':['a','b','c','d','e','f','a','b','w','d','e','f']})
print('Single variable')
print('---------------')
phjOutDF = epy.phjRetrieveUniqueFromMultiDataFrames(phjDFList = [phjTempDF],
phjVarNameList = 'a',
phjSort = True,
phjPrintResults = True)
print('\n')
print('Multiple variables')
print('------------------')
phjOutDF = epy.phjRetrieveUniqueFromMultiDataFrames(phjDFList = phjTempDF,
phjVarNameList = ['a','b'],
phjSort = True,
phjPrintResults = True)
Explanation: FUNCTION: phjRetrieveUniqueFromMultiDataFrames()
Single dataframe
End of explanation
df1 = pd.DataFrame({'m':[1,2,3,4,5,6],
'n':['a','b','c','d','e','f']})
df2 = pd.DataFrame({'m':[2,5,7,8],
'n':['b','e','g','h']})
phjOutDF = epy.phjRetrieveUniqueFromMultiDataFrames(phjDFList = [df1,df2],
phjVarNameList = ['m','n'],
phjSort = True,
phjPrintResults = True)
Explanation: Multiple dataframes of data
End of explanation
old_df = pd.DataFrame({'id':[1,2,3,4,5,6],
'm':['a','b','c','d','e','f']})
new_df = pd.DataFrame({'id':[1,2,3,4],
'm':['b','E','g','H']})
update_df = epy.phjUpdateLUT(phjExistDF = old_df,
phjNewDF = new_df,
phjIDName = 'id',
phjVarNameList = ['m'],
phjMissStr = 'missing',
phjMissCode = 999,
phjIgnoreCase = True,
phjPrintResults = True)
Explanation: FUNCTION: phjUpdateLUT()
Testing phjUpdateLUT() function with dataframe with single column
End of explanation
old_df = pd.DataFrame({'id':[1,2,3,4,5,6],
'm':['a','b','c','d','e','f'],
'n':['A','B','C','D','E','F']})
new_df = pd.DataFrame({'id':[1,2,3,4,5],
'm':['b','e','g','h','a'],
'n':['BB','e','GG','H','a']})
update_df = epy.phjUpdateLUT(phjExistDF = old_df,
phjNewDF = new_df,
phjIDName = 'id',
phjVarNameList = ['m','n'],
phjMissStr = 'missing',
phjMissCode = 999,
phjIgnoreCase = True,
phjPrintResults = True)
print('Updated dataframe')
print('-----------------')
print(update_df)
Explanation: Testing phjUpdateLUT() function with dataframe with multiple columns
End of explanation
df1 = pd.DataFrame({'id':[1,2,3,4,5,6,7,8],
'name':['a','b','c','d','e','f','g','h'],
'value':[999,22,33,44,55,66,999,88]})
df2 = pd.DataFrame({'id':[9,10,11,12],
'name':['a','i','d','g'],
'value':[11,99,None,77]})
df = df1.append(df2).sort_values(by = ['name','id'])
print('First dataframe')
print('---------------')
print(df1)
print('\n')
print('Second dataframe')
print('----------------')
print(df2)
print('\n')
print('Joined dataframes')
print('-----------------')
print(df)
df = epy.phjUpdateLUTToLatestValues(phjDF = df,
phjIDVarName = 'id',
phjGroupbyVarName = 'name',
phjAddCountCol = True,
phjPrintResults = True)
Explanation: FUNCTION: phjUpdateLUTToLatestValues()
End of explanation
rawDataDF = pd.DataFrame({'a':[0,1,1,1,0,0,1,0],
'b':[1,1,0,0,1,0,0,1],
'c':[0,0,1,0,1,1,1,1],
'd':[1,0,0,0,1,0,0,0],
'e':[1,0,0,0,0,1,0,0]})
columns = ['a','b','c','d','e']
print('Raw data')
print(rawDataDF)
print('\n')
phjMatrix = epy.phjBinaryVarsToSquareMatrix(phjDataDF = rawDataDF,
phjColumnNamesList = columns,
phjOutputFormat = 'arr',
phjPrintResults = False)
print('Returned square matrix')
print(phjMatrix)
Explanation: FILE: phjMatrices.py
FUNCTION: phjBinaryVarsToSquareMatrix()
Output a numpy array
End of explanation
rawDataDF = pd.DataFrame({'a':[0,1,1,1,0,0,1,0],
'b':[1,1,0,0,1,0,0,1],
'c':[0,0,1,0,1,1,1,1],
'd':[1,0,0,0,1,0,0,0],
'e':[1,0,0,0,0,1,0,0]})
columns = ['a','b','c','d','e']
print('Raw data')
print(rawDataDF)
print('\n')
phjMatrixDF = epy.phjBinaryVarsToSquareMatrix(phjDataDF = rawDataDF,
phjColumnNamesList = columns,
phjOutputFormat = 'df',
phjPrintResults = False)
print('Returned square matrix dataframe')
print(phjMatrixDF)
Explanation: Output a Pandas dataframe
End of explanation
df = pd.DataFrame({'X':[1,1,1,2,2,3,3,3,3,4],
'Y':['a','b','d','b','c','d','e','a','f','b']})
newDF = epy.phjLongToWideBinary(phjDF = df,
phjGroupbyVarName = 'X',
phjVariablesVarName = 'Y',
phjValuesDict = {0:0,1:1},
phjPrintResults = False)
print('Original dataframe\n')
print(df)
print('\n')
print('New wide dataframe\n')
print(newDF)
Explanation: FUNCTION: phjLongToWideBinary()
End of explanation
phjTempDF = pd.DataFrame({'group':['g1','g1','g2','g1','g2','g2','g1','g1','g2','g1'],
'A':['yes','yes','no','no','no','no','no','yes',np.nan,'yes'],
'B':['no',np.nan,np.nan,'yes','yes','yes','yes','no','no','no'],
'C':['yes','yes','yes',np.nan,'no','yes','yes','yes','no','no']})
print(phjTempDF)
print('\n')
phjPropDF = epy.phjCalculateBinomialProportions(phjDF = phjTempDF,
phjColumnsList = ['A','B','C'],
phjSuccess = 'yes',
phjGroupVarName = 'group',
phjMissingValue = 'missing',
phjBinomialConfIntMethod = 'normal',
phjAlpha = 0.05,
phjPlotProportions = True,
phjGroupsToPlotList = 'all',
phjSortProportions = True,
phjGraphTitle = None,
phjPrintResults = False)
print(phjPropDF)
Explanation: FILE: phjCalculateProportions.py
FUNCTION: phjCalculateBinomialProportions()
Example of calculating binomial proportions (using phjCaculateBinomialProportions() function)
End of explanation
phjTempDF = pd.DataFrame({'year':[2005,2006,2007,2008,2009,2010,2011,2012,2013,2014,2015,2016,2017,2018],
'success':[109,77,80,57,29,31,29,19,10,16,6,8,4,0],
'failure':[784-109,840-77,715-80,780-57,743-29,743-31,752-29,645-19,509-10,562-16,471-6,471-8,472-4,0-0],
#'total':[784,840,715,780,743,743,752,645,509,562,471,471,472,0]
})
print('Original dataframe\n')
print(phjTempDF)
print('\n')
phjPropDF = epy.phjCalculateBinomialConfInts(phjDF = phjTempDF,
phjSuccVarName = 'success',
phjFailVarName = 'failure',
phjTotalVarName = None,
phjBinomialConfIntMethod = 'normal',
phjAlpha = 0.05,
phjPrintResults = False)
print('Dataframe of confidence intervals\n')
print(phjPropDF)
Explanation: FUNCTION: phjCalculateBinomialConfInts()
End of explanation
phjTempDF = pd.DataFrame({'group':['case','case','case','control','control','case','case','case','control','control','control','control','case','case','case','control','control','control','control','case','case','case','case','case',np.nan,np.nan],
'category':[np.nan,'spaniel','missing','terrier','collie','labrador','labrador','collie','spaniel','spaniel','labrador','collie','terrier','terrier','terrier','collie','labrador','labrador','labrador','spaniel','spaniel','collie','collie','collie','terrier','spaniel'],
'catint':[1,2,3,2,3,2,1,2,1,2,3,2,3,2,3,1,2,3,2,3,2,3,2,3,1,2]})
print(phjTempDF)
print('\n')
phjRelFreqDF = epy.phjCalculateMultinomialProportions(phjDF = phjTempDF,
phjCategoryVarName = 'category',
phjGroupVarName = 'group',
phjMissingValue = 'missing',
phjMultinomialConfIntMethod = 'goodman',
phjAlpha = 0.05,
phjPlotRelFreq = True,
phjCategoriesToPlotList = 'all',
phjGroupsToPlotList = 'all', # Currently not implemented
phjGraphTitle = 'Relative frequencies (Goodman CI)',
phjPrintResults = True)
print(phjRelFreqDF)
Explanation: FUNCTION: phjCalculateMultinomialProportions()
Example of calculating multinomial proportions (using phjCalculateMultinomialProportions() function)
End of explanation
# Generate the dataframe used in the original description of the function
df = pd.DataFrame({'year':[2010,2011,2012,2013,2014],
'cases':[23,34,41,57,62],
'controls':[1023,1243,1145,2017,1876],
'comment':['Small number of cases',
'Proportion increase',
'Trend continues',
'Decreased proportion',
'Increased again']})
# Reorder the columns a little
df = df[['year','cases','controls','comment']]
# Convert to dataframe containing binary outcome data
newDF = epy.phjSummaryTableToBinaryOutcomes(phjDF = df,
phjVarsToIncludeList = ['year','cases','controls'],
phjSuccVarName = 'cases',
phjFailVarName = 'controls',
phjTotalVarName = None,
phjOutcomeVarName = 'outcome',
phjPrintResults = False)
# Print results
print('Original table of summary results\n')
print(df)
print('\n')
print('Dataframe of binary outcomes\n')
with pd.option_context('display.max_rows',6, 'display.max_columns',2):
print(newDF)
Explanation: FUNCTION: phjSummaryTableToBinaryOutcomes()
End of explanation
phjDiseaseDF = pd.DataFrame({'year':[2008,2009,2010,2011,2012,2013,2014,2015,2016,2017,2018],
'positive':[18,34,24,26,30,27,36,17,18,15,4],
'negative':[1695,1733,1929,1517,1449,1329,1130,928,753,496,325]})
phjDiseaseDF = epy.phjAnnualDiseaseTrend(phjDF = phjDiseaseDF.loc[phjDiseaseDF['year'] < 2018,:],
phjYearVarName = 'year',
phjPositivesVarName = 'positive',
phjNegativesVarName = 'negative',
phjTotalVarName = None,
phjConfIntMethod = 'normal',
phjAlpha = 0.05,
phjPlotProportions = True,
phjPlotPrediction = True,
phjGraphTitleStr = None,
phjPrintResults = False)
Explanation: FUNCTION: phjAnnualDiseaseTrend()
End of explanation
# Create a test dataframe that contains a postcode variable and some other empty variables
# that have the same names as the new variables that will be created. Setting the 'phjDropExisting'
# variable to true will automatically drop pre-existing variables before running the function.
# Some of the variables in the test dataframe are not duplicated and are present to show that the
# function preserves those variables in tact.
import numpy as np
import pandas as pd
import re
# Create test dataframe
myTestPostcodeDF = pd.DataFrame({'postcode': ['NP45DG',
'CH647TE',
'CH5 4HE',
'GIR 0AA',
'NOT NOWN',
'GIR0AB',
'NOR12A',
'no idea',
'W1A 1AA',
'missin',
'NP4 OGH',
'P012 OLL',
'p01s',
'ABCD',
'',
'ab123cd',
'un-known',
'B1 INJ',
'AB123CD',
'No idea what the postcode is',
' ???NP4-5DG_*# '],
'pcdClean': np.nan,
'pcd7': np.nan,
'postcodeOutward': np.nan,
'someOtherCol': np.nan})
# Run function to extract postcode data
print('\nStart dataframe\n===============\n')
print(myTestPostcodeDF)
print('\n')
myTestPostcodeDF = epy.phjCleanUKPostcodeVariable(phjDF = myTestPostcodeDF,
phjRealPostcodeSer = None,
phjOrigPostcodeVarName = 'postcode',
phjNewPostcodeVarName = 'pcdClean',
phjNewPostcodeStrLenVarName = 'pcdCleanStrLen',
phjPostcodeCheckVarName = 'pcdFormatCheck',
phjMissingValueCode = 'missing',
phjMinDamerauLevenshteinDistanceVarName = 'minDamLevDist',
phjBestAlternativesVarName = 'bestAlternatives',
phjPostcode7VarName = 'pcd7',
phjPostcodeAreaVarName = 'pcdArea',
phjSalvageOutwardPostcodeComponent = True,
phjCheckByOption = 'format',
phjDropExisting = True,
phjPrintResults = True)
print('\nReturned dataframe\n==================\n')
print(myTestPostcodeDF)
Explanation: FILE: phjCleanUKPostcodes.py
FUNCTION: phjCleanUKPostcodeVariable()
Clean postcodes based on format alone
End of explanation
import re
# N.B. When calculating best alternative postcodes, only postcodes that are within
# 1 DL distance are considered.
# Create a Pandas series that could contain all the postcodes in the UK
realPostcodesSer = pd.Series(['NP4 5DG','CH647TE','CH5 4HE','W1A 1AA','NP4 0GH','PO120LL','AB123CF','AB124DF','AB123CV'])
# Create test dataframe
myTestPostcodeDF = pd.DataFrame({'postcode': ['NP45DG',
'CH647TE',
'CH5 4HE',
'GIR 0AA',
'NOT NOWN',
'GIR0AB',
'NOR12A',
'no idea',
'W1A 1AA',
'missin',
'NP4 OGH',
'P012 OLL',
'p01s',
'ABCD',
'',
'ab123cd',
'un-known',
'B1 INJ',
'AB123CD',
'No idea what the postcode is',
' ???NP4-5DG_*# '],
'pcdClean': np.nan,
'pcd7': np.nan,
'postcodeOutward': np.nan,
'someOtherCol': np.nan})
# Run function to extract postcode data
print('\nStart dataframe\n===============\n')
print(myTestPostcodeDF)
print('\n')
myTestPostcodeDF = epy.phjCleanUKPostcodeVariable(phjDF = myTestPostcodeDF,
phjRealPostcodeSer = realPostcodesSer,
phjOrigPostcodeVarName = 'postcode',
phjNewPostcodeVarName = 'pcdClean',
phjNewPostcodeStrLenVarName = 'pcdCleanStrLen',
phjPostcodeCheckVarName = 'pcdFormatCheck',
phjMissingValueCode = 'missing',
phjMinDamerauLevenshteinDistanceVarName = 'minDamLevDist',
phjBestAlternativesVarName = 'bestAlternatives',
phjPostcode7VarName = 'pcd7',
phjPostcodeAreaVarName = 'pcdArea',
phjSalvageOutwardPostcodeComponent = True,
phjCheckByOption = 'dictionary',
phjDropExisting = True,
phjPrintResults = True)
print('\nReturned dataframe\n==================\n')
print(myTestPostcodeDF)
Explanation: Clean postcodes based on real postcode and identify closest matches
End of explanation
casesDF = pd.DataFrame({'animalID':[1,2,3,4,5],'var1':[43,45,34,45,56],'sp':['dog','dog','dog','dog','dog']})
potControlsDF = pd.DataFrame({'animalID':[11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],
'var1':[34,54,34,23,34,45,56,67,56,67,78,98,65,54,34,76,87,56,45,34],
'sp':['dog','cat','dog','dog','cat','dog','cat','dog','cat','dog',
'dog','dog','dog','cat','dog','cat','dog','dog','dog','cat']})
print("This dataframe contains all the cases of disease\n")
print(casesDF)
print("\n")
print("This dataframe contains all the animals you could potentially use as controls\n")
print(potControlsDF)
print("\n")
# Selecting unmatched controls
unmatchedDF = epy.phjSelectCaseControlDataset(phjCasesDF = casesDF,
phjPotentialControlsDF = potControlsDF,
phjUniqueIdentifierVarName = 'animalID',
phjMatchingVariablesList = None,
phjControlsPerCaseInt = 2,
phjPrintResults = False)
print(unmatchedDF)
Explanation: FUNCTION: phjPostcodeFormat7()
FILE: phjSelectData.py
FUNCTION: phjGenerateCaseControlDataset()
FUNCTION: phjSelectCaseControlDataset()
Unmatched controls
End of explanation
casesDF = pd.DataFrame({'animalID':[1,2,3,4,5],'var1':[43,45,34,45,56],'sp':['dog','dog','dog','dog','dog']})
potControlsDF = pd.DataFrame({'animalID':[11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],
'var1':[34,54,34,23,34,45,56,67,56,67,78,98,65,54,34,76,87,56,45,34],
'sp':['dog','cat','dog','dog','cat','dog','cat','dog','cat','dog',
'dog','dog','dog','cat','dog','cat','dog','dog','dog','cat']})
print("This dataframe contains all the cases of disease\n")
print(casesDF)
print("\n")
print("This dataframe contains all the animals you could potentially use as controls\n")
print(potControlsDF)
print("\n")
# Selecting controls that are matched to cases on variable 'sp'
matchedDF = epy.phjSelectCaseControlDataset(phjCasesDF = casesDF,
phjPotentialControlsDF = potControlsDF,
phjUniqueIdentifierVarName = 'animalID',
phjMatchingVariablesList = ['sp'],
phjControlsPerCaseInt = 2,
phjPrintResults = False)
print(matchedDF)
Explanation: Matched controls
End of explanation
# Define example dataset
phjTempDF = pd.DataFrame({'binDepVar':['yes']*50000 + ['no']*50000,
'riskFactorCont':np.random.uniform(0,1,100000)})
with pd.option_context('display.max_rows', 10, 'display.max_columns', 5):
print(phjTempDF)
# View log odds
phjOutDF = epy.phjViewLogOdds(phjDF = phjTempDF,
phjBinaryDepVarName = 'binDepVar',
phjContIndepVarName = 'riskFactorCont',
phjCaseValue = 'yes',
phjMissingValue = 'missing',
phjNumberOfCategoriesInt = 5,
phjNewCategoryVarName = 'categoricalVar',
phjCategorisationMethod = 'jenks',
phjGroupVarName = None,
phjPrintResults = True)
with pd.option_context('display.max_rows', 10, 'display.max_columns', 10):
print('Log odds for categorised variable')
print(phjOutDF)
# View log odds
phjOutDF = epy.phjViewLogOdds(phjDF = phjTempDF,
phjBinaryDepVarName = 'binDepVar',
phjContIndepVarName = 'riskFactorCont',
phjCaseValue = 'yes',
phjMissingValue = 'missing',
phjNumberOfCategoriesInt = 5,
phjNewCategoryVarName = 'categoricalVar',
phjCategorisationMethod = 'quantile',
phjGroupVarName = None,
phjPrintResults = True)
with pd.option_context('display.max_rows', 10, 'display.max_columns', 10):
print('Log odds for categorised variable')
print(phjOutDF)
Explanation: FUNCTION: phjCollapseOnPatientID()
FILE: phjCleanData.py
FUNCTION: phjParseDateVar()
FILE: phjExploreData.py
FUNCTION: phjViewLogOdds()
Example of viewing log odds plotted against mid-point of categories.
Categorise using Jenks breaks and using 'yes' and 'no' as binary outcome
End of explanation
# Define example dataset
phjTempDF = pd.DataFrame({'binDepVar':[1]*50000 + [0]*50000,
'riskFactorCont':np.random.uniform(0,1,100000)})
with pd.option_context('display.max_rows', 10, 'display.max_columns', 5):
print(phjTempDF)
# View log odds
phjTempDF = epy.phjViewLogOdds(phjDF = phjTempDF,
phjBinaryDepVarName = 'binDepVar',
phjContIndepVarName = 'riskFactorCont',
phjCaseValue = 1,
phjMissingValue = 'missing',
phjNumberOfCategoriesInt = 8,
phjNewCategoryVarName = 'categoricalVar',
phjCategorisationMethod = 'jenks',
phjGroupVarName = None,
phjPrintResults = False)
with pd.option_context('display.max_rows', 10, 'display.max_columns', 10):
print('Log odds for categorised variable')
print(phjTempDF)
Explanation: Categorise using quantile breaks and using 1 and 0 as binary outcome
End of explanation
# Define example dataset
phjTempDF = pd.DataFrame({'binDepVar':['yes']*50000 + ['no']*50000,
'riskFactorCont':[0.1] + ['missing','missing'] + list(np.random.uniform(0,1,49947)) + [np.nan]*50 + list(np.random.uniform(0,1,49950)) + [np.nan]*50,
'otherVar':['xyz']*100000,
'include':[0,1]*50000})
print(phjTempDF.dtypes)
print('\n')
with pd.option_context('display.max_rows', 10, 'display.max_columns', 5):
print(phjTempDF)
# Categorise a continuous variable
phjTempDF = epy.phjCategoriseContinuousVariable(phjDF = phjTempDF.loc[phjTempDF['include'] == 1,:],
phjContinuousVarName = 'riskFactorCont',
phjMissingValue = 'missing',
phjNumberOfCategoriesInt = 10,
phjNewCategoryVarName = 'catVar',
phjCategorisationMethod = 'jenks',
phjReturnBreaks = False,
phjPrintResults = True)
with pd.option_context('display.max_rows', 10, 'display.max_columns', 5):
print('\nLog odds for categorised variable')
print(phjTempDF)
Explanation: FUNCTION: phjCategoriseContinuousVariable()
Return dataframe alone
End of explanation
# Define example dataset
phjTempDF = pd.DataFrame({'binDepVar':['yes']*50000 + ['no']*50000,
'riskFactorCont':np.random.uniform(0,1,100000),
'otherVar':['xyz']*100000})
with pd.option_context('display.max_rows', 10, 'display.max_columns', 5):
print(phjTempDF)
# Categorise a continuous variable
phjTempDF, phjBreaksList = epy.phjCategoriseContinuousVariable(phjDF = phjTempDF,
phjContinuousVarName = 'riskFactorCont',
phjMissingValue = 'missing',
phjNumberOfCategoriesInt = 10,
phjNewCategoryVarName = 'catVar',
phjCategorisationMethod = 'quantile',
phjReturnBreaks = True,
phjPrintResults = False)
with pd.option_context('display.max_rows', 10, 'display.max_columns', 5):
print('\nCategorised variable')
print(phjTempDF)
print('\n')
print('Breaks')
print(phjBreaksList)
Explanation: Return dataframe and list of breaks
End of explanation
tempDF = pd.DataFrame({'caseN':[1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0],
'caseA':['y','y','y','y','y','y','y','y','n','n','n','n','n','n','n','n','n','n','n','n'],
'catN':[1,2,3,2,3,4,3,2,3,4,3,2,1,2,1,2,3,2,3,4],
'catA':['a','a','b','b','c','d','a','c','c','d','a','b','c','a','d','a','b','c','missing','d'],
'floatN':[1.2,4.3,2.3,4.3,5.3,4.3,2.4,6.5,4.5,7.6,5.6,5.6,4.8,5.2,7.4,5.4,6.5,5.7,6.8,4.5]})
phjORTable = epy.phjOddsRatio(phjDF = tempDF,
phjCaseVarName = 'caseA',
phjCaseValue = 'y',
phjRiskFactorVarName = 'catA',
phjRiskFactorBaseValue = 'a',
phjMissingValue = 'missing',
phjAlpha = 0.05,
phjPrintResults = True)
pd.options.display.float_format = '{:,.3f}'.format
#print(phjORTable)
Explanation: FILE: phjRROR.py
FUNCTION: phjOddsRatio()
End of explanation
tempDF = pd.DataFrame({'caseN':[1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0],
'caseA':['y','y','y','y','y','y','y','y','n','n','n','n','n','n','n','n','n','n','n','n'],
'catN':[1,2,3,2,3,4,3,2,3,4,3,2,1,2,1,2,3,2,3,4],
'catA':['a','a','b','b','c','d','a','c','c','d','a','b','c','a','d','a','b','c','missing','d'],
'floatN':[1.2,4.3,2.3,4.3,5.3,4.3,2.4,6.5,4.5,7.6,5.6,5.6,4.8,5.2,7.4,5.4,6.5,5.7,6.8,4.5]})
phjRRTable = epy.phjRelativeRisk( phjDF = tempDF,
phjCaseVarName = 'caseA',
phjCaseValue = 'y',
phjRiskFactorVarName = 'catA',
phjRiskFactorBaseValue = 'a',
phjMissingValue = 'missing',
phjAlpha = 0.05,
phjPrintResults = False)
pd.options.display.float_format = '{:,.3f}'.format
print(phjRRTable)
---
Explanation: FUNCTION: phjRelativeRisk()
End of explanation |
5,430 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimal Example to Produce a Synthetic Light Curve
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Datasets
Now we'll create an empty lc dataset
Step3: Running Compute
Now we'll compute synthetics at the times provided using the default options
Step4: Plotting
Now we can simply plot the resulting synthetic light curve. | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
Explanation: Minimal Example to Produce a Synthetic Light Curve
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,201), dataset='mylc')
Explanation: Adding Datasets
Now we'll create an empty lc dataset:
End of explanation
b.run_compute(irrad_method='none')
Explanation: Running Compute
Now we'll compute synthetics at the times provided using the default options
End of explanation
afig, mplfig = b['mylc@model'].plot(show=True)
afig, mplfig = b['mylc@model'].plot(x='phases', show=True)
Explanation: Plotting
Now we can simply plot the resulting synthetic light curve.
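If you want the synthetic arrays themselves rather than a figure, they can also be pulled out of the model context. The call below is only a minimal sketch and assumes the 'mylc' dataset created above.
# Sketch: extract the synthetic times and fluxes produced by run_compute
times = b.get_value(qualifier='times', dataset='mylc', context='model')
fluxes = b.get_value(qualifier='fluxes', dataset='mylc', context='model')
print(times[:5], fluxes[:5])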
End of explanation |
5,431 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An implicit feedback recommender for the Movielens dataset
Implicit feedback
For some time, the recommender system literature focused on explicit feedback
Step1: This gives us a dictionary with the following fields
Step2: The train and test elements are the most important
Step3: The WARP model, on the other hand, optimises for precision@k---we should expect its performance to be better on precision. | Python Code:
import numpy as np
from lightfm.datasets import fetch_movielens
movielens = fetch_movielens()
Explanation: An implicit feedback recommender for the Movielens dataset
Implicit feedback
For some time, the recommender system literature focused on explicit feedback: the Netflix prize focused on accurately reproducing the ratings users have given to movies they watched.
Focusing on ratings in this way ignored the importance of taking into account which movies the users chose to watch in the first place, and treating the absence of ratings as absence of information.
But the things that we don't have ratings for aren't unknowns: we know the user didn't pick them. This reflects a user's conscious choice, and is a good source of information on what she thinks she might like.
This sort of phenomenon is described as data which is missing-not-at-random in the literature: the ratings that are missing are more likely to be negative precisely because the user chooses which items to rate. When choosing a restaurant, you only go to places which you think you'll enjoy, and never go to places that you think you'll hate. What this leads to is that you're only going to be submitting ratings for things which, a priori, you expected to like; the things that you expect you will not like you will never rate.
This observation has led to the development of models that are suitable for implicit feedback. LightFM implements two that have proven particular successful:
BPR: Bayesian Personalised Ranking [1] pairwise loss. Maximises the prediction difference between a positive example and a randomly chosen negative example. Useful when only positive interactions are present and optimising ROC AUC is desired.
WARP: Weighted Approximate-Rank Pairwise [2] loss. Maximises the rank of positive examples by repeatedly sampling negative examples until rank violating one is found. Useful when only positive interactions are present and optimising the top of the recommendation list (precision@k) is desired.
This example shows how to estimate these models on the Movielens dataset.
[1] Rendle, Steffen, et al. "BPR: Bayesian personalized ranking from implicit feedback." Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence. AUAI Press, 2009.
[2] Weston, Jason, Samy Bengio, and Nicolas Usunier. "Wsabie: Scaling up to large vocabulary image annotation." IJCAI. Vol. 11. 2011.
Getting the data
The first step is to get the Movielens data. This is a classic small recommender dataset, consisting of around 950 users, 1700 movies, and 100,000 ratings. The ratings are on a scale from 1 to 5, but we'll all treat them as implicit positive feedback in this example.
Fortunately, this is one of the functions provided by LightFM itself.
End of explanation
for key, value in movielens.items():
print(key, value)
Explanation: This gives us a dictionary with the following fields:
End of explanation
from lightfm import LightFM
from lightfm.evaluation import precision_at_k, auc_score

# Pull the train/test interaction matrices out of the fetched dataset
train, test = movielens['train'], movielens['test']

model = LightFM(learning_rate=0.05, loss='bpr')
model.fit(train, epochs=10)
train_precision = precision_at_k(model, train, k=10).mean()
test_precision = precision_at_k(model, test, k=10).mean()
train_auc = auc_score(model, train).mean()
test_auc = auc_score(model, test).mean()
print('Precision: train %.2f, test %.2f.' % (train_precision, test_precision))
print('AUC: train %.2f, test %.2f.' % (train_auc, test_auc))
Explanation: The train and test elements are the most important: they contain the raw rating data, split into a train and a test set. Each row represents a user, and each column an item. Entries are ratings from 1 to 5.
Fitting models
Now let's train a BPR model and look at its accuracy.
We'll use two metrics of accuracy: precision@k and ROC AUC. Both are ranking metrics: to compute them, we'll be constructing recommendation lists for all of our users, and checking the ranking of known positive movies. For precision at k we'll be looking at whether they are within the first k results on the list; for AUC, we'll be calculating the probability that any known positive is higher on the list than a random negative example.
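To make the ranking idea concrete, here is a minimal sketch (not part of the original notebook) of turning model scores into a top-N list for one user; it assumes the train matrix defined above and the item_labels field returned by fetch_movielens.
import numpy as np

def top_items(model, user_id, n=10):
    # Score every item for this user and keep the n highest-scoring titles
    n_items = train.shape[1]
    scores = model.predict(user_id, np.arange(n_items))
    best = np.argsort(-scores)[:n]
    return movielens['item_labels'][best]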
End of explanation
model = LightFM(learning_rate=0.05, loss='warp')
model.fit_partial(train, epochs=10)
train_precision = precision_at_k(model, train, k=10).mean()
test_precision = precision_at_k(model, test, k=10).mean()
train_auc = auc_score(model, train).mean()
test_auc = auc_score(model, test).mean()
print('Precision: train %.2f, test %.2f.' % (train_precision, test_precision))
print('AUC: train %.2f, test %.2f.' % (train_auc, test_auc))
Explanation: The WARP model, on the other hand, optimises for precision@k---we should expect its performance to be better on precision.
End of explanation |
5,432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Metrics
When we think about summarizing data, what are the metrics that we look at?
In this notebook, we will look at the car dataset
To read how the data was acquired, please read this repo to get more information
Step1: Read the dataset
Step2: Warm up
Step3: Exercise
Step4: How to handle missing values?
Step5: Mean, Median, Variance, Standard Deviation
Mean
arithmetic average of a range of values or quantities, computed by dividing the total of all values by the number of values.
Step6: Let's do something fancier.
Let's find mean mileage of every make.
Hint
Step7: Exercise
How about finding the average mileage for every Type-GearType combination?
Median
Denotes value or quantity lying at the midpoint of a frequency distribution of observed values or quantities, such that there is an equal probability of falling above or below it. Simply put, it is the middle value in the list of numbers.
If the count n is odd, the median is the value at position (n+1)/2;
if n is even, it is the average of the values at positions n/2 and n/2 + 1
Find median of mileage
Step8: Mode
It is the number which appears most often in a set of numbers.
Find the mode of Type of cars
Step9: Variance
Once, two statisticians of height 4 feet and 5 feet had to cross a river of AVERAGE depth 3 feet. Meanwhile, a third person came along and said, "what are you waiting for? You can easily cross the river"
It's the average of the squared distances of the data values from the mean
<img style="float
Step10: Standard Deviation
It is the square root of variance. This will have the same units as the data and mean.
Find standard deviation of mileage
Step11: Using Pandas built-in function
Step12: Co-variance
Covariance is a measure of the (average) co-variation between two variables, say x and y. It describes both how far the variables are spread out and the nature of their relationship: covariance measures how much two variables change together. Compare this to variance, which describes how much a single variable varies on its own.
<img style="float
Step13: The number of observations have to be same. For the current exercise, let's take the first 300 observations in both the datasets
Step14: Correlation
Extent to which two or more variables fluctuate together. A positive correlation indicates the extent to which those variables increase or decrease in parallel; a negative correlation indicates the extent to which one variable increases as the other decreases. | Python Code:
#Import the required libraries
import numpy as np
import pandas as pd
from datetime import datetime as dt
from scipy import stats
Explanation: Basic Metrics
When we think about summarizing data, what are the metrics that we look at?
In this notebook, we will look at the car dataset
To read how the data was acquired, please read this repo to get more information
End of explanation
cars = pd.read_csv("cars_v1.csv", encoding = "ISO-8859-1")
Explanation: Read the dataset
End of explanation
cars.head()
Explanation: Warm up
End of explanation
#Display the first 10 records
cars.head(10)
#Display the last 5 records
cars.tail()
#Find the number of rows and columns in the dataset
cars.shape
#What are the column names in the dataset?
cars.columns
#What are the types of those columns ?
cars.dtypes
cars.head()
#How to check if there are null values in any of the columns?
#Hint: use the isnull() function (how about using sum or values/any with it?)
cars.isnull().sum()
Explanation: Exercise
End of explanation
#fillna function
Explanation: How to handle missing values?
End of explanation
#Find mean of price
cars.Price.mean()
#Find mean of Mileage
cars.Mileage.mean()
Explanation: Mean, Median, Variance, Standard Deviation
Mean
arithmetic average of a range of values or quantities, computed by dividing the total of all values by the number of values.
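For reference, the formula being described is (notation added here, not from the original notebook):
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$$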
End of explanation
#cars.groupby('Make') : Finish the code
cars.groupby('Make').Mileage.mean().reset_index()
Explanation: Let's do something fancier.
Let's find mean mileage of every make.
Hint: need to use groupby
End of explanation
cars.Mileage.median()
Explanation: Exercise
How about finding the average mileage for every Type-GearType combination?
Median
Denotes value or quantity lying at the midpoint of a frequency distribution of observed values or quantities, such that there is an equal probability of falling above or below it. Simply put, it is the middle value in the list of numbers.
If the count n is odd, the median is the value at position (n+1)/2;
if n is even, it is the average of the values at positions n/2 and n/2 + 1
Find median of mileage
End of explanation
#Let's first find count of each of the car Types
#Hint: use value_counts
cars.Type.value_counts()
#Mode of cars
cars.Type
cars.Type.mode()
cars.head()
Explanation: Mode
It is the number which appears most often in a set of numbers.
Find the mode of Type of cars
End of explanation
cars.Mileage.var()
Explanation: Variance
Once, two statisticians of height 4 feet and 5 feet had to cross a river of AVERAGE depth 3 feet. Meanwhile, a third person came along and said, "what are you waiting for? You can easily cross the river"
It's the average of the squared distances of the data values from the mean
<img style="float: left;" src="img/variance.png" height="320" width="320">
Find variance of mileage
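For reference, the sample-variance formula that pandas' .var() applies by default (ddof=1), which is presumably what the figure above shows:
$$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2$$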
End of explanation
cars.Mileage.std()
Explanation: Standard Deviation
It is the square root of variance. This will have the same units as the data and mean.
Find standard deviation of mileage
End of explanation
cars.describe()
Explanation: Using Pandas built-in function
End of explanation
pd.unique(cars.GearType)
cars_Automatic = cars[cars.GearType==' Automatic'].copy().reset_index()
cars_Manual = cars[cars.GearType==' Manual'].copy().reset_index()
cars_Automatic.head()
cars_Manual.head()
cars_Manual.shape
cars_Automatic.shape
Explanation: Co-variance
Covariance is a measure of the (average) co-variation between two variables, say x and y. It describes both how far the variables are spread out and the nature of their relationship: covariance measures how much two variables change together. Compare this to variance, which describes how much a single variable varies on its own.
<img style="float: left;" src="img/covariance.png" height="270" width="270">
<br>
<br>
<br>
<br>
Co-variance of mileage of Automatic and Manual Gear Type
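For reference, the sample covariance computed by .cov() below is (presumably what the figure above shows):
$$\mathrm{cov}(x, y) = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)$$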
End of explanation
cars_Automatic = cars_Automatic.iloc[:300, :]  # .ix is deprecated; iloc[:300] keeps the first 300 rows
cars_Manual = cars_Manual.iloc[:300, :]
cars_Automatic.shape
cars_Manual.shape
cars_manual_automatic = pd.DataFrame([cars_Automatic.Mileage, cars_Manual.Mileage])
cars_manual_automatic
cars_manual_automatic = cars_manual_automatic.T
cars_manual_automatic.head()
cars_manual_automatic.columns = ['Mileage_Automatic', 'Mileage_Manual']
cars_manual_automatic.head()
#Co-variance matrix between the mileages of automatic and manual:
cars_manual_automatic.cov()
Explanation: The number of observations have to be same. For the current exercise, let's take the first 300 observations in both the datasets
End of explanation
#### Find the correlation between the mileages of automatic and manual in the above dataset
cars_manual_automatic.corr()
cars_manual_automatic.corrwith?
Explanation: Correlation
Extent to which two or more variables fluctuate together. A positive correlation indicates the extent to which those variables increase or decrease in parallel; a negative correlation indicates the extent to which one variable increases as the other decreases.
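For reference, the Pearson correlation that .corr() reports is the covariance rescaled by the two standard deviations (notation added here):
$$r_{xy} = \frac{\mathrm{cov}(x, y)}{s_x\, s_y}$$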
<img style="float: left;" src="img/correlation.gif" height="270" width="270">
<br>
<br>
<br>
End of explanation |
5,433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Design of Experiments
Unit 18, Lecture 1
Numerical Methods and Statistics
Prof. Andrew White, April 30, 2019
Goals
Know the vocabulary (treatment condition, factor, level, response, ANOVA, coding, factorial design, interaction, confound, grand mean, nuisance factor, blocking)
Know that design of experiments and its analysis is for seeing what factors affect a response, not necessarily getting good regression models
Recognize that design of experiments analysis is based on linear regression and hypothesis tests
Be able to read and interpret an ANOVA table
Be able to read and create a table of experiments following factorial or other designs
Understand how to treat unknown nuisance factors (randomize experiment order) and known nuisance factors (blocking)
Design of experiments
Step1: We'll use multidimensional ordinary least squares with an intercept
Step2: We'll compute our coefficients and their standard error
Step3: Now we can compute p-values and confidence intervals
Step4: So we found that our intercept is likely necessary (p < 0.05), but the two factors do not have a significant effect. We also found that factor 1 is more important than factor 2 as judged from the p-value
Using Statsmodels for Regression
We're going to be using a new library to do regression on this unit because of its ability to do an ANOVA analysis. We'll learn about ANOVA below, but let's first repeat the above regression with this tool. Creating a statsmodel requires two ingredients
Step5: Interpreting Statsmodels
This regression summary has a huge amount of information. The top table includes information about the goodness of fit and regression model, like degrees of freedom and what the independent variable is. The middle table contains information about the regression coefficients including confidence intervals and p-values. The final table contains information about the residuals. The Jarque-Bera test is a normality test, like the Shapiro-Wilks test we learned previously. The p-values are slightly different because they use dof as 1, instead of 2, for their hypothesis test.
ANOVA
One of the most common analysis techniques of a design of experiments is the use of an Analysis of Variance (ANOVA). An ANOVA breaks up the response variance into factor variances. It explains where the variance in the response comes from. We aren't going to go deeply into the theory of ANOVA, but it's important that you know how it's used and how to intepret it. An ANOVA is based on a linear regression like above, but it's a different way of computing p-values. The p-values are the most relevant output of an ANOVA.
Here's an ANOVA of the above example
Step6: The ANOVA test gives information about each factor. The df is the degrees of freedom used to model each factor, the sum_sq is difference between the grand mean response and mean response of the treatment, the mean_sq is the sum_sq divided by the degrees of freedom, the F is an F-test statistic (like a T statistic from a t-test), and the final column contains p-value for the existence of each treatment.
F-test
The F-test is an alternative to the t-tests we do for regression coefficients being non-zero. The F-test is a little bit different than a t-test. One important idea of an F-test is that when we consider regression coefficents, we imagine our null model as being nested within the model we're considering. That means that the null hypothesis, the regression coefficient is zero, is a special case of the model we're considering where the regression coefficient is non-zero. An example of models that are not nested would be comparing using a $\beta \sin x$ vs $\beta x^2$. There is no obvious way to nest one of these models in the other to create a null hypothesis. Notice that if you imagine the F-test exactly the same as the t-test (null is regression coefficient being 0), then you'll always have nested models.
Designing Experiments
One at a time (bad example)
One common choice for designing experiments might be one at a time. In this approach you vary each treatment once. Let's see an example. Say you want to know how water, sun, and playing music affects plant growth. A one at a time design would look like this | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import scipy.stats as ss
import numpy.linalg as linalg
x1 = [1, 1, -1, -1]
x2 = [1, -1, 1, -1]
y = [1.2, 3.2, 4.1, 3.6]
Explanation: Design of Experiments
Unit 18, Lecture 1
Numerical Methods and Statistics
Prof. Andrew White, April 30, 2019
Goals
Know the vocabulary (treatment condition, factor, level, response, ANOVA, coding, factorial design, interaction, confound, grand mean, nuisance factor, blocking)
Know that design of experiments and its analysis is for seeing what factors affect a response, not necessarily getting good regression models
Recognize that design of experiments analysis is based on linear regression and hypothesis tests
Be able to read and interpret an ANOVA table
Be able to read and create a table of experiments following factorial or other designs
Understand how to treat unknown nuisance factors (randomize experiment order) and known nuisance factors (blocking)
Design of experiments: (Wikipedia)
The design of experiments is the design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect the variation.
Table of Experiments
Let's see an example of a design of experiments table:
| TC | $X_1$ | $X_2$ | $Y$ |
| --- |: ---- | :----:| ---:|
| 1 | +1 | +1 |$y_1$|
| 2 | +1 | -1 |$y_2$|
| 3 | -1 | +1 |$y_3$|
| 4 | -1 | -1 |$y_4$|
TC Treatment Condition
$X$ A factor
+1 The factor level
$Y$ The response
The use of +1,-1 is called the coding
This table shows a 2 factor, 2 level design of experiments that has 4 treatment conditions.
Factor Levels
What is the meaning of the +1, -1? We do design of experiments to see if factors affect something. For example, our response might be the concentration of a chemical species and factors could be temperature and pressure. Because there are many temperatures to test, we might just only consider two temperature: hot and cold. This can be coded as levels: -1, +1. This is often done because there are standard analysis equations that would with integer levels, especially with two levels.
If we regress against these integer levels, the regression coefficients aren't really meaningful. Instead, we care about p-values. That is, we care about discovering if certain factors affect our response. This will allow to say "temperature affects the concentration" or "pressure does not affect concentration".
Replicates
Note that our experimental design doesn't include replicates. The design of experiments is meant to be as efficient as possible. Note that here we're trying to see what matters, and not trying to get an accurate regression model. If you want to do regression for accuracy, then you should include replicates and work with actual factor values instead of levels.
Connecting to Categorical Regression
We saw in unit 12, lecture 3 how to treat discrete data like this. Let's try regressing it! The data is 2 dimensional, so we will use 2 dimensional least squares. Should we include an intercept? Yes! One way to include is it to compute the grand mean from all responses so that they are centered at 0. Then the intercept will be 0. You should know this is commonly done, but we won't do this for our analysis. We'll just use a regular intercept as we saw in our regression unit.
End of explanation
x_mat = np.column_stack((np.ones(4), x1, x2))
x_mat
Explanation: We'll use multidimensional ordinary least squares with an intercept:
End of explanation
beta, *_ = linalg.lstsq(x_mat, y)
y_hat = x_mat @ beta
resids = (y - y_hat)
SSR = np.sum(resids**2)
se2_epsilon = SSR / (len(y) - len(beta))  # residual variance with n - p degrees of freedom
se2_beta = se2_epsilon * linalg.inv(x_mat.transpose() @ x_mat)
print(np.sqrt(se2_beta), np.sqrt(se2_epsilon))
Explanation: We'll compute our coefficients and their standard error
End of explanation
df = len(y) - len(beta)
print('df = ', df)
for i in range(len(beta)):
#get our T-value for the confidence interval
T = ss.t.ppf(0.975, df)
# Get the width of the confidence interval using our previously computed standard error
cwidth = T * np.sqrt(se2_beta[i,i])
# print the result for each coefficient, matching the beta numbering above
hypothesis_T = -abs(beta[i]) / np.sqrt(se2_beta[i,i])
p = 2 * ss.t.cdf(hypothesis_T, df + 1) # +1 because null hypothesis doesn't include coefficient
print(f'beta_{i} is {beta[i]:.2f} +/- {cwidth:.2f} with 95% confidence. p-value: {p:.2f} (T = {hypothesis_T:.2f})')
Explanation: Now we can compute p-values and confidence intervals:
End of explanation
from statsmodels.formula.api import ols
x1 = [1, 1, -1, -1]
x2 = [1, -1, 1, -1]
y = [1.2, 3.2, 4.1, 3.6]
data = {'x1': x1, 'x2': x2, 'y': y}
model = ols('y ~ x1 + x2', data=data).fit()
model.summary()
Explanation: So we found that our intercept is likely necessary (p < 0.05), but the two factors do not have a significant effect. We also found that factor 1 is more important than factor 2 as judged from the p-value
Using Statsmodels for Regression
We're going to be using a new library to do regression on this unit because of its ability to do an ANOVA analysis. We'll learn about ANOVA below, but let's first repeat the above regression with this tool. Creating a statsmodel requires two ingredients: data and a formula. The formula is a string that matches your regression model. In this case we use y ~ x1 + x2. The ~ means equal to here. The data should be a dictionary whose keys match the variables you used in your formula. Thus doing data[y] should give the y vector. The statsmodels regression is created by calling ols and then we must call fit() to do the regression and summary to get a report on the results.
End of explanation
sm.stats.anova_lm(model)
Explanation: Interpreting Statsmodels
This regression summary has a huge amount of information. The top table includes information about the goodness of fit and regression model, like degrees of freedom and what the independent variable is. The middle table contains information about the regression coefficients including confidence intervals and p-values. The final table contains information about the residuals. The Jarque-Bera test is a normality test, like the Shapiro-Wilks test we learned previously. The p-values are slightly different because they use dof as 1, instead of 2, for their hypothesis test.
ANOVA
One of the most common analysis techniques of a design of experiments is the use of an Analysis of Variance (ANOVA). An ANOVA breaks up the response variance into factor variances. It explains where the variance in the response comes from. We aren't going to go deeply into the theory of ANOVA, but it's important that you know how it's used and how to intepret it. An ANOVA is based on a linear regression like above, but it's a different way of computing p-values. The p-values are the most relevant output of an ANOVA.
Here's an ANOVA of the above example:
End of explanation
xw = [0, 1, 0, 0, 1, 0, 1, 1]
xs = [0, 0, 1, 0, 1, 1, 0, 1]
xm = [0, 0, 0, 1, 0, 1, 1, 1]
y = [0.4, 0.3, 0.3, 0.2, 4.6, 0.3, 0.2, 5.2, 0.3, 0.2, 0.4, 0.3, 5.0, 0.3, 0.3, 5.0]
# we do xw + xw because we have 2 replicates at each condition
data = {'xw': xw + xw, 'xs': xs + xs, 'xm': xm + xm, 'y': y}
model = ols('y~xw + xs + xm + xw * xs + xw * xm + xs * xm + xw * xm * xs', data=data).fit()
sm.stats.anova_lm(model, typ=2)
Explanation: The ANOVA test gives information about each factor. The df is the degrees of freedom used to model each factor, the sum_sq is difference between the grand mean response and mean response of the treatment, the mean_sq is the sum_sq divided by the degrees of freedom, the F is an F-test statistic (like a T statistic from a t-test), and the final column contains p-value for the existence of each treatment.
F-test
The F-test is an alternative to the t-tests we do for regression coefficients being non-zero. The F-test is a little bit different than a t-test. One important idea of an F-test is that when we consider regression coefficents, we imagine our null model as being nested within the model we're considering. That means that the null hypothesis, the regression coefficient is zero, is a special case of the model we're considering where the regression coefficient is non-zero. An example of models that are not nested would be comparing using a $\beta \sin x$ vs $\beta x^2$. There is no obvious way to nest one of these models in the other to create a null hypothesis. Notice that if you imagine the F-test exactly the same as the t-test (null is regression coefficient being 0), then you'll always have nested models.
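As a small illustration (a sketch, not part of the lecture), the same statsmodels tools used above can run this kind of nested-model F-test directly: the reduced model keeps only the main effects of the plant data defined below, and the full model adds every interaction.
reduced = ols('y ~ xw + xs + xm', data=data).fit()   # nested (reduced) model: main effects only
full = ols('y ~ xw * xs * xm', data=data).fit()      # adds all interaction terms
print(sm.stats.anova_lm(reduced, full))              # F statistic and p-value for the interactions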
Designing Experiments
One at a time (bad example)
One common choice for designing experiments might be one at a time. In this approach you vary each treatment once. Let's see an example. Say you want to know how water, sun, and playing music affects plant growth. A one at a time design would look like this:
| TC | Water | Sun | Music | Plant Growth |
| --- |: ---- | :----:| :---: | ---:|
| 1 | 0 | 0 | 0 |$y_1$|
| 2 | 1 | 0 | 0 |$y_2$|
| 3 | 0 | 1 | 0 |$y_3$|
| 4 | 0 | 0 | 1 |$y_4$|
Notice that we have switched our level coding to $0$ and $1$. The choice is arbitrary, but it demonstrates better the idea of one at a time design. What is wrong with this design?
Water and sun will never be active at the same time, meaning that all of our experiments will not actually have plant growth. As we discussed in unit 12, lecture 3, this means we're missing interactions. Look at the model equation we assume with one at a time:
$$
y = \beta_w x_w + \beta_s x_s + \beta_m x_m \ldots + \epsilon
$$
This is missing those interactions, like how the system changes when both water and sun are given to the plant. The correct model equation is:
$$
y = \beta_w x_w + \beta_s x_s + \beta_m x_m + \beta_{ws} x_{ws} + \beta_{wm} x_{wm} + \beta_{sm} x_{sm} + \beta_{wsm} x_{wsm} + \epsilon
$$
To solve for all these regression coefficients, we need to have at least as many experiments. This leads to...
Factorial Design
With a factorial design, we have one treatment condition for all permutations of the factor levels. For our plant growth example, the experiments would look like:
| TC | Water | Sun | Music | Plant Growth |
| --- |: ---- | :----:| :---: | ---:|
| 1 | 0 | 0 | 0 |$y_1$|
| 2 | 1 | 0 | 0 |$y_2$|
| 3 | 0 | 1 | 0 |$y_3$|
| 4 | 0 | 0 | 1 |$y_4$|
| 5 | 1 | 1 | 0 |$y_5$|
| 6 | 0 | 1 | 1 |$y_6$|
| 7 | 1 | 0 | 1 |$y_7$|
| 8 | 1 | 1 | 1 |$y_8$|
The factorial design will have $L^K$ treatment conditions, where $L$ is the number of levels and $K$ is the number of factors. $2^3 = 8$ in this case. One at a time is $1 + K$ treatment conditions for comparison.
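A table like this can also be generated programmatically; here is a short sketch using only the standard library (the factor names are just illustrative):
import itertools

factors = ['Water', 'Sun', 'Music']
for tc, levels in enumerate(itertools.product([0, 1], repeat=len(factors)), start=1):
    print(tc, dict(zip(factors, levels)))  # prints all 2**3 = 8 treatment conditions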
Factorial Analysis Example
Let's consider the following example data. The plant growth is in grams. We have one replicate at each condition in this example
End of explanation |
5,434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python
Step1: With this line, our little script already has the package ready to use. To access its modules we would have to write numpy.function_name. That is not wrong as such, but, so to speak, the standard is to import NumPy as shown below
Step2: This way your code becomes more readable for everyone else, and more comfortable and faster to write. Besides, by convention, you should import NumPy as np.
NumPy arrays
NumPy can work with arrays of $n$ dimensions, but the most common are one-, two- and three-dimensional arrays (1D, 2D, 3D). In addition, we can set the type of the array's elements when we create it. This is very important, because if the array's type is fixed before the script runs, Python does not have to stop to infer the type of the elements, so we gain a lot of speed.
Besides this, certain operations in NumPy, or libraries that depend on NumPy, require the operands to be of a particular type.
Next we are going to create a series of arrays of different dimensions from a plain Python list
Step3: Keep in mind that using np.array requires an already existing object, such as the lists used above. That object is what goes into the object parameter. The other most important parameter is dtype, which lets us state what type the array we build will have. For more information about creating an array, you can check the official documentation.
Besides creating arrays from an object, NumPy also lets us create "empty" arrays or arrays with preset values, thanks to the following functions
Step4: In addition, we can access an element of the array in the same way as with Python lists, using the [] operator.
Step5: We can also access subsets within the array using the [] operator and the : operator
Step6: And using these same two operators, in two-dimensional arrays, for example, we can access all the elements of a row or a column as follows
Step7: Now that we know how to handle NumPy arrays, how to create them and how to access their information, we can start doing mathematical operations with them.
Exercise
Thanks to the different ways of indexing an array that NumPy allows, we can perform operations in a vectorised way, avoiding loops. This means more efficient and shorter, more readable code. To practise this, we are going to do the following exercise.
Generate a random square matrix of size 1000. Once created, generate a new matrix where rows and columns 0 and $n-1$ are repeated 500 times and the centre of the matrix stays exactly the same as the original. An example of this can be seen below
Step8: We can also find functions to compute the cosine between two arrays, the vector product, or to raise an array to a power. All the mathematical functions can be found in the documentation.
Another very important NumPy module is the linear algebra one, whose functions we can access through np.linalg. This module contains operations such as the dot product of two arrays, the singular value decomposition, functions to solve systems of equations, and so on. Once again, the documentation is our friend, and in it we can find all the information about these functions along with usage examples.
Exercise
A rotation matrix $R$ is a matrix that represents a rotation in Euclidean space. The matrix $R$ is written as $$R=\left(\begin{matrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{matrix}\right)$$ where $\theta$ is the angle rotated counter-clockwise.
These matrices are widely used in geometry, computer science and physics. Example uses include computing the rotation of an object in a graphics system, rotating a camera around a point in space, and so on.
Among their properties, these matrices are orthogonal (their inverse equals their transpose) and their determinant is equal to 1. So, generate an array and check whether that array is a rotation matrix.
Exercise
Given the array shown below, complete the following items
Step9: Multiply array1 by $\frac{\pi}{4}$ and compute the sine of the resulting array.
Generate a new array whose value is twice the previous result plus the vector array1.
Compute the norm of the resulting vector. To do so, check the documentation to see which function performs this task, and pay attention to the parameters it takes.
Logical operations
Besides the algebraic and arithmetic functions, we can also use the logical operators to compare two arrays. For example, if we want to compare two arrays element by element for equality, we use the == operator.
Step10: The same applies to the comparison operators < and >, which compare element by element according to the operator we have chosen.
Step11: However, if we want to check whether two arrays are completely equal, returning a single True or False value, we must use the function np.array_equal(array_1, array_2).
Step12: The logical operators & and | are also available for logical operations. NumPy additionally offers the functions np.logical_and() and np.logical_or() for these operations, and np.logical_not to negate an array.
Other functions we have access to provide statistical summaries such as the mean or the standard deviation, the sum of a row of an array, and so on. Below we can see some of what it offers
Step13: Compute the mean and the standard deviation of the matrix.
Get the minimum and maximum elements of the matrix.
Compute the determinant, the trace and the transpose of the matrix.
Compute the singular value decomposition of the matrix.
Compute the sum of the elements on the main diagonal of the matrix.
Saving NumPy arrays to files
Sometimes our problem produces matrices of a considerable size, and the compute time needed to obtain such a matrix is too large for us to afford generating it again, or it is simply not possible to reproduce the calculations.
That is why NumPy offers a solution: it lets us save these arrays to files thanks to the functions save, savetxt, savez and savez_compressed.
* save
Step14: As we can see, both functions give the same result, so why bother learning lambda functions? Well, with these functions we can do things like the following
Step15: What just happened here? Well, to begin with, this is one of the advantages of lambda functions: we can create anonymous one-line functions, with a specific use like the one we just saw, that are not bound to any particular name the way def functions are. Concretely, what just happened is that we declared a lambda function that takes two parameters, $x$ and $y$, adds them and multiplies the sum by two.
Right after that, we passed the two numbers that take part in the operation as parameters of the lambda function; they are added, multiplied by two and automatically returned.
We can also see here one of the differences with classic functions: lambda functions have an implicit return. That is, once all the operations defined in the lambda function have been carried out, the resulting value is returned automatically.
Another difference is that lambda functions are expressions, not statements, so they can appear in places where, because of Python's syntax, a def cannot appear.
They also have other peculiarities, such as that you cannot write an if-else block inside a lambda the way we are used to; instead it has to be expressed as shown below
Step16: They are also able to run internal loops and call functions such as map, use list comprehensions, and so on.
Step17: When to use lambda functions
One of the best moments to use a lambda function is when we are going to use a function to sort the elements of a list following a specific order. That order can be defined with a function, so what better than a lambda to define it; being anonymous, there is no need to define it beforehand. Let's see an example of this
Step18: We can also make a lambda function act as a lexical closure. But what is a closure? A lexical closure is a name for a function that remembers the values of a given namespace even when the program's flow of execution is no longer inside that namespace. Let's see an example of this.
Step19: Another well-known use of lambda functions is inside the filter or map functions, since we can filter elements that satisfy a set of conditions, making use of Functional Programming.
Functional programming is a programming paradigm that evaluates expressions the same way mathematics does: the expression is evaluated, a result is returned, and the operands remain immutable. This means that a function $f$ performing a given task always returns the same result when it receives the element $x$ as a parameter. This is known as referential transparency. Functional programming draws heavily on the $\lambda$-calculus.
Python, even though it is an imperative language by nature, can emulate this functional programming, and referential transparency too, using the lambda functions we have seen and functions such as filter, map or reduce, which let us do in a single line of code what would take quite a bit more written the "classic" way, and in a more efficient and elegant form.
The map function
The map function takes a function and a list or sequence as parameters. map applies the input function over the list and returns a generator (in Python 2 it returns a list). A generator is a Python structure that acts like an iterator. For example, when we use range(5) we get an iterable of integers from 0 up to (but not including) 5. We can obtain all of its elements at once with list(range(5)), or take them out one by one in a loop, for i in range(5). Once consumed, the elements are gone from the generator (obviously).
Let's see an example below, in which we use the map function without and with lambda functions.
Step20: As you can see, the code written with the lambda function is much shorter than using a def function, and cleaner. Moreover, there is no need to define a function and bind it to a name to do this task.
Besides this, we can also map over several lists at once. The only requirement is that the lists have the same length.
Step21: The filter function
The filter function is the elegant way to filter our lists and remove the elements that do not satisfy a given condition. Like map, it takes a function and a sequence as parameters. For example, let's extract the even and odd elements of the Fibonacci sequence into separate lists using filter
Step22: The reduce function
The reduce function, as its name says, reduces a list to a single value by means of a function we define. Unlike the other two, this function is not in Python's default scope; that is, in Python 3 we cannot access it just by starting the Python prompt.
To be able to use it, we have to do the following
Step23: With this, we can now use the reduce function. Like the previous ones, it takes as its first argument a function to apply over the list it receives as its second argument. Let's see an example of how to compute the factorial of a number in a single line. | Python Code:
import numpy
Explanation: Introduction to Python: intermediate level
Python is a very widespread language, with a rich community covering many areas; it is very easy to learn and program in, and it lets us carry out a great many different tasks. But man does not live by bread alone. Python has many libraries and modules that make many tasks easier, that are backed by a large number of people and that, in many cases, perform specific tasks much more efficiently than doing them in pure Python.
That is the case of libraries such as NumPy, Pandas, scikit-learn, Django and many more.
Besides this, even though Python is an imperative language, it lets us simulate what is known as functional programming, which comes straight from the Lambda Calculus, thanks to the use of lambda functions.
In what follows we will see an introduction to NumPy, which serves as a gateway to many more modules, and functional programming with lambda functions.
NumPy
NumPy is the central package we will use for tasks such as data science, or for the many mathematical computations that require algebraic structures such as vectors and matrices. Its advantage is that we can comfortably work with Python syntax, creating $n$-dimensional structures in a simple way and applying complex functions to these structures in a single, very efficient line of code.
This efficiency comes from the fact that NumPy uses C code to perform these computations, which is much faster than the same implementation in Python.
Another of NumPy's great strengths is its official documentation, which is among the most complete and best written out there: it covers every aspect of the different functions very well and includes a large number of examples.
Installation
To install NumPy we have to run the following line in our terminal:
[braulio@braulio-PC ~]$ sudo pip install numpy
With this, thanks to pip, NumPy will be downloaded and installed. So, once it is installed... let's get to work!
First steps with NumPy
To start working with NumPy, the first thing is to import the package in our script.
End of explanation
import numpy as np
Explanation: With this line, our little script already has the package ready to use. To access its modules we would have to write numpy.function_name. That is not wrong as such, but, so to speak, the standard is to import NumPy as shown below:
End of explanation
lista1D = [1, 5, 6, 79]
lista2D = [[2, 5.3, 0, -1.99], [14.5, 5, -5., 1]]
lista3D = [[[1, 5, 6, 79], [5, 7, 9, 0]], [[1, 5, 6, 79], [5, 7, 9, 0]]]
uni_dimensional = np.array(lista1D, dtype=np.int32) # The elements of this array will be 32-bit integers
bi_dimensional = np.array(lista2D, dtype=np.float64) # In this case, they will be 64-bit floats
tri_dimensional = np.array(lista3D) # And in this case?
print("Array 1D:\n", uni_dimensional)
print("Array 2D:\n", bi_dimensional)
print("Array 3D:\n", tri_dimensional)
Explanation: This way your code becomes more readable for everyone else, and more comfortable and faster to write. Besides, by convention, you should import NumPy as np.
NumPy arrays
NumPy can work with arrays of $n$ dimensions, but the most common are one-, two- and three-dimensional arrays (1D, 2D, 3D). In addition, we can set the type of the array's elements when we create it. This is very important, because if the array's type is fixed before the script runs, Python does not have to stop to infer the type of the elements, so we gain a lot of speed.
Besides this, certain operations in NumPy, or libraries that depend on NumPy, require the operands to be of a particular type.
Next we are going to create a series of arrays of different dimensions from a plain Python list:
End of explanation
x = np.arange(0.0,10.0,0.5)
print(x.ndim) # Shows the number of dimensions of the array
print(x.size) # Shows the number of elements in x
print(x.flags) # Shows the information stored in the different flags of x
print(x.itemsize) # Shows the number of bytes one element of x occupies
print(x.nbytes) # Shows the total number of bytes occupied by the elements of x
Explanation: Keep in mind that using np.array requires an already existing object, such as the lists used above. That object is what goes into the object parameter. The other most important parameter is dtype, which we use to indicate the type of the array we are building. For more information on creating an array, you can check the official documentation.
Besides creating arrays from an object, NumPy also gives us the option of creating "empty" arrays or arrays with preset values, thanks to the following functions (a short example follows the list):
* np.ones: generates an array of ones.
* np.zeros: generates an array of zeros.
* np.random.random: generates an array of random floats in the interval $[0,1]$.
* np.full: similar to np.ones, except that every value will be equal to the one it receives as a parameter.
* np.arange: generates an array of numbers within an interval, with whatever step we want.
* np.linspace: similar to the previous one, except that we fix the interval and the number of elements we want.
* np.eye or np.identity: create the identity matrix.
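A few quick illustrative calls (the shapes and fill values below are arbitrary choices, just to show the syntax):
np.ones((2, 3))               # 2x3 array filled with 1.0
np.zeros(4, dtype=np.int32)   # four 32-bit integer zeros
np.full((2, 2), 7.5)          # 2x2 array where every element is 7.5
np.arange(0, 10, 2)           # [0 2 4 6 8]
np.linspace(0, 1, 5)          # five evenly spaced values between 0 and 1
np.eye(3)                     # 3x3 identity matrix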
Exercise
Now that we have seen how to create arrays from an object, and other functions to create arrays with preset values, create different 1D, 2D and 3D arrays with the functions above and print them to the screen. Try using different types to see how the arrays change. If you have doubts about how to use them, you can check the official documentation.
Once our arrays are created, we can find very useful information about them in the array's own attributes. These attributes can tell us the number of dimensions, the memory each element takes up, and so on. The most common ones, and how to access them, can be seen below:
End of explanation
array_1d = np.arange(-5,5,1)
array_1d[1]
Explanation: Besides this, we can access an element of the array in the same way as we do with Python lists, using the [] operator.
End of explanation
array_1d[3:6]
Explanation: We can also access subsets of the array using the [] operator together with the : operator, as follows:
End of explanation
array_random2d = np.random.random((4,6))
print("Matriz aleatoria:\n", array_random2d)
# Para acceder a todos los elementos de la segunda fila
print("Fila: \n", array_random2d[1])
# Para acceder a todos los elementos de la cuarta columna
print("Columna: \n", array_random2d[:,3])
# Filas de la segunda fila en adelante incluida esta
print("Filas: \n", array_random2d[1:])
# Columnas de la tercera columna en adelante
print("Columnas: \n", array_random2d[:,2:])
# Subconjuntos en filas y comunas
print("Subconjunto de columnas: \n", array_random2d[0:2,2:5])
Explanation: And using these same two operators, in two-dimensional arrays for example, we can access every element of a row or a column as follows:
End of explanation
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print("Suma usando el operador +:\n\t", a + b)
print("Suma usando np.add:\n\t", np.add(a,b))
Explanation: Now that we know how to handle NumPy arrays, how to create them and how to access their information, we can start doing mathematical operations with them.
Exercise
Thanks to the different ways of indexing an array that NumPy allows, we can perform operations in a vectorized way, avoiding loops. This means more efficient code that is also shorter and more readable. To practise this, let's do the following exercise.
Generate a random square matrix of size 1000. Once created, build a new matrix in which rows and columns 0 and $n-1$ are repeated 500 times while the centre of the matrix stays exactly the same as the original. An example of this can be seen below: $$\left(\begin{matrix} 1 & 2 & 3 \\ 2 & 3 & 4 \\ 3 & 4 & 5\end{matrix}\right) \Longrightarrow
\left(\begin{matrix}
1 & 1 & 1 & 2 & 3 & 3 & 3 \\
1 & 1 & 1 & 2 & 3 & 3 & 3 \\
1 & 1 & 1 & 2 & 3 & 3 & 3 \\
2 & 2 & 2 & 3 & 4 & 4 & 4 \\
3 & 3 & 3 & 4 & 5 & 5 & 5 \\
3 & 3 & 3 & 4 & 5 & 5 & 5 \\
3 & 3 & 3 & 4 & 5 & 5 & 5
\end{matrix}\right) $$
Implement it both with a for loop and by vectorizing the computation with what we have seen so far, to compare the running times of the two variants. To measure the time, you can use the time module.
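A minimal sketch of the vectorized variant, shown on the small 3x3 example from the statement rather than the full 1000x1000 case (the repetition counts here are illustrative):
import numpy as np
m = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
reps = np.ones(m.shape[0], dtype=int)
reps[0] = reps[-1] = 3            # would be 500 in the actual exercise
expanded = np.repeat(np.repeat(m, reps, axis=0), reps, axis=1)
print(expanded)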
Operations with arrays
Now that we have seen how to create and handle arrays, we can move on to arithmetic operations with them. NumPy has functions such as np.add, np.subtract, np.multiply and np.divide to add, subtract, multiply and divide arrays. We can also compute the element-wise remainder of two arrays with np.remainder. However, it is not necessary to use these functions as such: we can simply use the usual arithmetic operators, +, -, *, / and %.
End of explanation
array1 = np.array([ -1., 4., -9.])
Explanation: We can also find functions to compute the cosine of an array, the cross product of two arrays, or to raise an array to a power. All the mathematical functions can be found in the documentation.
Another very important NumPy module is the linear algebra one, whose functions we can access through np.linalg. This module contains operations such as products of two arrays, singular value decomposition, functions to solve systems of equations, and so on. Once again, the documentation is our friend, and in it we can find all the information about these functions together with usage examples.
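As a small illustration of np.linalg (the system below is made up just for the example):
A = np.array([[3., 1.], [1., 2.]])
b = np.array([9., 8.])
x = np.linalg.solve(A, b)          # solves A @ x = b
print(x, np.allclose(A @ x, b))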
Exercise
A rotation matrix $R$ is a matrix that represents a rotation in Euclidean space. This matrix $R$ is written as $$R=\left(\begin{matrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{matrix}\right)$$ where $\theta$ is the angle rotated counter-clockwise.
These matrices are widely used in geometry, computer science and physics. Examples of their use include computing the rotation of an object in a graphics system, the rotation of a camera around a point in space, and so on.
These matrices have the properties of being orthogonal (their inverse equals their transpose) and of having determinant equal to 1. With that in mind, generate an array and check whether it is a rotation matrix.
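One possible sketch of that check; the angle is an arbitrary choice:
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
is_orthogonal = np.allclose(R.T, np.linalg.inv(R))
has_unit_det = np.isclose(np.linalg.det(R), 1.0)
print("Rotation matrix?", is_orthogonal and has_unit_det)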
Exercise
Given the array shown below, complete the following items:
End of explanation
a = np.array([1, 2, 2, -1, 5])
b = np.array([0, 1, 2, 42, 5])
print(a == b)
Explanation: Multiply array1 by $\frac{\pi}{4}$ and compute the sine of the resulting array.
Generate a new array whose value is twice the previous result plus the vector array1.
Compute the norm of the resulting vector. To do so, check the documentation to see which function performs this task, and pay attention to the parameters it receives.
Logical operations
Besides the algebraic and arithmetic functions, we can also use logical operators to compare two arrays. For example, if we want to compare two arrays element by element for equality, we use the == operator.
End of explanation
print("Operador <:\t", a < b)
print("Operador >:\t", a > b)
print("Operador <=:\t", a <= b)
Explanation: The same happens with the comparison operators < and >, which compare element by element according to the operator we have chosen.
End of explanation
print("¿Son iguales a y b?", np.array_equal(a, b))
Explanation: But if we want to check whether two arrays are completely equal, returning a single True or False value, we must use the function np.array_equal(array_1, array_2).
End of explanation
n_array1 = np.array([[ 1., 3., 5.], [7., -9., 2.], [4., 6., 8.]])
Explanation: The logical operators & and | are also available for performing logical operations on arrays. In addition, NumPy offers the functions np.logical_and() and np.logical_or() for these operations, and np.logical_not to negate an array.
Other functions we have access to give us statistical information such as the mean or the standard deviation, the sum of a row of an array, and so on. Below are some of the ones NumPy offers (see the short example after this list):
* array.sum(): returns the total sum of the array's components.
* array.min(): returns the minimum value of the array.
* array.max(): returns the maximum value of the whole array, or of each row or column depending on the value of the axis parameter.
* array.cumsum(): returns a new array where each element is the cumulative sum of the array's elements. As before, it depends on the axis parameter.
* array.mean(): to obtain the mean.
* np.median(array): to obtain the median (NumPy arrays have no .median() method, so the module-level function is used).
* np.std(array): to obtain the standard deviation of the array.
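A tiny demonstration on a made-up 2x3 array (values chosen only for illustration; the exercise below uses its own matrix):
demo = np.array([[1., 4., 2.], [3., 0., 5.]])
print(demo.sum(), demo.min(), demo.max(axis=0))   # 15.0, 0.0, column maxima
print(demo.mean(), np.median(demo), np.std(demo))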
Exercise
Given the following matrix, complete these items:
End of explanation
# We define the lambda function like this
suma_l = lambda x, y: x + y
print("Sum with lambda: ", suma_l(4,1))
def suma(x,y):
    return x+y
print("Sum with def: ", suma(4,1))
Explanation: Compute the mean and the standard deviation of the matrix.
Obtain the minimum and maximum elements of the matrix.
Compute the determinant, the trace and the transpose of the matrix.
Compute the singular value decomposition of the matrix.
Compute the sum of the elements on the main diagonal of the matrix.
Saving NumPy arrays to files
Sometimes our problem produces matrices of considerable size, where the computation time needed to obtain them is too long for us to afford generating them again, or it is simply not possible to reproduce the computation.
That is why NumPy offers a solution: it lets us save these arrays to files thanks to the functions save, savetxt, savez and savez_compressed (a small usage sketch follows the list).
* save: stores the array in a NumPy binary file. This file uses the .npy format.
* savetxt: stores the array in a text file.
* savez: stores several arrays in a single uncompressed file. This file uses the .npz format.
* savez_compressed: like the previous one, but in this case the file is compressed. Its format is also .npz.
For more information and usage examples, you can always check the official documentation.
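A minimal usage sketch (the file name is an arbitrary choice):
data = np.random.random((3, 3))
np.save("my_array.npy", data)          # binary .npy file
restored = np.load("my_array.npy")
print(np.array_equal(data, restored))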
I want more exercises to keep practising NumPy
There is a GitHub repository full of exercises covering a large number of NumPy modules and functions, together with their solutions. These exercises, like this short introduction, come as jupyter-notebooks that we can download and play with. The repository can be found here. And remember: whenever a GitHub repository has been useful to you, give it a star.
Lambda functions ($\lambda$)
Lambda functions are used to declare small anonymous functions with a clear purpose. Their behaviour is similar to the functions we declare with def, although they are not exactly the same.
These lambda functions are tied to a single expression, so elements such as return are not allowed, and neither are statement blocks. Besides this, lambda functions do not have to be bound to a specific name, unlike functions defined with def, where that is mandatory. Let's look at an example next.
In this example we are going to write a function that takes two elements as parameters and returns their sum. But we will define it first as a lambda function and then as a classic function with def.
End of explanation
(lambda x, y: 2*(x+y))(5,4)
Explanation: As we can see, both functions give the same result, so why bother learning lambda functions at all? Well, with these functions we can do things like the following:
End of explanation
get_min = lambda x, y: x if x < y else y
print("Mínimo entre 2 y 5: ", get_min(2,5))
Explanation: What just happened here? To begin with, this is one of the advantages of lambda functions: we can create anonymous one-line functions for a specific purpose, as just happened here, which are not bound to any particular name the way def functions are. Concretely, we declared a lambda function that takes two parameters, $x$ and $y$, adds them and multiplies the result by two.
Right after that, we passed the two numbers to operate on as arguments of the lambda function; they are added, multiplied by two and returned automatically.
Here we can also see one of the differences with classic functions: lambda functions have an implicit return. That is, once all the operations defined in the lambda have been carried out, the value is returned automatically.
Another difference is that lambda functions are expressions, not statements, so they can appear in places where, due to Python's syntax, a def cannot.
They also have other peculiarities, such as the fact that you cannot write a regular if-else block inside a lambda; instead it has to be expressed as shown below:
End of explanation
import sys
pinta = lambda x: list(map(sys.stdout.write, x))
lista = pinta(['One\n', 'Two\n', 'Three\n'])
bucle = lambda x: [i for i in range(0,10) if i%x==0]
bucle(2)
Explanation: They can also run internal loops, call functions such as map, use list comprehensions, and so on.
End of explanation
ini = list(range(-10, 10,))
print("Lista inicial con range:\t", ini)
# ¿Y si la queremos ordenada de esta manera?
# [0, -1, 1, -2, 2, ...]
sorted_list = sorted(range(-10, 10), key=lambda x: x**2)
print("Lista ordenada con lambda:\t", sorted_list)
Explanation: When to use lambda functions
One of the best moments to use a lambda function is when we are about to use a function to sort the elements of a list following a particular order. That order can be defined with a function, so what better than a lambda to define it: being anonymous, there is no need to define it beforehand. Let's see an example of this:
End of explanation
def suma_n(n):
return lambda x: x+n
suma5 = suma_n(5)
suma10 = suma_n(10)
print("Suma 5: ", suma5(5))
print("Suma 10: ",suma10(5))
Explanation: We can also make a lambda function act as a lexical closure. But what is a closure? A lexical closure is a name for a function that remembers the values of a given namespace even when the program's flow of execution is no longer inside that namespace. Let's look at an example of this.
End of explanation
from numpy import pi
# We define the list we want to transform
degrees = [180, 250, 0, 18, 214, ]
# Convert from degrees to radians
def deg2rad(deg):
return deg * pi/180
rads_generator = map(deg2rad, degrees)
print("Lista transformada: ", list(rads_generator))
# En caso de que queramos extraerlos de uno en uno:
# for rads in rads_generator:
# print(rads)
rads_generator_lambda = map(lambda x: x*pi/180, degrees)
print("Lista transformada con lambda: ", list(rads_generator_lambda))
Explanation: Another well-known use of lambda functions is inside the filter or map functions, since we can filter elements that meet certain conditions, making use of Functional Programming.
Functional programming is a programming paradigm that evaluates expressions the way mathematics does: the expression is evaluated, a result is returned and the operands remain immutable. This means that, given a function $f$ that performs a certain task, it always returns the same result when it receives the element $x$ as a parameter. This is known as referential transparency. Functional programming draws heavily on the $\lambda$-calculus.
Python, although by nature an imperative language, can emulate this functional style and referential transparency, using the lambda functions we have seen together with functions such as filter, map or reduce, which let us do in a single line of code what would take considerably more written the "classic" way, and in a more efficient and elegant manner.
The map function
The map function takes a function and a list (or other sequence) as parameters. map applies the function it receives to the sequence and returns a generator (in Python 2 it returns a list). A generator is a Python structure that acts as an iterator. For example, using range(5) gives us a generator-like object of integers from 0 to 4. We can obtain all its elements at once with list(range(5)), or pull them out one at a time in a for i in range(5) loop. Once produced by a generator, the elements are gone from it (obviously).
Let's look at an example next, in which we use the map function both without and with lambda functions.
End of explanation
a = [1, 2, 3, 4]
b = [5, 6, 7, 8]
result = list(map(lambda x, y: 2*(x+y), a, b))
print("Resultado: ", result)
Explanation: As you can see, the code written with the lambda function is much shorter and cleaner than using a def function. Besides, there is no need to define a function and bind it to a name just for this task.
On top of that, we can also map over several lists at once. The only requirement is that these lists have the same length.
End of explanation
def Fibonacci(n):
a, b = 0, 1
yield a
yield b
i = 0
while i < n-2:
a, b = b, a + b
yield b
i+=1
pybonacci = list(Fibonacci(1000))
print(pybonacci[0:50], '...', pybonacci[-10:])
# Get the even numbers
even = list(filter(lambda x: x%2==0, pybonacci))
# And the odd ones
odd = list(filter(lambda x: x%2!=0, pybonacci))
print("Odd:\n", odd[:50], "...", odd[-3:])
print("Even:\n", even[:50], "...", even[-3:])
Explanation: The filter function
The filter function is the elegant way to filter our lists and drop the elements that do not meet a given condition. Like map, it takes a function and a sequence as parameters. For example, let's extract the even and odd elements of the Fibonacci sequence into separate lists using filter:
End of explanation
from functools import reduce
Explanation: The reduce function
The reduce function is in charge of, as its name says, reducing a list to a single value by means of a function we define. Unlike the other two, this function is not in Python's default scope; that is, in Python 3 we cannot access it just by starting the Python prompt.
To be able to use it, we have to do the following:
End of explanation
n = 10
fact = reduce(lambda x,y:x*y, range(1,n+1))
print("Factorial de un 10: ", fact)
Explanation: With this, we can now use the reduce function. Like the previous ones, it takes as its first argument a function to apply over the list it receives as its second argument. Let's see an example of how to compute the factorial of a number in a single line.
End of explanation |
5,435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
tulipy
Python bindings for Tulip Indicators
Tulipy requires numpy as all inputs and outputs are numpy arrays (dtype=np.float64).
Installation
You can install via pip install tulipy.
If a wheel is not available for your system, you will need to pip install Cython numpy to build from the source distribution.
Usage
Step1: Information about indicators are exposed as properties
Step2: Single outputs are returned directly. Indicators returning multiple outputs use
a tuple in the order indicated by the outputs property.
Step3: Invalid options will throw an InvalidOptionError
Step4: If inputs of differing sizes are provided, they are right-aligned and trimmed from the left | Python Code:
import numpy as np
import tulipy as ti
ti.TI_VERSION
DATA = np.array([81.59, 81.06, 82.87, 83, 83.61,
83.15, 82.84, 83.99, 84.55, 84.36,
85.53, 86.54, 86.89, 87.77, 87.29])
Explanation: tulipy
Python bindings for Tulip Indicators
Tulipy requires numpy as all inputs and outputs are numpy arrays (dtype=np.float64).
Installation
You can install via pip install tulipy.
If a wheel is not available for your system, you will need to pip install Cython numpy to build from the source distribution.
Usage
End of explanation
def print_info(indicator):
print("Type:", indicator.type)
print("Full Name:", indicator.full_name)
print("Inputs:", indicator.inputs)
print("Options:", indicator.options)
print("Outputs:", indicator.outputs)
print_info(ti.sqrt)
Explanation: Information about indicators are exposed as properties:
End of explanation
ti.sqrt(DATA)
print_info(ti.sma)
ti.sma(DATA, period=5)
Explanation: Single outputs are returned directly. Indicators returning multiple outputs use
a tuple in the order indicated by the outputs property.
End of explanation
try:
ti.sma(DATA, period=-5)
except ti.InvalidOptionError:
print("Invalid Option!")
print_info(ti.bbands)
ti.bbands(DATA, period=5, stddev=2)
Explanation: Invalid options will throw an InvalidOptionError:
End of explanation
DATA2 = np.array([83.15, 82.84, 83.99, 84.55, 84.36])
# 'high' trimmed to DATA[-5:] == array([ 85.53, 86.54, 86.89, 87.77, 87.29])
ti.aroonosc(high=DATA, low=DATA2, period=2)
Explanation: If inputs of differing sizes are provided, they are right-aligned and trimmed from the left:
End of explanation |
5,436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Traveling Salesman problem
Names of group members
// put your names here!
Goals of this assignment
The main goal of this assignment is to use Monte Carlo methods to find the shortest path between several cities - the "Traveling Salesman" problem. This is an example of how randomization can be used to optimize problems that would be incredibly computationally expensive (and sometimes impossible) to solve exactly.
The Traveling Salesman problem
The Traveling Salesman Problem is a classic problem in computer science where the focus is on optimization. The problem is as follows
Step1: This code sets up everything we need
Given a number of cities, set up random x and y positions and calculate a table of distances between pairs of cities (used for calculating the total trip distance). Then set up an array that controls the order that the salesman travels between cities, and plots out the initial path.
Step2: Put your code below this!
Your code should take some number of steps, doing the following at each step
Step4: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import display, clear_output
def calc_total_distance(table_of_distances, city_order):
'''
Calculates distances between a sequence of cities.
Inputs: N x N table containing distances between each pair of the N
cities, as well as an array of length N+1 containing the city order,
which starts and ends with the same city (ensuring that the path is
closed)
Returns: total path length for the closed loop.
'''
total_distance = 0.0
# loop over cities and sum up the path length between successive pairs
for i in range(city_order.size-1):
total_distance += table_of_distances[city_order[i]][city_order[i+1]]
return total_distance
def plot_cities(city_order,city_x,city_y):
'''
Plots cities and the path between them.
Inputs: ordering of cities, x and y coordinates of each city.
Returns: a plot showing the cities and the path between them.
'''
# first make x,y arrays
x = []
y = []
# put together arrays of x and y positions that show the order that the
# salesman traverses the cities
for i in range(0, city_order.size):
x.append(city_x[city_order[i]])
y.append(city_y[city_order[i]])
# append the first city onto the end so the loop is closed
x.append(city_x[city_order[0]])
y.append(city_y[city_order[0]])
#time.sleep(0.1)
clear_output(wait=True)
display(fig) # Reset display
fig.clear() # clear output for animation
plt.xlim(-0.2, 20.2) # give a little space around the edges of the plot
plt.ylim(-0.2, 20.2)
# plot city positions in blue, and path in red.
plt.plot(city_x,city_y, 'bo', x, y, 'r-')
Explanation: The Traveling Salesman problem
Names of group members
// put your names here!
Goals of this assignment
The main goal of this assignment is to use Monte Carlo methods to find the shortest path between several cities - the "Traveling Salesman" problem. This is an example of how randomization can be used to optimize problems that would be incredibly computationally expensive (and sometimes impossible) to solve exactly.
The Traveling Salesman problem
The Traveling Salesman Problem is a classic problem in computer science where the focus is on optimization. The problem is as follows: Imagine there is a salesman who has to travel to N cities. The order is unimportant, as long as he only visits each city once on each trip, and finishes where he started. The salesman wants to keep the distance traveled (and thus travel costs) as low as possible. This problem is interesting for a variety of reasons - it applies to transportation (finding the most efficient bus routes), logistics (finding the best UPS or FedEx delivery routes for some number of packages), or in optimizing manufacturing processes to reduce cost.
The Traveling Salesman Problem is extremely difficult to solve for large numbers of cities - testing every possible combination of cities would take N! (N factorial) individual tests. For 10 cities, this would require 3,628,800 separate tests. For 20 cities, this would require 2,432,902,008,176,640,000 (approximately $2.4 \times 10^{18}$) tests - if you could test one combination per microsecond ($10^{-6}$ s) it would take approximately 76,000 years! For 30 cities, at the same rate testing every combination would take more than one billion times the age of the Universe. As a result, this is the kind of problem where a "good enough" answer is sufficient, and where randomization comes in.
A good local example of a solution to the Traveling Salesman Problem is an optimized Michigan road trip calculated by a former MSU graduate student (and one across the US). There's also a widely-used software library for solving the Traveling Salesman Problem; the website has some interesting applications of the problem!
End of explanation
# number of cities we'll use.
number_of_cities = 30
# seed for random number generator so we get the same value every time!
np.random.seed(2024561414)
# create random x,y positions for our current number of cities. (Distance scaling is arbitrary.)
city_x = np.random.random(size=number_of_cities)*20.0
city_y = np.random.random(size=number_of_cities)*20.0
# table of city distances - empty for the moment
city_distances = np.zeros((number_of_cities,number_of_cities))
# calculate distance between each pair of cities and store it in the table.
# technically we're calculating 2x as many things as we need (as well as the
# diagonal, which should all be zeros), but whatever, it's cheap.
for a in range(number_of_cities):
for b in range(number_of_cities):
city_distances[a][b] = ((city_x[a]-city_x[b])**2 + (city_y[a]-city_y[b])**2 )**0.5
# create the array of cities in the order we're going to go through them
city_order = np.arange(city_distances.shape[0])
# tack on the first city to the end of the array, since that ensures a closed loop
city_order = np.append(city_order, city_order[0])
Explanation: This code sets up everything we need
Given a number of cities, set up random x and y positions and calculate a table of distances between pairs of cities (used for calculating the total trip distance). Then set up an array that controls the order that the salesman travels between cities, and plots out the initial path.
End of explanation
fig = plt.figure()
# Put your code here!
Explanation: Put your code below this!
Your code should take some number of steps, doing the following at each step:
Randomly swap two cities in the array of cities (except for the first/last city)
Check the total distance traversed by the salesman
If the new ordering results in a shorter path, keep it. If not, throw it away.
Plot the shorter of the two paths (the original one or the new one)
Also, keep track of the steps and the minimum distance traveled as a function of number of steps and plot out the minimum distance as a function of step!
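One possible, deliberately hedged sketch of that loop is shown below; the number of steps, the random-swap strategy and plotting only at the end are arbitrary choices, not the only valid solution:
n_steps = 10000
best_order = city_order.copy()
best_distance = calc_total_distance(city_distances, best_order)
history = [best_distance]
for step in range(n_steps):
    trial = best_order.copy()
    # pick two interior positions; index 0 and the final index close the loop and stay fixed
    i, j = np.random.randint(1, number_of_cities, size=2)
    trial[i], trial[j] = trial[j], trial[i]
    d = calc_total_distance(city_distances, trial)
    if d < best_distance:
        best_order, best_distance = trial, d
    history.append(best_distance)
plot_cities(best_order, city_x, city_y)
plt.figure()
plt.plot(history)
plt.xlabel('step')
plt.ylabel('minimum distance so far')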
End of explanation
from IPython.display import HTML
HTML(
<iframe
src="https://goo.gl/forms/dDkx8yxbMC2aKHJb2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
)
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation |
5,437 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression for Banknote Authentication
<hr>
Overview
Choosing a classification algorithm
First steps with scikit-learn
Loading the Dataset
Logistic regression
Training a logistic regression model with scikit-learn
Measuring our classifier using Binary classification performance metrics
Confusion Matrix
Precision and Recall
Calculating the F1 measure
ROC-AUC
Finding the most Important Features
Plotting our model decision regions
Tackling overfitting via regularization
Summary
Choosing a classification algorithm
In the subsequent chapters, we will take a tour through a selection of popular and powerful machine learning algorithms that are commonly used in academia as well as in the industry. While learning about the differences between several supervised learning algorithms for classification, we will also develop an intuitive appreciation of their individual strengths and weaknesses by tackling real-world classification problems. We will take our first steps with the scikit-learn library, which offers a user-friendly interface for using those algorithms efficiently and productively.
Choosing an appropriate classification algorithm for a particular problem task requires practice
Step1: We'll take a look at our data
Step2: Apparently, the first 5 instances of our dataset are all fake (Class is 0).
Step3: This shows we have 762 total instances of Fake banknotes and 610 total instances of Authentic banknotes in our dataset.
Step4: To evaluate how well a trained model performs on unseen data, we will further split the dataset into separate training and test datasets. Splitting data into 70% training and 30% test data
Step5: Many machine learning and optimization algorithms also require feature scaling
for optimal performance. Here, we will standardize the features using the StandardScaler class from scikit-learn's preprocessing module
Step6: Using the preceding code, we loaded the StandardScaler class from the preprocessing module and initialized a new StandardScaler object that we assigned to the variable sc. Using the fit method, StandardScaler estimated the parameters μ (sample mean) and σ (standard deviation) for each feature dimension from the training data. By calling the transform method, we then standardized the training data using those estimated parameters μ and σ. Note that we used the same scaling parameters to standardize the test set so that both the values in the training and test dataset are comparable to each other.
Logistic regression
Step7: Learning the weights of the logistic cost function
Step8: Training a logistic regression model with scikit-learn
Scikit-learn implements a highly optimized version of logistic regression that also supports multiclass settings off-the-shelf, so we will skip the implementation and use the sklearn.linear_model.LogisticRegression class as well as the familiar fit method to train the model on the standardized banknote training dataset
Step9: Having trained a model in scikit-learn, we can make predictions via the predict method
Step10: On executing the preceding code, we see that the logistic regression model misclassifies 5 out of the 412 note samples. Thus, the misclassification error on the test dataset is 0.012, or 1.2 percent (5/412 ≈ 0.012).
Measuring our classifier using Binary classification performance metrics
A variety of metrics exist to evaluate the performance of binary classifiers against
trusted labels. The most common metrics are accuracy, precision, recall, F1 measure,
and ROC AUC score. All of these measures depend on the concepts of true positives,
true negatives, false positives, and false negatives. Positive and negative refer to the
classes. True and false denote whether the predicted class is the same as the true class.
For our Banknote classifier, a true positive prediction is when the classifier correctly
predicts that a note is authentic. A true negative prediction is when the classifier
correctly predicts that a note is fake. A prediction that a fake note is authentic
is a false positive prediction, and an authentic note that is incorrectly classified as fake is a
false negative prediction.
Confusion Matrix
A confusion matrix, or contingency table, can be used to
visualize true and false positives and negatives. The rows of the matrix are the true
classes of the instances, and the columns are the predicted classes of the instances
Step11: The confusion matrix indicates that there were 227 true negative predictions, 180
true positive predictions, 0 false negative predictions, and 5 false positive
predictions.
Scikit-learn also implements a large variety of different performance metrics that are available via the metrics module. For example, we can calculate the classification accuracy of the logistic regression model on the test set as follows
Step12: Here, y_test are the true class labels and y_pred are the class labels that we predicted previously.
Furthermore, we can predict the class-membership probability of the samples via
the predict_proba method. For example, we can predict the probabilities of the
first banknote sample
Step13: The preceding array tells us that the model predicts a chance of 99.96 percent that the sample is an authentic banknote (y = 1) class, and 0.003 percent chance that the sample is a fake note (y = 0).
While accuracy measures the overall correctness of the classifier, it does not distinguish between false positive errors and false negative errors. Some applications may be more sensitive to false negatives than false positives, or vice
versa. Furthermore, accuracy is not an informative metric if the proportions of the classes are skewed in the population. For example, a classifier that predicts whether or not credit card transactions are fraudulent may be more sensitive to
false negatives than to false positives.
A classifier that always predicts that transactions are legitimate could have a high accuracy score, but would not be useful. For these reasons, classifiers are often evaluated using two additional measures called precision and recall.
Precision and Recall
Step14: Our classifier's precision is 0.988; almost all of the notes that it predicted as
authentic were actually authentic. Its recall is also high, indicating that it correctly classified
approximately 98 percent of the authentic notes as authentic.
Calculating the F1 measure
The F1 measure is the harmonic mean, or weighted average, of the precision and
recall scores. Also called the f-measure or the f-score, the F1 score is calculated using
the following formula
Step15: The arithmetic mean of our classifier's precision and recall scores is 0.98. As the
difference between the classifier's precision and recall is small, the F1 measure's
penalty is small. Models are sometimes evaluated using the F0.5 and F2 scores,
which favor precision over recall and recall over precision, respectively.
ROC AUC
A Receiver Operating Characteristic, or ROC curve, visualizes a classifier's performance. Unlike accuracy, the ROC curve is insensitive to data sets with unbalanced class proportions; unlike precision and recall, the ROC curve illustrates the classifier's performance for all values of the discrimination threshold. ROC curves plot the classifier's recall against its fall-out. Fall-out, or the false positive rate, is the number of false positives divided by the total number of negatives. It is
calculated using the following formula
Step16: Plotting the ROC curve for our banknote authentication classifier
Step17: From the ROC AUC plot, it is apparent that our classifier outperforms random
guessing and does a very good job in classifying; almost all of the plot area lies under its curve.
Finding the most important features with forests of trees
This example shows the use of forests of trees to evaluate the importance of features on the banknote classification task. The red bars are the feature importances of the forest, along with their inter-trees variability.
Step18: We'll cover the details of the code later. For now it can be evidently seen that our most important features that are helping us to correctly classify are | Python Code:
import numpy as np
import pandas as pd
# read .csv from provided dataset
csv_filename="data_banknote_authentication.txt"
# We assign the collumn names ourselves and load the data in a Pandas Dataframe
df=pd.read_csv(csv_filename,names=["Variance","Skewness","Curtosis","Entropy","Class"])
Explanation: Logistic Regression for Banknote Authentication
<hr>
Overview
Choosing a classification algorithm
First steps with scikit-learn
Loading the Dataset
Logistic regression
Training a logistic regression model with scikit-learn
Measuring our classifier using Binary classification performance metrics
Confusion Matrix
Precision and Recall
Calculating the F1 measure
ROC-AUC
Finding the most Important Features
Plotting our model decision regions
Tackling overfitting via regularization
Summary
Choosing a classification algorithm
In the subsequent chapters, we will take a tour through a selection of popular and powerful machine learning algorithms that are commonly used in academia as well as in the industry. While learning about the differences between several supervised learning algorithms for classification, we will also develop an intuitive appreciation of their individual strengths and weaknesses by tackling real-world classification problems. We will take our first steps with the scikit-learn library, which offers a user-friendly interface for using those algorithms efficiently and productively.
Choosing an appropriate classification algorithm for a particular problem task requires practice: each algorithm has its own quirks and is based on certain assumptions. The "No Free Lunch" theorem: no single classifier works best across all possible scenarios. In practice, it is always recommended that you compare the performance of at least a handful of different learning algorithms to select the best model for the particular problem; these may differ in the number of features or samples, the amount of noise in a dataset, and whether the classes are linearly separable or not.
Eventually, the performance of a classifier, computational power as well as predictive power, depends heavily on the underlying data that are available for learning. The five main steps that are involved in training a machine learning algorithm can be summarized as follows:
Selection of features.
Choosing a performance metric.
Choosing a classifier and optimization algorithm.
Evaluating the performance of the model.
Tuning the algorithm.
Since the approach of this section is to build machine learning knowledge step by step, we will mainly focus on the principal concepts of the different algorithms in this chapter and revisit topics such as feature selection and preprocessing, performance metrics, and hyperparameter tuning for more detailed discussions later in the section.
First steps with scikit-learn
In this example we are going to use the Logistic Regression algorithm to classify banknotes as authentic or not. Since we have two possible outputs (Authentic or Not Authentic), this type of classification is called Binary Classification. A classification task where we have to assign instances to more than two classes is called Multi-Class Classification.
Dataset
You can get the Banknote Authentication dataset from here: http://archive.ics.uci.edu/ml/datasets/banknote+authentication. UCI Machine Learning Repository is one of the most widely used resource for datasets. As you'll see, we use multiple datasets from this repository to tackle different Machine Learning tasks.
The Banknote Authentication data was extracted from images that were taken from genuine and forged banknote-like specimens. For digitization, an industrial camera usually used for print inspection was used. The final images have 400x 400 pixels. Due to the object lens and distance to the investigated object gray-scale pictures with a resolution of about 660 dpi were gained. Wavelet Transform tool were used to extract features from images. The dataset contains the following attributes:
variance of Wavelet Transformed image (continuous)
skewness of Wavelet Transformed image (continuous)
curtosis of Wavelet Transformed image (continuous)
entropy of image (continuous)
class (integer 0 for fake and 1 for authentic bank notes)
Save the downloaded data_banknote_authentication.txt in the same directory as of your code.
End of explanation
df.head()
Explanation: We'll take a look at our data:
End of explanation
print("No of Fake bank notes = " + str(len(df[df['Class'] == 0])))
print("No of Authentic bank notes = " + str(len(df[df['Class'] == 1])))
Explanation: Apparently, the first 5 instances of our dataset are all fake (Class is 0).
End of explanation
features=list(df.columns[:-1])
print("Our features :" )
features
X = df[features]
y = df['Class']
print('Class labels:', np.unique(y))
Explanation: This shows we have 762 total instances of Fake banknotes and 610 total instances of Authentic banknotes in our dataset.
End of explanation
from sklearn.model_selection import train_test_split  # cross_validation was renamed to model_selection in newer scikit-learn
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
Explanation: To evaluate how well a trained model performs on unseen data, we will further split the dataset into separate training and test datasets. Splitting data into 70% training and 30% test data:
End of explanation
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
Explanation: Many machine learning and optimization algorithms also require feature scaling
for optimal performance. Here, we will standardize the features using the StandardScaler class from scikit-learn's preprocessing module:
Standardizing the features:
End of explanation
import matplotlib.pyplot as plt
import numpy as np
def sigmoid(z):
return 1.0 / (1.0 + np.exp(-z))
z = np.arange(-7, 7, 0.1)
phi_z = sigmoid(z)
plt.plot(z, phi_z)
plt.axvline(0.0, color='k')
plt.ylim(-0.1, 1.1)
plt.xlabel('z')
plt.ylabel('$\phi (z)$')
# y axis ticks and gridline
plt.yticks([0.0, 0.5, 1.0])
ax = plt.gca()
ax.yaxis.grid(True)
plt.tight_layout()
# plt.savefig('./figures/sigmoid.png', dpi=300)
plt.show()
Explanation: Using the preceding code, we loaded the StandardScaler class from the preprocessing module and initialized a new StandardScaler object that we assigned to the variable sc. Using the fit method, StandardScaler estimated the parameters μ (sample mean) and σ (standard deviation) for each feature dimension from the training data. By calling the transform method, we then standardized the training data using those estimated parameters μ and σ. Note that we used the same scaling parameters to standardize the test set so that both the values in the training and test dataset are comparable to each other.
Logistic regression:
Logistic regression is a classification model that is very easy to implement but performs very well on linearly separable classes. It is one of the most widely used algorithms for classification in industry. The logistic regression model is a linear model for binary classification that can be extended to multiclass classification via the OvR technique.
The sigmoid function used in the Logistic Regression:
End of explanation
def cost_1(z):
return - np.log(sigmoid(z))
def cost_0(z):
return - np.log(1 - sigmoid(z))
z = np.arange(-10, 10, 0.1)
phi_z = sigmoid(z)
c1 = [cost_1(x) for x in z]
plt.plot(phi_z, c1, label='J(w) if y=1')
c0 = [cost_0(x) for x in z]
plt.plot(phi_z, c0, linestyle='--', label='J(w) if y=0')
plt.ylim(0.0, 5.1)
plt.xlim([0, 1])
plt.xlabel('$\phi$(z)')
plt.ylabel('J(w)')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/log_cost.png', dpi=300)
plt.show()
Explanation: Learning the weights of the logistic cost function
End of explanation
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)
y_test.shape
Explanation: Training a logistic regression model with scikit-learn
Scikit-learn implements a highly optimized version of logistic regression that also supports multiclass settings off-the-shelf, so we will skip the implementation and use the sklearn.linear_model.LogisticRegression class as well as the familiar fit method to train the model on the standardized banknote training dataset:
End of explanation
y_pred = lr.predict(X_test_std)
print('Misclassified samples: %d' % (y_test != y_pred).sum())
Explanation: Having trained a model in scikit-learn, we can make predictions via the predict method
End of explanation
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
confusion_matrix = confusion_matrix(y_test, y_pred)
print(confusion_matrix)
plt.matshow(confusion_matrix)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Explanation: On executing the preceding code, we see that the logistic regression model misclassifies 5 out of the 412 note samples. Thus, the misclassification error on the test dataset is 0.012, or 1.2 percent (5/412 ≈ 0.012).
Measuring our classifier using Binary classification performance metrics
A variety of metrics exist to evaluate the performance of binary classifiers against
trusted labels. The most common metrics are accuracy, precision, recall, F1 measure,
and ROC AUC score. All of these measures depend on the concepts of true positives,
true negatives, false positives, and false negatives. Positive and negative refer to the
classes. True and false denote whether the predicted class is the same as the true class.
For our Banknote classifier, a true positive prediction is when the classifier correctly
predicts that a note is authentic. A true negative prediction is when the classifier
correctly predicts that a note is fake. A prediction that a fake note is authentic
is a false positive prediction, and an authentic note that is incorrectly classified as fake is a
false negative prediction.
Confusion Matrix
A confusion matrix, or contingency table, can be used to
visualize true and false positives and negatives. The rows of the matrix are the true
classes of the instances, and the columns are the predicted classes of the instances:
End of explanation
from sklearn.metrics import accuracy_score
print('Accuracy: %.2f' % accuracy_score(y_test, y_pred))
Explanation: The confusion matrix indicates that there were 227 true negative predictions, 180
true positive predictions, 0 false negative predictions, and 5 false positive
predictions.
Scikit-learn also implements a large variety of different performance metrics that are available via the metrics module. For example, we can calculate the classification accuracy of the logistic regression model on the test set as follows
End of explanation
lr.predict_proba(X_test_std[0,:])
Explanation: Here, y_test are the true class labels and y_pred are the class labels that we predicted previously.
Furthermore, we can predict the class-membership probability of the samples via
the predict_proba method. For example, we can predict the probabilities of the
first banknote sample:
End of explanation
from sklearn.model_selection import cross_val_score  # cross_validation was renamed to model_selection in newer scikit-learn
precisions = cross_val_score(lr, X_train_std, y_train, cv=5,scoring='precision')
print('Precision', np.mean(precisions), precisions)
recalls = cross_val_score(lr, X_train_std, y_train, cv=5,scoring='recall')
print('Recalls', np.mean(recalls), recalls)
Explanation: The preceding array tells us that the model predicts a chance of 99.96 percent that the sample is an authentic banknote (y = 1) class, and 0.003 percent chance that the sample is a fake note (y = 0).
While accuracy measures the overall correctness of the classifier, it does not distinguish between false positive errors and false negative errors. Some applications may be more sensitive to false negatives than false positives, or vice
versa. Furthermore, accuracy is not an informative metric if the proportions of the classes are skewed in the population. For example, a classifier that predicts whether or not credit card transactions are fraudulent may be more sensitive to
false negatives than to false positives.
A classifier that always predicts that transactions are legitimate could have a high accuracy score, but would not be useful. For these reasons, classifiers are often evaluated using two additional measures called precision and recall.
Precision and Recall:
Precision is the fraction of positive predictions that are correct. For instance, in our Banknote Authentication
classifier, precision is the fraction of notes classified as authentic that are actually
authentic.
Precision is given by the following ratio:
P = TP / (TP + FP)
Sometimes called sensitivity in medical domains, recall is the fraction of the truly
positive instances that the classifier recognizes. A recall score of one indicates
that the classifier did not make any false negative predictions. For our Banknote Authentication
classifier, recall is the fraction of authentic notes that were truly classified as authentic.
Recall is calculated with the following ratio:
R = TP / (TP + FN)
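As a quick worked example (using the test-set confusion matrix shown earlier, which is a slightly different quantity than the cross-validated scores reported in this notebook): with TP = 180, FP = 5 and FN = 0, P = 180 / (180 + 5) ≈ 0.97 and R = 180 / (180 + 0) = 1.0.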
Individually, precision and recall are seldom informative; they are both incomplete
views of a classifier's performance. Both precision and recall can fail to distinguish
classifiers that perform well from certain types of classifiers that perform poorly. A
trivial classifier could easily achieve a perfect recall score by predicting positive for
every instance. For example, assume that a test set contains ten positive examples
and ten negative examples.
A classifier that predicts positive for every example will
achieve a recall of one, as follows:
R = 10 / (10 + 0) = 1
A classifier that predicts negative for every example, or that makes only false positive
and true negative predictions, will achieve a recall score of zero. Similarly, a classifier
that predicts that only a single instance is positive and happens to be correct will
achieve perfect precision.
Scikit-learn provides a function to calculate the precision and recall for a classifier
from a set of predictions and the corresponding set of trusted labels.
Calculating our Banknote Authentication classifier's precision and recall:
End of explanation
f1s = cross_val_score(lr, X_train_std, y_train, cv=5, scoring='f1')
print('F1', np.mean(f1s), f1s)
Explanation: Our classifier's precision is 0.988; almost all of the notes that it predicted as
authentic were actually authentic. Its recall is also high, indicating that it correctly classified
approximately 98 percent of the authentic notes as authentic.
Calculating the F1 measure
The F1 measure is the harmonic mean, or weighted average, of the precision and
recall scores. Also called the f-measure or the f-score, the F1 score is calculated using
the following formula:
F1 = 2PR / (P + R)
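As a rough sanity check with the test-set counts used earlier (TP = 180, FP = 5, FN = 0): P ≈ 0.97 and R = 1.0 give F1 = 2(0.97)(1.0) / (0.97 + 1.0) ≈ 0.99, close to the cross-validated F1 computed below.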
The F1 measure penalizes classifiers with imbalanced precision and recall scores,
like the trivial classifier that always predicts the positive class. A model with perfect
precision and recall scores will achieve an F1 score of one. A model with a perfect
precision score and a recall score of zero will achieve an F1 score of zero. As for
precision and recall, scikit-learn provides a function to calculate the F1 score for
a set of predictions. Let's compute our classifier's F1 score.
End of explanation
from sklearn.metrics import roc_auc_score, roc_curve, auc
roc_auc_score(y_test,lr.predict(X_test_std))
Explanation: The arithmetic mean of our classifier's precision and recall scores is 0.98. As the
difference between the classifier's precision and recall is small, the F1 measure's
penalty is small. Models are sometimes evaluated using the F0.5 and F2 scores,
which favor precision over recall and recall over precision, respectively.
ROC AUC
A Receiver Operating Characteristic, or ROC curve, visualizes a classifier's performance. Unlike accuracy, the ROC curve is insensitive to data sets with unbalanced class proportions; unlike precision and recall, the ROC curve illustrates the classifier's performance for all values of the discrimination threshold. ROC curves plot the classifier's recall against its fall-out. Fall-out, or the false positive rate, is the number of false positives divided by the total number of negatives. It is
calculated using the following formula:
F = FP / (TN + FP)
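For instance, with the test-set confusion matrix above (FP = 5, TN = 227), the fall-out would be F = 5 / (227 + 5) ≈ 0.02 — an illustrative figure for a single threshold, whereas the ROC curve sweeps over all thresholds.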
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
y_pred = lr.predict_proba(X_test_std)
false_positive_rate, recall, thresholds = roc_curve(y_test, y_pred[:, 1])
roc_auc = auc(false_positive_rate, recall)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, recall, 'b', label='AUC = %0.2f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.ylabel('Recall')
plt.xlabel('Fall-out')
plt.show()
Explanation: Plotting the ROC curve for our banknote authentication classifier:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
# Build a classification task using 3 informative features
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d - %s (%f) " % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
Explanation: From the ROC AUC plot, it is apparent that our classifier outperforms random
guessing and does a very good job in classifying; almost all of the plot area lies under its curve.
Finding the most important features with forests of trees
This example shows the use of forests of trees to evaluate the importance of features on the banknote classification task. The red bars are the feature importances of the forest, along with their inter-trees variability.
End of explanation
X_train, X_test, y_train, y_test = train_test_split(
X[['Variance','Skewness']], y, test_size=0.3, random_state=0)
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import warnings
def versiontuple(v):
return tuple(map(int, (v.split("."))))
def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
# highlight test samples
if test_idx:
# plot all samples
if not versiontuple(np.__version__) >= versiontuple('1.9.0'):
X_test, y_test = X[list(test_idx), :], y[list(test_idx)]
warnings.warn('Please update to NumPy 1.9.0 or newer')
else:
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0],
X_test[:, 1],
facecolors='none', edgecolor='black',
alpha=1.0,
linewidths=1,
marker='o',
s=55, label='test set')
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X_combined_std, y_combined,
classifier=lr, test_idx=range(105, 150))
plt.xlabel('Variance')
plt.ylabel('Skewness')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: We'll cover the details of the code later. For now, it is evident that the most important features for correct classification are Variance and Skewness.
We'll use these two features to plot our graph.
Plotting our model decision regions
Finally, we can plot the decision regions of our newly trained logistic regression model and visualize how well it separates the different samples.
End of explanation |
5,438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial - Distributed training in a notebook!
Using Accelerate to launch a training script from your notebook
Step1: Overview
In this tutorial we will see how to use Accelerate to launch a training function on a distributed system, from inside your notebook!
To keep it easy, this example will follow training PETs, showcasing how all it takes is 3 new lines of code to be on your way!
Setting up imports and building the DataLoaders
First, make sure that Accelerate is installed on your system by running
Step2: We need to set up Accelerate to use all of our GPUs. We can do so quickly with write_basic_config()
Step3: Next let's download some data to train on. You don't need to worry about using rank0_first, as since we're in our Jupyter Notebook it will only run on one process like normal
Step4: We wrap the creation of the DataLoaders, our vision_learner, and call to fine_tune inside of a train function.
Note
Step5: The last addition to the train function needed is to use our context manager before calling fine_tune and setting in_notebook to True
Step6: if not rank_distrib()
Step7: Afterwards we can import our exported Learner, save, or anything else we may want to do in our Jupyter Notebook outside of a distributed process | Python Code:
#|all_multicuda
Explanation: Tutorial - Distributed training in a notebook!
Using Accelerate to launch a training script from your notebook
End of explanation
#hide
from fastai.vision.all import *
from fastai.distributed import *
from fastai.vision.models.xresnet import *
from accelerate import notebook_launcher
from accelerate.utils import write_basic_config
Explanation: Overview
In this tutorial we will see how to use Accelerate to launch a training function on a distributed system, from inside your notebook!
To keep it easy, this example will follow training PETs, showcasing how all it takes is 3 new lines of code to be on your way!
Setting up imports and building the DataLoaders
First, make sure that Accelerate is installed on your system by running:
bash
pip install accelerate -U
In your code, along with the normal from fastai.module.all import * imports two new ones need to be added:
```diff
+ from fastai.distributed import *
from fastai.vision.all import *
from fastai.vision.models.xresnet import *
from accelerate import notebook_launcher
```
The first brings in the Learner.distrib_ctx context manager. The second brings in Accelerate's notebook_launcher, the key function we will call to run what we want.
End of explanation
#from accelerate.utils import write_basic_config
#write_basic_config()
Explanation: We need to set up Accelerate to use all of our GPUs. We can do so quickly with write_basic_config():
Note: Since this checks torch.cuda.device_count, you will need to restart your notebook and skip calling this again to continue. It only needs to be run once!
End of explanation
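If you would rather not comment the call out by hand, one possible guard (assuming Accelerate's default config location of ~/.cache/huggingface/accelerate/default_config.yaml) is to only write the config when it does not exist yet:
from pathlib import Path
from accelerate.utils import write_basic_config

# Only generate the config file if it is not already there (the path is an assumption)
config_file = Path.home()/".cache/huggingface/accelerate/default_config.yaml"
if not config_file.exists():
    write_basic_config()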
path = untar_data(URLs.PETS)
Explanation: Next let's download some data to train on. You don't need to worry about using rank0_first, as since we're in our Jupyter Notebook it will only run on one process like normal:
End of explanation
def get_y(o): return o[0].isupper()
def train(path):
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2,
label_func=get_y, item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=error_rate).to_fp16()
learn.fine_tune(1)
Explanation: We wrap the creation of the DataLoaders, our vision_learner, and call to fine_tune inside of a train function.
Note: It is important to not build the DataLoaders outside of the function, as absolutely nothing can be loaded onto CUDA beforehand.
End of explanation
def train(path):
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2,
label_func=get_y, item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=error_rate).to_fp16()
with learn.distrib_ctx(sync_bn=False, in_notebook=True):
learn.fine_tune(1)
learn.export("pets")
Explanation: The last addition to the train function needed is to use our context manager before calling fine_tune and setting in_notebook to True:
Note: for this example sync_bn is disabled for compatibility purposes with torchvision's resnet34
End of explanation
notebook_launcher(train, (path,), num_processes=2)
Explanation: if not rank_distrib(): checks whether you are on the main process; if you are, you export your Learner only once.
Finally, just call notebook_launcher, passing in the training function, any arguments as a tuple, and the number of GPUs (processes) to use:
End of explanation
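If you would rather not hard-code the GPU count, a small variation on the call above (assuming CUDA devices are visible to PyTorch) is:
import torch
# Launch one process per visible GPU instead of hard-coding num_processes=2
notebook_launcher(train, (path,), num_processes=torch.cuda.device_count())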
imgs = get_image_files(path)
learn = load_learner(path/'pets')
learn.predict(imgs[0])
Explanation: Afterwards we can import our exported Learner, save, or anything else we may want to do in our Jupyter Notebook outside of a distributed process
End of explanation |
5,439 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Rabbit example
Copyright 2017 Allen Downey
License
Step1: Rabbit Redux
This notebook starts with a version of the rabbit population growth model and walks through some steps for extending it.
In the original model, we treat all rabbits as adults; that is, we assume that a rabbit is able to breed in the season after it is born. In this notebook, we extend the model to include both juvenile and adult rabbits.
As an example, let's assume that rabbits take 3 seasons to mature. We could model that process explicitly by counting the number of rabbits that are 1, 2, or 3 seasons old. As an alternative, we can model just two stages, juvenile and adult. In the simpler model, the maturation rate is 1/3 of the juveniles per season.
To implement this model, make these changes in the System object
Step3: Now update run_simulation with the following changes
Step4: Test your changes in run_simulation
Step6: Next, update plot_results to plot both the adult and juvenile TimeSeries.
Step7: And test your updated version of plot_results. | Python Code:
%matplotlib inline
from modsim import *
Explanation: Modeling and Simulation in Python
Rabbit example
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
system = System(t0 = 0,
t_end = 10,
adult_pop0 = 10,
birth_rate = 0.9,
death_rate = 0.5)
system
Explanation: Rabbit Redux
This notebook starts with a version of the rabbit population growth model and walks through some steps for extending it.
In the original model, we treat all rabbits as adults; that is, we assume that a rabbit is able to breed in the season after it is born. In this notebook, we extend the model to include both juvenile and adult rabbits.
As an example, let's assume that rabbits take 3 seasons to mature. We could model that process explicitly by counting the number of rabbits that are 1, 2, or 3 seasons old. As an alternative, we can model just two stages, juvenile and adult. In the simpler model, the maturation rate is 1/3 of the juveniles per season.
To implement this model, make these changes in the System object:
Before you make any changes, run all cells and confirm your understand them.
Then, add a second initial populations: juvenile_pop0, with value 0.
Add an additional variable, mature_rate, with the value 0.33.
End of explanation
def run_simulation(system):
Runs a proportional growth model.
Adds TimeSeries to `system` as `results`.
system: System object with t0, t_end, adult_pop0,
birth_rate and death_rate
adults = TimeSeries()
adults[system.t0] = system.adult_pop0
for t in linrange(system.t0, system.t_end):
births = system.birth_rate * adults[t]
deaths = system.death_rate * adults[t]
adults[t+1] = adults[t] + births - deaths
system.adults = adults
Explanation: Now update run_simulation with the following changes:
Add a second TimeSeries, named juveniles, to keep track of the juvenile population, and initialize it with juvenile_pop0.
Inside the for loop, compute the number of juveniles that mature during each time step.
Also inside the for loop, add a line that stores the number of juveniles in the new TimeSeries. For simplicity, let's assume that only adult rabbits die.
During each time step, subtract the number of maturations from the juvenile population and add it to the adult population.
After the for loop, store the juveniles TimeSeries as a variable in System.
End of explanation
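As a rough sketch of where those changes lead (assuming you have added juvenile_pop0 and mature_rate to the System object as described above), the updated function might look something like this:
def run_simulation(system):
    # Sketch of the two-stage model: juveniles mature into adults, only adults die
    juveniles = TimeSeries()
    juveniles[system.t0] = system.juvenile_pop0
    adults = TimeSeries()
    adults[system.t0] = system.adult_pop0

    for t in linrange(system.t0, system.t_end):
        maturations = system.mature_rate * juveniles[t]
        births = system.birth_rate * adults[t]
        deaths = system.death_rate * adults[t]

        juveniles[t+1] = juveniles[t] + births - maturations
        adults[t+1] = adults[t] + maturations - deaths

    system.adults = adults
    system.juveniles = juveniles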
run_simulation(system)
system.adults
Explanation: Test your changes in run_simulation:
End of explanation
def plot_results(system, title=None):
Plot the estimates and the model.
system: System object with `results`
newfig()
plot(system.adults, 'bo-', label='adults')
decorate(xlabel='Season',
ylabel='Rabbit population',
title=title)
Explanation: Next, update plot_results to plot both the adult and juvenile TimeSeries.
End of explanation
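One possible shape for the updated plotting function (again just a sketch, assuming system.juveniles exists after running the extended simulation):
def plot_results(system, title=None):
    # Plot both stages of the population
    newfig()
    plot(system.adults, 'bo-', label='adults')
    plot(system.juveniles, 'gs-', label='juveniles')
    decorate(xlabel='Season',
             ylabel='Rabbit population',
             title=title)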
plot_results(system, title='Proportional growth model')
Explanation: And test your updated version of plot_results.
End of explanation |
5,440 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Weighted Least Squares
Step1: WLS Estimation
Artificial data
Step2: WLS knowing the true variance ratio of heteroscedasticity
In this example, w is the standard deviation of the error. WLS requires that the weights are proportional to the inverse of the error variance.
Step3: OLS vs. WLS
Estimate an OLS model for comparison
Step4: Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors
Step5: Calculate OLS prediction interval
Step6: Draw a plot to compare predicted values in WLS and OLS
Step7: Feasible Weighted Least Squares (2-stage FWLS)
Like w, w_est is proportional to the standard deviation, and so must be squared. | Python Code:
%matplotlib inline
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)
np.random.seed(1024)
Explanation: Weighted Least Squares
End of explanation
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6//10:] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:,[0,1]]
Explanation: WLS Estimation
Artificial data: Heteroscedasticity 2 groups
Model assumptions:
Misspecification: true model is quadratic, estimate only linear
Independent noise/error term
Two groups for error variance, low and high variance groups
End of explanation
mod_wls = sm.WLS(y, X, weights=1./(w ** 2))
res_wls = mod_wls.fit()
print(res_wls.summary())
Explanation: WLS knowing the true variance ratio of heteroscedasticity
In this example, w is the standard deviation of the error. WLS requires that the weights are proportional to the inverse of the error variance.
End of explanation
res_ols = sm.OLS(y, X).fit()
print(res_ols.params)
print(res_wls.params)
Explanation: OLS vs. WLS
Estimate an OLS model for comparison:
End of explanation
se = np.vstack([[res_wls.bse], [res_ols.bse], [res_ols.HC0_se],
[res_ols.HC1_se], [res_ols.HC2_se], [res_ols.HC3_se]])
se = np.round(se,4)
colnames = ['x1', 'const']
rownames = ['WLS', 'OLS', 'OLS_HC0', 'OLS_HC1', 'OLS_HC2', 'OLS_HC3']
tabl = SimpleTable(se, colnames, rownames, txt_fmt=default_txt_fmt)
print(tabl)
Explanation: Compare the WLS standard errors to heteroscedasticity corrected OLS standard errors:
End of explanation
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb,X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(res_ols)
Explanation: Calculate OLS prediction interval:
End of explanation
prstd, iv_l, iv_u = wls_prediction_std(res_wls)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, 'r--')
ax.plot(x, iv_u_ols, 'r--', label="OLS")
ax.plot(x, iv_l_ols, 'r--')
# WLS
ax.plot(x, res_wls.fittedvalues, 'g--.')
ax.plot(x, iv_u, 'g--', label="WLS")
ax.plot(x, iv_l, 'g--')
ax.legend(loc="best");
Explanation: Draw a plot to compare predicted values in WLS and OLS:
End of explanation
resid1 = res_ols.resid[w==1.]
var1 = resid1.var(ddof=int(res_ols.df_model)+1)
resid2 = res_ols.resid[w!=1.]
var2 = resid2.var(ddof=int(res_ols.df_model)+1)
w_est = w.copy()
w_est[w!=1.] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1./((w_est ** 2))).fit()
print(res_fwls.summary())
Explanation: Feasible Weighted Least Squares (2-stage FWLS)
Like w, w_est is proportional to the standard deviation, and so must be squared.
End of explanation |
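To round off the example, a quick side-by-side check (not part of the original notebook) shows how close the feasible weights get to the infeasible true-weight fit:
# Compare the two weighted fits; the FWLS estimates should be close to the true-weight WLS
print(res_wls.params)
print(res_fwls.params)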
5,441 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multidimensional cube - exercise
This notebook looks at different options for handling data that is more naturally represented in several dimensions. The associated keyword is OLAP, or OLAP cube. Mondrian is an open-source solution, cubes is written in Python, and Kylin offers this service on data stored on Hadoop. The goal here is to discover these solutions, not to explore them in depth.
Step1: Representation
The pandas module manipulates tables, which is the most common way of representing data. When the data is multidimensional, we distinguish the coordinates from the values
Step2: In this example, there are
Step3: This is fairly simple. Let's take an example
Step4: Data cubes with xarray
creation
This table representation is not always very readable. Information is often repeated, and the data really is multidimensional. The xarray module distinguishes coordinates from values in order to offer more intuitive manipulations. It provides a multidimensional counterpart of the DataFrame called a Dataset.
Step5: In this case, to use the xarray terminology, we have
Step6: The tricky operation is turning the values of the indicateur column into columns. That is what the pivot_table method does
Step7: The data is now ready to be handed over to xarray
Step8: selection
It is then easy to extract the data for one country with the sel method
Step9: Or for several countries
Step10: Or along several dimensions
Step11: To get the LIFEXP series for the years 2000 and 2010 and the country FR, we take the difference and then retrieve it as a DataFrame
Step12: Life expectancy at birth gained almost two and a half years in 10 years.
Step13: A few plots
Step14: Exercise 1
Step15: Reading and writing datasets
The xarray module relies on the netCDF4 module, which is itself a Python wrapper around the netCDF-c library. That library specialises in reading and writing scientific data. Concretely, it is netCDF4, not xarray, that does the work, which explains the syntax described in Serialization and IO
Step16: It is a binary format, more efficient than the text format
Step17: Reading it back
Step18: The xarray module can also read data from several files and combine them into a single dataset (see Combining multiple files)
Step19: On relit | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pyensae
from pyquickhelper.helpgen import NbImage
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Multidimensional cube - exercise
This notebook looks at different options for handling data that is more naturally represented in several dimensions. The associated keyword is OLAP, or OLAP cube. Mondrian is an open-source solution, cubes is written in Python, and Kylin offers this service on data stored on Hadoop. The goal here is to discover these solutions, not to explore them in depth.
End of explanation
NbImage("cube1.png")
Explanation: Representation
The pandas module manipulates tables, which is the most common way of representing data. When the data is multidimensional, we distinguish the coordinates from the values:
End of explanation
NbImage("cube2.png")
Explanation: In this example, there are:
3 coordinates: Age, Profession, Year
2 values: Life expectancy, Population
The data can also be represented like this:
End of explanation
from actuariat_python.data import table_mortalite_euro_stat
table_mortalite_euro_stat()
import os
os.stat("mortalite.txt")
import pandas
df = pandas.read_csv("mortalite.txt", sep="\t", encoding="utf8", low_memory=False)
df.head()
Explanation: This is fairly simple. Let's take an example: the mortality table from 1960 to 2010, retrieved with the table_mortalite_euro_stat function. It is fairly slow (4-5 minutes) on the full data because it has to be preprocessed (see the function's documentation). To shorten the run, use the stop_at parameter.
End of explanation
from actuariat_python.data import table_mortalite_euro_stat
table_mortalite_euro_stat()
import pandas
df = pandas.read_csv("mortalite.txt", sep="\t", encoding="utf8", low_memory=False)
df.columns
Explanation: Data cubes with xarray
creation
This table representation is not always very readable. Information is often repeated, and the data really is multidimensional. The xarray module distinguishes coordinates from values in order to offer more intuitive manipulations. It provides a multidimensional counterpart of the DataFrame called a Dataset.
End of explanation
df2 = df[["annee", "age_num","indicateur","pays","genre","valeur"]].dropna().reset_index(drop=True)
df.columns, df2.columns
df2.head()
df2["indicateur"] = df2["indicateur"].astype(str)
df2["genre"] = df2["genre"].astype(str)
df2["pays"] = df2["pays"].astype(str)
df2.dtypes
Explanation: In this case, to use the xarray terminology, we have:
the dimensions: annee, age_num, pays, genre
the values: one value per indicateur
We can go from a DataFrame to a Dataset as follows:
the indexed columns become the dimensions
the non-indexed columns become the values
We drop the columns we are not interested in as well as the missing values:
End of explanation
piv = df2.pivot_table(index=["annee", "age_num","pays","genre"],
columns=["indicateur"],
values="valeur")
piv.head()
piv.dtypes
Explanation: The tricky operation is turning the values of the indicateur column into columns. That is what the pivot_table method does:
End of explanation
import xarray
ds = xarray.Dataset.from_dataframe(piv)
ds
Explanation: The data is now ready to be handed over to xarray:
End of explanation
ds.sel(pays=["FR"])
Explanation: selection
It is then easy to extract the data for one country with the sel method:
End of explanation
ds.sel(pays=["FR", "BE"])
Explanation: Or for several countries:
End of explanation
ds.sel(pays="FR", annee=2000)
Explanation: Or along several dimensions:
End of explanation
(ds.sel(pays="FR", annee=2010, genre="T")["LIFEXP"] -
ds.sel(pays="FR", annee=2000, genre="T")["LIFEXP"]).to_dataframe().head()
Explanation: To get the LIFEXP series for the years 2000 and 2010 and the country FR, we take the difference and then retrieve it as a DataFrame:
End of explanation
(ds.sel(pays="FR", annee=2010, genre=["F","M"])["LIFEXP"] -
ds.sel(pays="UK", annee=2010, genre=["F","M"])["LIFEXP"]).to_dataframe().head()
Explanation: Life expectancy at birth gained almost two and a half years in 10 years.
End of explanation
ds.sel(annee=2010,age_num=1,genre="T")["LIFEXP"].to_dataframe() \
.sort_values("LIFEXP", ascending=False) \
.plot(y="LIFEXP", kind="bar", figsize=(16,6), ylim=[65,85])
ds.sel(annee=2010,genre="T",pays="FR")["LIFEXP"].plot()
Explanation: A few plots
End of explanation
ds.assign(LIFEEXP_add = ds.LIFEXP-1)
meanp = ds.mean(dim="pays")
ds1, ds2 = xarray.align(ds, meanp, join='outer')
joined = ds1.assign(meanp = ds2["LIFEXP"])
joined.to_dataframe().head()
Explanation: Exercise 1: what do the following lines do?
The following pages can help:
align and reindex
transforming datasets
End of explanation
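A small inspection (a hint rather than a full answer) can make the effect of assign and align visible:
# Compare the variables before/after and look at what happened to the "pays" dimension
print(list(ds.data_vars), list(joined.data_vars))
print(ds.dims, meanp.dims)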
ds.data_vars
try:
ds.to_netcdf('mortalite.nc')
except ValueError as e:
# it breaks with pandas 0.17
# xarray has to be updated
print("l'écriture a échoué")
pass
Explanation: Reading and writing datasets
The xarray module relies on the netCDF4 module, which is itself a Python wrapper around the netCDF-c library. That library specialises in reading and writing scientific data. Concretely, it is netCDF4, not xarray, that does the work, which explains the syntax described in Serialization and IO:
End of explanation
import os
if os.path.exists("mortalite.nc"):
os.stat('mortalite.nc').st_size, os.stat('mortalite.txt').st_size
Explanation: It is a binary format, more efficient than the text format:
End of explanation
if os.path.exists("mortalite.nc"):
ds_lu = xarray.open_dataset('mortalite.nc')
ds_lu
Explanation: Reading it back:
End of explanation
pays = list(_.values for _ in ds["pays"])
pays[:5]
for p in pays[:5]:
print("enregistre", p)
d = ds.sel(pays=[p])
try:
d.to_netcdf("mortalite_pays_%s.nc" % p)
except ValueError:
print("l'écriture a échoué pour", p)
Explanation: The xarray module can also read data from several files and combine them into a single dataset (see Combining multiple files):
End of explanation
import os
if os.path.exists("mortalite_pays_AM.nc"):
ds_lu2 = xarray.open_mfdataset('mortalite_pays*.nc')
ds_lu2
Explanation: Reading it back:
End of explanation |
5,442 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bokeh Charts Attributes
One of Bokeh Charts main contributions is that it provides a flexible interface for applying unique attributes based on the unique values in column(s) of a DataFrame.
Internally, the bokeh chart uses the AttrSpec to define the mapping, but allows the user to pass in their own spec, or utilize a function to produce a customized one.
Step1: Simple Examples
The AttrSpec assigns values in the iterable to values in items.
Step2: You will see that the key in the mapping will be a tuple, and it will always be a tuple. The mapping works like this because the AttrSpec(s) are often used with Pandas DataFrames groupby method. The groupby method can return a single value or a tuple of values when used with multiple columns, so this is just making sure that is consistent.
However, you can still access the values in the following way
Step3: The ColorAttr is just a custom AttrSpec that has a default palette as the iterable, but can be customized, and will likely provide some other color generation functionality.
Step4: Let's assume that you don't know how many unique items you are working with, but you have defined the things that you want to assign the items to. The AttrSpec will automatically cycle the iterable for you. This is important for exploratory analysis.
Step5: Because there are only 6 unique colors in the default palette, the palette repeats starting on the 7th item.
Using with Pandas
Step6: You will notice that this is similar to a pandas series with a MultiIndex, which is seen below.
Step7: You can think of this as a SQL table with 3 columns, two of which are an index. You can imagine how you might join this view data into the original data source to assign these colors to the associated rows.
Combining with ChartDataSource
Step8: Multiple Attributes
Step9: Custom Iterable
You will see that the output contains the combined chart_index and the columns for both attributes. The values of each are joined in based on the original assignment. For example, line_color only has two colors because the large_displ column only has two values.
If we wanted to change the true/false, we can modify the ColorAttr.
Step10: Altering Attribute Assignment Order
You may not have wanted to assign the values in the order that they occurred. So, you would have five options.
Pre order the data and tell the attribute not to sort.
Make the column a categorical and set the order.
Specify the sort options to the AttrSpec
Manually specify the items in the order you want them to be assigned.
Specify the iterable in the order you want.
1. Pre order the data
Step11: 2. Make the column a categorical and set the order
We'll show the default sort order of a boolean column, which is ascending.
Step12: 3. Specify the sort options to the AttrSpec
Step13: 4. Manually specify the items in the order you want them
Step14: 5. Change the order of the iterable | Python Code:
from bokeh.charts.attributes import AttrSpec, ColorAttr, MarkerAttr
Explanation: Bokeh Charts Attributes
One of Bokeh Charts main contributions is that it provides a flexible interface for applying unique attributes based on the unique values in column(s) of a DataFrame.
Internally, the bokeh chart uses the AttrSpec to define the mapping, but allows the user to pass in their own spec, or utilize a function to produce a customized one.
End of explanation
attr = AttrSpec(items=[1, 2, 3], iterable=['a', 'b', 'c'])
attr.attr_map
Explanation: Simple Examples
The AttrSpec assigns values in the iterable to values in items.
End of explanation
attr[1]
Explanation: You will see that the key in the mapping will be a tuple, and it will always be a tuple. The mapping works like this because the AttrSpec(s) are often used with Pandas DataFrames groupby method. The groupby method can return a single value or a tuple of values when used with multiple columns, so this is just making sure that is consistent.
However, you can still access the values in the following way:
End of explanation
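To see why the tuple keys line up with groupby, here is a tiny illustrative check (using a throwaway frame, since the example data is only loaded further down):
import pandas as pd
demo = pd.DataFrame({'a': [1, 1, 2], 'b': ['x', 'y', 'x'], 'v': [10, 20, 30]})
# One grouping column gives scalar keys, two columns give tuple keys
print(list(demo.groupby('a').groups.keys()))
print(list(demo.groupby(['a', 'b']).groups.keys()))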
color = ColorAttr(items=[1, 2, 3])
color.attr_map
Explanation: The ColorAttr is just a custom AttrSpec that has a default palette as the iterable, but can be customized, and will likely provide some other color generation functionality.
End of explanation
color = ColorAttr(items=list(range(0, 10)))
color.attr_map
Explanation: Let's assume that you don't know how many unique items you are working with, but you have defined the things that you want to assign the items to. The AttrSpec will automatically cycle the iterable for you. This is important for exploratory analysis.
End of explanation
from bokeh.sampledata.autompg import autompg as df
df.head()
color_attr = ColorAttr(df=df, columns=['cyl', 'origin'])
color_attr.attr_map
Explanation: Because there are only 6 unique colors in the default palette, the palette repeats starting on the 7th item.
Using with Pandas
End of explanation
color_attr.series
Explanation: You will notice that this is similar to a pandas series with a MultiIndex, which is seen below.
End of explanation
from bokeh.charts.data_source import ChartDataSource
fill_color = ColorAttr(columns=['cyl', 'origin'])
ds = ChartDataSource.from_data(df)
ds.join_attrs(fill_color=fill_color).head()
Explanation: You can think of this as a SQL table with 3 columns, two of which are an index. You can imagine how you might join this view data into the original data source to assign these colors to the associated rows.
Combining with ChartDataSource
End of explanation
# add new column
df['large_displ'] = df['displ'] >= 350
fill_color = ColorAttr(columns=['cyl', 'origin'])
line_color = ColorAttr(columns=['large_displ'])
ds = ChartDataSource.from_data(df)
ds.join_attrs(fill_color=fill_color, line_color=line_color).head(10)
Explanation: Multiple Attributes
End of explanation
line_color = ColorAttr(df=df, columns=['large_displ'], palette=['Green', 'Red'])
ds.join_attrs(fill_color=fill_color, line_color=line_color).head(10)
Explanation: Custom Iterable
You will see that the output contains the combined chart_index and the columns for both attributes. The values of each are joined in based on the original assignment. For example, line_color only has two colors because the large_displ column only has two values.
If we wanted to change the true/false, we can modify the ColorAttr.
End of explanation
df_sorted = df.sort_values(by=['large_displ'], ascending=False)
line_color = ColorAttr(df=df_sorted, columns=['large_displ'], palette=['Green', 'Red'], sort=False)
ds.join_attrs(fill_color=fill_color, line_color=line_color).head()
Explanation: Altering Attribute Assignment Order
You may not have wanted to assign the values in the order that they occurred. So, you would have five options.
Pre order the data and tell the attribute not to sort.
Make the column a categorical and set the order.
Specify the sort options to the AttrSpec
Manually specify the items in the order you want them to be assigned.
Specify the iterable in the order you want.
1. Pre order the data
End of explanation
df.sort_values(by='large_displ').head()
import pandas as pd
df_cat = df.copy()
# create the categorical and set the default (ascending)
df_cat['large_displ'] = pd.Categorical(df.large_displ).reorder_categories([True, False])
# we don't have to sort here, but doing it so you can see the order that the attr spec will see
df_cat.sort_values(by='large_displ').head()
line_color = ColorAttr(df=df_cat, columns=['large_displ'], palette=['Green', 'Red'])
ds.join_attrs(fill_color=fill_color, line_color=line_color).head()
Explanation: 2. Make the column a categorical and set the order
We'll show the default sort order of a boolean column, which is ascending.
End of explanation
# the items will be sorted descending (uses same sorting options as pandas)
line_color = ColorAttr(df=df, columns=['large_displ'], palette=['Green', 'Red'], sort=True, ascending=False)
ds.join_attrs(fill_color=fill_color, line_color=line_color).head()
Explanation: 3. Specify the sort options to the AttrSpec
End of explanation
# remove df so the items aren't auto-calculated
# still need column name for when palette is joined into the dataset
line_color = ColorAttr(columns=['large_displ'], items=[True, False], palette=['Green', 'Red'])
ds.join_attrs(fill_color=fill_color, line_color=line_color).head()
Explanation: 4. Manually specify the items in the order you want them
End of explanation
line_color = ColorAttr(df=df, columns=['large_displ'], palette=['Red', 'Green'])
ds.join_attrs(fill_color=fill_color, line_color=line_color).head()
Explanation: 5. Change the order of the iterable
End of explanation |
5,443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Think Bayes
Step1: Improving Reading Ability
From DASL(http
Step2: And use groupby to compute the means for the two groups.
Step4: The Normal class provides a Likelihood function that computes the likelihood of a sample from a normal distribution.
Step5: The prior distributions for mu and sigma are uniform.
Step6: I use itertools.product to enumerate all pairs of mu and sigma.
Step7: After the update, we can plot the probability of each mu-sigma pair as a contour plot.
Step8: And then we can extract the marginal distribution of mu
Step9: And the marginal distribution of sigma
Step10: Exercise
Step16: Paintball
Suppose you are playing paintball in an indoor arena 30 feet
wide and 50 feet long. You are standing near one of the 30 foot
walls, and you suspect that one of your opponents has taken cover
nearby. Along the wall, you see several paint spatters, all the same
color, that you think your opponent fired recently.
The spatters are at 15, 16, 18, and 21 feet, measured from the
lower-left corner of the room. Based on these data, where do you
think your opponent is hiding?
Here's the Suite that does the update. It uses MakeLocationPmf,
defined below.
Step17: The prior probabilities for alpha and beta are uniform.
Step18: To visualize the joint posterior, I take slices for a few values of beta and plot the conditional distributions of alpha. If the shooter is close to the wall, we can be somewhat confident of his position. The farther away he is, the less certain we are.
Step19: Here are the marginal posterior distributions for alpha and beta.
Step20: To visualize the joint posterior, I take slices for a few values of beta and plot the conditional distributions of alpha. If the shooter is close to the wall, we can be somewhat confident of his position. The farther away he is, the less certain we are.
Step21: Another way to visualize the posterio distribution
Step22: Here's another visualization that shows posterior credible regions.
Step26: Exercise
Step29: Exercise
Step30: Exercise | Python Code:
from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import math
import numpy as np
from thinkbayes2 import Pmf, Cdf, Suite, Joint
import thinkplot
Explanation: Think Bayes: Chapter 9
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
import pandas as pd
df = pd.read_csv('drp_scores.csv', skiprows=21, delimiter='\t')
df.head()
Explanation: Improving Reading Ability
From DASL(http://lib.stat.cmu.edu/DASL/Stories/ImprovingReadingAbility.html)
An educator conducted an experiment to test whether new directed reading activities in the classroom will help elementary school pupils improve some aspects of their reading ability. She arranged for a third grade class of 21 students to follow these activities for an 8-week period. A control classroom of 23 third graders followed the same curriculum without the activities. At the end of the 8 weeks, all students took a Degree of Reading Power (DRP) test, which measures the aspects of reading ability that the treatment is designed to improve.
Summary statistics on the two groups of children show that the average score of the treatment class was almost ten points higher than the average of the control class. A two-sample t-test is appropriate for testing whether this difference is statistically significant. The t-statistic is 2.31, which is significant at the .05 level.
I'll use Pandas to load the data into a DataFrame.
End of explanation
grouped = df.groupby('Treatment')
for name, group in grouped:
print(name, group.Response.mean())
Explanation: And use groupby to compute the means for the two groups.
End of explanation
from scipy.stats import norm
class Normal(Suite, Joint):
def Likelihood(self, data, hypo):
data: sequence of test scores
hypo: mu, sigma
mu, sigma = hypo
likes = norm.pdf(data, mu, sigma)
return np.prod(likes)
Explanation: The Normal class provides a Likelihood function that computes the likelihood of a sample from a normal distribution.
End of explanation
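A quick sanity check with made-up numbers: a hypothesis whose mean is close to the sample should end up with nearly all of the posterior probability.
demo = Normal([(40, 5), (60, 5)])
demo.Update([38, 41, 44])
demo.Print()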
mus = np.linspace(20, 80, 101)
sigmas = np.linspace(5, 30, 101)
Explanation: The prior distributions for mu and sigma are uniform.
End of explanation
from itertools import product
control = Normal(product(mus, sigmas))
data = df[df.Treatment=='Control'].Response
control.Update(data)
Explanation: I use itertools.product to enumerate all pairs of mu and sigma.
End of explanation
thinkplot.Contour(control, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='sigma')
Explanation: After the update, we can plot the probability of each mu-sigma pair as a contour plot.
End of explanation
pmf_mu0 = control.Marginal(0)
thinkplot.Pdf(pmf_mu0)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
Explanation: And then we can extract the marginal distribution of mu
End of explanation
pmf_sigma0 = control.Marginal(1)
thinkplot.Pdf(pmf_sigma0)
thinkplot.Config(xlabel='sigma', ylabel='Pmf')
Explanation: And the marginal distribution of sigma
End of explanation
# Solution
treated = Normal(product(mus, sigmas))
data = df[df.Treatment=='Treated'].Response
treated.Update(data)
# Solution
# Here's the posterior joint distribution for the treated group
thinkplot.Contour(treated, pcolor=True)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
# Solution
# The marginal distribution of mu
pmf_mu1 = treated.Marginal(0)
thinkplot.Pdf(pmf_mu1)
thinkplot.Config(xlabel='mu', ylabel='Pmf')
# Solution
# The marginal distribution of sigma
pmf_sigma1 = treated.Marginal(1)
thinkplot.Pdf(pmf_sigma1)
thinkplot.Config(xlabel='sigma', ylabel='Pmf')
# Solution
# Now we can compute the distribution of the difference between groups
pmf_diff = pmf_mu1 - pmf_mu0
pmf_diff.Mean(), pmf_diff.MAP()
# Solution
# And CDF_diff(0), which is the probability that the difference is <= 0
pmf_diff = pmf_mu1 - pmf_mu0
cdf_diff = pmf_diff.MakeCdf()
thinkplot.Cdf(cdf_diff)
cdf_diff[0]
# Solution
# Or we could directly compute the probability that mu for the treated
# group is greater than mu for the control group
pmf_mu1.ProbGreater(pmf_mu0)
# Solution
# Finally, here's the probability that the standard deviation
# in the treatment group is higher.
pmf_sigma1.ProbGreater(pmf_sigma0)
# It looks like there is a high probability that the mean of
# the treatment group is higher, and the most likely size of
# the effect is 9-10 points.
# It looks like the variance of the treated group is substantially
# smaller, which suggests that the treatment might be helping
# low scorers more than high scorers.
Explanation: Exercise: Run this analysis again for the treated group. What is the distribution of the difference between the groups? What is the probability that the average "reading power" for the treatment group is higher? What is the probability that the variance of the treatment group is higher?
End of explanation
class Paintball(Suite, Joint):
Represents hypotheses about the location of an opponent.
def __init__(self, alphas, betas, locations):
Makes a joint suite of parameters alpha and beta.
Enumerates all pairs of alpha and beta.
Stores locations for use in Likelihood.
alphas: possible values for alpha
betas: possible values for beta
locations: possible locations along the wall
self.locations = locations
pairs = [(alpha, beta)
for alpha in alphas
for beta in betas]
Suite.__init__(self, pairs)
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: pair of alpha, beta
data: location of a hit
Returns: float likelihood
alpha, beta = hypo
x = data
pmf = MakeLocationPmf(alpha, beta, self.locations)
like = pmf.Prob(x)
return like
def MakeLocationPmf(alpha, beta, locations):
Computes the Pmf of the locations, given alpha and beta.
Given that the shooter is at coordinates (alpha, beta),
the probability of hitting any spot is inversely proportionate
to the strafe speed.
alpha: x position
beta: y position
locations: x locations where the pmf is evaluated
Returns: Pmf object
pmf = Pmf()
for x in locations:
prob = 1.0 / StrafingSpeed(alpha, beta, x)
pmf.Set(x, prob)
pmf.Normalize()
return pmf
def StrafingSpeed(alpha, beta, x):
Computes strafing speed, given location of shooter and impact.
alpha: x location of shooter
beta: y location of shooter
x: location of impact
Returns: derivative of x with respect to theta
theta = math.atan2(x - alpha, beta)
speed = beta / math.cos(theta)**2
return speed
Explanation: Paintball
Suppose you are playing paintball in an indoor arena 30 feet
wide and 50 feet long. You are standing near one of the 30 foot
walls, and you suspect that one of your opponents has taken cover
nearby. Along the wall, you see several paint spatters, all the same
color, that you think your opponent fired recently.
The spatters are at 15, 16, 18, and 21 feet, measured from the
lower-left corner of the room. Based on these data, where do you
think your opponent is hiding?
Here's the Suite that does the update. It uses MakeLocationPmf,
defined below.
End of explanation
alphas = range(0, 31)
betas = range(1, 51)
locations = range(0, 31)
suite = Paintball(alphas, betas, locations)
suite.UpdateSet([15, 16, 18, 21])
Explanation: The prior probabilities for alpha and beta are uniform.
End of explanation
locations = range(0, 31)
alpha = 10
betas = [10, 20, 40]
thinkplot.PrePlot(num=len(betas))
for beta in betas:
pmf = MakeLocationPmf(alpha, beta, locations)
pmf.label = 'beta = %d' % beta
thinkplot.Pdf(pmf)
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
Explanation: Before doing the update, it helps to look at the location Pmf itself: for a fixed alpha, the farther the shooter is from the wall (the larger beta is), the more spread out the likely spatter locations are.
End of explanation
marginal_alpha = suite.Marginal(0, label='alpha')
marginal_beta = suite.Marginal(1, label='beta')
print('alpha CI', marginal_alpha.CredibleInterval(50))
print('beta CI', marginal_beta.CredibleInterval(50))
thinkplot.PrePlot(num=2)
thinkplot.Cdf(Cdf(marginal_alpha))
thinkplot.Cdf(Cdf(marginal_beta))
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
Explanation: Here are the marginal posterior distributions for alpha and beta.
End of explanation
betas = [10, 20, 40]
thinkplot.PrePlot(num=len(betas))
for beta in betas:
cond = suite.Conditional(0, 1, beta)
cond.label = 'beta = %d' % beta
thinkplot.Pdf(cond)
thinkplot.Config(xlabel='Distance',
ylabel='Prob')
Explanation: To visualize the joint posterior, I take slices for a few values of beta and plot the conditional distributions of alpha. If the shooter is close to the wall, we can be somewhat confident of his position. The farther away he is, the less certain we are.
End of explanation
thinkplot.Contour(suite.GetDict(), contour=False, pcolor=True)
thinkplot.Config(xlabel='alpha',
ylabel='beta',
axis=[0, 30, 0, 20])
Explanation: Another way to visualize the posterior distribution: a pseudocolor plot of probability as a function of alpha and beta.
End of explanation
d = dict((pair, 0) for pair in suite.Values())
percentages = [75, 50, 25]
for p in percentages:
interval = suite.MaxLikeInterval(p)
for pair in interval:
d[pair] += 1
thinkplot.Contour(d, contour=False, pcolor=True)
thinkplot.Text(17, 4, '25', color='white')
thinkplot.Text(17, 15, '50', color='white')
thinkplot.Text(17, 30, '75')
thinkplot.Config(xlabel='alpha',
ylabel='beta',
legend=False)
Explanation: Here's another visualization that shows posterior credible regions.
End of explanation
# Solution
from scipy.special import binom as choose
def binom(k, n, p):
Computes the rest of the binomial PMF.
k: number of hits
n: number of attempts
p: probability of a hit
return p**k * (1-p)**(n-k)
class Lincoln(Suite, Joint):
Represents hypotheses about the number of errors.
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo: n, p1, p2
data: k1, k2, c
n, p1, p2 = hypo
k1, k2, c = data
part1 = choose(n, k1) * binom(k1, n, p1)
part2 = choose(k1, c) * choose(n-k1, k2-c) * binom(k2, n, p2)
return part1 * part2
# Solution
data = 20, 15, 3
probs = np.linspace(0, 1, 31)
hypos = []
for n in range(32, 350):
for p1 in probs:
for p2 in probs:
hypos.append((n, p1, p2))
suite = Lincoln(hypos)
suite.Update(data)
# Solution
n_marginal = suite.Marginal(0)
thinkplot.Pmf(n_marginal, label='n')
thinkplot.Config(xlabel='number of bugs',
ylabel='PMF')
# Solution
print('post mean n', n_marginal.Mean())
print('MAP n', n_marginal.MAP())
Explanation: Exercise: From John D. Cook
"Suppose you have a tester who finds 20 bugs in your program. You want to estimate how many bugs are really in the program. You know there are at least 20 bugs, and if you have supreme confidence in your tester, you may suppose there are around 20 bugs. But maybe your tester isn't very good. Maybe there are hundreds of bugs. How can you have any idea how many bugs there are? There’s no way to know with one tester. But if you have two testers, you can get a good idea, even if you don’t know how skilled the testers are.
Suppose two testers independently search for bugs. Let k1 be the number of errors the first tester finds and k2 the number of errors the second tester finds. Let c be the number of errors both testers find. The Lincoln Index estimates the total number of errors as k1 k2 / c [I changed his notation to be consistent with mine]."
So if the first tester finds 20 bugs, the second finds 15, and they find 3 in common, we estimate that there are about 100 bugs. What is the Bayesian estimate of the number of errors based on this data?
End of explanation
# Solution
from thinkbayes2 import EvalNormalPdf
class Gps(Suite, Joint):
Represents hypotheses about your location in the field.
def Likelihood(self, data, hypo):
Computes the likelihood of the data under the hypothesis.
hypo:
data:
std = 30
meanx, meany = hypo
x, y = data
like = EvalNormalPdf(x, meanx, std)
like *= EvalNormalPdf(y, meany, std)
return like
# Solution
from itertools import product
coords = np.linspace(-100, 100, 101)
joint = Gps(product(coords, coords))
joint.Update((51, -15))
# Solution
joint.Update((48, 90))
# Solution
pairs = [(11.903060613102866, 19.79168669735705),
(77.10743601503178, 39.87062906535289),
(80.16596823095534, -12.797927542984425),
(67.38157493119053, 83.52841028148538),
(89.43965206875271, 20.52141889230797),
(58.794021026248245, 30.23054016065644),
(2.5844401241265302, 51.012041625783766),
(45.58108994142448, 3.5718287379754585)]
joint.UpdateSet(pairs)
# Solution
thinkplot.PrePlot(2)
pdfx = joint.Marginal(0)
pdfy = joint.Marginal(1)
thinkplot.Pdf(pdfx, label='posterior x')
thinkplot.Pdf(pdfy, label='posterior y')
# Solution
print(pdfx.Mean(), pdfx.Std())
print(pdfy.Mean(), pdfy.Std())
Explanation: Exercise: The GPS problem. According to Wikipedia

GPS included a (currently disabled) feature called Selective Availability (SA) that adds intentional, time varying errors of up to 100 meters (328 ft) to the publicly available navigation signals. This was intended to deny an enemy the use of civilian GPS receivers for precision weapon guidance.
[...]
Before it was turned off on May 2, 2000, typical SA errors were about 50 m (164 ft) horizontally and about 100 m (328 ft) vertically.[10] Because SA affects every GPS receiver in a given area almost equally, a fixed station with an accurately known position can measure the SA error values and transmit them to the local GPS receivers so they may correct their position fixes. This is called Differential GPS or DGPS. DGPS also corrects for several other important sources of GPS errors, particularly ionospheric delay, so it continues to be widely used even though SA has been turned off. The ineffectiveness of SA in the face of widely available DGPS was a common argument for turning off SA, and this was finally done by order of President Clinton in 2000.
Suppose it is 1 May 2000, and you are standing in a field that is 200m square. You are holding a GPS unit that indicates that your location is 51m north and 15m west of a known reference point in the middle of the field.
However, you know that each of these coordinates has been perturbed by a "feature" that adds random errors with mean 0 and standard deviation 30m.
1) After taking one measurement, what should you believe about your position?
Note: Since the intentional errors are independent, you could solve this problem independently for X and Y. But we'll treat it as a two-dimensional problem, partly for practice and partly to see how we could extend the solution to handle dependent errors.
You can start with the code in gps.py.
2) Suppose that after one second the GPS updates your position and reports coordinates (48, 90). What should you believe now?
3) Suppose you take 8 more measurements and get:
(11.903060613102866, 19.79168669735705)
(77.10743601503178, 39.87062906535289)
(80.16596823095534, -12.797927542984425)
(67.38157493119053, 83.52841028148538)
(89.43965206875271, 20.52141889230797)
(58.794021026248245, 30.23054016065644)
(2.5844401241265302, 51.012041625783766)
(45.58108994142448, 3.5718287379754585)
At this point, how certain are you about your location?
End of explanation
import pandas as pd
df = pd.read_csv('flea_beetles.csv', delimiter='\t')
df.head()
# Solution coming soon
Explanation: Exercise: The Flea Beetle problem from DASL
Datafile Name: Flea Beetles
Datafile Subjects: Biology
Story Names: Flea Beetles
Reference: Lubischew, A.A. (1962) On the use of discriminant functions in taxonomy. Biometrics, 18, 455-477. Also found in: Hand, D.J., et al. (1994) A Handbook of Small Data Sets, London: Chapman & Hall, 254-255.
Authorization: Contact Authors
Description: Data were collected on the genus of flea beetle Chaetocnema, which contains three species: concinna (Con), heikertingeri (Hei), and heptapotamica (Hep). Measurements were made on the width and angle of the aedeagus of each beetle. The goal of the original study was to form a classification rule to distinguish the three species.
Number of cases: 74
Variable Names:
Width: The maximal width of aedeagus in the forpart (in microns)
Angle: The front angle of the aedeagus (1 unit = 7.5 degrees)
Species: Species of flea beetle from the genus Chaetocnema
Suggestions:
Plot CDFs for the width and angle data, broken down by species, to get a visual sense of whether the normal distribution is a good model.
Use the data to estimate the mean and standard deviation for each variable, broken down by species.
Given a joint posterior distribution for mu and sigma, what is the likelihood of a given datum?
Write a function that takes a measured width and angle and returns a posterior PMF of species.
Use the function to classify each of the specimens in the table and see how many you get right.
End of explanation |
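Until the full solution lands, here is a rough sketch of one possible approach to the classification step (a simple plug-in classifier that assumes the Width, Angle and Species column names listed above; a more complete solution would propagate the posterior uncertainty in mu and sigma):
# Estimate per-species means and standard deviations, then classify with normal likelihoods
stats = df.groupby('Species')[['Width', 'Angle']].agg(['mean', 'std'])

def classify(width, angle):
    suite = Pmf()
    for species, row in stats.iterrows():
        like = norm.pdf(width, row[('Width', 'mean')], row[('Width', 'std')])
        like *= norm.pdf(angle, row[('Angle', 'mean')], row[('Angle', 'std')])
        suite.Set(species, like)
    suite.Normalize()
    return suite

# Example: classify the first specimen in the table
classify(df.Width[0], df.Angle[0]).Print()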
5,444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
5,445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
연습문제
아래 문제들을 해결하는 코드를 lab06.py 파일에 작성하여 제출하라.
연습 1
아래 코드를 실행하고 23.5입력하면 ValueError 오류가 발생한다.
======
n = int(raw_input("Please enter a number
Step1: 견본 단안 2
Step2: 연습 2
아래 코드를 실행하면 왜 어떤 결과가 나오는지 설명하라.
try
Step3: 연습 6
함수 f가 아래와 같이 정의되었다.
f(x) = sin(x) - 0.5 * x + 30
f(x) = x를 만족시키는 x를 함수 f의 고정점(fixed-point)이라 한다.
함수 f의 부동점을 구하기 위해서 f(x) 반복적으로 업데이트 하는 고정점반복 기술을 아래와 같이 적용한다
Step4: 참고
Step5: 따라서 f(x) = sin(x) - 0.5 * x + 30의 고정점은 x = 20 근처에 존재함을 알 수 있다.
연습 7
다음과 같이 수열이 정의되어 있다.
x(n) = (sin(1/n))**2 / n
x(n) 값은 n이 증가함에 따라 0으로 수렴한다. x(n) >= 1e-9 조건을 만족하는 모든 x(n)들의 리스트를 구하는 코드를 작성하라. | Python Code:
while True:
try:
n = int(raw_input("Please enter a number: "))
print("정확히 입력되었습니다.")
break
except ValueError:
print("정수를 입력하시오.")
Explanation: Exercises
Write code that solves the problems below in a file named lab06.py and submit it.
Exercise 1
Running the code below and entering 23.5 raises a ValueError.
======
n = int(raw_input("Please enter a number: "))
======
Modify the code above so that the following conditions are met.
If the input is not an integer, show a message asking for a proper integer and display the same input prompt again.
If an integer is entered, show a confirmation message that it was entered correctly and stop.
Sample answer 1: using a while loop
End of explanation
# The `inpint()` function below is a recursive function.
def inpint():
try:
n = int(raw_input("Please enter a number: "))
print("정확히 입력되었습니다.")
except ValueError as e:
print("정수를 입력하시오.")
inpint()
inpint()
Explanation: Sample answer 2: using a recursive function
Recursive functions will be covered later.
End of explanation
x=5
y=3
if(not(x<y)):
raise AssertionError ('x has to be smaller than y')
Explanation: Exercise 2
Explain what result the code below produces when it is run, and why.
try:
    x = float(raw_input("Your number: "))
    inverse = 1.0 / x
finally:
    print("There may or may not have been an exception.")
print("The inverse: ", inverse)
Sample answer
If a non-zero number is entered, the inverse of that number is shown.
If the number 0 is entered, a ZeroDivisionError is raised.
If a non-numeric string is entered, a ValueError is raised.
In every case, the sentence "There may or may not have been an exception." is printed.
Exercise 3
Explain what result the code below produces when it is run, and why.
try:
    x = float(raw_input("Your number: "))
    inverse = 1.0 / x
except ValueError:
    print "You should have given either an int or a float"
except ZeroDivisionError:
    print "Infinity"
finally:
    print("There may or may not have been an exception.")
Sample answer
If the number 0 is entered, the string Infinity is printed and the exception is handled.
If a non-numeric string is entered, the sentence "You should have given either an int or a float" is printed and the exception is handled.
In every case, the sentence "There may or may not have been an exception." is printed.
Exercise 4
Save the code below as exception_test.py.
=========
import sys
file_name = sys.argv[1]
text = []
try:
    fh = open(file_name, 'r')
except IOError:
    print 'cannot open', file_name
else:
    text = fh.readlines()
    fh.close()
if text:
    print text[100]
==========
Now explain what can happen when the command below is run in a terminal.
python exception_test.py integers.txt
(Note: the command above must be run from the directory where exception_test.py is saved.)
Sample answer
If the file integers.txt does not exist in that folder, the following message is printed and the error is handled:
'cannot open', integers.txt
If the file integers.txt exists:
If the file is empty, i.e. has no lines at all:
Nothing is printed.
If the file has 100 or fewer lines:
An IndexError is raised by text[100].
If the file has more than 100 lines:
The 101st line (text[100]) is printed.
Exercise 5
Running the code below raises an AssertionError. Rewrite the code using raise so that it produces the same result.
x = 5
y = 3
assert x < y, "x has to be smaller than y"
Sample answer
End of explanation
from math import sin
x = 0.5
a = 0.5
for i in range(200):
y = sin(x) - a * x + 30
if(abs(y - x) < 1e-8):
print("The result after %s iteration is %s" % (i, x))
break
else:
x = y
if(i == 199):
raise ReferenceError("unsatisfied")
Explanation: Exercise 6
A function f is defined as follows.
f(x) = sin(x) - 0.5 * x + 30
A value x satisfying f(x) = x is called a fixed point of the function f.
To find a fixed point of f, apply the fixed-point iteration technique that repeatedly updates x as follows:
x = sin(x) - a*x + 30
The code below applies this fixed-point iteration technique.
x = 0.5
a = 0.5
for i in range(200):
    x = sin(x) - a * x + 30 #(*)
    print("The result after %s iteration is %s" % (i, x))
Modify the code above so that the following conditions are met.
The fixed-point update (*) must be repeated until the absolute-value condition below is satisfied.
abs(f(x) - x) < 1e-8
If the condition is still not satisfied after 200 iterations, raise an error with a message saying that the condition was not met.
End of explanation
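# For comparison, a sketch using scipy (an extra dependency, not required by the exercise):
# scipy.optimize.fixed_point solves x = f(x) by plain iteration as well, much like the loop above.
from scipy import optimize
print(optimize.fixed_point(lambda x: sin(x) - 0.5 * x + 30, 0.5, method="iteration"))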
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.axhline(y=0, color = 'r') # draw y =0 axes
x = np.arange(0, 10 * np.pi, 0.01)
y = np.sin(x) - 1.5 * x +30
plt.plot(x, y)
plt.show()
Explanation: Note: drawing the graph
We first need to check whether an x satisfying the equation
x = sin(x) - 0.5 * x + 30
actually exists and, if so, roughly how large it is. To do that, we should plot the graph of the function below.
g(x) = sin(x) - 1.5 * x + 30
How to draw plots will be covered later, but it can be done as follows.
End of explanation
n = 1
l = []
def x(n):
return ((sin(1.0/n))**2) / n
while(x(n) >= 1e-9):
l.append(x(n))
n = n + 1
print("구해지는 리스트의 길이는 {}이다.\n".format(len(l)))
print(l)
Explanation: Therefore, we can see that the fixed point of f(x) = sin(x) - 0.5 * x + 30 lies near x = 20.
Exercise 7
A sequence is defined as follows.
x(n) = (sin(1/n))**2 / n
The value of x(n) converges to 0 as n increases. Write code that builds the list of all x(n) satisfying x(n) >= 1e-9.
End of explanation |
5,446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Spark-PCA" data-toc-modified-id="Spark-PCA-1"><span class="toc-item-num">1 </span>Spark PCA</a></span></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step1: Spark PCA
This is simply an API walkthough, for more details on PCA consider referring to the following documentation.
Step2: Next, in order to train ML models in Spark later, we'll use the VectorAssembler to combine a given list of columns into a single vector column.
Step3: Next, we standardize the features, notice here we only need to specify the assembled column as the input feature.
Step4: After the preprocessing step, we fit the PCA model.
Step6: Notice that unlike scikit-learn, we use transform on the dataframe at hand for all ML models' class after fitting it (calling .fit on the dataframe). This will return the result in a new column, where the name is specified by the outputCol argument in the ML models' class.
We can convert it back to a numpy array by extracting the pcaFeatures column from each row, and use collect to bring the entire dataset back to a single machine. | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style = False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, StandardScaler, PCA
# create the SparkSession class,
# which is the entry point into all functionality in Spark
# The .master part sets it to run on all cores on local, note
# that we should leave out the .master part if we're actually
# running the job on a cluster, or else we won't be actually
# using the cluster
spark = (SparkSession.
builder.
master('local[*]').
appName('PCA').
config(conf = SparkConf()).
getOrCreate())
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib,sklearn,pyspark
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Spark-PCA" data-toc-modified-id="Spark-PCA-1"><span class="toc-item-num">1 </span>Spark PCA</a></span></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
# load the data and convert it to a pandas DataFrame,
# then use that to create the spark DataFrame
iris = load_iris()
X = iris['data']
y = iris['target']
data = pd.DataFrame(X, columns = iris.feature_names)
dataset = spark.createDataFrame(data, iris.feature_names)
dataset.show(6)
Explanation: Spark PCA
This is simply an API walkthrough; for more details on PCA, consider referring to the following documentation.
End of explanation
# specify the input columns' name and
# the combined output column's name
assembler = VectorAssembler(
inputCols = iris.feature_names, outputCol = 'features')
# use it to transform the dataset and select just
# the output column
df = assembler.transform(dataset).select('features')
df.show(6)
Explanation: Next, in order to train ML models in Spark later, we'll use the VectorAssembler to combine a given list of columns into a single vector column.
End of explanation
scaler = StandardScaler(
inputCol = 'features',
outputCol = 'scaledFeatures',
withMean = True,
withStd = True
).fit(df)
# when we transform the dataframe, the old
# feature will still remain in it
df_scaled = scaler.transform(df)
df_scaled.show(6)
Explanation: Next, we standardize the features, notice here we only need to specify the assembled column as the input feature.
End of explanation
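# the fitted StandardScalerModel also keeps the statistics it used; printing them is a quick
# way to sanity-check the centering/scaling (a sketch relying on the standard
# StandardScalerModel.mean / .std attributes)
print(scaler.mean)
print(scaler.std)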
n_components = 2
pca = PCA(
k = n_components,
inputCol = 'scaledFeatures',
outputCol = 'pcaFeatures'
).fit(df_scaled)
df_pca = pca.transform(df_scaled)
print('Explained Variance Ratio', pca.explainedVariance.toArray())
df_pca.show(6)
Explanation: After the preprocessing step, we fit the PCA model.
End of explanation
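# besides the explained variance, the fitted PCAModel exposes the principal components
# themselves (a sketch relying on the standard PCAModel.pc attribute); each column holds
# the loadings of one component over the four scaled input features
print(pca.pc)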
# not sure if this is the best way to do it
X_pca = df_pca.rdd.map(lambda row: row.pcaFeatures).collect()
X_pca = np.array(X_pca)
# change default style figure and font size
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
def plot_iris_pca(X_pca, y):
    """A scatter plot of the 2-dimensional iris data."""
markers = 's', 'x', 'o'
colors = list(plt.rcParams['axes.prop_cycle'])
target = np.unique(y)
for idx, (t, m) in enumerate(zip(target, markers)):
subset = X_pca[y == t]
plt.scatter(subset[:, 0], subset[:, 1], s = 50,
c = colors[idx]['color'], label = t, marker = m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc = 'lower left')
plt.tight_layout()
plt.show()
plot_iris_pca(X_pca, y)
# stop the current sparkSession
spark.stop()
Explanation: Notice that unlike scikit-learn, we use transform on the dataframe at hand for all ML models' class after fitting it (calling .fit on the dataframe). This will return the result in a new column, where the name is specified by the outputCol argument in the ML models' class.
We can convert it back to a numpy array by extracting the pcaFeatures column from each row, and use collect to bring the entire dataset back to a single machine.
End of explanation |
5,447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Vision Transformer without Attention
Author
Step1: Hyperparameters
These are the hyperparameters that we have chosen for the experiment.
Please feel free to tune them.
Step2: Load the CIFAR-10 dataset
We use the CIFAR-10 dataset for our experiments.
Step4: Data Augmentation
The augmentation pipeline consists of
Step6: The ShiftViT architecture
In this section, we build the architecture proposed in
the ShiftViT paper.
| |
|
Step8: The DropPath layer
Stochastic depth is a regularization technique that randomly drops a set of
layers. During inference, the layers are kept as they are. It is very
similar to Dropout, but it operates on a block of layers rather
than on individual nodes present inside a layer.
Step11: Block
The most important operation in this paper is the shift opperation. In this section,
we describe the shift operation and compare it with its original implementation provided
by the authors.
A generic feature map is assumed to have the shape [N, H, W, C]. Here we choose a
num_div parameter that decides the division size of the channels. The first 4 divisions
are shifted (1 pixel) in the left, right, up, and down direction. The remaining splits
are kept as is. After partial shifting the shifted channels are padded and the overflown
pixels are chopped off. This completes the partial shifting operation.
In the original implementation, the code is approximately
Step13: The ShiftViT blocks
| |
|
Step15: Stacked Shift Blocks
Each stage will have a variable number of stacked ShiftViT Blocks, as suggested in
the paper. This is a generic layer that will contain the stacked shift vit blocks
with the patch merging layer as well. Combining the two operations (shift ViT
block and patch merging) is a design choice we picked for better code reusability.
Step17: The ShiftViT model
Build the ShiftViT custom model.
Step18: Instantiate the model
Step21: Learning rate schedule
In many experiments, we want to warm up the model with a slowly increasing learning rate
and then cool down the model with a slowly decaying learning rate. In the warmup cosine
decay, the learning rate linearly increases for the warmup steps and then decays with a
cosine decay.
Step22: Compile and train the model | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_addons as tfa
# Setting seed for reproducibility
SEED = 42
keras.utils.set_random_seed(SEED)
Explanation: A Vision Transformer without Attention
Author: Aritra Roy Gosthipaty, Ritwik Raha<br>
Date created: 2022/02/24<br>
Last modified: 2022/03/01<br>
Description: A minimal implementation of ShiftViT.
Introduction
Vision Transformers (ViTs) have sparked a wave of
research at the intersection of Transformers and Computer Vision (CV).
ViTs can simultaneously model long- and short-range dependencies, thanks to
the Multi-Head Self-Attention mechanism in the Transformer block. Many researchers believe
that the success of ViTs is purely due to the attention layer, and they seldom
think about other parts of the ViT model.
In the academic paper
When Shift Operation Meets Vision Transformer: An Extremely Simple Alternative to Attention Mechanism
the authors propose to demystify the success of ViTs with the introduction of a NO
PARAMETER operation in place of the attention operation. They swap the attention
operation with a shifting operation.
In this example, we minimally implement the paper with close alignment to the author's
official implementation.
This example requires TensorFlow 2.6 or higher, as well as TensorFlow Addons, which can
be installed using the following command:
shell
pip install -qq -U tensorflow-addons
Setup and imports
End of explanation
class Config(object):
# DATA
batch_size = 256
buffer_size = batch_size * 2
input_shape = (32, 32, 3)
num_classes = 10
# AUGMENTATION
image_size = 48
# ARCHITECTURE
patch_size = 4
projected_dim = 96
num_shift_blocks_per_stages = [2, 4, 8, 2]
epsilon = 1e-5
stochastic_depth_rate = 0.2
mlp_dropout_rate = 0.2
num_div = 12
shift_pixel = 1
mlp_expand_ratio = 2
# OPTIMIZER
lr_start = 1e-5
lr_max = 1e-3
weight_decay = 1e-4
# TRAINING
epochs = 100
config = Config()
Explanation: Hyperparameters
These are the hyperparameters that we have chosen for the experiment.
Please feel free to tune them.
End of explanation
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
(x_train, y_train), (x_val, y_val) = (
(x_train[:40000], y_train[:40000]),
(x_train[40000:], y_train[40000:]),
)
print(f"Training samples: {len(x_train)}")
print(f"Validation samples: {len(x_val)}")
print(f"Testing samples: {len(x_test)}")
AUTO = tf.data.AUTOTUNE
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_ds = train_ds.shuffle(config.buffer_size).batch(config.batch_size).prefetch(AUTO)
val_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_ds = val_ds.batch(config.batch_size).prefetch(AUTO)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_ds = test_ds.batch(config.batch_size).prefetch(AUTO)
Explanation: Load the CIFAR-10 dataset
We use the CIFAR-10 dataset for our experiments.
End of explanation
def get_augmentation_model():
    """Build the data augmentation model."""
data_augmentation = keras.Sequential(
[
layers.Resizing(config.input_shape[0] + 20, config.input_shape[0] + 20),
layers.RandomCrop(config.image_size, config.image_size),
layers.RandomFlip("horizontal"),
layers.Rescaling(1 / 255.0),
]
)
return data_augmentation
Explanation: Data Augmentation
The augmentation pipeline consists of:
Rescaling
Resizing
Random cropping
Random horizontal flipping
Note: The image data augmentation layers do not apply
data transformations at inference time. This means that
when these layers are called with training=False they
behave differently. Refer to the
documentation
for more details.
End of explanation
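# A quick sanity check (illustrative, not part of the original example): the augmentation
# model maps a batch of 32x32 CIFAR images to randomly cropped 48x48 images scaled to [0, 1].
demo_augmenter = get_augmentation_model()
demo_augmented = demo_augmenter(x_train[:4].astype("float32"), training=True)
print(demo_augmented.shape)  # expected: (4, 48, 48, 3)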
class MLP(layers.Layer):
    """Get the MLP layer for each shift block.

    Args:
        mlp_expand_ratio (int): The ratio with which the first feature map is expanded.
        mlp_dropout_rate (float): The rate for dropout.
    """
def __init__(self, mlp_expand_ratio, mlp_dropout_rate, **kwargs):
super().__init__(**kwargs)
self.mlp_expand_ratio = mlp_expand_ratio
self.mlp_dropout_rate = mlp_dropout_rate
def build(self, input_shape):
input_channels = input_shape[-1]
initial_filters = int(self.mlp_expand_ratio * input_channels)
self.mlp = keras.Sequential(
[
layers.Dense(units=initial_filters, activation=tf.nn.gelu,),
layers.Dropout(rate=self.mlp_dropout_rate),
layers.Dense(units=input_channels),
layers.Dropout(rate=self.mlp_dropout_rate),
]
)
def call(self, x):
x = self.mlp(x)
return x
Explanation: The ShiftViT architecture
In this section, we build the architecture proposed in
the ShiftViT paper.
| |
| :--: |
| Figure 1: The entire architecture of ShiftViT.
Source |
The architecture as shown in Fig. 1, is inspired by
Swin Transformer: Hierarchical Vision Transformer using Shifted Windows.
Here the authors propose a modular architecture with 4 stages. Each stage works on its
own spatial size, creating a hierarchical architecture.
An input image of size HxWx3 is split into non-overlapping patches of size 4x4.
This is done via the patchify layer which results in individual tokens of feature size 48
(4x4x3). Each stage comprises two parts:
Embedding Generation
Stacked Shift Blocks
We discuss the stages and the modules in detail in what follows.
Note: Compared to the official implementation
we restructure some key components to better fit the Keras API.
The ShiftViT Block
| |
| :--: |
| Figure 2: From the Model to a Shift Block. |
Each stage in the ShiftViT architecture comprises a Shift Block, as shown in Fig. 2.
| |
| :--: |
| Figure 3: The Shift ViT Block. Source |
The Shift Block, as shown in Fig. 3, comprises the following:
Shift Operation
Linear Normalization
MLP Layer
The MLP block
The MLP block is intended to be a stack of densely-connected layers.
End of explanation
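# A small shape check (illustrative, not part of the original example): with patch_size=4 and
# projected_dim=96, the patchify projection turns a (batch, 48, 48, 3) image into a 12x12 grid
# of 96-dimensional tokens, i.e. (batch, 12, 12, 96).
demo_patch_projection = layers.Conv2D(
    filters=config.projected_dim,
    kernel_size=config.patch_size,
    strides=config.patch_size,
    padding="same",
)
print(demo_patch_projection(tf.zeros((1, config.image_size, config.image_size, 3))).shape)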
class DropPath(layers.Layer):
    """Drop Path, also known as the Stochastic Depth layer.

    References:
        - https://keras.io/examples/vision/cct/#stochastic-depth-for-regularization
        - github.com:rwightman/pytorch-image-models
    """
def __init__(self, drop_path_prob, **kwargs):
super().__init__(**kwargs)
self.drop_path_prob = drop_path_prob
def call(self, x, training=False):
if training:
keep_prob = 1 - self.drop_path_prob
shape = (tf.shape(x)[0],) + (1,) * (len(tf.shape(x)) - 1)
random_tensor = keep_prob + tf.random.uniform(shape, 0, 1)
random_tensor = tf.floor(random_tensor)
return (x / keep_prob) * random_tensor
return x
Explanation: The DropPath layer
Stochastic depth is a regularization technique that randomly drops a set of
layers. During inference, the layers are kept as they are. It is very
similar to Dropout, but it operates on a block of layers rather
than on individual nodes present inside a layer.
End of explanation
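# A quick illustrative check (not part of the original example): with training=True, DropPath
# zeroes out entire samples at random and rescales the survivors by 1/keep_prob; with
# training=False it is the identity.
demo_drop_path = DropPath(drop_path_prob=0.5)
demo_inputs = tf.ones((4, 3))
print(demo_drop_path(demo_inputs, training=True))
print(demo_drop_path(demo_inputs, training=False))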
class ShiftViTBlock(layers.Layer):
    """A unit ShiftViT Block.

    Args:
        shift_pixel (int): The number of pixels to shift. Defaults to 1.
        mlp_expand_ratio (int): The ratio with which MLP features are
            expanded. Defaults to 2.
        mlp_dropout_rate (float): The dropout rate used in MLP.
        num_div (int): The number of divisions of the feature map's channel.
            Totally, 4/num_div of channels will be shifted. Defaults to 12.
        epsilon (float): Epsilon constant.
        drop_path_prob (float): The drop probability for drop path.
    """
def __init__(
self,
epsilon,
drop_path_prob,
mlp_dropout_rate,
num_div=12,
shift_pixel=1,
mlp_expand_ratio=2,
**kwargs,
):
super().__init__(**kwargs)
self.shift_pixel = shift_pixel
self.mlp_expand_ratio = mlp_expand_ratio
self.mlp_dropout_rate = mlp_dropout_rate
self.num_div = num_div
self.epsilon = epsilon
self.drop_path_prob = drop_path_prob
def build(self, input_shape):
self.H = input_shape[1]
self.W = input_shape[2]
self.C = input_shape[3]
self.layer_norm = layers.LayerNormalization(epsilon=self.epsilon)
self.drop_path = (
DropPath(drop_path_prob=self.drop_path_prob)
if self.drop_path_prob > 0.0
else layers.Activation("linear")
)
self.mlp = MLP(
mlp_expand_ratio=self.mlp_expand_ratio,
mlp_dropout_rate=self.mlp_dropout_rate,
)
def get_shift_pad(self, x, mode):
        """Shifts the channels according to the mode chosen."""
if mode == "left":
offset_height = 0
offset_width = 0
target_height = 0
target_width = self.shift_pixel
elif mode == "right":
offset_height = 0
offset_width = self.shift_pixel
target_height = 0
target_width = self.shift_pixel
elif mode == "up":
offset_height = 0
offset_width = 0
target_height = self.shift_pixel
target_width = 0
else:
offset_height = self.shift_pixel
offset_width = 0
target_height = self.shift_pixel
target_width = 0
crop = tf.image.crop_to_bounding_box(
x,
offset_height=offset_height,
offset_width=offset_width,
target_height=self.H - target_height,
target_width=self.W - target_width,
)
shift_pad = tf.image.pad_to_bounding_box(
crop,
offset_height=offset_height,
offset_width=offset_width,
target_height=self.H,
target_width=self.W,
)
return shift_pad
def call(self, x, training=False):
# Split the feature maps
x_splits = tf.split(x, num_or_size_splits=self.C // self.num_div, axis=-1)
# Shift the feature maps
x_splits[0] = self.get_shift_pad(x_splits[0], mode="left")
x_splits[1] = self.get_shift_pad(x_splits[1], mode="right")
x_splits[2] = self.get_shift_pad(x_splits[2], mode="up")
x_splits[3] = self.get_shift_pad(x_splits[3], mode="down")
# Concatenate the shifted and unshifted feature maps
x = tf.concat(x_splits, axis=-1)
# Add the residual connection
shortcut = x
x = shortcut + self.drop_path(self.mlp(self.layer_norm(x)), training=training)
return x
Explanation: Block
The most important operation in this paper is the shift operation. In this section,
we describe the shift operation and compare it with its original implementation provided
by the authors.
A generic feature map is assumed to have the shape [N, H, W, C]. Here we choose a
num_div parameter that decides the division size of the channels. The first 4 divisions
are shifted (1 pixel) in the left, right, up, and down direction. The remaining splits
are kept as is. After partial shifting the shifted channels are padded and the overflown
pixels are chopped off. This completes the partial shifting operation.
In the original implementation, the code is approximately:
```python
out[:, g * 0:g * 1, :, :-1] = x[:, g * 0:g * 1, :, 1:] # shift left
out[:, g * 1:g * 2, :, 1:] = x[:, g * 1:g * 2, :, :-1] # shift right
out[:, g * 2:g * 3, :-1, :] = x[:, g * 2:g * 3, 1:, :] # shift up
out[:, g * 3:g * 4, 1:, :] = x[:, g * 3:g * 4, :-1, :] # shift down
out[:, g * 4:, :, :] = x[:, g * 4:, :, :] # no shift
```
In TensorFlow it would be infeasible for us to assign shifted channels to a tensor in the
middle of the training process. This is why we have resorted to the following procedure:
Split the channels with the num_div parameter.
Select each of the first four splits and shift and pad them in the respective
directions.
After shifting and padding, we concatenate the channel back.
| |
| :--: |
| Figure 4: The TensorFlow style shifting |
The entire procedure is explained in the Fig. 4.
End of explanation
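# A minimal sketch of the crop-and-pad trick (illustrative, not part of the original example):
# cropping one pixel off the width and padding back to the original size zero-fills the last
# column, which is the same tf.image pattern get_shift_pad uses for mode="left".
toy = tf.reshape(tf.range(16, dtype=tf.float32), (1, 4, 4, 1))
toy_crop = tf.image.crop_to_bounding_box(
    toy, offset_height=0, offset_width=0, target_height=4, target_width=3
)
toy_shifted = tf.image.pad_to_bounding_box(
    toy_crop, offset_height=0, offset_width=0, target_height=4, target_width=4
)
print(toy_shifted[0, ..., 0])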
class PatchMerging(layers.Layer):
    """The Patch Merging layer.

    Args:
        epsilon (float): The epsilon constant.
    """
def __init__(self, epsilon, **kwargs):
super().__init__(**kwargs)
self.epsilon = epsilon
def build(self, input_shape):
filters = 2 * input_shape[-1]
self.reduction = layers.Conv2D(
filters=filters, kernel_size=2, strides=2, padding="same", use_bias=False
)
self.layer_norm = layers.LayerNormalization(epsilon=self.epsilon)
def call(self, x):
# Apply the patch merging algorithm on the feature maps
x = self.layer_norm(x)
x = self.reduction(x)
return x
Explanation: The ShiftViT blocks
| |
| :--: |
| Figure 5: Shift Blocks in the architecture. Source |
Each stage of the architecture has shift blocks as shown in Fig. 5. Each of these blocks
contains a variable number of stacked ShiftViT blocks (as built in the earlier section).
Shift blocks are followed by a PatchMerging layer that scales down feature inputs. The
PatchMerging layer helps in the pyramidal structure of the model.
The PatchMerging layer
This layer merges the two adjacent tokens. This layer helps in scaling the features down
spatially and increasing the features up channel wise. We use a Conv2D layer to merge the
patches.
End of explanation
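# A small shape check (illustrative, not part of the original example): patch merging halves
# the spatial resolution and doubles the channel count, e.g. (1, 12, 12, 96) -> (1, 6, 6, 192).
demo_patch_merge = PatchMerging(epsilon=config.epsilon)
print(demo_patch_merge(tf.zeros((1, 12, 12, 96))).shape)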
# Note: This layer will have a different depth of stacking
# for different stages on the model.
class StackedShiftBlocks(layers.Layer):
    """The layer containing stacked ShiftViTBlocks.

    Args:
        epsilon (float): The epsilon constant.
        mlp_dropout_rate (float): The dropout rate used in the MLP block.
        num_shift_blocks (int): The number of shift vit blocks for this stage.
        stochastic_depth_rate (float): The maximum drop path rate chosen.
        is_merge (boolean): A flag that determines the use of the Patch Merge
            layer after the shift vit blocks.
        num_div (int): The division of channels of the feature map. Defaults to 12.
        shift_pixel (int): The number of pixels to shift. Defaults to 1.
        mlp_expand_ratio (int): The ratio with which the initial dense layer of
            the MLP is expanded. Defaults to 2.
    """
def __init__(
self,
epsilon,
mlp_dropout_rate,
num_shift_blocks,
stochastic_depth_rate,
is_merge,
num_div=12,
shift_pixel=1,
mlp_expand_ratio=2,
**kwargs,
):
super().__init__(**kwargs)
self.epsilon = epsilon
self.mlp_dropout_rate = mlp_dropout_rate
self.num_shift_blocks = num_shift_blocks
self.stochastic_depth_rate = stochastic_depth_rate
self.is_merge = is_merge
self.num_div = num_div
self.shift_pixel = shift_pixel
self.mlp_expand_ratio = mlp_expand_ratio
def build(self, input_shapes):
# Calculate stochastic depth probabilities.
# Reference: https://keras.io/examples/vision/cct/#the-final-cct-model
dpr = [
x
for x in np.linspace(
start=0, stop=self.stochastic_depth_rate, num=self.num_shift_blocks
)
]
# Build the shift blocks as a list of ShiftViT Blocks
self.shift_blocks = list()
for num in range(self.num_shift_blocks):
self.shift_blocks.append(
ShiftViTBlock(
num_div=self.num_div,
epsilon=self.epsilon,
drop_path_prob=dpr[num],
mlp_dropout_rate=self.mlp_dropout_rate,
shift_pixel=self.shift_pixel,
mlp_expand_ratio=self.mlp_expand_ratio,
)
)
if self.is_merge:
self.patch_merge = PatchMerging(epsilon=self.epsilon)
def call(self, x, training=False):
for shift_block in self.shift_blocks:
x = shift_block(x, training=training)
if self.is_merge:
x = self.patch_merge(x)
return x
Explanation: Stacked Shift Blocks
Each stage will have a variable number of stacked ShiftViT blocks, as suggested in
the paper. This is a generic layer that contains the stacked ShiftViT blocks
together with an optional patch merging layer. Combining the two operations (ShiftViT
block and patch merging) is a design choice we made for better code reusability.
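As a side note on the stochastic depth used inside this layer (the numbers below are assumptions chosen only for illustration): the per-block drop-path probabilities ramp up linearly from 0 to `stochastic_depth_rate` with block depth.
```python
import numpy as np

# e.g. a stage with 4 blocks and stochastic_depth_rate = 0.2
print(np.linspace(0.0, 0.2, num=4))  # [0.         0.06666667 0.13333333 0.2       ]
```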
End of explanation
class ShiftViTModel(keras.Model):
The ShiftViT Model.
Args:
data_augmentation (keras.Model): A data augmentation model.
projected_dim (int): The dimension to which the patches of the image are
projected.
patch_size (int): The patch size of the images.
num_shift_blocks_per_stages (list[int]): A list with the number of shift
blocks per stage.
epsilon (float): The epsilon constant.
mlp_dropout_rate (float): The dropout rate used in the MLP block.
stochastic_depth_rate (float): The maximum drop rate probability.
num_div (int): The number of divisions of the channels of the feature
map. Defaults to 12.
shift_pixel (int): The number of pixels to shift. Defaults to 1.
mlp_expand_ratio (int): The ratio by which the initial MLP dense layer
is expanded. Defaults to 2.
def __init__(
self,
data_augmentation,
projected_dim,
patch_size,
num_shift_blocks_per_stages,
epsilon,
mlp_dropout_rate,
stochastic_depth_rate,
num_div=12,
shift_pixel=1,
mlp_expand_ratio=2,
**kwargs,
):
super().__init__(**kwargs)
self.data_augmentation = data_augmentation
self.patch_projection = layers.Conv2D(
filters=projected_dim,
kernel_size=patch_size,
strides=patch_size,
padding="same",
)
self.stages = list()
for index, num_shift_blocks in enumerate(num_shift_blocks_per_stages):
if index == len(num_shift_blocks_per_stages) - 1:
# This is the last stage, do not use the patch merge here.
is_merge = False
else:
is_merge = True
# Build the stages.
self.stages.append(
StackedShiftBlocks(
epsilon=epsilon,
mlp_dropout_rate=mlp_dropout_rate,
num_shift_blocks=num_shift_blocks,
stochastic_depth_rate=stochastic_depth_rate,
is_merge=is_merge,
num_div=num_div,
shift_pixel=shift_pixel,
mlp_expand_ratio=mlp_expand_ratio,
)
)
self.global_avg_pool = layers.GlobalAveragePooling2D()
def get_config(self):
config = super().get_config()
config.update(
{
"data_augmentation": self.data_augmentation,
"patch_projection": self.patch_projection,
"stages": self.stages,
"global_avg_pool": self.global_avg_pool,
}
)
return config
def _calculate_loss(self, data, training=False):
(images, labels) = data
# Augment the images
augmented_images = self.data_augmentation(images, training=training)
# Create patches and project the patches.
projected_patches = self.patch_projection(augmented_images)
# Pass through the stages
x = projected_patches
for stage in self.stages:
x = stage(x, training=training)
# Get the logits.
logits = self.global_avg_pool(x)
# Calculate the loss and return it.
total_loss = self.compiled_loss(labels, logits)
return total_loss, labels, logits
def train_step(self, inputs):
with tf.GradientTape() as tape:
total_loss, labels, logits = self._calculate_loss(
data=inputs, training=True
)
# Apply gradients.
train_vars = [
self.data_augmentation.trainable_variables,
self.patch_projection.trainable_variables,
self.global_avg_pool.trainable_variables,
]
train_vars = train_vars + [stage.trainable_variables for stage in self.stages]
# Optimize the gradients.
grads = tape.gradient(total_loss, train_vars)
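# `grads` mirrors the nested structure of `train_vars`, so flatten both
# into plain (gradient, variable) pairs before calling `apply_gradients`.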
trainable_variable_list = []
for (grad, var) in zip(grads, train_vars):
for g, v in zip(grad, var):
trainable_variable_list.append((g, v))
self.optimizer.apply_gradients(trainable_variable_list)
# Update the metrics
self.compiled_metrics.update_state(labels, logits)
return {m.name: m.result() for m in self.metrics}
def test_step(self, data):
_, labels, logits = self._calculate_loss(data=data, training=False)
# Update the metrics
self.compiled_metrics.update_state(labels, logits)
return {m.name: m.result() for m in self.metrics}
Explanation: The ShiftViT model
Build the ShiftViT custom model.
End of explanation
model = ShiftViTModel(
data_augmentation=get_augmentation_model(),
projected_dim=config.projected_dim,
patch_size=config.patch_size,
num_shift_blocks_per_stages=config.num_shift_blocks_per_stages,
epsilon=config.epsilon,
mlp_dropout_rate=config.mlp_dropout_rate,
stochastic_depth_rate=config.stochastic_depth_rate,
num_div=config.num_div,
shift_pixel=config.shift_pixel,
mlp_expand_ratio=config.mlp_expand_ratio,
)
Explanation: Instantiate the model
End of explanation
# Some code is taken from:
# https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2.
class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule):
A LearningRateSchedule that uses a warmup cosine decay schedule.
def __init__(self, lr_start, lr_max, warmup_steps, total_steps):
Args:
lr_start: The initial learning rate
lr_max: The maximum learning rate to which lr should increase in
the warmup steps
warmup_steps: The number of steps for which the model warms up
total_steps: The total number of steps for the model training
super().__init__()
self.lr_start = lr_start
self.lr_max = lr_max
self.warmup_steps = warmup_steps
self.total_steps = total_steps
self.pi = tf.constant(np.pi)
def __call__(self, step):
# Check whether the total number of steps is larger than the warmup
# steps. If not, then throw a value error.
if self.total_steps < self.warmup_steps:
raise ValueError(
f"Total number of steps {self.total_steps} must be"
+ f"larger or equal to warmup steps {self.warmup_steps}."
)
# `cos_annealed_lr` is a graph that increases to 1 from the initial
# step to the warmup step. After that this graph decays to -1 at the
# final step mark.
cos_annealed_lr = tf.cos(
self.pi
* (tf.cast(step, tf.float32) - self.warmup_steps)
/ tf.cast(self.total_steps - self.warmup_steps, tf.float32)
)
# Shift the mean of the `cos_annealed_lr` graph to 1. Now the graph goes
# from 0 to 2. Normalize the graph with 0.5 so that now it goes from 0
# to 1. With the normalized graph we scale it with `lr_max` such that
# it goes from 0 to `lr_max`
learning_rate = 0.5 * self.lr_max * (1 + cos_annealed_lr)
# Check whether warmup_steps is more than 0.
if self.warmup_steps > 0:
# Check whether lr_max is larger than lr_start. If not, throw a value
# error.
if self.lr_max < self.lr_start:
raise ValueError(
f"lr_start {self.lr_start} must be smaller or"
+ f"equal to lr_max {self.lr_max}."
)
# Calculate the slope with which the learning rate should increase
# in the warmup schedule. The formula for slope is m = ((b-a)/steps)
slope = (self.lr_max - self.lr_start) / self.warmup_steps
# With the formula for a straight line (y = mx+c) build the warmup
# schedule
warmup_rate = slope * tf.cast(step, tf.float32) + self.lr_start
# When the current step is less than the warmup steps, get the line
# graph. When the current step is greater than the warmup steps, get
# the scaled cos graph.
learning_rate = tf.where(
step < self.warmup_steps, warmup_rate, learning_rate
)
# When the current step is more than the total steps, return 0, else return
# the calculated graph.
return tf.where(
step > self.total_steps, 0.0, learning_rate, name="learning_rate"
)
Explanation: Learning rate schedule
In many experiments, we want to warm up the model with a slowly increasing learning rate
and then cool it down with a slowly decaying learning rate. In the warmup cosine
decay schedule, the learning rate increases linearly during the warmup steps and then
decays along a cosine curve.
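A quick visual sanity check of the schedule helps confirm the warmup-then-decay shape (the step counts below are made-up values for illustration, not the tutorial's configuration):
```python
import numpy as np
import matplotlib.pyplot as plt

demo = WarmUpCosine(lr_start=1e-5, lr_max=1e-3, warmup_steps=500, total_steps=5000)
steps = np.arange(5500)
plt.plot(steps, [demo(step).numpy() for step in steps])
plt.xlabel("step")
plt.ylabel("learning rate")
plt.show()
```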
End of explanation
# Get the total number of steps for training.
total_steps = int((len(x_train) / config.batch_size) * config.epochs)
# Calculate the number of steps for warmup.
warmup_epoch_percentage = 0.15
warmup_steps = int(total_steps * warmup_epoch_percentage)
# Initialize the warmupcosine schedule.
scheduled_lrs = WarmUpCosine(
lr_start=1e-5, lr_max=1e-3, warmup_steps=warmup_steps, total_steps=total_steps,
)
# Get the optimizer.
optimizer = tfa.optimizers.AdamW(
learning_rate=scheduled_lrs, weight_decay=config.weight_decay
)
# Compile and pretrain the model.
model.compile(
optimizer=optimizer,
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[
keras.metrics.SparseCategoricalAccuracy(name="accuracy"),
keras.metrics.SparseTopKCategoricalAccuracy(5, name="top-5-accuracy"),
],
)
# Train the model
history = model.fit(
train_ds,
epochs=config.epochs,
validation_data=val_ds,
callbacks=[
keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=5, mode="auto",)
],
)
# Evaluate the model with the test dataset.
print("TESTING")
loss, acc_top1, acc_top5 = model.evaluate(test_ds)
print(f"Loss: {loss:0.2f}")
print(f"Top 1 test accuracy: {acc_top1*100:0.2f}%")
print(f"Top 5 test accuracy: {acc_top5*100:0.2f}%")
Explanation: Compile and train the model
End of explanation |
5,448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GP Regression with Uncertain Inputs
Introduction
In this notebook, we're going to demonstrate one way of dealing with uncertainty in our training data. Let's say that we're collecting training data that models the following function.
\begin{align}
y &= \sin(2\pi x) + \epsilon \
\epsilon &\sim \mathcal{N}(0, 0.2)
\end{align}
However, now assume that we're a bit uncertain about our features. In particular, we're going to assume that every x_i value is not a point but a distribution instead. E.g.
$$ x_i \sim \mathcal{N}(\mu_i, \sigma_i). $$
Using stochastic variational inference to deal with uncertain inputs
To deal with this uncertainty, we'll use variational inference (VI) in conjunction with stochastic optimization. At every optimization iteration, we'll draw a sample x_i from the input distribution. The objective function (ELBO) that we compute will be an unbiased estimate of the true ELBO, and so a stochastic optimizer like SGD or Adam should converge to the true ELBO (or at least a local minimum of it).
Step1: Set up training data
In the next cell, we set up the training data for this example. We'll be using 20 regularly spaced points on [0,1].
We'll represent each of the training points $x_i$ by their mean $\mu_i$ and standard deviation $\sigma_i$.
Step2: Setting up the model
Since we're performing VI to deal with the feature uncertainty, we'll be using a ~gpytorch.models.ApproximateGP. Similar to the SVGP example, we'll use a VariationalStrategy and a CholeskyVariationalDistribution to define our posterior approximation.
Step3: Training the model with uncertain features
The training iteration should look pretty similar to the SVGP example -- where we optimize the variational parameters and model hyperparameters. The key difference is that, at every iteration, we will draw samples from our features distribution (since we don't have point measurements of our features).
```python
# Inside the training iteration...
train_x_sample = torch.distributions.Normal(train_x_mean, train_x_stdv).rsample()
# Rest of training iteration...
``` | Python Code:
import math
import torch
import tqdm
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: GP Regression with Uncertain Inputs
Introduction
In this notebook, we're going to demonstrate one way of dealing with uncertainty in our training data. Let's say that we're collecting training data that models the following function.
\begin{align}
y &= \sin(2\pi x) + \epsilon \\
\epsilon &\sim \mathcal{N}(0, 0.2)
\end{align}
However, now assume that we're a bit uncertain about our features. In particular, we're going to assume that every x_i value is not a point but a distribution instead. E.g.
$$ x_i \sim \mathcal{N}(\mu_i, \sigma_i). $$
Using stochastic variational inference to deal with uncertain inputs
To deal with this uncertainty, we'll use variational inference (VI) in conjunction with stochastic optimization. At every optimization iteration, we'll draw a sample x_i from the input distribution. The objective function (ELBO) that we compute will be an unbiased estimate of the true ELBO, and so a stochastic optimizer like SGD or Adam should converge to the true ELBO (or at least a local minimum of it).
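As an aside (a toy example, not taken from this notebook): rsample() draws reparameterized samples, so the draw remains differentiable with respect to the distribution's parameters should they ever require gradients.
```python
import torch

mu = torch.tensor(0.5, requires_grad=True)
x = torch.distributions.Normal(mu, torch.tensor(0.03)).rsample()
x.backward()
print(mu.grad)  # tensor(1.) because d(mu + sigma * eps) / d mu = 1
```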
End of explanation
# Training data is 20 points in [0,1] inclusive regularly spaced
train_x_mean = torch.linspace(0, 1, 20)
# We'll assume the variance shrinks the closer we get to 1
train_x_stdv = torch.linspace(0.03, 0.01, 20)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x_mean * (2 * math.pi)) + torch.randn(train_x_mean.size()) * 0.2
f, ax = plt.subplots(1, 1, figsize=(8, 3))
ax.errorbar(train_x_mean, train_y, xerr=(train_x_stdv * 2), fmt="k*", label="Train Data")
ax.legend()
Explanation: Set up training data
In the next cell, we set up the training data for this example. We'll be using 20 regularly spaced points on [0,1].
We'll represent each of the training points $x_i$ by their mean $\mu_i$ and standard deviation $\sigma_i$.
End of explanation
from gpytorch.models import ApproximateGP
from gpytorch.variational import CholeskyVariationalDistribution
from gpytorch.variational import VariationalStrategy
class GPModel(ApproximateGP):
def __init__(self, inducing_points):
variational_distribution = CholeskyVariationalDistribution(inducing_points.size(0))
variational_strategy = VariationalStrategy(self, inducing_points, variational_distribution, learn_inducing_locations=True)
super(GPModel, self).__init__(variational_strategy)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
inducing_points = torch.randn(10, 1)
model = GPModel(inducing_points=inducing_points)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
Explanation: Setting up the model
Since we're performing VI to deal with the feature uncertainty, we'll be using a ~gpytorch.models.ApproximateGP. Similar to the SVGP example, we'll use a VariationalStrategy and a CholeskyVariationalDistribution to define our posterior approximation.
End of explanation
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
training_iter = 2 if smoke_test else 400
model.train()
likelihood.train()
# We use Adam here to jointly optimize the variational parameters and the model hyperparameters
optimizer = torch.optim.Adam([
{'params': model.parameters()},
{'params': likelihood.parameters()},
], lr=0.01)
# Our loss object. We're using the VariationalELBO
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))
iterator = tqdm.notebook.tqdm(range(training_iter))
for i in iterator:
# First thing: draw a sample set of features from our distribution
train_x_sample = torch.distributions.Normal(train_x_mean, train_x_stdv).rsample()
# Now do the rest of the training loop
optimizer.zero_grad()
output = model(train_x_sample)
loss = -mll(output, train_y)
iterator.set_postfix(loss=loss.item())
loss.backward()
optimizer.step()
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
test_x = torch.linspace(0, 1, 51)
observed_pred = likelihood(model(test_x))
with torch.no_grad():
# Initialize plot
f, ax = plt.subplots(1, 1, figsize=(8, 3))
# Get upper and lower confidence bounds
lower, upper = observed_pred.confidence_region()
# Plot training data as black stars
ax.errorbar(train_x_mean.numpy(), train_y.numpy(), xerr=train_x_stdv, fmt='k*')
# Plot predictive means as blue line
ax.plot(test_x.numpy(), observed_pred.mean.numpy(), 'b')
# Shade between the lower and upper confidence bounds
ax.fill_between(test_x.numpy(), lower.numpy(), upper.numpy(), alpha=0.5)
ax.set_ylim([-3, 3])
ax.legend(['Observed Data', 'Mean', 'Confidence'])
Explanation: Training the model with uncertain features
The training iteration should look pretty similar to the SVGP example -- where we optimize the variational parameters and model hyperparameters. The key difference is that, at every iteration, we will draw samples from our features distribution (since we don't have point measurements of our features).
```python
# Inside the training iteration...
train_x_sample = torch.distributions.Normal(train_x_mean, train_x_stdv).rsample()
# Rest of training iteration...
```
End of explanation |
5,449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Introduction
In this tutorial, you will learn how to do statistical analysis of your simulation data.
This is an important topic, because the statistics of your data determine how precise your simulation result is. Furthermore, knowing about the statistics can help you optimize your disk space usage.
ESPResSo provides a lot of ways to take measurements of your system. Usually, you will sample a quantity many times during a simulation and in the end average over all samples. Intuitively, the simulation result will be more precise the more samples are taken during the simulation. However, this is not the whole truth. There are some things that need to be considered, which we will cover in this tutorial.
Formally, if you determine a physical quantity by averaging over several samples, you only approximate the unknown, true mean value. Usually, the quantity is expected to fluctuate around its mean; therefore, you can never directly measure the mean. You are bound to take repeated measurements and in the end average over all samples (a finite number). In your report, you will present this average as your result. Additionally, you should express the precision of your measurements to give a proper meaning to your result. And this is where things get more involved.
There are several different ways to express the precision of your measurements. We will begin by briefly discussing what they are and what their differences are. After that, we will continue with the standard error of the mean as a viable option to be presented in your simulation results.
Standard deviation
The standard deviation is a measure for how much individual samples are expected to deviate from the mean. We want to use precise terminology, and therefore need to state that, in fact, we cannot directly measure the standard deviation but only estimate it. A commonly used estimator for the standard deviation is
$
\begin{align}
\hat{\sigma} = \sqrt{\frac{1}{N-1.5}\sum_{i=1}^{N}(X_i-\overline{X})^2}\tag{1}
\end{align}
$
where $\hat{\sigma}$ is the estimator of the standard deviation $\sigma$, $N$ the number of samples, $X_i$ the individual samples and $\overline{X}$ their mean. This estimator somewhat resembles the "square root of the variance". The curious $-1.5$ in the denominator is a necessary correction to make the estimator less biased (for further reading, see <a href='#[1]'>[1]</a>).
Standard error of the mean
The standard error of the mean (often abbreviated as SEM, or $s$, and its estimator is designated $\hat{\sigma}_\overline{X}$) describes how much the mean value of your sample is expected to deviate from the true mean value $\mu$. Imagine repeating the whole simulation over and over again, taking $N$ samples every time and averaging over them. The SEM quantifies how much those averages will fluctuate around the true mean $\mu$. In fact, it is defined as the standard deviation of the averages.
At first glance, it might seem to be very expensive to compute the SEM, because one would have to repeat the whole simulation many times. However, under the right circumstances, the SEM can be estimated from a single series of $N$ measurements. We will discuss how this can be done.
Confidence interval
A confidence interval (CI) specifies a range of numbers within which the unknown true mean value $\mu$ lies with a certain probability $1-\alpha$. A common confidence level is $1-\alpha=95~\%$. A $95~\%$ CI would contain the true value $\mu$ with probability $95~\%$. Care must be taken interpreting the CI, since the lower and upper bound of a CI are themselves random variables. Just as a simulation run drafts samples from the overall ensemble, determining a CI from a simulation run is drafting a CI from all possible CIs. When the upper and lower bound of a CI have been calculated, this range either contains the true value or not, so there no longer is a probability attached to it. However, for repeated simulations with subsequent computation of the corresponding CIs, on average $95~\%$ of CIs will contain the true value, while $5~\%$ won't.
If the samples are normally distributed and the SEM is known, the upper and lower bounds of the $95~\%$ CI are $\overline{X} \pm 1.96 \, \hat{\sigma}_\overline{X}$.
Interquartile range
The interquartile range denotes the range, within which the central $50~\%$ of all samples lie, if one were to order them by their size. This leaves one quarter of all samples lying below the interquartile range, and another quarter of all samples above it.
Now – what do we use?
We are interested in the precision of our overall, averaged, simulation result, and not in the precision of the individual samples. Those are expected to fluctuate, and in many cases, those fluctuations are uninteresting for the end result. Out of the options presented above, the SEM and the CI are the only ones doing this requirement justice. Since they are related, the question boils down to how to compute the SEM, which will be the topic of the rest of this tutorial.
Uncorrelated samples
How the SEM can be computed depends on the data itself. For uncorrelated samples, it is nearly trivial
Step2: One can clearly see that each sample lies in the vicinity of the previous one.
Below is an example for almost completely uncorrelated samples. The data points are taken from the same time series as in the previous example, but this time they are chosen with large gaps in between (every 800th sample is used). These samples appear to fluctuate a lot more randomly.
Step3: However, you should not trust your eye in deciding whether or not a time series is correlated. In fact, when running molecular dynamics simulations, your best guess is to always assume that samples are correlated, and that you should use one of the following techniques for statistical analysis, and rather not just use equation (2).
Binning analysis
Binning analysis is a straightforward method to calculate the SEM for correlated data. A time series of measurements of $N$ samples is divided into $N_\mathrm{B}$ equally long blocks called bins. If $N$ is not an integer multiple of $N_\mathrm{B}$, some data must be discarded to achieve this. The samples in every bin are averaged, giving the bin averages $\overline{X}_i$. It is important that the bin size $N/N_\mathrm{B}$ is significantly larger than the correlation time. Otherwise, binning analysis will yield the wrong SEM.
Once we have computed the bin averages $\overline{X}_i$, getting the SEM is straightforward
Step4: Exercise
Determine the maximum possible number of bins of size BIN_SIZE with the data in time_series_1, and store it in a variable N_BINS.
Create a numpy array called bin_avgs of length N_BINS.
Compute the bin averages of time_series_1 and store them in bin_avgs.
python
N_BINS = N_SAMPLES // BIN_SIZE
bin_avgs = np.zeros(N_BINS)
for i in range(N_BINS)
Step5: Now we already have an estimate on how precise our simulation result is. But how do we know if we chose the appropriate bin size? The answer is, we can perform binning analysis for many different bin sizes and check when the SEM converges. For that we would like to define a function that does the binning analysis in one go.
Exercise
Define a function called do_binning_analysis that takes as arguments data (a numpy array containing the samples) and bin_size and returns the estimated SEM. You can reuse your code from the previous exercises and adapt it to be part of the function.
python
def do_binning_analysis(data, bin_size)
Step6: Even though the fit is not perfect, it suffices to give us the position of the asymptote, which is the final estimate for the standard error of the mean. You can see that binning analysis, in fact, managed to estimate the SEM very precisely compared to the analytical solution. This illustrates that most of the time, binning analysis will give you a very reasonable estimate for the SEM, and in fact, is often used in practice because of its simplicity.
However, in some cases, the statistics of your system can be quite challenging. Remember that in real applications, there won't be an analytical solution for the SEM. Therefore, you need to rely entirely on the statistical analysis. It is important to view the statistical analysis critically to decide whether the statistical analysis is trustworthy or not. To illustrate this, let's have a look at the binning analysis of the other time series that was generated at the start of the tutorial | Python Code:
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 18})
import sys
import logging
logging.basicConfig(level=logging.INFO, stream=sys.stdout)
np.random.seed(43)
def ar_1_process(n_samples, c, phi, eps):
'''
Generate a correlated random sequence with the AR(1) process.
Parameters
----------
n_samples: :obj:`int`
Sample size.
c: :obj:`float`
Constant term.
phi: :obj:`float`
Correlation magnitude.
eps: :obj:`float`
Shock magnitude.
'''
ys = np.zeros(n_samples)
if abs(phi) >= 1:
raise ValueError("abs(phi) must be smaller than 1.")
# draw initial value from normal distribution with known mean and variance
ys[0] = np.random.normal(loc=c / (1 - phi), scale=np.sqrt(eps**2 / (1 - phi**2)))
for i in range(1, n_samples):
ys[i] = c + phi * ys[i - 1] + np.random.normal(loc=0., scale=eps)
return ys
# generate simulation data using the AR(1) process
logging.info("Generating data sets for the tutorial ...")
N_SAMPLES = 100000
C_1 = 2.0
PHI_1 = 0.85
EPS_1 = 2.0
time_series_1 = ar_1_process(N_SAMPLES, C_1, PHI_1, EPS_1)
C_2 = 0.05
PHI_2 = 0.999
EPS_2 = 1.0
time_series_2 = ar_1_process(N_SAMPLES, C_2, PHI_2, EPS_2)
logging.info("Done")
fig = plt.figure(figsize=(10, 6))
plt.title("The first 1000 samples of both time series")
plt.plot(time_series_1[0:1000], label="time series 1")
plt.plot(time_series_2[0:1000], label="time series 2")
plt.xlabel("$i$")
plt.ylabel("$X_i$")
plt.legend()
plt.show()
Explanation: Tutorial: Error Estimation - Part 1 (Introduction and Binning Analysis)
Table of contents
Data generation
Introduction
Uncorrelated samples
Binning analysis
References
Data generation
In this tutorial, you will learn how to estimate the accuracy of your simulation results. Because we are going to employ statistical methods, we need a fair amount of data to play with. The following code cell will generate two data sets which will be used throughout the tutorial.
End of explanation
fig = plt.figure(figsize=(10, 6))
plt.plot(time_series_1[1000:1050], "x")
fig.axes[0].margins(y=0.1)
plt.xlabel("$i$")
plt.ylabel("$X_i$")
plt.show()
Explanation: Introduction
In this tutorial, you will learn how to do statistical analysis of your simulation data.
This is an important topic, because the statistics of your data determine how precise your simulation result is. Furthermore, knowing about the statistics can help you optimize your disk space usage.
ESPResSo provides a lot of ways to take measurements of your system. Usually, you will sample a quantity many times during a simulation and in the end average over all samples. Intuitively, the simulation result will be more precise the more samples are taken during the simulation. However, this is not the whole truth. There are some things that need to be considered, which we will cover in this tutorial.
Formally, if you determine a physical quantity by averaging over several samples, you only approximate the unknown, true mean value. Usually, the quantity is expected to fluctuate around its mean; therefore, you can never directly measure the mean. You are bound to take repeated measurements and in the end average over all samples (a finite number). In your report, you will present this average as your result. Additionally, you should express the precision of your measurements to give a proper meaning to your result. And this is where things get more involved.
There are several different ways to express the precision of your measurements. We will begin by briefly discussing what they are and what their differences are. After that, we will continue with the standard error of the mean as a viable option to be presented in your simulation results.
Standard deviation
The standard deviation is a measure for how much individual samples are expected to deviate from the mean. We want to use precise terminology, and therefore need to state that, in fact, we cannot directly measure the standard deviation but only estimate it. A commonly used estimator for the standard deviation is
$
\begin{align}
\hat{\sigma} = \sqrt{\frac{1}{N-1.5}\sum_{i=1}^{N}(X_i-\overline{X})^2}\tag{1}
\end{align}
$
where $\hat{\sigma}$ is the estimator of the standard deviation $\sigma$, $N$ the number of samples, $X_i$ the individual samples and $\overline{X}$ their mean. This estimator somewhat resembles the "square root of the variance". The curious $-1.5$ in the denominator is a necessary correction to make the estimator less biased (for further reading, see <a href='#[1]'>[1]</a>).
Standard error of the mean
The standard error of the mean (often abbreviated as SEM, or $s$, and its estimator is designated $\hat{\sigma}_\overline{X}$) describes how much the mean value of your sample is expected to deviate from the true mean value $\mu$. Imagine repeating the whole simulation over and over again, taking $N$ samples every time and averaging over them. The SEM quantifies how much those averages will fluctuate around the true mean $\mu$. In fact, it is defined as the standard deviation of the averages.
At first glance, it might seem to be very expensive to compute the SEM, because one would have to repeat the whole simulation many times. However, under the right circumstances, the SEM can be estimated from a single series of $N$ measurements. We will discuss how this can be done.
Confidence interval
A confidence interval (CI) specifies a range of numbers within which the unknown true mean value $\mu$ lies with a certain probability $1-\alpha$. A common confidence level is $1-\alpha=95~\%$. A $95~\%$ CI would contain the true value $\mu$ with probability $95~\%$. Care must be taken interpreting the CI, since the lower and upper bound of a CI are themselves random variables. Just as a simulation run drafts samples from the overall ensemble, determining a CI from a simulation run is drafting a CI from all possible CIs. When the upper and lower bound of a CI have been calculated, this range either contains the true value or not, so there no longer is a probability attached to it. However, for repeated simulations with subsequent computation of the corresponding CIs, on average $95~\%$ of CIs will contain the true value, while $5~\%$ won't.
If the samples are normally distributed and the SEM is known, the upper and lower bounds of the $95~\%$ CI are $\overline{X} \pm 1.96 \, \hat{\sigma}_\overline{X}$.
Interquartile range
The interquartile range denotes the range, within which the central $50~\%$ of all samples lie, if one were to order them by their size. This leaves one quarter of all samples lying below the interquartile range, and another quarter of all samples above it.
Now – what do we use?
We are interested in the precision of our overall, averaged, simulation result, and not in the precision of the individual samples. Those are expected to fluctuate, and in many cases, those fluctuations are uninteresting for the end result. Out of the options presented above, the SEM and the CI are the only ones doing this requirement justice. Since they are related, the question boils down to how to compute the SEM, which will be the topic of the rest of this tutorial.
Uncorrelated samples
How the SEM can be computed depends on the data itself. For uncorrelated samples, it is nearly trivial:
$
\begin{align}
\hat\sigma_\overline{X} = \frac{\hat\sigma}{\sqrt{N}}\tag{2}
\end{align}
$
where $\hat\sigma_\overline{X}$ is the estimated SEM, $\hat\sigma$ is the estimated standard deviation (see eq. 1) and $N$ is the number of samples. But what does it mean for samples to be uncorrelated?
An example for uncorrelated samples would be the rolling of a dice. The outcome of each trial is completely independent to the previous trials. We might guess any number from 1 to 6, regardless of what has been the last result. The same could be true if we ran an experiment many times independently from one another and measured a quantity each time. By looking at one experimental value, we would'nt be able to predict the next one. The best guess would be simply the mean value of the entire series. In the case of rolling a dice, correlations could for example be observed if it was more probable to obtain the same result as in the previous dice roll rather than another result.
Usually, when you run a molecular dynamics simulation, the particles will only move by a tiny amount during a time step. Consequently, most observables also change only by a small amount during a time step and it is, therefore, more probable to obtain a similar result rather than a completely different result. If we were to sample an observable in every time step, we would get a lot of samples with very similar values. It is said that the samples are correlated. Only if we wait for a sufficiently long time, the system will eventually have evolved to a completely different configuration, and we can expect the observable to assume a truly independent, uncorrelated value.
It is often easy to see when samples are correlated. Execute the code cell below for an example, where a small part of time_series_1 is plotted.
End of explanation
fig = plt.figure(figsize=(10, 6))
plt.plot(np.arange(2000, 42000, 800), time_series_1[2000:42000:800], "x")
fig.axes[0].margins(y=0.1)
plt.xlabel("$i$")
plt.ylabel("$X_i$")
fig.axes[0].xaxis.set_major_locator(plt.MultipleLocator(base=8000))
plt.show()
Explanation: One can clearly see that each sample lies in the vicinity of the previous one.
Below is an example for almost completely uncorrelated samples. The data points are taken from the same time series as in the previous example, but this time they are chosen with large gaps in between (every 800th sample is used). These samples appear to fluctuate a lot more randomly.
End of explanation
BIN_SIZE = 2000
Explanation: However, you should not trust your eye in deciding whether or not a time series is correlated. In fact, when running molecular dynamics simulations, your best guess is to always assume that samples are correlated, and that you should use one of the following techniques for statistical analysis, and rather not just use equation (2).
Binning analysis
Binning analysis is a straightforward method to calculate the SEM for correlated data. A time series of measurements of $N$ samples is divided into $N_\mathrm{B}$ equally long blocks called bins. If $N$ is not an integer multiple of $N_\mathrm{B}$, some data must be discarded to achieve this. The samples in every bin are averaged, giving the bin averages $\overline{X}_i$. It is important that the bin size $N/N_\mathrm{B}$ is significantly larger than the correlation time. Otherwise, binning analysis will yield the wrong SEM.
Once we have computed the bin averages $\overline{X}_i$, getting the SEM is straightforward: we can simply treat $\overline{X}_i$ as an uncorrelated time series. In other words, we can compute the SEM by using equations (1) and (2)!
Let's implement this.
End of explanation
print(f"Best guess for measured quantity: {avg:.3f}")
print(f"Standard error of the mean: {sem:.3f}")
Explanation: Exercise
Determine the maximum possible number of bins of size BIN_SIZE with the data in time_series_1, and store it in a variable N_BINS.
Create a numpy array called bin_avgs of length N_BINS.
Compute the bin averages of time_series_1 and store them in bin_avgs.
```python
N_BINS = N_SAMPLES // BIN_SIZE
bin_avgs = np.zeros(N_BINS)
for i in range(N_BINS):
    bin_avgs[i] = np.average(time_series_1[i * BIN_SIZE:(i + 1) * BIN_SIZE])
```
Exercise
Compute the average of all bin averages and store it in avg. This is the overall average, our best guess for the measured quantity. Furthermore, compute the standard error of the mean using equations (1) and (2) from the values in bin_avgs and store it in sem.
```python
avg = np.average(bin_avgs)
sem = np.sqrt(np.sum((bin_avgs - avg)**2) / (N_BINS - 1.5) / N_BINS)
```
End of explanation
from scipy.optimize import curve_fit
# only fit to the first couple of SEMs
CUTOFF = 600
# sizes of the corresponding bins
sizes_subset = np.arange(3, 3 + CUTOFF, dtype=int)
def fit_fn(x, a, b, c):
return -np.exp(-a * x) * b + c
fit_params, _ = curve_fit(fit_fn, sizes_subset, sems[:CUTOFF], (0.05, 1, 0.5))
fit_sems = fit_fn(sizes, *fit_params)
# compute analytical solutions for AR(1) process
AN_SIGMA_1 = np.sqrt(EPS_1 ** 2 / (1 - PHI_1 ** 2))
AN_TAU_EXP_1 = -1 / np.log(PHI_1)
AN_SEM_1 = np.sqrt(2 * AN_SIGMA_1 ** 2 * AN_TAU_EXP_1 / N_SAMPLES)
plt.figure(figsize=(10, 6))
plt.plot(sizes, sems, "x", label="binning analysis")
plt.plot(sizes[(0, -1),], np.repeat(AN_SEM_1, 2), "-.", label="analytical solution")
plt.plot(sizes, fit_sems, "-", label="fit")
plt.xscale("log")
plt.xlabel("$N_B$")
plt.ylabel("SEM")
plt.legend()
plt.show()
print(f"Final Standard Error of the Mean: {fit_params[2]:.4f}")
print(f"Analytical Standard Error of the Mean: {AN_SEM_1:.4f}")
Explanation: Now we already have an estimate on how precise our simulation result is. But how do we know if we chose the appropriate bin size? The answer is, we can perform binning analysis for many different bin sizes and check when the SEM converges. For that we would like to define a function that does the binning analysis in one go.
Exercise
Define a function called do_binning_analysis that takes as arguments data (a numpy array containing the samples) and bin_size and returns the estimated SEM. You can reuse your code from the previous exercises and adapt it to be part of the function.
```python
def do_binning_analysis(data, bin_size):
    n_samples = len(data)
    n_bins = n_samples // bin_size
    bin_avgs = np.mean(data[:n_bins * bin_size].reshape((n_bins, -1)), axis=1)
    return np.std(bin_avgs, ddof=1.5) / np.sqrt(n_bins)
```
Exercise
Now take the data in time_series_1 and perform binning analysis for bin sizes from 3 up to 5000 and plot the estimated SEMs against the bin size with logarithmic x axis. Your SEM estimates should be stored in a numpy array called sems.
```python
sizes = np.arange(3, 5001, dtype=int)
sems = np.zeros(5001 - 3, dtype=float)
for s in range(len(sizes)):
sems[s] = do_binning_analysis(time_series_1, sizes[s])
plt.figure(figsize=(10, 6))
plt.plot(sizes, sems, "x")
plt.xscale("log")
plt.xlabel("$N_B$")
plt.ylabel("SEM")
plt.show()
```
You should see that the series converges to a value between 0.04 and 0.05, before transitioning into a noisy tail. The tail becomes increasingly noisy, because as the block size increases, the number of blocks decreases, thus resulting in worse statistics.
To extract the correct SEM from this plot, we can fit an exponential function to the first part of the data, which doesn't suffer from too much noise.
End of explanation
sizes = np.arange(3, 5001, dtype=int)
sems = np.zeros(5001 - 3, dtype=float)
for s in range(len(sizes)):
sems[s] = do_binning_analysis(time_series_2, sizes[s])
# compute analytical solutions for AR(1) process
AN_SIGMA_2 = np.sqrt(EPS_2 ** 2 / (1 - PHI_2 ** 2))
AN_TAU_EXP_2 = -1 / np.log(PHI_2)
AN_SEM_2 = np.sqrt(2 * AN_SIGMA_2 ** 2 * AN_TAU_EXP_2 / N_SAMPLES)
plt.figure(figsize=(10, 6))
plt.plot(sizes, sems, "x", label="binning analysis")
plt.plot(sizes[(0, -1),], np.repeat(AN_SEM_2, 2), "-.", label="analytical solution")
plt.xscale("log")
plt.xlabel("$N_B$")
plt.ylabel("SEM")
plt.show()
Explanation: Even though the fit is not perfect, it suffices to give us the position of the asymptote, which is the final estimate for the standard error of the mean. You can see that binning analysis, in fact, managed to estimate the SEM very precisely compared to the analytical solution. This illustrates that most of the time, binning analysis will give you a very reasonable estimate for the SEM, and in fact, is often used in practice because of its simplicity.
However, in some cases, the statistics of your system can be quite challenging. Remember that in real applications, there won't be an analytical solution for the SEM. Therefore, you need to rely entirely on the statistical analysis. It is important to view the statistical analysis critically to decide whether the statistical analysis is trustworthy or not. To illustrate this, let's have a look at the binning analysis of the other time series that was generated at the start of the tutorial:
End of explanation |
5,450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deskew the MNIST training and test images
This set of scripts serves several purposes
Step1: OpenCV deskew function
Step3: Read MNIST binary-file data and convert to numpy.ndarray
thanks to http
Step4: Training Data
Read in the training images and labels
Step5: Deskew training data
Step6: Show a sampling of training images before and after deskewing
Step7: Save training data to csv
Step8: Test Data
Step9: Read in the test images and labels
Step10: Deskew test data
Step11: Show a sampling of test images before and after deskewing
Step12: Save test data to csv | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import cv2
print(cv2.__version__)
Explanation: Deskew the MNIST training and test images
This set of scripts serves several purposes:
* it provides two functions
- a function to read the binary MNIST data into numpy.ndarray
- a function to deskew the data
* it deskews the data and displays the results
* it saves the original and deskewed data plus labels as csv files
End of explanation
SZ = 28 # images are SZ x SZ grayscale
affine_flags = cv2.WARP_INVERSE_MAP|cv2.INTER_LINEAR
def deskew(img):
m = cv2.moments(img)
if abs(m['mu02']) < 1e-2:
return img.copy()
skew = m['mu11']/m['mu02']
M = np.float32([[1, skew, -0.5*SZ*skew], [0, 1, 0]])
img = cv2.warpAffine(img,M,(SZ, SZ),flags=affine_flags)
return img
Explanation: OpenCV deskew function
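As a rough sanity check of the idea (a sketch with an assumed synthetic image, not part of the original script): the skew is estimated from the moment ratio mu11/mu02, which is essentially zero for a symmetric blob, so deskewing should leave such an image almost unchanged.
```python
blob = np.zeros((SZ, SZ), dtype=np.float32)
blob[10:18, 10:18] = 1.0                  # symmetric square
m = cv2.moments(blob)
print(m['mu11'] / m['mu02'])              # ~0.0, i.e. no skew detected
print(np.abs(deskew(blob) - blob).max())  # expected to be very small
```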
End of explanation
import os, struct
from array import array as pyarray
from numpy import append, array, int8, uint8, zeros
def load_mnist(dataset="training", digits=None, path=None, asbytes=False, selection=None, return_labels=True, return_indices=False):
Loads MNIST files into a 3D numpy array.
You have to download the data separately from [MNIST]_. It is recommended
to set the environment variable ``MNIST`` to point to the folder where you
put the data, so that you don't have to select path. On a Linux+bash setup,
this is done by adding the following to your ``.bashrc``::
export MNIST=/path/to/mnist
Parameters
----------
dataset : str
Either "training" or "testing", depending on which dataset you want to
load.
digits : list
Integer list of digits to load. The entire database is loaded if set to
``None``. Default is ``None``.
path : str
Path to your MNIST datafiles. The default is ``None``, which will try
to take the path from your environment variable ``MNIST``. The data can
be downloaded from http://yann.lecun.com/exdb/mnist/.
asbytes : bool
If True, returns data as ``numpy.uint8`` in [0, 255] as opposed to
``numpy.float64`` in [0.0, 1.0].
selection : slice
Using a `slice` object, specify what subset of the dataset to load. An
example is ``slice(0, 20, 2)``, which would load every other digit
until--but not including--the twentieth.
return_labels : bool
Specify whether or not labels should be returned. This is also a speed
performance if digits are not specified, since then the labels file
does not need to be read at all.
return_indices : bool
Specify whether or not to return the MNIST indices that were fetched.
This is valuable only if digits is specified, because in that case it
can be valuable to know how far
in the database it reached.
Returns
-------
images : ndarray
Image data of shape ``(N, rows, cols)``, where ``N`` is the number of images. If neither labels nor indices are returned, then this is returned directly, and not inside a 1-sized tuple.
labels : ndarray
Array of size ``N`` describing the labels. Returned only if ``return_labels`` is `True`, which is default.
indices : ndarray
The indices in the database that were returned.
Examples
--------
Assuming that you have downloaded the MNIST database and set the
environment variable ``$MNIST`` point to the folder, this will load all
images and labels from the training set:
>>> images, labels = ag.io.load_mnist('training') # doctest: +SKIP
Load 100 sevens from the testing set:
>>> sevens = ag.io.load_mnist('testing', digits=[7], selection=slice(0, 100), return_labels=False) # doctest: +SKIP
# The files are assumed to have these names and should be found in 'path'
files = {
'training': ('train-images.idx3-ubyte', 'train-labels.idx1-ubyte'),
'testing': ('t10k-images.idx3-ubyte', 't10k-labels.idx1-ubyte'),
}
if path is None:
try:
path = os.environ['MNIST']
except KeyError:
raise ValueError("Unspecified path requires environment variable $MNIST to be set")
try:
images_fname = os.path.join(path, files[dataset][0])
labels_fname = os.path.join(path, files[dataset][1])
except KeyError:
raise ValueError("Data set must be 'testing' or 'training'")
# We can skip the labels file only if digits aren't specified and labels aren't asked for
if return_labels or digits is not None:
flbl = open(labels_fname, 'rb')
magic_nr, size = struct.unpack(">II", flbl.read(8))
labels_raw = pyarray("b", flbl.read())
flbl.close()
fimg = open(images_fname, 'rb')
magic_nr, size, rows, cols = struct.unpack(">IIII", fimg.read(16))
images_raw = pyarray("B", fimg.read())
fimg.close()
if digits:
indices = [k for k in range(size) if labels_raw[k] in digits]
else:
indices = range(size)
if selection:
indices = indices[selection]
N = len(indices)
images = zeros((N, rows, cols), dtype=uint8)
if return_labels:
labels = zeros((N), dtype=int8)
for i, index in enumerate(indices):
images[i] = array(images_raw[ indices[i]*rows*cols : (indices[i]+1)*rows*cols ]).reshape((rows, cols))
if return_labels:
labels[i] = labels_raw[indices[i]]
if not asbytes:
images = images.astype(float)/255.0
ret = (images,)
if return_labels:
ret += (labels,)
if return_indices:
ret += (indices,)
if len(ret) == 1:
return ret[0] # Don't return a tuple of one
else:
return ret
Explanation: Read MNIST binary-file data and convert to numpy.ndarray
thanks to http://g.sweyla.com/blog/2012/mnist-numpy/
End of explanation
images, labels = load_mnist('training', path="/home/george/Dropbox/MNIST/data")
Explanation: Training Data
Read in the training images and labels
End of explanation
images_deskewed = np.empty(np.shape(images))
i = 0
for img in images:
images_deskewed[i] = deskew(np.reshape(img,(28,28)))
i+=1
Explanation: Deskew training data
End of explanation
fig, axs = plt.subplots(5,6, figsize=(28,28), facecolor='w', edgecolor='k')
fig.subplots_adjust(hspace = 0.0001, wspace=.001)
axs = axs.ravel()
for i in range(0,30,2):
# convert from 1 x 784 to 28 x 28
img = np.reshape(images[i,:],(28,28))
img_dsk = np.reshape(images_deskewed[i,:],(28,28))
axs[i].imshow(img,cmap=plt.cm.gray_r, interpolation='nearest')
axs[i].set_title(labels[i],fontsize=32)
axs[i+1].imshow(img_dsk,cmap=plt.cm.gray_r, interpolation='nearest')
axs[i+1].set_title('deskewed',fontsize=32)
plt.show()
Explanation: Show a sampling of training images before and after deskewing
End of explanation
import csv
with open('data/train-images.csv', 'wb') as f:
csv.writer(f).writerows([img.flatten() for img in images])
with open('data/train-labels.csv', 'wb') as f:
csv.writer(f).writerows([label.flatten() for label in labels])
with open('data/train-images_deskewed.csv', 'wb') as f:
csv.writer(f).writerows([img.flatten() for img in images_deskewed])
Explanation: Save training data to csv
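If you need to read these files back later, the flattened rows can be loaded directly into arrays, e.g. (a sketch assuming the files were written exactly as above):
```python
train_images = np.loadtxt('data/train-images.csv', delimiter=',')
train_labels = np.loadtxt('data/train-labels.csv', delimiter=',')
print(train_images.shape)  # (60000, 784) for the full MNIST training set
```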
End of explanation
images = None
labels = None
images_deskewed = None
Explanation: Test Data
End of explanation
images, labels = load_mnist('testing', path="/home/george/Dropbox/MNIST/data")
Explanation: Read in the test images and labels
End of explanation
images_deskewed = np.empty(np.shape(images))
i = 0
for img in images:
images_deskewed[i] = deskew(np.reshape(img,(28,28)))
i+=1
Explanation: Deskew test data
End of explanation
fig, axs = plt.subplots(5,6, figsize=(28,28), facecolor='w', edgecolor='k')
fig.subplots_adjust(hspace = 0.0001, wspace=.001)
axs = axs.ravel()
for i in range(0,30,2):
# convert from 1 x 784 to 28 x 28
img = np.reshape(images[i,:],(28,28))
img_dsk = np.reshape(images_deskewed[i,:],(28,28))
axs[i].imshow(img,cmap=plt.cm.gray_r, interpolation='nearest')
axs[i].set_title(labels[i],fontsize=32)
axs[i+1].imshow(img_dsk,cmap=plt.cm.gray_r, interpolation='nearest')
axs[i+1].set_title('deskewed',fontsize=32)
plt.show()
Explanation: Show a sampling of test images before and after deskewing
End of explanation
import csv
with open('data/t10k-images.csv', 'wb') as f:
csv.writer(f).writerows([img.flatten() for img in images])
with open('data/t10k-labels.csv', 'wb') as f:
csv.writer(f).writerows([label.flatten() for label in labels])
with open('data/t10k-images_deskewed.csv', 'wb') as f:
csv.writer(f).writerows([img.flatten() for img in images_deskewed])
Explanation: Save test data to csv
End of explanation |
5,451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sequential Domain Reduction
Background
Sequential domain reduction is a process where the bounds of the optimization problem are mutated (typically contracted) to reduce the time required to converge to an optimal value. The advantage of this method is typically seen when a cost function is particularly expensive to calculate, or if the optimization routine oscillates heavily.
Basics
The basic steps are a pan and a zoom. These two steps are applied at the same time, therefore updating the problem search space every iteration.
Pan
Step1: Now let's create an example cost function. This is the Ackley function, which is quite non-linear.
Step2: We will use the standard bounds for this problem.
Step3: This is where we define our bound_transformer, the Sequential Domain Reduction Transformer
Step4: Now we can set up two identical optimization problems, except one has the bound_transformer variable set.
Step5: After both have completed we can plot to see how the objectives performed. It's quite obvious to see that the Sequential Domain Reduction technique contracted onto the optimal point relatively quickly.
Step6: Now let's plot the actual contraction of one of the variables (x) | Python Code:
import numpy as np
from bayes_opt import BayesianOptimization
from bayes_opt import SequentialDomainReductionTransformer
import matplotlib.pyplot as plt
Explanation: Sequential Domain Reduction
Background
Sequential domain reduction is a process where the bounds of the optimization problem are mutated (typically contracted) to reduce the time required to converge to an optimal value. The advantage of this method is typically seen when a cost function is particularly expensive to calculate, or if the optimization routine oscillates heavily.
Basics
The basic steps are a pan and a zoom. These two steps are applied at the same time, therefore updating the problem search space every iteration.
Pan: recentering the region of interest around the current optimal point.
Zoom: contract the region of interest.
Parameters
There are three parameters for the built-in SequentialDomainReductionTransformer object:
$\gamma_{osc}:$ shrinkage parameter for oscillation. Typically [0.5-0.7]. Default = 0.7
$\gamma_{pan}:$ panning parameter. Typically 1.0. Default = 1.0
$\eta:$ zoom parameter. Default = 0.9
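If you want to override the defaults, these can be passed to the transformer's constructor; the parameter names below assume the current bayes_opt release, so check your installed version:
```python
custom_bounds_transformer = SequentialDomainReductionTransformer(
    gamma_osc=0.7,  # shrinkage parameter for oscillation
    gamma_pan=1.0,  # panning parameter
    eta=0.9,        # zoom parameter
)
```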
More information can be found in this reference document:
Title: "On the robustness of a simple domain reduction scheme for simulation‐based optimization"
Date: 2002
Author: Stander, N. and Craig, K.
---
Let's start by importing the packages we'll be needing
End of explanation
def ackley(**kwargs):
x = np.fromiter(kwargs.values(), dtype=float)
arg1 = -0.2 * np.sqrt(0.5 * (x[0] ** 2 + x[1] ** 2))
arg2 = 0.5 * (np.cos(2. * np.pi * x[0]) + np.cos(2. * np.pi * x[1]))
return -1.0 * (-20. * np.exp(arg1) - np.exp(arg2) + 20. + np.e)
Explanation: Now let's create an example cost function. This is the Ackley function, which is quite non-linear.
End of explanation
pbounds = {'x': (-5, 5), 'y': (-5, 5)}
Explanation: We will use the standard bounds for this problem.
End of explanation
bounds_transformer = SequentialDomainReductionTransformer()
Explanation: This is where we define our bound_transformer, the Sequential Domain Reduction Transformer
End of explanation
mutating_optimizer = BayesianOptimization(
f=ackley,
pbounds=pbounds,
verbose=0,
random_state=1,
bounds_transformer=bounds_transformer
)
mutating_optimizer.maximize(
init_points=2,
n_iter=50,
)
standard_optimizer = BayesianOptimization(
f=ackley,
pbounds=pbounds,
verbose=0,
random_state=1,
)
standard_optimizer.maximize(
init_points=2,
n_iter=50,
)
Explanation: Now we can set up two identical optimization problems, except one has the bound_transformer variable set.
End of explanation
plt.plot(mutating_optimizer.space.target, label='Mutated Optimizer')
plt.plot(standard_optimizer.space.target, label='Standard Optimizer')
plt.legend()
Explanation: After both have completed we can plot to see how the objectives performed. It's quite obvious to see that the Sequential Domain Reduction technique contracted onto the optimal point relatively quickly.
End of explanation
# example x-bound shrinking
x_min_bound = [b[0][0] for b in bounds_transformer.bounds]
x_max_bound = [b[0][1] for b in bounds_transformer.bounds]
x = [x[0] for x in mutating_optimizer.space.params]
plt.plot(x_min_bound[1:], label='x lower bound')
plt.plot(x_max_bound[1:], label='x upper bound')
plt.plot(x[1:], label='x')
plt.legend()
Explanation: Now let's plot the actual contraction of one of the variables (x)
End of explanation |
5,452 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Impact of Scattered Moonlight on Exposure Times
Step1: Simulation Config
Step2: ELG Fiducial Target
Look up the expected redshift distribution of ELG targets. Note that the ELG doublet falls off the spectrograph around z = 1.63.
Step3: Generate a random sample of redshifts from this distribution
Step4: Define a canonical set of 50 ELG redshifts to use for SNR calculations
Step5: Generate a rest-frame doublet spectrum using the specified parameter values
Step6: The default parameter values define the fiducial ELG target. Add zero padding so we cover the full spectrograph at any redshift.
Step7: Configure the simulator to use this rest-frame spectrum
Step8: Simulate the nominal ELG target given a redshift (or list of redshifts) and observing conditions, and calculate the total [OII] SNR
Step9: Calculate the nominal (dark-time, airmass 1) SNR for each of the reference redshifts
Step10: Calculate the SNR for specified observing conditions and use these to calculate the exposure time correction assuming that
Step11: Calculate correction factors for a range of moon conditions, and verify that only the V-band magnitude is needed
Step12: Build a self-contained function that calculates the exposure time correction given input moon parameters
Step13: Study the scaling of SNR with exposure time
Step14: Check that the scaling violations are due to read noise by creating an alternative simulator with read noises set to zero.
Step15: Study the scaling with airmass | Python Code:
%pylab inline
import os
import os.path
import astropy.table
import astropy.constants
import astropy.units as u
import sklearn.linear_model
Explanation: Impact of Scattered Moonlight on Exposure Times
End of explanation
import specsim.simulator
desi = specsim.simulator.Simulator('desi')
Explanation: Simulation Config
End of explanation
def get_elg_nz(plot=True):
# Read the nz file from $DESIMODEL.
full_name = os.path.join(os.environ['DESIMODEL'], 'data', 'targets', 'nz_elg.dat')
table = astropy.table.Table.read(full_name, format='ascii')
# Extract the n(z) histogram into numpy arrays.
z_lo, z_hi = table['col1'], table['col2']
assert np.all(z_hi[:-1] == z_lo[1:])
z_edge = np.hstack((z_lo, [z_hi[-1]]))
nz = table['col3']
# Trim to bins where n(z) > 0.
non_zero = np.where(nz > 0)[0]
lo, hi = non_zero[0], non_zero[-1] + 1
nz = nz[lo: hi]
z_edge = z_edge[lo: hi + 1]
if plot:
plt.hist(0.5 * (z_edge[1:] + z_edge[:-1]), weights=nz, bins=z_edge, histtype='stepfilled')
plt.xlabel('ELG redshift')
plt.ylabel('Target density [/sq.deg./dz]')
plt.xlim(z_edge[0], z_edge[-1])
return nz, z_edge
nz, z_edge = get_elg_nz()
Explanation: ELG Fiducial Target
Look up the expected redshift distribution of ELG targets. Note that the ELG doublet falls off the spectrograph around z = 1.63.
End of explanation
def generate_elg_z(n=100, seed=123):
cdf = np.cumsum(nz)
cdf = np.hstack(([0], cdf / cdf[-1]))
gen = np.random.RandomState(seed)
return np.interp(gen.rand(n), cdf, z_edge)
plt.hist(generate_elg_z(n=20000), bins=z_edge, histtype='stepfilled')
plt.xlim(z_edge[0], z_edge[-1]);
Explanation: Generate a random sample of redshifts from this distribution:
End of explanation
z_elg_ref = generate_elg_z(n=50, seed=123)
Explanation: Define a canonical set of 50 ELG redshifts to use for SNR calculations:
End of explanation
CLIGHT_KM_S = astropy.constants.c.to(u.km / u.s).value
def oii_doublet(wlen, total_flux=8e-17, peak1_wlen=3727.092, peak2_wlen=3729.874, flux_ratio12=0.73, sigma_v=75.):
log10 = np.log(10)
sigma_log10_wlen = sigma_v / CLIGHT_KM_S / log10
log10_wlen = np.log10(wlen)
flux1 = total_flux * flux_ratio12 / (1 + flux_ratio12)
flux2 = total_flux / (1 + flux_ratio12)
denom = np.sqrt(2 * np.pi) * sigma_log10_wlen
amp1 = flux1 / peak1_wlen / log10 / denom
amp2 = flux2 / peak2_wlen / log10 / denom
flux = (
amp1 * np.exp(-0.5 * ((log10_wlen - np.log10(peak1_wlen)) / sigma_log10_wlen)**2) +
amp2 * np.exp(-0.5 * ((log10_wlen - np.log10(peak2_wlen)) / sigma_log10_wlen)**2))
return flux
Explanation: Generate a rest-frame doublet spectrum using the specified parameter values:
End of explanation
elg_wlen0 = np.hstack(([1000], np.linspace(3723., 3734., 50), [11000]))
elg_flux0 = oii_doublet(elg_wlen0)
def plot_fiducial_elg():
plt.plot(elg_wlen0, 1e17 * elg_flux0, 'r.-')
plt.xlim(elg_wlen0[1], elg_wlen0[-2])
plt.xlabel('Rest-frame wavelength [A]')
plt.ylabel('Rest-frame flux [1e17 erg/(s cm2 A)]')
plot_fiducial_elg()
Explanation: The default parameter values define the fiducial ELG target. Add zero padding so we cover the full spectrograph at any redshift.
End of explanation
desi.source.update_in(
'ELG [OII] doublet', 'elg',
elg_wlen0 * u.Angstrom, elg_flux0 * u.erg/(u.cm**2 * u.s * u.Angstrom), z_in=0.)
Explanation: Configure the simulator to use this rest-frame spectrum:
End of explanation
def simulate_elg(z, exposure_time=1000*u.s, airmass=1.0, simulator=desi,
moon_phase=0.25, moon_zenith=100*u.deg, moon_separation=60*u.deg):
# Configure the simulation.
simulator.observation.exposure_time = exposure_time
simulator.atmosphere.airmass = airmass
simulator.atmosphere.moon.moon_phase = moon_phase
simulator.atmosphere.moon.moon_zenith = moon_zenith
simulator.atmosphere.moon.separation_angle = moon_separation
# z can either be a scalar or an array.
z = np.asarray(z)
if z.shape == ():
is_scalar = True
z = [z]
else:
is_scalar = False
snr_sum_sq = np.zeros_like(z)
for i, z_elg in enumerate(z):
simulator.source.update_out(z_out=z_elg)
simulator.simulate()
# Integrate the SNR over all three cameras.
for camera in simulator.camera_output:
snr = camera['observed_flux'] * np.sqrt(camera['flux_inverse_variance'])
snr_sum_sq[i] += np.sum(snr ** 2)
snr_tot = np.sqrt(snr_sum_sq)
return snr_tot[0] if is_scalar else snr_tot
Explanation: Simulate the nominal ELG target given a redshift (or list of redshifts) and observing conditions, and calculate the total [OII] SNR:
End of explanation
snr_elg_ref = simulate_elg(z_elg_ref)
Explanation: Calculate the nominal (dark-time, airmass 1) SNR for each of the reference redshifts:
End of explanation
def get_elg_exposure_factor(**kwargs):
plot = kwargs.get('plot', None)
if plot:
del kwargs['plot']
snr = simulate_elg(z_elg_ref, exposure_time=1000*u.s, **kwargs)
moon_V = desi.atmosphere.moon.scattered_V
ratio = np.sqrt(snr_elg_ref / snr)
factor = np.median(ratio)
if plot:
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
lo, hi = np.min(snr), np.max(snr_elg_ref)
pad = 0.05 * (hi - lo)
lo -= pad
hi += pad
s = ax[0].scatter(snr_elg_ref, snr, c=z_elg_ref, lw=0, s=50)
plt.colorbar(s, ax=ax[0]).set_label('ELG redshift')
ax[0].plot([lo, hi], [lo, hi], 'r-', lw=2)
ax[0].plot([lo, hi], [lo / factor ** 2, hi / factor ** 2], 'r-', lw=2, ls='--')
ax[0].set_xlim(lo, hi)
ax[0].set_ylim(lo, hi)
ax[0].set_xlabel('Dark ELG SNR')
ax[0].set_ylabel('Moon ELG SNR')
ax[1].hist(ratio, range=(1, 2), bins=10, histtype='step')
ax[1].axvline(factor, ls='--', color='r', lw=2)
ax[1].set_xlabel('Exposure time correction $\sqrt{\\nu_{dark}/\\nu_{moon}}$')
ax[1].set_xlim(1, 2)
return factor, moon_V
%time get_elg_exposure_factor(moon_zenith=20*u.deg, moon_separation=60*u.deg, plot=True)
Explanation: Calculate the SNR for specified observing conditions and use these to calculate the exposure time correction assuming that:
$$
t_{moon} / t_{dark} = \sqrt{\nu_{dark}/\nu_{moon}} \; .
$$
Also returns the scattered moon V-band magnitude.
End of explanation
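# Quick numerical check of the scaling above (added sketch, not from the original notebook):
# if moonlight lowers the integrated [OII] SNR from 7 to 5 at fixed exposure time,
# the implied exposure-time correction is sqrt(7/5), i.e. about 1.18.
snr_dark_example, snr_moon_example = 7.0, 5.0
print(np.sqrt(snr_dark_example / snr_moon_example))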
def run_moon_study(n=500, seed=123):
gen = np.random.RandomState(seed)
# Generate random zenith moon parameters, with a bias towards
# lots of scattered moonlight.
moon_separation = gen.uniform(0, 60, n) * u.deg
moon_phase = gen.uniform(0, 0.8, n) # moon phase is a dimensionless fraction, so no angular unit here
factor = np.empty(n)
moon_V = np.empty(n)
for i, (s, p) in enumerate(zip(moon_separation, moon_phase)):
factor[i], V = get_elg_exposure_factor(moon_zenith=10*u.deg, moon_separation=s, moon_phase=p)
moon_V[i] = V.value
if i % 50 == 0:
print(i, (s, p), factor[i], moon_V[i])
return moon_V, factor
%time moon_V, factor = run_moon_study()
np.save('moon_study.npy', np.vstack((moon_V, factor)))
def analyze_moon_study():
model = sklearn.linear_model.LinearRegression(fit_intercept=True)
getX = lambda x: np.vstack((np.exp(-x),1/x, 1/x**2, 1/x**3)).transpose()
model.fit(getX(moon_V), factor)
print(model.intercept_, list(model.coef_))
x_model = np.linspace(16.5, 24, 50)
y_model = model.predict(getX(x_model))
plt.plot(moon_V, factor, '+')
plt.plot(x_model, y_model, '-')
plt.xlabel('Scattered moon V-band magnitude')
plt.ylabel('Exposure time correction')
plt.xlim(x_model[0], x_model[-1])
plt.ylim(0.95, 2.25)
analyze_moon_study()
Explanation: Calculate correction factors for a range of moon conditions, and verify that only the V-band magnitude is needed:
End of explanation
def get_moon_correction(moon_phase=0.25, moon_zenith=100*u.deg, moon_separation=60*u.deg):
desi.atmosphere.moon.moon_phase = moon_phase
desi.atmosphere.moon.moon_zenith = moon_zenith
desi.atmosphere.moon.separation_angle = moon_separation
x = desi.atmosphere.moon.scattered_V.value # fix garbled attribute chain; use the Quantity's magnitude
print(x)
# intercept and coefs are assumed to hold the fitted values from analyze_moon_study()
# (model.intercept_ and model.coef_); the last term completes the 1/x**3 basis used in getX.
return intercept + coefs[0] * np.exp(-x) + coefs[1] / x + coefs[2] / x ** 2 + coefs[3] / x ** 3
Explanation: Build a self-contained function that calculates the exposure time correction given input moon parameters:
End of explanation
def snr_vs_texp(iscale=5, simulator=desi, save=None):
zvec = (0.6, 1.0, 1.5)
tvec = (600, 800, 1000, 1200, 1600, 2000, 2500, 3000, 3500)
# Calculate sqrt(t) scaling relative to tvec[iscale]
tscale = np.linspace(600, 3500, 100)
sfac = np.sqrt(tscale / tvec[iscale])
snrtot = np.empty((len(zvec), len(tvec)))
for i, z in enumerate(zvec):
for j, t in enumerate(tvec):
snrtot[i, j] = simulate_elg(z, exposure_time=t * u.s, simulator=simulator)
plt.plot(tvec, snrtot[i], '-o', label='z={:.1f}'.format(z))
# Use sqrt(t) scaling of the 1000s point.
plt.plot(tscale, snrtot[i, iscale] * sfac, 'k--')
plt.legend(loc='upper left')
plt.xlim(tvec[0], tvec[-1])
plt.ylim(np.min(snrtot), np.max(snrtot))
plt.grid()
plt.axhline(7, lw=4, alpha=0.2, color='k')
plt.xlabel('Exposure time [s]')
plt.ylabel('Total ELG [OII] SNR')
if save:
plt.savefig(save)
snr_vs_texp(save='texp_scaling.pdf')
Explanation: Study the scaling of SNR with exposure time:
End of explanation
import specsim.config
desi_alt_config = specsim.config.load_config('desi')
desi_alt_config.instrument.cameras.b.constants.read_noise = '0 electron/pixel2'
desi_alt_config.instrument.cameras.r.constants.read_noise = '0 electron/pixel2'
desi_alt_config.instrument.cameras.z.constants.read_noise = '0 electron/pixel2'
desi_alt = specsim.simulator.Simulator(desi_alt_config)
desi_alt.source.update_in(
'ELG [OII] doublet', 'elg',
elg_wlen0 * u.Angstrom, elg_flux0 * u.erg/(u.cm**2 * u.s * u.Angstrom), z_in=0.)
snr_vs_texp(simulator=desi_alt, save='texp_scaling_alt.pdf')
Explanation: Check that the scaling violations are due to read noise by creating an alternative simulator with read noises set to zero.
End of explanation
def snr_vs_airmass(iscale=2):
zvec = (0.6, 1.0, 1.5)
Xvec = np.linspace(1., 1.5, 6)
# Calculate X**-2.5 scaling relative to Xvec[iscale]
Xscale = np.linspace(1.,1.5, 100)
sfac = (Xscale / Xvec[iscale]) ** -2.5
snrtot = np.empty((len(zvec), len(Xvec)))
for i, z in enumerate(zvec):
for j, X in enumerate(Xvec):
snrtot[i, j] = simulate_elg(z, airmass=X)
plt.plot(Xvec, snrtot[i], '-o', label='z={:.1f}'.format(z))
plt.plot(Xscale, snrtot[i, iscale] * sfac, 'k--')
#plt.xscale('log')
#plt.yscale('log')
plt.legend(loc='upper left')
plt.xlim(Xvec[0], Xvec[-1])
plt.ylim(np.min(snrtot), np.max(snrtot))
plt.grid()
plt.axhline(7, lw=4, alpha=0.2, color='k')
plt.xlabel('Airmass')
plt.ylabel('Total ELG [OII] SNR')
snr_vs_airmass()
Explanation: Study the scaling with airmass:
End of explanation |
5,453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-label prediction with Planet Amazon dataset
Step1: Getting the data
The planet dataset isn't available on the fastai dataset page due to copyright restrictions. You can download it from Kaggle however. Let's see how to do this by using the Kaggle API as it's going to be pretty useful to you if you want to join a competition or use other Kaggle datasets later on.
First, install the Kaggle API by uncommenting the following line and executing it, or by executing it in your terminal (depending on your platform you may need to modify this slightly to either add source activate fastai or similar, or prefix pip with a path. Have a look at how conda install is called for your platform in the appropriate Returning to work section of https
Step2: Then you need to upload your credentials from Kaggle on your instance. Login to kaggle and click on your profile picture on the top left corner, then 'My account'. Scroll down until you find a button named 'Create New API Token' and click on it. This will trigger the download of a file named 'kaggle.json'.
Upload this file to the directory this notebook is running in, by clicking "Upload" on your main Jupyter page, then uncomment and execute the next two commands (or run them in a terminal). For Windows, uncomment the last two commands.
Step3: You're all set to download the data from planet competition. You first need to go to its main page and accept its rules, and run the two cells below (uncomment the shell commands to download and unzip the data). If you get a 403 forbidden error it means you haven't accepted the competition rules yet (you have to go to the competition page, click on Rules tab, and then scroll to the bottom to find the accept button).
Step4: To extract the content of this file, we'll need 7zip, so uncomment the following line if you need to install it (or run sudo apt install p7zip-full in your terminal).
Step5: And now we can unpack the data (uncomment to run - this might take a few minutes to complete).
Step6: Multiclassification
Contrary to the pets dataset studied in last lesson, here each picture can have multiple labels. If we take a look at the csv file containing the labels (in 'train_v2.csv' here) we see that each 'image_name' is associated to several tags separated by spaces.
Step7: To put this in a DataBunch while using the data block API, we then need to use ImageList (and not ImageDataBunch). This will make sure the model created has the proper loss function to deal with the multiple classes.
Step8: We use parentheses around the data block pipeline below, so that we can use a multiline statement without needing to add '\'.
Step9: show_batch still works, and shows us the different labels separated by ;.
Step10: To create a Learner we use the same function as in lesson 1. Our base architecture is resnet50 again, but the metrics are a little bit different
Step11: We use the LR Finder to pick a good learning rate.
Step12: Then we can fit the head of our network.
Step13: ...And fine-tune the whole model
Step14: You won't really know how you're doing until you submit to Kaggle, since the leaderboard isn't using the same subset as we have for training. But as a guide, 50th place (out of 938 teams) on the private leaderboard was a score of 0.930.
Step15: fin
(This section will be covered in part 2 - please don't ask about it just yet! | Python Code:
%reload_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.vision import *
Explanation: Multi-label prediction with Planet Amazon dataset
End of explanation
# ! {sys.executable} -m pip install kaggle --upgrade
Explanation: Getting the data
The planet dataset isn't available on the fastai dataset page due to copyright restrictions. You can download it from Kaggle however. Let's see how to do this by using the Kaggle API as it's going to be pretty useful to you if you want to join a competition or use other Kaggle datasets later on.
First, install the Kaggle API by uncommenting the following line and executing it, or by executing it in your terminal (depending on your platform you may need to modify this slightly to either add source activate fastai or similar, or prefix pip with a path. Have a look at how conda install is called for your platform in the appropriate Returning to work section of https://course.fast.ai/. (Depending on your environment, you may also need to append "--user" to the command.)
End of explanation
# ! mkdir -p ~/.kaggle/
# ! mv kaggle.json ~/.kaggle/
# For Windows, uncomment these two commands
# ! mkdir %userprofile%\.kaggle
# ! move kaggle.json %userprofile%\.kaggle
Explanation: Then you need to upload your credentials from Kaggle on your instance. Login to kaggle and click on your profile picture on the top left corner, then 'My account'. Scroll down until you find a button named 'Create New API Token' and click on it. This will trigger the download of a file named 'kaggle.json'.
Upload this file to the directory this notebook is running in, by clicking "Upload" on your main Jupyter page, then uncomment and execute the next two commands (or run them in a terminal). For Windows, uncomment the last two commands.
End of explanation
path = Config.data_path()/'planet'
path.mkdir(parents=True, exist_ok=True)
path
# ! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p {path}
# ! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train_v2.csv -p {path}
# ! unzip -q -n {path}/train_v2.csv.zip -d {path}
Explanation: You're all set to download the data from planet competition. You first need to go to its main page and accept its rules, and run the two cells below (uncomment the shell commands to download and unzip the data). If you get a 403 forbidden error it means you haven't accepted the competition rules yet (you have to go to the competition page, click on Rules tab, and then scroll to the bottom to find the accept button).
End of explanation
# ! conda install --yes --prefix {sys.prefix} -c haasad eidl7zip
Explanation: To extract the content of this file, we'll need 7zip, so uncomment the following line if you need to install it (or run sudo apt install p7zip-full in your terminal).
End of explanation
# ! 7za -bd -y -so x {path}/train-jpg.tar.7z | tar xf - -C {path.as_posix()}
Explanation: And now we can unpack the data (uncomment to run - this might take a few minutes to complete).
End of explanation
df = pd.read_csv(path/'train_v2.csv')
df.head()
Explanation: Multiclassification
Contrary to the pets dataset studied in last lesson, here each picture can have multiple labels. If we take a look at the csv file containing the labels (in 'train_v2.csv' here) we see that each 'image_name' is associated to several tags separated by spaces.
End of explanation
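# Small illustration (added sketch): the space-separated tags can be split into Python lists
# with pandas before building the DataBunch, e.g. to inspect how many labels each image has.
df['tags'].str.split().head()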
tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
Explanation: To put this in a DataBunch while using the data block API, we then need to use ImageList (and not ImageDataBunch). This will make sure the model created has the proper loss function to deal with the multiple classes.
End of explanation
np.random.seed(42)
src = (ImageList.from_csv(path, 'train_v2.csv', folder='train-jpg', suffix='.jpg')
.split_by_rand_pct(0.2)
.label_from_df(label_delim=' '))
data = (src.transform(tfms, size=128)
.databunch().normalize(imagenet_stats))
Explanation: We use parentheses around the data block pipeline below, so that we can use a multiline statement without needing to add '\'.
End of explanation
data.show_batch(rows=3, figsize=(12,9))
Explanation: show_batch still works, and shows us the different labels separated by ;.
End of explanation
arch = models.resnet50
acc_02 = partial(accuracy_thresh, thresh=0.2)
f_score = partial(fbeta, thresh=0.2)
learn = cnn_learner(data, arch, metrics=[acc_02, f_score])
Explanation: To create a Learner we use the same function as in lesson 1. Our base architecture is resnet50 again, but the metrics are a little bit different: we use accuracy_thresh instead of accuracy. In lesson 1, we determined the prediction for a given class by picking the final activation that was the biggest, but here, each activation can be 0. or 1. accuracy_thresh selects the ones that are above a certain threshold (0.5 by default) and compares them to the ground truth.
As for Fbeta, it's the metric that was used by Kaggle on this competition. See here for more details.
End of explanation
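# Tiny illustration of thresholded multi-label accuracy (added sketch, not the fastai API itself):
# with thresh=0.2, every activation above 0.2 counts as a predicted tag.
import numpy as np
example_probs = np.array([0.9, 0.35, 0.1])
example_truth = np.array([1, 0, 1])
print(((example_probs > 0.2).astype(int) == example_truth).mean()) # only the first label matches -> ~0.33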
learn.lr_find()
learn.recorder.plot()
Explanation: We use the LR Finder to pick a good learning rate.
End of explanation
lr = 0.01
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-rn50')
Explanation: Then we can fit the head of our network.
End of explanation
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.save('stage-2-rn50')
data = (src.transform(tfms, size=256)
.databunch().normalize(imagenet_stats))
learn.data = data
data.train_ds[0][0].shape
learn.freeze()
learn.lr_find()
learn.recorder.plot()
lr=1e-2/2
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-256-rn50')
learn.unfreeze()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.recorder.plot_losses()
learn.save('stage-2-256-rn50')
Explanation: ...And fine-tune the whole model:
End of explanation
learn.export()
Explanation: You won't really know how you're doing until you submit to Kaggle, since the leaderboard isn't using the same subset as we have for training. But as a guide, 50th place (out of 938 teams) on the private leaderboard was a score of 0.930.
End of explanation
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg.tar.7z -p {path}
#! 7za -bd -y -so x {path}/test-jpg.tar.7z | tar xf - -C {path}
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg-additional.tar.7z -p {path}
#! 7za -bd -y -so x {path}/test-jpg-additional.tar.7z | tar xf - -C {path}
test = ImageList.from_folder(path/'test-jpg').add(ImageList.from_folder(path/'test-jpg-additional'))
len(test)
learn = load_learner(path, test=test)
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
thresh = 0.2
labelled_preds = [' '.join([learn.data.classes[i] for i,p in enumerate(pred) if p > thresh]) for pred in preds]
labelled_preds[:5]
fnames = [f.name[:-4] for f in learn.data.test_ds.items]
df = pd.DataFrame({'image_name':fnames, 'tags':labelled_preds}, columns=['image_name', 'tags'])
df.to_csv(path/'submission.csv', index=False)
! kaggle competitions submit planet-understanding-the-amazon-from-space -f {path/'submission.csv'} -m "My submission"
Explanation: fin
(This section will be covered in part 2 - please don't ask about it just yet! :) )
End of explanation |
5,454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaia
Real data!
gully
Sept 14, 2016
Outline
Step1: 1. Retrieve existing catalogs
Retrieve Data file from here
Step2: 2. Read in the Gaia data
Step3: This takes a finite amount of RAM, but should be fine for modern laptops.
Step4: Match
Step5: Forced to match to nearest neighbor
Step6: ... yielding some redundancies in cross matching
Step7: 112 out of 862 have Gaia parallaxes... that seems high for some reason? | Python Code:
#! cat /Users/gully/.ipython/profile_default/startup/start.ipy
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import pandas as pd
from astropy import units as u
from astropy.coordinates import SkyCoord
Explanation: Gaia
Real data!
gully
Sept 14, 2016
Outline:
More exploring
Import these first-- I auto import them every time!:
End of explanation
d1 = pd.read_csv('../../ApJdataFrames/data/Luhman2012/tbl1_plusSimbad.csv') #local version
d1 = d1[~d1.RA.isnull()]
d1.columns
c1 = SkyCoord(d1.RA.values, d1.DEC.values, unit=(u.hourangle, u.deg), frame='icrs')
Explanation: 1. Retrieve existing catalogs
Retrieve Data file from here:
https://github.com/BrownDwarf/ApJdataFrames/blob/master/data/Luhman2012/tbl1_plusSimbad.csv
End of explanation
df_list = []
Explanation: 2. Read in the Gaia data
End of explanation
for i in range(16):
df_list.append(pd.read_csv('../data/TgasSource_000-000-{:03d}.csv'.format(i)))
tt = pd.concat(df_list, ignore_index=True)
plt.figure(figsize=(10,4))
ax = sns.jointplot(tt.ra, tt.dec, kind='hex', size=8)
ax.ax_joint.plot(c1.ra.deg, c1.dec.deg, '.', alpha=0.5)
cg = SkyCoord(tt.ra.values, tt.dec.values, unit=(u.deg, u.deg), frame='icrs')
Explanation: This takes a finite amount of RAM, but should be fine for modern laptops.
End of explanation
idx, d2d, blah = c1.match_to_catalog_sky(cg)
vec_units = d2d.to(u.arcsecond)
vec = vec_units.value
bins = np.arange(0, 4, 0.2)
sns.distplot(vec, bins=bins, kde=False)
plt.xlim(0,4)
plt.xlabel('match separation (arcsec)')
Explanation: Match
End of explanation
len(set(idx)), idx.shape[0]
Explanation: Forced to match to nearest neighbor
End of explanation
tt_sub = tt.iloc[idx]
tt_sub = tt_sub.reset_index()
tt_sub = tt_sub.drop('index', axis=1)
d1 = d1.reset_index()
d1 = d1.drop('index', axis=1)
x1 = pd.concat([d1, tt_sub], axis=1)
x1.shape
col_order = d1.columns.values.tolist() + tt_sub.columns.values.tolist()
x1 = x1[col_order]
x0 = x1.copy()
x0['xmatch_sep_as'] = vec
x0['Gaia_match'] = vec < 2.0 #Fairly liberal, 1.0 might be better.
plt.figure(figsize=(8,4))
bins = np.arange(2, 14, 0.2)
sns.distplot(x0.parallax[x0.Gaia_match], bins=bins)
#sns.distplot(1.0/(x0.parallax[x0.Gaia_match]/1000.0))
plt.xlabel('Parallax (mas)')
plt.savefig('../results/luhman_mamajek2012.png', dpi=300)
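# Added sketch: a rough distance estimate from the matched parallaxes, using the simple
# inversion d[pc] ~ 1000 / parallax[mas] (adequate here; a careful version would propagate errors).
distances_pc = 1000.0 / x0.parallax[x0.Gaia_match]
print(distances_pc.describe())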
x0.Gaia_match.sum(), len(d1)
Explanation: ... yielding some redundancies in cross matching
End of explanation
plt.figure(figsize=(10,4))
ax = sns.jointplot(tt.ra, tt.dec, kind='hex', size=8, xlim=(230,255), ylim=(-40,-10))
ax.ax_joint.plot(c1.ra.deg, c1.dec.deg, '.', alpha=0.5)
ax.ax_joint.scatter(x0.ra[x0.Gaia_match], x0.dec[x0.Gaia_match],
s=x0.parallax[x0.Gaia_match]**3*0.2, c='r',alpha=0.5)
Explanation: 112 out of 862 have Gaia parallaxes... that seems high for some reason?
End of explanation |
5,455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook shows you step by step how you can transform text data from a vmstat output file into a pandas DataFrame.
Step1: Data Input
In this version, I'll guide you through data parsing step by step.
Step2: Data Selection
Step3: Visualization | Python Code:
%less ../datasets/vmstat_loadtest.log
Explanation: This notebook shows you step by step how you can transform text data from a vmstat output file into a pandas DataFrame.
End of explanation
import pandas as pd
raw = pd.read_csv("../datasets/vmstat_loadtest.log", skiprows=1)
raw.head()
columns = raw.columns.str.split().values[0]
print(columns)
data = raw.iloc[:,0].str.split(n=len(columns)-1).apply(pd.Series)
data.head()
data.columns = columns
data.head()
vmstat = data.iloc[:,:-1].apply(pd.to_numeric)
vmstat['UTC'] = pd.to_datetime(data['UTC'])
vmstat.head()
Explanation: Data Input
In this version, I'll guide you through data parsing step by step.
End of explanation
cpu = vmstat[['us','sy','id','wa', 'st']]
cpu.head()
Explanation: Data Selection
End of explanation
%matplotlib inline
cpu.plot.area();
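# Optional refinement (added sketch): a rolling mean gives a smoother view of the CPU columns,
# assuming the vmstat samples are roughly evenly spaced in time.
cpu.rolling(window=10).mean().plot.area();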
Explanation: Visualization
End of explanation |
5,456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repetition Codes on the BSC, BEC, and BI-AWGN Channel
This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.
This code illustrates
* The decoding performance of repetition codes
* The performance of repetition codes in the spectral efficiency chart
Step1: Repetition Codes on the Hard BI-AWGN Channel / the BSC
We compare the performance of repetition codes of length $n$ ($n$ odd) over the Hard BI-AWGN channel, which, as we have seen, is essentially a BSC with error probability $\delta = Q(\frac{1}{\sigma_n}) = Q\left(\sqrt{2\frac{E_{\mathrm{s}}}{N_0}}\right) = \frac{1}{2}\mathrm{erfc}\left(\sqrt{\frac{E_{\mathrm{s}}}{N_0}}\right)$. This is also the uncoded error rate.
The decoding error probability is then given as
$$
P_e = 1 - \sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}\binom{n}{i}\delta^i(1-\delta)^{n-i}
$$
The bit error rate $P_b$ is equivalent to the decoding error rate $P_e$ as a single bit is mapped into a codeword.
Step2: Show performance on the spectral efficiency chart at a target BER of $10^{-6}$. For this, we need to solve the equation
\begin{equation}
P_e = 1 - \sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}\binom{n}{i}\delta^i(1-\delta)^{n-i}
\end{equation}
for $\delta$, which we can do numerically using the fsolve function.
Step3: Show the performance also in the (unusual) spectral efficiency chart of the BSC.
Step4: Performance on the Binary Erasure Channel (BEC)
Maximum likelihood decoding of repetition codes on the BEC can be carried out easily. As the BEC doesn't introduce any errors, as soon as a single non-erased position is obtained, we can infer the value of the bit. The error probability is hence the probability that all positions are erased, i.e.,
$$
P_e = \frac12\epsilon^n
$$
In the case where everything is erased, we can still make a guess and will make a decoding error in $\frac12$ of the cases.
Step5: Plot the performance of repetition codes on the binary erasure channel in the spectral efficiency chart. For a given target $P_e$, we can directly find the corresponding erasure probability as $\epsilon = \sqrt[n]{2P_e}$.
Step6: Performance on the Z-channel
Maximum likelihood decoding of repetition codes on the Z-channel can also be carried out easily. As shown in the tutorial, the error probability is the probability that the all-zero codeword is received, which is, assuming equiprobable codewords,
$$
P_e = \frac12q^n
$$
where $q$ is the probability that a transmitted 1 is flipped to a 0, and where the factor $\frac12$ stems from the fact that when the all-zero codeword is transmitted (with probability $\frac12$) we never make an error with ML decoding; hence we are only interested in the case where the all-1 codeword is transmitted.
Step7: Plot the spectral efficiency chart for the Z-channel. Assuming uniform input symbols $P(X=0)=P(X=1)=\frac12$, the mutual information $I(X;Y)$ for the Z-channel is given by
$$
I(X;Y) = h\left(\frac{1}{2}(1-q)\right) - \frac12h(q)
$$
with $h(x) := -x\log_2(x)-(1-x)\log_2(1-x)$ denoting the binary entropy function. | Python Code:
import numpy as np
from scipy.stats import norm
from scipy.special import comb
import matplotlib.pyplot as plt
Explanation: Repetition Codes on the BSC, BEC, and BI-AWGN Channel
This code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.
This code illustrates
* The decoding performance of repetition codes
* The performance of repetition codes in the spectral efficiency chart
End of explanation
esno_dB_range = np.linspace(-18,20,100)
esno_linear_range = 10**(esno_dB_range/10)
# get delta from Es/N0, note that the Q function is norm.sf
delta_range = norm.sf(np.sqrt(2*esno_linear_range))
# range of repetition codfe lengths
n_range = np.arange(3,20,2)
fig,ax = plt.subplots(1,2, figsize=(15,7))
colors = plt.cm.viridis(np.linspace(1,0,20))
legend = []
for n in n_range:
rate = 1/n
# compute Eb/N0 with the corresponding rate
ebno_dB_range = 10*np.log10(esno_linear_range / rate)
# get error rate of repetition code
Pe = [1 - np.sum([comb(n,i,exact=False)*(delta**i)*((1-delta)**(n-i)) for i in range(0,(n-1)//2+1)]) for delta in delta_range]
ax[0].semilogy(esno_dB_range, Pe, color = colors[n])
ax[1].semilogy(ebno_dB_range, Pe, color = colors[n])
legend.append('$n = %d$' % n)
ax[0].set_xlabel('$E_s/N_0$ (dB)', fontsize=16)
ax[1].set_xlabel('$E_b/N_0$ (dB)', fontsize=16)
for ai in range(2):
# for uncoded transmission, Eb = Es
ax[ai].semilogy(esno_dB_range, delta_range, 'k--')
ax[ai].set_ylabel('BER ($P_e$)', fontsize=16)
ax[ai].grid(True)
ax[ai].set_ylim((1e-9,1))
ax[ai].tick_params(labelsize=14)
legend.append('Uncoded')
ax[0].set_xlim((-8,13))
ax[1].set_xlim((0,13))
ax[1].legend(legend,fontsize=14);
Explanation: Repetition Codes on the Hard BI-AWGN Channel / the BSC
We compare the performance of repetition codes of length $n$ ($n$ odd) over the Hard BI-AWGN channel, which, as we have seen, is essentially a BSC with error probability $\delta = Q(\frac{1}{\sigma_n}) = Q\left(\sqrt{2\frac{E_{\mathrm{s}}}{N_0}}\right) = \frac{1}{2}\mathrm{erfc}\left(\sqrt{\frac{E_{\mathrm{s}}}{N_0}}\right)$. This is also the uncoded error rate.
The decoding error probability is then given as
$$
P_e = 1 - \sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}\binom{n}{i}\delta^i(1-\delta)^{n-i}
$$
The bit error rate $P_b$ is equivalent to the decoding error rate $P_e$ as a single bit is mapped into a codeword.
End of explanation
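# Worked check of the formula above for n = 3 (added sketch): with majority decoding, a block
# error needs at least 2 of the 3 bits in error, so P_e = 3*delta^2*(1-delta) + delta^3.
delta_check = 0.01
print(3 * delta_check**2 * (1 - delta_check) + delta_check**3)
print(1 - np.sum([comb(3, i) * delta_check**i * (1 - delta_check)**(3 - i) for i in range(0, 2)]))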
from scipy.optimize import fsolve
def get_delta_from_Pe(Pe, n):
func = lambda delta : Pe - 1 + np.sum([comb(n,i,exact=False)*(delta**i)*((1-delta)**(n-i)) for i in range(0,(n-1)//2+1)])
delta = fsolve(func, 0.4)
return delta[0]
Pe_target = 1e-6;
esno_dB_range = np.linspace(-16,10,100)
# compute sigma_n
sigman_range = [np.sqrt(0.5 * 10**(-esno_db/10)) for esno_db in esno_dB_range]
delta_range = [norm.sf(1/sigman) for sigman in sigman_range]
capacity_AWGN = [0.5*np.log2(1+1/(sigman**2)) for sigman in sigman_range]
capacity_BIAWGN_hard = [1+delta*np.log2(delta)+(1-delta)*np.log2(1-delta) for delta in delta_range]
# repetition codes
rate_range = [1/n for n in n_range]
delta_for_Pe_range = [get_delta_from_Pe(Pe_target, n) for n in n_range]
esno_dB_for_Pe_range = [10*np.log10(0.5*np.square(norm.isf(delta))) for delta in delta_for_Pe_range]
fig = plt.figure(1,figsize=(15,7))
plt.subplot(121)
plt.plot(esno_dB_range, capacity_AWGN)
plt.plot(esno_dB_range, capacity_BIAWGN_hard)
plt.scatter(esno_dB_for_Pe_range, rate_range, marker='s', c=colors[n_range,:], s=64)
plt.xlim((-10,10))
plt.ylim((0,1.1))
plt.xlabel('$E_s/N_0$ (dB)',fontsize=16)
plt.ylabel('Capacity (bit/channel use)',fontsize=16)
plt.grid(True)
plt.legend(['AWGN', 'Hard BI-AWGN'],fontsize=14)
# plot Eb/N0 . Note that in this case, the rate that is used for calculating Eb/N0 is the capcity
# Eb/N0 = 1/r (Es/N0)
plt.subplot(122)
plt.plot(esno_dB_range - 10*np.log10(capacity_AWGN), capacity_AWGN)
plt.plot(esno_dB_range - 10*np.log10(capacity_BIAWGN_hard), capacity_BIAWGN_hard)
plt.scatter(esno_dB_for_Pe_range - 10*np.log10(rate_range), rate_range, marker='s', c=colors[n_range,:], s=64)
plt.xlim((-2,15))
plt.ylim((0,1.1))
plt.xlabel('$E_b/N_0$ (dB)',fontsize=16)
plt.ylabel('Capacity (bit/channel use)',fontsize=16)
plt.grid(True)
Explanation: Show performance on the spectral efficiency chart at a target BER of $10^{-6}$. For this, we need to solve the equation
\begin{equation}
P_e = 1 - \sum_{i=0}^{\lfloor\frac{n-1}{2}\rfloor}\binom{n}{i}\delta^i(1-\delta)^{n-i}
\end{equation}
for $\delta$, which we can do numerically using the fsolve function.
End of explanation
capacity_BSC = [1 + delta*np.log2(delta) + (1-delta)*np.log2(1-delta) for delta in delta_range]
fig = plt.figure(1,figsize=(12,7))
plt.semilogx(delta_range, capacity_BSC,'k-')
plt.scatter(delta_for_Pe_range, rate_range, marker='s', c=colors[n_range,:], s=64)
plt.xlim((1e-4,0.5))
plt.ylim((0,1.1))
plt.xlabel('$\delta$', fontsize=16)
plt.ylabel('Capacity (bit/channel use)', fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
Explanation: Show the performance also in the (unusual) spectral efficiency chart of the BSC.
End of explanation
epsilon_range = np.linspace(0,1,1000)
fig = plt.figure(1,figsize=(12,7))
legend = [];
for n in n_range:
# get error rate of repetition code
Pe = [0.5*epsilon**n for epsilon in epsilon_range]
plt.semilogy(epsilon_range, Pe, color = colors[n])
legend.append('$n = %d$' % n)
# In the case of uncoded transmission, we can guess the value of the bit in case of an erasure and we make a mistake in 50% of the cases, hence the error probability which is 0.5*epsilon
plt.semilogy(epsilon_range, 0.5*epsilon_range, 'k--')
legend.append('Uncoded')
plt.ylim((1e-6,1))
plt.xlim((0,1))
plt.legend(legend,fontsize=14)
plt.xlabel('$\epsilon$', fontsize=16)
plt.ylabel('BER ($P_e$)', fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('Repetition_BEC_BER.pdf',bbox_inches='tight')
Explanation: Performance on the Binary Erasure Channel (BEC)
Maximum likelihood decoding of repetition codes on the BEC can be carried out easily. As the BEC doesn't introduce any errors, as soon as a single non-erased position is obtained, we can infer the value of the bit. The error probability is hence the probability that all positions are erased, i.e.,
$$
P_e = \frac12\epsilon^n
$$
In the case where everything is erased, we can still make a guess and will make a decoding error in $\frac12$ of the cases.
End of explanation
Pe_target =1e-6
fig = plt.figure(1,figsize=(8,7))
plt.plot(epsilon_range, 1-epsilon_range,'k-')
epsilon_for_Pe_range = [(2*Pe_target)**(1/n) for n in n_range]
plt.scatter(epsilon_for_Pe_range, rate_range, marker='s', s=64, c=colors[n_range,:])
plt.xlim((0,1))
plt.ylim((0,1))
plt.xlabel('$\epsilon$', fontsize=16)
plt.ylabel('Capacity (bit/channel use)', fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('Repetition_BEC_SE.pdf',bbox_inches='tight')
Explanation: Plot the performance of repetition codes on the binary erasure channel in the spectral efficiency chart. For a given target $P_e$, we can directly find the corresponding erasure probability as $\epsilon = \sqrt[n]{2P_e}$.
End of explanation
q_range = np.linspace(0,1,1000)
fig = plt.figure(1,figsize=(12,7))
legend = [];
for n in n_range:
# get error rate of repetition code
Pe = [0.5*q**n for q in q_range]
plt.semilogy(q_range, Pe, color = colors[n])
legend.append('$n = %d$' % n)
# In the case of uncoded transmission, we can guess the value of the bit in case of an erasure and we make a mistake in 50% of the cases, hence the error probability which is 0.5*epsilon
plt.semilogy(q_range, 0.5*q_range, 'k--')
legend.append('Uncoded')
plt.ylim((1e-6,1))
plt.xlim((0,1))
plt.legend(legend,fontsize=14)
plt.xlabel('$q$', fontsize=16)
plt.ylabel('BER ($P_e$)', fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('Repetition_Z_BER.pdf',bbox_inches='tight')
Explanation: Performance on the Z-channel
Maximum likelihood decoding of repetition codes on the Z-channel can also be carried out easily. As shown in the tutorial, the error probability is the probability that the all-zero codeword is received, which is, assuming equiprobable codewords,
$$
P_e = \frac12q^n
$$
where $q$ is the probability that a transmitted 1 is flipped to a 0, and where the factor $\frac12$ stems from the fact that when the all-zero codeword is transmitted (with probability $\frac12$) we never make an error with ML decoding; hence we are only interested in the case where the all-1 codeword is transmitted.
End of explanation
Pe_target =1e-6
fig = plt.figure(1,figsize=(8,7))
# Compute the achievable rate for the Z-channel with uniform inputs (P(X=0)=1/2) which is the mutual information I(X;Y)
hb = lambda x : -x*np.log2(x) - (1-x)*np.log2(1-x) if (x > 1e-20 and x < 1-1e-20) else 0
AchievableRate_Z = [hb(0.5*(1-q)) - 0.5*hb(q) for q in q_range]
plt.plot(q_range, AchievableRate_Z,'k-')
q_for_Pe_range = [(2*Pe_target)**(1/n) for n in n_range]
plt.scatter(q_for_Pe_range, rate_range, marker='s', s=64, c=colors[n_range,:])
plt.xlim((0,1))
plt.ylim((0,1))
plt.xlabel('$q$', fontsize=16)
plt.ylabel('Capacity (bit/channel use)', fontsize=16)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
#plt.savefig('Repetition_Z_SE.pdf',bbox_inches='tight')
Explanation: Plot the spectral efficiency chart for the Z-channel. Assuming uniform input symbols $P(X=0)=P(X=1)=\frac12$, the mutual information $I(X;Y)$ for the Z-channel is given by
$$
I(X;Y) = h\left(\frac{1}{2}(1-q)\right) - \frac12h(q)
$$
with $h(x) := -x\log_2(x)-(1-x)\log_2(1-x)$ the binary entropy function.
End of explanation |
5,457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ODM2 API
Step1: odm2api version used to run this notebook
Step2: Connect to the ODM2 SQLite Database
This example uses an ODM2 SQLite database file loaded with a sensor-based, high-frequency temperature time series from a site in the Little Bear River, in Logan, Utah, from Utah State University. The database (USU_LittleBearRiver_timeseriesresults_ODM2.sqlite) contains "timeSeriesCoverage"-type results.
The example database is located in the data sub-directory.
Step3: Run Some Basic Queries on the ODM2 Database
This section shows some examples of how to use the API to run both simple and more advanced queries on the ODM2 database, as well as how to examine the query output in convenient ways thanks to Python tools. The notebook WaterQualityMeasurements_RetrieveVisualize.ipynb includes more extensive examples of odm2api-based querying and examinations of the information that is returned.
Simple query functions like getVariables( ) return objects similar to the entities in ODM2, and individual attributes can then be retrieved from the objects returned.
Step4: SamplingFeatures
Request all sampling features, then examine them. Only one sampling feature is present, with SamplingFeatureTypeCV type Site.
Step5: Results and Actions
You can also drill down and get objects linked by foreign keys. The API returns related objects in a nested hierarchy so they can be interrogated in an object oriented way. So, if I use the getResults( ) function to return a Result from the database (e.g., a "Time Series" Result), I also get the associated Action that created that Result (e.g., an "Observation" Action).
Step6: Retrieve Attributes of a Time Series Result using a ResultID
Use the ResultID (1) from the above result to issue a filtered query.
Step7: Get a Result and its Attributes
Because all of the objects are returned in a nested form, if you retrieve a result, you can interrogate it to get all of its related attributes. When a Result object is returned, it includes objects that contain information about Variable, Units, ProcessingLevel, and the related Action that created that Result.
Step8: Retrieve Time Series Result Values for a given Result
The database contains a single time series result (a time series of water temperature sensor data at a single site). Let's use the getResultValues() function to retrieve the time series values for this result by passing in the ResultID. We set the index to ValueDateTime for convenience.
Step9: Now plot the time series
First as a very quick and easy plot using the Pandas Dataframe plot method with default settings. Then with fancier matplotlib customizations of the axes and figure size. | Python Code:
import os
import datetime
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import odm2api
from odm2api.ODMconnection import dbconnection
import odm2api.services.readService as odm2rs
"{} UTC".format(datetime.datetime.utcnow())
pd.__version__
Explanation: ODM2 API: Retrieve, manipulate and visualize ODM2 time series data
This example shows how to use the ODM2 Python API (odm2api) to connect to an ODM2 database, retrieve data, and analyze and visualize the data. The database (USU_LittleBearRiver_timeseriesresults_ODM2.sqlite) contains "timeSeriesCoverage"-type results.
This example uses SQLite for the database because it doesn't require a server. However, the ODM2 Python API demonstrated here can also be used with ODM2 databases implemented in MySQL, PostgreSQL or Microsoft SQL Server.
More details on the ODM2 Python API and its source code and latest development can be found at https://github.com/ODM2/ODM2PythonAPI
Adapted from notebook https://big-cz.github.io/notebook_data_demo/notebooks/2017-06-24-odm2api_sample_fromsqlite/, based on earlier code and an ODM2 database from Jeff Horsburgh's group at Utah State University.
Emilio Mayorga
End of explanation
odm2api.__version__
Explanation: odm2api version used to run this notebook:
End of explanation
# Assign directory paths and SQLite file name
dbname_sqlite = "USU_LittleBearRiver_timeseriesresults_ODM2.sqlite"
sqlite_pth = os.path.join("data", dbname_sqlite)
try:
session_factory = dbconnection.createConnection('sqlite', sqlite_pth)
read = odm2rs.ReadODM2(session_factory)
print("Database connection successful!")
except Exception as e:
print("Unable to establish connection to the database: ", e)
Explanation: Connect to the ODM2 SQLite Database
This example uses an ODM2 SQLite database file loaded with a sensor-based, high-frequency temperature time series from a site in the Little Bear River, in Logan, Utah, from Utah State University. The database (USU_LittleBearRiver_timeseriesresults_ODM2.sqlite) contains "timeSeriesCoverage"-type results.
The example database is located in the data sub-directory.
End of explanation
allVars = read.getVariables()
for x in allVars:
print('{}: {}'.format(x.VariableCode, x.VariableNameCV))
Explanation: Run Some Basic Queries on the ODM2 Database
This section shows some examples of how to use the API to run both simple and more advanced queries on the ODM2 database, as well as how to examine the query output in convenient ways thanks to Python tools. The notebook WaterQualityMeasurements_RetrieveVisualize.ipynb includes more extensive examples of odm2api-based querying and examinations of the information that is returned.
Simple query functions like getVariables( ) return objects similar to the entities in ODM2, and individual attributes can then be retrieved from the objects returned.
End of explanation
sf_lst = read.getSamplingFeatures()
len(sf_lst)
vars(sf_lst[0])
print('{}: {}'.format(sf_lst[0].SamplingFeatureCode, sf_lst[0].SamplingFeatureName))
Explanation: SamplingFeatures
Request all sampling features, then examine them. Only one sampling feature is present, with SamplingFeatureTypeCV type Site.
End of explanation
# What's the total number of results in the database?
len(read.getResults())
try:
# Call getResults, but return only the first Result
firstResult = read.getResults()[0]
frfa = firstResult.FeatureActionObj
frfaa = firstResult.FeatureActionObj.ActionObj
print("The ResultID for the Result is: {}".format(firstResult.ResultID))
print("The FeatureAction object for the Result is: ", frfa)
print("The Action object for the Result is: ", frfaa)
# Print some Action attributes in a more human readable form
print("\nThe following are some of the attributes for the Action that created the Result: ")
print("ActionTypeCV: {}".format(frfaa.ActionTypeCV))
print("ActionDescription: {}".format(frfaa.ActionDescription))
print("BeginDateTime: {}".format(frfaa.BeginDateTime))
print("EndDateTime: {}".format(frfaa.EndDateTime))
print("MethodName: {}".format(frfaa.MethodObj.MethodName))
print("MethodDescription: {}".format(frfaa.MethodObj.MethodDescription))
except Exception as e:
print("Unable to demo Foreign Key Example: {}".format(e))
Explanation: Results and Actions
You can also drill down and get objects linked by foreign keys. The API returns related objects in a nested hierarchy so they can be interrogated in an object oriented way. So, if I use the getResults( ) function to return a Result from the database (e.g., a "Time Series" Result), I also get the associated Action that created that Result (e.g., an "Observation" Action).
End of explanation
# Filering on a single ResultID will invariably return a single result;
# so, get the single element in the returned list
tsResult = read.getResults(ids=[1])[0]
# Examine the object type and content
type(tsResult), vars(tsResult)
Explanation: Retrieve Attributes of a Time Series Result using a ResultID
Use the ResultID (1) from the above result to issue a filtered query.
End of explanation
print("------- Example of Retrieving Attributes of a Result -------")
try:
firstResult = read.getResults()[0]
frfa = firstResult.FeatureActionObj
print("The following are some of the attributes for the Result retrieved: ")
print("ResultID: {}".format(firstResult.ResultID))
print("ResultTypeCV: {}".format(firstResult.ResultTypeCV))
print("ValueCount: {}".format(firstResult.ValueCount))
print("ProcessingLevel: {}".format(firstResult.ProcessingLevelObj.Definition))
print("SampledMedium: {}".format(firstResult.SampledMediumCV))
print("Variable: {}: {}".format(firstResult.VariableObj.VariableCode,
firstResult.VariableObj.VariableNameCV))
print("AggregationStatisticCV: {}".format(firstResult.AggregationStatisticCV))
print("Units: {}".format(firstResult.UnitsObj.UnitsName))
print("SamplingFeatureID: {}".format(frfa.SamplingFeatureObj.SamplingFeatureID))
print("SamplingFeatureCode: {}".format(frfa.SamplingFeatureObj.SamplingFeatureCode))
except Exception as e:
print("Unable to demo example of retrieving Attributes of a Result: {}".format(e))
Explanation: Get a Result and its Attributes
Because all of the objects are returned in a nested form, if you retrieve a result, you can interrogate it to get all of its related attributes. When a Result object is returned, it includes objects that contain information about Variable, Units, ProcessingLevel, and the related Action that created that Result.
End of explanation
# Get the values for a particular TimeSeriesResult; a Pandas Dataframe is returned
tsValues = read.getResultValues(resultids=[1], lowercols=False)
tsValues.set_index('ValueDateTime', inplace=True)
tsValues.sort_index(inplace=True)
tsValues.head()
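# Added sketch: since this is a high-frequency sensor series, a daily-mean resample
# (possible because ValueDateTime was set as a datetime index above) highlights longer-term patterns.
tsValues['DataValue'].resample('D').mean().head()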
Explanation: Retrieve Time Series Result Values for a given Result
The database contains a single time series result (a time series of water temperature sensor data at a single site). Let's use the getResultValues() function to retrieve the time series values for this result by passing in the ResultID. We set the index to ValueDateTime for convenience.
End of explanation
tsValues['DataValue'].plot()
fig, ax = plt.subplots(figsize=(12, 4))
tsValues['DataValue'].plot(ax=ax)
ax.set_ylabel('{} ({})'.format(
tsResult.VariableObj.VariableNameCV,
tsResult.UnitsObj.UnitsAbbreviation))
ax.set_xlabel('')
ax.xaxis.set_minor_locator(mpl.dates.MonthLocator())
ax.xaxis.set_minor_formatter(mpl.dates.DateFormatter('%b'))
ax.xaxis.set_major_locator(mpl.dates.YearLocator())
ax.xaxis.set_major_formatter(mpl.dates.DateFormatter('\n%Y'))
Explanation: Now plot the time series
First as a very quick and easy plot using the Pandas Dataframe plot method with default settings. Then with fancier matplotlib customizations of the axes and figure size.
End of explanation |
5,458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3.1 Correction of symmetry distortion using thin plate spline
3.1.0 Retrieve experimental data
Step1: 3.1.1 Calculate coordinate deformation from selected or detected landmarks and their symmetric correspondences
Step2: 3.1.2 Pose adjustment
Options to adjust scaling, rotation and translation, if needed. When the transform is desired, set keep=True
Step3: 3.1.3 Apply deformation to all images in a stack
Step4: 3.2 Correct using manually selected landmarks | Python Code:
# Assumed imports (not shown in the original cell): fp, aly and vis are taken here to be the
# fprocessing, analysis and visualization modules of the mpes package used by this tutorial.
import numpy as np
import matplotlib.pyplot as plt
from mpes import fprocessing as fp, analysis as aly, visualization as vis
fpath = r'../data/data_114_4axis_100x100x200x50.h5'
fbinned = fp.readBinnedhdf5(fpath)
V = fbinned['V']
V.shape
Eslice = V[15:20, :, :, 35:41].sum(axis=(0,3))
plt.imshow(Eslice, origin='lower', cmap='terrain_r')
mc = aly.MomentumCorrector(V[15:20, :, :, :].sum(axis=0), rotsym=6)
Explanation: 3.1 Correction of symmetry distortion using thin plate spline
3.1.0 Retrieve experimental data
End of explanation
mc.selectSlice2D(selector=slice(35, 41), axis=2)
mc.featureExtract(image=mc.slice, method='daofind', sigma=5, fwhm=8)
mc.view(image=mc.slice, annotated=True, points=mc.features)
mc.ascale = np.array([1, 1, 1., 1., 1, 1., 1.])
mc.ptargs = []
mc.splineWarpEstimate(image=mc.slice, landmarks=mc.pouter_ord, include_center=True, fixed_center=True,
iterative=False, interp_order=1, update=True)
mc.view(image=mc.slice_transformed)
mc.view(image=mc.slice, annotated=True, points={'feats':mc.ptargs})
subs = 5 # Subsampling ratio
plt.scatter(mc.cdeform_field[::subs,::subs].ravel(), mc.rdeform_field[::subs,::subs].ravel(), c='b')
Explanation: 3.1.1 Calculate coordinate deformation from selected or detected landmarks and their symmetric correspondences
End of explanation
mc.coordinateTransform(image=mc.slice_transformed, type='scaling', xscale=1.13, yscale=1.13, keep=True)
mc.view(image=mc.slice_transformed)
mc.coordinateTransform(image=mc.slice_transformed, type='rotation', angle=-9, center=(45, 55), keep=True)
mc.view(image=mc.slice_transformed)
mc.coordinateTransform(image=mc.slice_transformed, type='translation', xtrans=-5, ytrans=3, keep=True)
mc.view(image=mc.slice_transformed)
Explanation: 3.1.2 Pose adjustment
Options to adjust scaling, rotation and translation, if needed. When the transform is desired, set keep=True
End of explanation
mc.correct(image=mc.image, axis=2, use_deform_field=True, update='image')
mc.view(image=mc.image[...,70:75].sum(axis=2), annotated=False)
sliceid = [20, 40, 50, 65, 75, 90, 100, 120, 140, 180]
vis.sliceview3d(mc.image[...,sliceid], axis=2, ncol=5, colormap='terrain_r', numsize=50);
for i in range(50):
fbinned['V'][i,...] = mc.correct(image=fbinned['V'][i,...], axis=2, use_deform_field=True)
from silx.io import dictdump
dictdump.dicttoh5(fbinned, '../data/data_114_4axis_corrected.h5')
Explanation: 3.1.3 Apply deformation to all images in a stack
End of explanation
V = fp.readBinnedhdf5(fpath)['V']
mch = aly.MomentumCorrector(V[15:20, :, :, :].sum(axis=0), rotsym=3)
mch.selectSlice2D(selector=slice(35, 41), axis=2)
mch.view(mch.slice)
# Manually assign reference landmark positions, image center, and target landmark positions
mch.pouter_ord = np.array([[31, 74], [75, 49], [24, 22]]) # reference landmark positions
mch.pcent = np.array([41, 50]) # center position
mch.ptargs = np.array([[31, 74], [75, 49], [24, 22]]) # target landmark positions
mch.rdeform_field = np.zeros_like(mch.slice)
mch.cdeform_field = np.zeros_like(mch.slice)
mch.splineWarpEstimate(image=mch.slice, landmarks=mch.pouter_ord, include_center=True, fixed_center=True,
iterative=False, interp_order=1, update=True)
mch.view(image=mch.slice_transformed)
Explanation: 3.2 Correct using manually selected landmarks
End of explanation |
5,459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST retraining (finetuning)
In the MNIST example, we modify the dimensions of the final fully connected (linear) layers of the network and then retrain it.
There are several ways to do this; here we implement Method 3, which corresponds to finetuning.
Method 1
Step1: 1. Set up the input DataLoaders
Create a loader for the training data (MNIST dataset, batch size 50, with shuffling)
Create a loader for the test data (MNIST dataset, batch size 1000)
Step2: 2. Preliminary setup
* model
* loss
* optimizer
Step3: 3. Training loop
* (prepare the inputs)
* run the model (forward pass)
* compute the loss
* zeroGrad
* backpropagation
* optimizer step (update model parameter)
```
Step4: 4. Predict & Evaluate | Python Code:
%matplotlib inline
Explanation: MNIST retraining (finetuning)
In the MNIST example, we modify the dimensions of the final fully connected (linear) layers of the network and then retrain it.
There are several ways to do this; here we implement Method 3, which corresponds to finetuning.
Method 1: Rewrite the MNIST network from scratch.
python
self.fc1 = nn.Linear(64*7*7, 1024)
self.fc2 = nn.Linear(1024, 10)
Modify the part above as shown below,
python
self.fc1 = nn.Linear(64*7*7, 512)
self.fc2 = nn.Linear(512, 10)
and then run the training again.
Method 2: Load the existing MNIST network, then modify the parts that need to change.
python
model = MnistModel()
model.fc1 = nn.Linear(64*7*7, 512)
model.fc2 = nn.Linear(512, 10)
and then run the training again.
Method 3: Restore the existing MNIST network together with its pretrained parameters, modify the network, and train only the modified part.
Because only the parameters of the modified layers are updated, training is faster than Method 2 and the accuracy is similar.
```python
model = MnistModel()
# load parameters of MnistModel
checkpoint = torch.load(checkpoint_filename)
model.load_state_dict(checkpoint)
# modify the last two layers of MnistModel
model.fc1 = nn.Linear(64*7*7, 512)
model.fc2 = nn.Linear(512, 10)
# specify the parameters to update
fc_parameters = [
{'params': model.fc1.parameters()},
{'params': model.fc2.parameters()}
]
# pass only those parameters to the optimizer
optimizer = torch.optim.Adam(fc_parameters, lr=0.0001)
```
End of explanation
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
import numpy as np
is_cuda = torch.cuda.is_available() # True if CUDA is available
checkpoint_filename = 'minist.ckpt'
batch_size = 50
train_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=True, download=True, transform=transforms.ToTensor()),
batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=True, transform=transforms.ToTensor()),
batch_size=100)
Explanation: 1. Set up the input DataLoaders
Create a loader for the training data (MNIST dataset, batch size 50, with shuffling)
Create a loader for the test data (MNIST dataset, batch size 1000)
End of explanation
class MnistModel(nn.Module):
def __init__(self):
super(MnistModel, self).__init__()
# input is 28x28
# padding=2 for same padding
self.conv1 = nn.Conv2d(1, 32, 5, padding=2)
# feature map size is 14*14 by pooling
# padding=2 for same padding
self.conv2 = nn.Conv2d(32, 64, 5, padding=2)
# feature map size is 7*7 by pooling
self.fc1 = nn.Linear(64*7*7, 1024)
self.fc2 = nn.Linear(1024, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), 2)
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, 64*7*7) # reshape Variable
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x)
model = MnistModel()
#load parameter of MnistModel
checkpoint = torch.load(checkpoint_filename)
model.load_state_dict(checkpoint)
### don't update model parameters
for param in model.parameters() :
param.requires_grad = False
#modify last two layer in model = MnistModel()
model.fc1 = nn.Linear(64*7*7, 512)
model.fc2 = nn.Linear(512, 10)
fc_parameters = [
{'params': model.fc1.parameters()},
{'params': model.fc2.parameters()}
]
optimizer = torch.optim.Adam(fc_parameters, lr=0.0001)
if is_cuda : model.cuda()
loss_fn = nn.NLLLoss()
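# Quick sanity check (added): with the frozen backbone, only the new fc1/fc2 parameters
# (two weight tensors and two bias tensors) should still require gradients.
print(sum(1 for p in model.parameters() if p.requires_grad))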
Explanation: 2. Preliminary setup
* model
* loss
* optimizer
End of explanation
# training
model.train()
train_loss = []
train_accu = []
for epoch in range(3):
for i, (image, target) in enumerate(train_loader):
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image), Variable(target) # wrap the input image and target as Variables
output = model(image) # forward pass through the model
loss = loss_fn(output, target) # compute the loss
optimizer.zero_grad() # zero_grad
loss.backward() # calc backward grad
optimizer.step() # update parameter
pred = output.data.max(1)[1]
accuracy = pred.eq(target.data).sum()/batch_size
train_loss.append(loss.data[0])
train_accu.append(accuracy)
if i % 300 == 0:
print(i, loss.data[0])
plt.plot(train_accu)
plt.plot(train_loss)
Explanation: 3. Training loop
* (prepare the input)
* run the model
* compute the loss
* zero the gradients
* backpropagate
* optimizer step (update the model parameters)
End of explanation
model.eval()
correct = 0
for image, target in test_loader:
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
prediction = output.data.max(1)[1]
correct += prediction.eq(target.data).sum()
print('\nTest set: Accuracy: {:.2f}%'.format(100. * correct / len(test_loader.dataset)))
Explanation: 4. Predict & Evaluate
End of explanation |
5,460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Copyright 2017 Allen Downey
License
Step1: Low pass filter
The following circuit diagram (from Wikipedia) shows a low-pass filter built with one resistor and one capacitor.
A "filter" is a circuit takes a signal, $V_{in}$, as input and produces a signal, $V_{out}$, as output. In this context, a "signal" is a voltage that changes over time.
A filter is "low-pass" if it allows low-frequency signals to pass from $V_{in}$ to $V_{out}$ unchanged, but it reduces the amplitude of high-frequency signals.
By applying the laws of circuit analysis, we can derive a differential equation that describes the behavior of this system. By solving the differential equation, we can predict the effect of this circuit on any input signal.
Suppose we are given $V_{in}$ and $V_{out}$ at a particular instant in time. By Ohm's law, which is a simple model of the behavior of resistors, the instantaneous current through the resistor is
Step3: Now we can pass the Params object make_system which computes some additional parameters and defines init.
omega is the frequency of the input signal in radians/second.
tau is the time constant for this circuit, which is the time it takes to get from an initial startup phase to steady-state behavior.
cutoff is the cutoff frequency for this circuit (in Hz), which marks the transition from low frequency signals, which pass through the filter unchanged, to high frequency signals, which are attenuated.
t_end is chosen so we run the simulation for 4 cycles of the input signal.
Step4: Let's make a System
Step6: Exercise
Step7: Test the slope function with the initial conditions.
Step8: And then run the simulation. I suggest using t_eval=ts to make sure we have enough data points to plot and analyze the results.
Step9: Here's a function you can use to plot V_out as a function of time.
Step10: If things have gone according to plan, the amplitude of the output signal should be about 0.8 V.
Also, you might notice that it takes a few cycles for the signal to get to the full amplitude.
Sweeping frequency
Plot what V_out looks like for a range of frequencies
Step11: At low frequencies, notice that there is an initial "transient" before the output gets to a steady-state sinusoidal output. The duration of this transient is a small multiple of the time constant, tau, which is 1 ms.
Estimating the output ratio
Let's compare the amplitudes of the input and output signals. Below the cutoff frequency, we expect them to be about the same. Above the cutoff, we expect the amplitude of the output signal to be smaller.
We'll start with a signal at f=1000 Hz, which is above the cutoff frequency for these parameters (about 159 Hz).
Step13: The following function computes V_in as a TimeSeries
Step14: Here's what the input and output look like. Notice that the output is not just smaller; it is also "out of phase"; that is, the peaks of the output are shifted to the right, relative to the peaks of the input.
Step16: The following function estimates the amplitude of a signal by computing half the distance between the min and max.
Step17: The amplitude of V_in should be near 5 (but not exact because we evaluated it at a finite number of points).
Step18: The amplitude of V_out should be lower.
Step19: And here's the ratio between them.
Step21: Exercise
Step22: And test your function.
Step23: Estimating phase offset
The delay between the peak of the input and the peak of the output is called a "phase shift" or "phase offset", usually measured in fractions of a cycle, degrees, or radians.
To estimate the phase offset between two signals, we can use cross-correlation. Here's what the cross-correlation looks like between V_out and V_in
Step24: The location of the peak in the cross correlation is the estimated shift between the two signals, in seconds.
Step25: We can express the phase offset as a multiple of the period of the input signal
Step26: We don't care about whole period offsets, only the fractional part, which we can get using modf
Step27: Finally, we can convert from a fraction of a cycle to degrees
Step29: Exercise
Step30: Test your function.
Step31: Sweeping frequency again
Exercise
Step32: Run your function with these frequencies.
Step33: We can plot output ratios like this
Step35: But it is useful and conventional to plot ratios on a log-log scale. The vertical gray line shows the cutoff frequency.
Step37: This plot shows the cutoff behavior more clearly. Below the cutoff, the output ratio is close to 1. Above the cutoff, it drops off linearly, on a log scale, which indicates that output ratios for high frequencies are practically 0.
Here's the plot for phase offset, on a log-x scale
Step38: For low frequencies, the phase offset is near 0. For high frequencies, it approaches 90 degrees.
Analysis
By analysis we can show that the output ratio for this signal is
$A = \frac{1}{\sqrt{1 + (R C \omega)^2}}$
where $\omega = 2 \pi f$, and the phase offset is
$ \phi = \arctan (- R C \omega)$
Exercise
Step39: Test your function
Step40: Test your function
Step41: Plot the theoretical results along with the simulation results and see if they agree. | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
Explanation: Modeling and Simulation in Python
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
ohm = UNITS.ohm
farad = UNITS.farad
volt = UNITS.volt
Hz = UNITS.Hz
second = UNITS.second
params = Params(
R1 = 1e6 * ohm,
C1 = 1e-9 * farad,
A = 5 * volt,
f = 1000 * Hz)
Explanation: Low pass filter
The following circuit diagram (from Wikipedia) shows a low-pass filter built with one resistor and one capacitor.
A "filter" is a circuit takes a signal, $V_{in}$, as input and produces a signal, $V_{out}$, as output. In this context, a "signal" is a voltage that changes over time.
A filter is "low-pass" if it allows low-frequency signals to pass from $V_{in}$ to $V_{out}$ unchanged, but it reduces the amplitude of high-frequency signals.
By applying the laws of circuit analysis, we can derive a differential equation that describes the behavior of this system. By solving the differential equation, we can predict the effect of this circuit on any input signal.
Suppose we are given $V_{in}$ and $V_{out}$ at a particular instant in time. By Ohm's law, which is a simple model of the behavior of resistors, the instantaneous current through the resistor is:
$ I_R = (V_{in} - V_{out}) / R $
where $R$ is resistance in ohms.
Assuming that no current flows through the output of the circuit, Kirchhoff's current law implies that the current through the capacitor is:
$ I_C = I_R $
According to a simple model of the behavior of capacitors, current through the capacitor causes a change in the voltage across the capacitor:
$ I_C = C \frac{d V_{out}}{dt} $
where $C$ is capacitance in farads (F).
Combining these equations yields a differential equation for $V_{out}$:
$ \frac{d }{dt} V_{out} = \frac{V_{in} - V_{out}}{R C} $
Follow the instructions below to simulate the low-pass filter for input signals like this:
$ V_{in}(t) = A \cos (2 \pi f t) $
where $A$ is the amplitude of the input signal, say 5 V, and $f$ is the frequency of the signal in Hz.
Params and System objects
Here's a Params object to contain the quantities we need. I've chosen values for R1 and C1 that might be typical for a circuit that works with audio signal.
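Before simulating, it helps to see the characteristic scales these values imply (a quick sketch with plain floats, ignoring units):
```python
import numpy as np
R1, C1 = 1e6, 1e-9               # ohms, farads (same values as above)
tau = R1 * C1                    # time constant: 1e-3 s, i.e. 1 ms
cutoff = 1 / (2 * np.pi * tau)   # cutoff frequency: about 159 Hz
print(tau, cutoff)
```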
End of explanation
def make_system(params):
Makes a System object for the given conditions.
params: Params object
returns: System object
f, R1, C1 = params.f, params.R1, params.C1
init = State(V_out = 0)
omega = 2 * np.pi * f
tau = R1 * C1
cutoff = 1 / R1 / C1 / 2 / np.pi
t_end = 4 / f
dt = t_end / 4000
return System(params, init=init, t_end=t_end, dt=dt,
omega=omega, tau=tau, cutoff=cutoff.to(Hz))
Explanation: Now we can pass the Params object make_system which computes some additional parameters and defines init.
omega is the frequency of the input signal in radians/second.
tau is the time constant for this circuit, which is the time it takes to get from an initial startup phase to steady-state behavior.
cutoff is the cutoff frequency for this circuit (in Hz), which marks the transition from low frequency signals, which pass through the filter unchanged, to high frequency signals, which are attenuated.
t_end is chosen so we run the simulation for 4 cycles of the input signal.
End of explanation
system = make_system(params)
Explanation: Let's make a System
End of explanation
# Solution
def slope_func(state, t, system):
Compute derivatives of the state.
state: V_out
t: time
system: System object with A, omega, R1 and C1
returns: dV_out/dt
[V_out] = state
R1, C1 = system.R1, system.C1
A, omega = system.A, system.omega
V_in = A * np.cos(omega * t)
V_R1 = V_in - V_out
I_R1 = V_R1 / R1
I_C1 = I_R1
dV_out_dt = I_C1 / C1
return [dV_out_dt]
Explanation: Exercise: Write a slope function that takes as an input a State object that contains V_out, and returns the derivative of V_out.
Note: The ODE solver requires the return value from slope_func to be a sequence, even if there is only one element. The simplest way to do that is to return a list with a single element:
return [dV_out_dt]
End of explanation
slope_func(system.init, 0*UNITS.s, system)
Explanation: Test the slope function with the initial conditions.
End of explanation
results, details = run_ode_solver(system, slope_func)
details
results.head()
Explanation: And then run the simulation. I suggest using t_eval=ts to make sure we have enough data points to plot and analyze the results.
End of explanation
def plot_results(results):
xs = results.V_out.index
ys = results.V_out.values
t_end = get_last_label(results)
if t_end < 10:
xs *= 1000
xlabel = 'Time (ms)'
else:
xlabel = 'Time (s)'
plot(xs, ys)
decorate(xlabel=xlabel,
ylabel='$V_{out}$ (volt)',
legend=False)
plot_results(results)
Explanation: Here's a function you can use to plot V_out as a function of time.
End of explanation
fs = [1, 10, 100, 1000, 10000, 100000] * Hz
for i, f in enumerate(fs):
system = make_system(Params(params, f=f))
results, details = run_ode_solver(system, slope_func)
subplot(3, 2, i+1)
plot_results(results)
Explanation: If things have gone according to plan, the amplitude of the output signal should be about 0.8 V.
Also, you might notice that it takes a few cycles for the signal to get to the full amplitude.
Sweeping frequency
Plot what V_out looks like for a range of frequencies:
End of explanation
system = make_system(Params(params, f=1000*Hz))
results, details = run_ode_solver(system, slope_func)
V_out = results.V_out
plot_results(results)
Explanation: At low frequencies, notice that there is an initial "transient" before the output gets to a steady-state sinusoidal output. The duration of this transient is a small multiple of the time constant, tau, which is 1 ms.
Estimating the output ratio
Let's compare the amplitudes of the input and output signals. Below the cutoff frequency, we expect them to be about the same. Above the cutoff, we expect the amplitude of the output signal to be smaller.
We'll start with a signal at f=1000 Hz, which is above the cutoff frequency for these parameters (about 159 Hz).
End of explanation
def compute_vin(results, system):
Computes V_in as a TimeSeries.
results: TimeFrame with simulation results
system: System object with A and omega
returns: TimeSeries
A, omega = system.A, system.omega
ts = results.index.values * UNITS.second
V_in = A * np.cos(omega * ts)
return TimeSeries(V_in, results.index, name='V_in')
Explanation: The following function computes V_in as a TimeSeries:
End of explanation
V_in = compute_vin(results, system)
plot(V_out)
plot(V_in)
decorate(xlabel='Time (s)',
ylabel='V (volt)')
Explanation: Here's what the input and output look like. Notice that the output is not just smaller; it is also "out of phase"; that is, the peaks of the output are shifted to the right, relative to the peaks of the input.
End of explanation
def estimate_A(series):
Estimate amplitude.
series: TimeSeries
returns: amplitude in volts
return (series.max() - series.min()) / 2
Explanation: The following function estimates the amplitude of a signal by computing half the distance between the min and max.
End of explanation
A_in = estimate_A(V_in)
Explanation: The amplitude of V_in should be near 5 (but not exact because we evaluated it at a finite number of points).
End of explanation
A_out = estimate_A(V_out)
Explanation: The amplitude of V_out should be lower.
End of explanation
ratio = A_out / A_in
ratio.to_base_units()
Explanation: And here's the ratio between them.
End of explanation
# Solution
def estimate_ratio(V1, V2):
Estimate the ratio of amplitudes.
V1: TimeSeries
V2: TimeSeries
returns: amplitude ratio
a1 = estimate_A(V1)
a2 = estimate_A(V2)
return a1 / a2
Explanation: Exercise: Encapsulate the code we have so far in a function that takes two TimeSeries objects and returns the ratio between their amplitudes.
End of explanation
estimate_ratio(V_out, V_in)
Explanation: And test your function.
End of explanation
corr = correlate(V_out, V_in, mode='same')
corr = TimeSeries(corr, V_in.index)
plot(corr, color='C4')
decorate(xlabel='Lag (s)',
ylabel='Correlation')
Explanation: Estimating phase offset
The delay between the peak of the input and the peak of the output is called a "phase shift" or "phase offset", usually measured in fractions of a cycle, degrees, or radians.
To estimate the phase offset between two signals, we can use cross-correlation. Here's what the cross-correlation looks like between V_out and V_in:
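As a small worked example with made-up numbers: a lag of 0.25 ms at f = 1000 Hz (period 1 ms) is a quarter of a cycle, i.e. 90 degrees:
```python
lag, period = 0.25e-3, 1e-3   # seconds; illustrative values, not simulation output
print(lag / period * 360)     # 90.0 degrees
```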
End of explanation
peak_time = corr.idxmax() * UNITS.second
Explanation: The location of the peak in the cross correlation is the estimated shift between the two signals, in seconds.
End of explanation
period = 1 / system.f
(peak_time / period).to_reduced_units()
Explanation: We can express the phase offset as a multiple of the period of the input signal:
End of explanation
frac, whole = np.modf(peak_time / period)
frac = frac.to_reduced_units()
Explanation: We don't care about whole period offsets, only the fractional part, which we can get using modf:
End of explanation
frac * 360 * UNITS.degree
Explanation: Finally, we can convert from a fraction of a cycle to degrees:
End of explanation
# Solution
def estimate_offset(V1, V2, system):
Estimate phase offset.
V1: TimeSeries
V2: TimeSeries
system: System object with f
returns: amplitude ratio
corr = correlate(V1, V2, mode='same')
corr = TimeSeries(corr, V1.index)
peak_time = corr.idxmax() * UNITS.second
period = 1 / system.f
frac, whole = np.modf(peak_time / period)
frac = frac.to_reduced_units()
return -frac * 360 * UNITS.degree
Explanation: Exercise: Encapsulate this code in a function that takes two TimeSeries objects and a System object, and returns the phase offset in degrees.
Note: by convention, if the output is shifted to the right, the phase offset is negative.
End of explanation
estimate_offset(V_out, V_in, system)
Explanation: Test your function.
End of explanation
# Solution
def sweep_frequency(fs, params):
ratios = SweepSeries()
offsets = SweepSeries()
for i, f in enumerate(fs):
system = make_system(Params(params, f=f))
results, details = run_ode_solver(system, slope_func)
V_out = results.V_out
V_in = compute_vin(results, system)
f = magnitude(f)
ratios[f] = estimate_ratio(V_out, V_in)
offsets[f] = estimate_offset(V_out, V_in, system)
return ratios, offsets
Explanation: Sweeping frequency again
Exercise: Write a function that takes as parameters an array of input frequencies and a Params object.
For each input frequency it should run a simulation and use the results to estimate the output ratio (dimensionless) and phase offset (in degrees).
It should return two SweepSeries objects, one for the ratios and one for the offsets.
End of explanation
fs = 10 ** linspace(0, 4, 9) * Hz
ratios, offsets = sweep_frequency(fs, params)
Explanation: Run your function with these frequencies.
End of explanation
plot(ratios, color='C2', label='output ratio')
decorate(xlabel='Frequency (Hz)',
ylabel='$V_{out} / V_{in}$')
Explanation: We can plot output ratios like this:
End of explanation
def plot_ratios(ratios, system):
Plot output ratios.
# axvline can't handle a Quantity with units
cutoff = magnitude(system.cutoff)
plt.axvline(cutoff, color='gray', alpha=0.4)
plot(ratios, color='C2', label='output ratio')
decorate(xlabel='Frequency (Hz)',
ylabel='$V_{out} / V_{in}$',
xscale='log', yscale='log')
plot_ratios(ratios, system)
Explanation: But it is useful and conventional to plot ratios on a log-log scale. The vertical gray line shows the cutoff frequency.
End of explanation
def plot_offsets(offsets, system):
Plot phase offsets.
# axvline can't handle a Quantity with units
cutoff = magnitude(system.cutoff)
plt.axvline(cutoff, color='gray', alpha=0.4)
plot(offsets, color='C9')
decorate(xlabel='Frequency (Hz)',
ylabel='Phase offset (degree)',
xscale='log')
plot_offsets(offsets, system)
Explanation: This plot shows the cutoff behavior more clearly. Below the cutoff, the output ratio is close to 1. Above the cutoff, it drops off linearly, on a log scale, which indicates that output ratios for high frequencies are practically 0.
Here's the plot for phase offset, on a log-x scale:
End of explanation
# Solution
def output_ratios(fs, system):
R1, C1, omega = system.R1, system.C1, system.omega
omegas = 2 * np.pi * fs
rco = R1 * C1 * omegas
A = 1 / np.sqrt(1 + rco**2)
return SweepSeries(A, magnitude(fs))
Explanation: For low frequencies, the phase offset is near 0. For high frequencies, it approaches 90 degrees.
Analysis
By analysis we can show that the output ratio for this signal is
$A = \frac{1}{\sqrt{1 + (R C \omega)^2}}$
where $\omega = 2 \pi f$, and the phase offset is
$ \phi = \arctan (- R C \omega)$
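As a quick sanity check of these formulas (independent of the simulation): at the cutoff frequency $R C \omega = 1$, so the output ratio is $1/\sqrt{2} \approx 0.707$ and the phase offset is $-45$ degrees.
```python
import numpy as np
rco = 1.0                            # R*C*omega at the cutoff frequency
print(1 / np.sqrt(1 + rco**2))       # ~0.707
print(np.degrees(np.arctan(-rco)))   # -45.0
```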
Exercise: Write functions that take an array of input frequencies and return $A(f)$ and $\phi(f)$ as SweepSeries objects. Plot these objects and compare them with the results from the previous section.
End of explanation
A = output_ratios(fs, system)
# Solution
def phase_offsets(fs, system):
R1, C1, omega = system.R1, system.C1, system.omega
omegas = 2 * np.pi * fs
rco = R1 * C1 * omegas
phi = np.arctan(-rco).to(UNITS.degree)
return SweepSeries(phi, magnitude(fs))
Explanation: Test your function:
End of explanation
phi = phase_offsets(fs, system)
Explanation: Test your function:
End of explanation
plot(A, ':', color='gray')
plot_ratios(ratios, system)
plot(phi, ':', color='gray')
plot_offsets(offsets, system)
Explanation: Plot the theoretical results along with the simulation results and see if they agree.
End of explanation |
5,461 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ReGraph tutorial
Step1: I. Simple graph rewriting
1. Initialization of a graph
ReGraph works with NetworkX graph objects, both undirected graphs (nx.Graph) and directed ones (nx.DiGraph). The workflow of the graph initialization in NetworkX can be found here.
Step2: ReGraph provides some utils for graph plotting that are going to be used in the course of this tutorial.
Step3: 2. Initialization of a rule
Graph rewriting is implemented as an application of a graph rewriting rule to a given input graph object $G$. A graph rewriting rule $R$ is a span $LHS \leftarrow P \rightarrow RHS$, where $LHS$ is a graph that represents a left hand side of the rule -- a pattern that is going to be matched inside of the graph, $P$ is a graph that represents a preserved part of the rule -- together with a homomorphism $LHS \leftarrow P$ it specifies nodes and edges that are going to be preserved in the course of application of the rule. $RHS$ and a homomorphism $P \rightarrow RHS$ on the other hand specify nodes and edges that are going to be added. In addition, if two nodes $n^P_1, n^P_2$ of $P$ map to the same node $n^{LHS}$ in $LHS$, $n^{LHS}$ is going to be cloned during graph rewriting. Symmetrically, if two nodes of $n^P_1$ and $n^P_2$ in $P$ match to the same node $n^{RHS}$ in $RHS$, $n^P_1$ and $n^P_2$ are merged.
$LHS$, $P$ and $RHS$ can be defined as NetworkX graphs
Step4: A rule of graph rewriting is implemented in the class regraph.library.rules.Rule. An instance of regraph.library.rules.Rule is initialized with NetworkX graphs $LHS$, $P$, $RHS$, and two dictionaries specifying $LHS \leftarrow P$ and $P \rightarrow RHS$.
For visualization of a rule regraph.library.plotting.plot_rule util is implemented in ReGraph.
Step5: 1. Rewriting
1.1. Matching of LHS
The matchings of $LHS$ in $G$ ($LHS \rightarrowtail G$) can be found using regraph.library.primitives.find_matching function. This function returns a list of dictionaries representing the matchings. If no matchings were found the list is empty.
Visualization of the matching in $G$ is implemented in the regraph.library.plotting.plot_instance util.
Step6: 1.2. Rewriting
Graph rewriting can be performed with the regraph.library.primitives.rewrite function. It takes as an input a graph, an instance of the matching (dictionary that specifies the mapping from the nodes of $LHS$ to the nodes of $G$), a rewriting rule (an instance of the regraph.library.rules.Rule class), and a parameter inplace (by default set to True). If inplace is True rewriting will be performed directly in the provided graph object and the function will return a dictionary corresponding to the $RHS$ matching in the rewritten graph ($RHS \rightarrowtail G'$), otherwise the rewriting function will return a new graph object corresponding to the result of rewriting and the $RHS$ matching.
Another possibility to perform graph rewriting is implemented in the apply_to method of a regraph.library.Rule class. It takes as an input a graph and an instance of the matching. It applies a corresponding (to self) rewriting rule and returns a new graph (the result of graph rewriting).
Step7: ReGraph also provides a primitive for testing equality of two graphs in regraph.library.primitives.equal. In our previous example we can see that a graph obtained by application of a rule new_graph (through the Rule interface) and an initial graph object graph after in-place rewriting are equal.
II. Hierarchy of graphs & rewriting
ReGraph allows to create a hierarchy of graphs connected together by means of typing homomorphisms. In the context of hierarchy if there exists a homomorphism $G \rightarrow T$ we say that graph $G$ is typed by a graph $T$. Graph hierarchy is a DAG, where nodes are graphs and edges are typing homomorphisms between graphs.
ReGraph provides two kinds of typing for graphs
Step8: 1.2. Rewriting in the hierarchy
ReGraph implements rewriting of graphs in the hierarchy, this rewriting is more restrictive as application of a rewriting rule cannot violate any typing defined in the hierarchy. The following code illustrates the application of a rewriting rule to the graph in the hierarchy. On the first step we create a Rule object containing a rule we would like to apply.
Step9: Now, we would like to use the rule defined above in the following context
Step10: regraph.library.Hierarchy provides the method find_matching to find matchings of a pattern in a given graph in the hierarchy. The typing of $LHS$ should be provided to the find_matching method.
Step11: As a rewriting rule can implement addition and merging of some nodes, an appropriate typing of the $RHS$ allows to specify the typing for new nodes.
~~- By default, if a typing of $RHS$ is not provided, all the nodes added and merged will be not typed. Note
Step12: Now, rewriting can be performed using regraph.library.hierarchy.Hierarchy.rewrite method. It takes as an input id of the graph to rewrite, a rule, an instance of the LHS of a rule ($LHS \rightarrow G$), and a typing of $LHS$ and $RHS$.
Note
Step13: Later on if a node form $G$ is not typed in $T$, we can specify a typing for this node.
In the example we type the node 3 as a region in G.
It is also possible to remove a graph from the hierarchy using the regraph.library.hierarchy.Hierarchy.remove_graph method. It takes as an input the id of a graph to remove, and if the argument reconnect is set to True, it reconnects all the graphs typed by the graph being removed to the graphs typing it.
In our example if we remove graph G from the hierarchy, G_prime is now directly typed by T.
Step14: 2. Example
Step15: Some of the graphs in the hierarchy are now typed by multiple graphs, which is reflected in the types of nodes, as in the example below
Step16: Notice that as G3 is paritally typed by both G1 and G2, not all the nodes have types in both G1 and G2. For example, node some_circle_node is typed only by some_circle in G1, but is not typed by any node in G2.
2.2. Rules as nodes of a hierarchy
Having constructed a sophisticated rewriting rule typed by some nodes in the hierarchy one may want to store this rule and to be able to propagate any changes that happen in the hierarchy to the rule as well.
ReGraph's regraph.library.hierarchy.Hierarchy allows to add graph rewriting rules as nodes in the hierarchy. Rules in the hierarchy can be (partially) typed by graphs.
Note
Step17: 2.3. Rewriting and propagation
We now show how graph rewriting can be performed in such a hierarchy. In the previous example we performed graph rewriting on the top level of the hierarchy, meaning that the graph that was rewritten did not type any other graph.
The following example illustrates what happens if we rewrite a graph typing some other graphs. The ReGraph hierarchy is able to propagate the changes made by rewriting on any level to all the graphs (as well as the rules) typed by the one subject to rewriting.
Step18: 2.4 Rewriting with the rules in the hierarchy
ReGraph provides utils that allow to apply rules stored in the hierarchy to the graph nodes of the hierarchy.
In the following example the rule r1 is being applied for rewriting of the graph g3.
Step19: 2.5 Export/load hierarchy
ReGraph provides the following methods for loading and exporting your hierarchy
Step20: 3. Example
Step21: 3.1. Strong typing of a rule
Main idea of strong typing is that the typing of LHS and RHS can be inferred from the matching and autocompleted respectively. It does not allow deletion of types as every node preserved throughout the rewriting will keep its original type.
Step22: ~~#### 3.3. Weak typing of a rule~~
~~If rewriting parameter strong_typing is set to False, the weak typing of a rule is applied. All the types of the nodes in the RHS of the rule which do not have explicitly specified types will be removed.~~
4. Merging with a hierarchy
4.1. Example
Step23: 4.2. Example | Python Code:
import copy
import networkx as nx
from regraph.hierarchy import Hierarchy
from regraph.rules import Rule
from regraph.plotting import plot_graph, plot_instance, plot_rule
from regraph.primitives import find_matching, print_graph, equal, add_nodes_from, add_edges_from
from regraph.utils import keys_by_value
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: ReGraph tutorial: from simple graph rewriting to a graph hierarchy
This notebook consists of simple examples of usage of the ReGraph library
End of explanation
graph = nx.DiGraph()
add_nodes_from(graph,
[
('1', {'name': 'EGFR', 'state': 'p'}),
('2', {'name': 'BND'}),
('3', {'name': 'Grb2', 'aa': 'S', 'loc': 90}),
('4', {'name': 'SH2'}),
('5', {'name': 'EGFR'}),
('6', {'name': 'BND'}),
('7', {'name': 'Grb2'}),
('8', {'name': 'WAF1'}),
('9', {'name': 'BND'}),
('10', {'name': 'G1-S/CDK', 'state': 'p'}),
])
edges = [
('1', '2', {'s': 'p'}),
('4', '2', {'s': 'u'}),
('4', '3'),
('5', '6', {'s': 'p'}),
('7', '6', {'s': 'u'}),
('8', '9'),
('9', '8'),
('10', '8', {"a": {1}}),
('10', '9', {"a": {2}}),
('5', '2', {'s': 'u'})
]
add_edges_from(graph, edges)
type(graph.node['1']['name'])
Explanation: I. Simple graph rewriting
1. Initialization of a graph
ReGraph works with NetworkX graph objects, both undirected graphs (nx.Graph) and directed ones (nx.DiGraph). The workflow of the graph initialization in NetworkX can be found here.
End of explanation
positioning = plot_graph(graph)
Explanation: ReGraph provides some utils for graph plotting that are going to be used in the course of this tutorial.
End of explanation
pattern = nx.DiGraph()
add_nodes_from(
pattern,
[(1, {'state': 'p'}),
(2, {'name': 'BND'}),
3,
4]
)
add_edges_from(
pattern,
[(1, 2, {'s': 'p'}),
(3, 2, {'s': 'u'}),
(3, 4)]
)
p = nx.DiGraph()
add_nodes_from(p,
[(1, {'state': 'p'}),
'1_clone',
(2, {'name': 'BND'}),
3,
4
])
add_edges_from(
p,
[(1, 2),
('1_clone', 2),
(3, 4)
])
rhs = nx.DiGraph()
add_nodes_from(
rhs,
[(1, {'state': 'p'}),
'1_clone',
(2, {'name': 'BND'}),
3,
4,
5
])
add_edges_from(
rhs,
[(1, 2, {'s': 'u'}),
('1_clone', 2),
(2, 4),
(3, 4),
(5, 3)
])
p_lhs = {1: 1, '1_clone': 1, 2: 2, 3: 3, 4: 4}
p_rhs = {1: 1, '1_clone': '1_clone', 2: 2, 3: 3, 4: 4}
Explanation: 2. Initialization of a rule
Graph rewriting is implemented as an application of a graph rewriting rule to a given input graph object $G$. A graph rewriting rule $R$ is a span $LHS \leftarrow P \rightarrow RHS$, where $LHS$ is a graph that represents a left hand side of the rule -- a pattern that is going to be matched inside of the graph, $P$ is a graph that represents a preserved part of the rule -- together with a homomorphism $LHS \leftarrow P$ it specifies nodes and edges that are going to be preserved in the course of application of the rule. $RHS$ and a homomorphism $P \rightarrow RHS$ on the other hand specify nodes and edges that are going to be added. In addition, if two nodes $n^P_1, n^P_2$ of $P$ map to the same node $n^{LHS}$ in $LHS$, $n^{LHS}$ is going to be cloned during graph rewriting. Symmetrically, if two nodes of $n^P_1$ and $n^P_2$ in $P$ match to the same node $n^{RHS}$ in $RHS$, $n^P_1$ and $n^P_2$ are merged.
$LHS$, $P$ and $RHS$ can be defined as NetworkX graphs
End of explanation
rule = Rule(p, pattern, rhs, p_lhs, p_rhs)
plot_rule(rule)
Explanation: A rule of graph rewriting is implemented in the class regraph.library.rules.Rule. An instance of regraph.library.rules.Rule is initialized with NetworkX graphs $LHS$, $P$, $RHS$, and two dictionaries specifying $LHS \leftarrow P$ and $P \rightarrow RHS$.
For visualization of a rule regraph.library.plotting.plot_rule util is implemented in ReGraph.
End of explanation
instances = find_matching(graph, rule.lhs)
print("Instances:")
for instance in instances:
print(instance)
plot_instance(graph, rule.lhs, instance, parent_pos=positioning) #filename=("instance_example_%d.png" % i))
Explanation: 1. Rewriting
1.1. Matching of LHS
The matchings of $LHS$ in $G$ ($LHS \rightarrowtail G$) can be found using regraph.library.primitives.find_matching function. This function returns a list of dictionaries representing the matchings. If no matchings were found the list is empty.
Visualization of the matching in $G$ is implemented in the regraph.library.plotting.plot_instance util.
End of explanation
# Rewriting without modification of the initial object
graph_backup = copy.deepcopy(graph)
new_graph_1, rhs_graph = rule.apply_to(graph, instances[0], inplace=False)
# print(equal(new_graph_1, new_graph_2))
print(new_graph_1.edge['1']['2'])
assert(equal(graph_backup, graph))
print("Matching of RHS:", rhs_graph)
plot_instance(graph, rule.lhs, instances[0], parent_pos=positioning)
new_pos = plot_instance(new_graph_1, rule.rhs, rhs_graph, parent_pos=positioning)
Explanation: 1.2. Rewriting
Graph rewriting can be performed with the regraph.library.primitives.rewrite function. It takes as an input a graph, an instance of the matching (dictionary that specifies the mapping from the nodes of $LHS$ to the nodes of $G$), a rewriting rule (an instance of the regraph.library.rules.Rule class), and a parameter inplace (by default set to True). If inplace is True rewriting will be performed directly in the provided graph object and the function will return a dictionary corresponding to the $RHS$ matching in the rewritten graph ($RHS \rightarrowtail G'$), otherwise the rewriting function will return a new graph object corresponding to the result of rewriting and the $RHS$ matching.
Another possibility to perform graph rewriting is implemented in the apply_to method of a regraph.library.Rule class. It takes as an input a graph and an instance of the matching. It applies a corresponding (to self) rewriting rule and returns a new graph (the result of graph rewriting).
End of explanation
# Define graph G
g = nx.DiGraph()
g.add_nodes_from(["protein", "binding", "region", "compound"])
g.add_edges_from([("region", "protein"), ("protein", "binding"), ("region", "binding"), ("compound", "binding")])
# Define graph T
t = nx.DiGraph()
t.add_nodes_from(["action", "agent"])
t.add_edges_from([("agent", "agent"), ("agent", "action")])
# Define graph G'
g_prime = nx.DiGraph()
g_prime.add_nodes_from(
["EGFR", "BND_1", "SH2", "Grb2"]
)
g_prime.add_edges_from([
("EGFR", "BND_1"),
("SH2", "BND_1"),
("SH2", "Grb2")
])
# Create a hierarchy
simple_hierarchy = Hierarchy()
simple_hierarchy.add_graph("G", g, {"name": "Simple protein interaction"})
simple_hierarchy.add_graph("T", t, {"name": "Agent interaction"})
simple_hierarchy.add_typing(
"G", "T",
{"protein": "agent",
"region": "agent",
"compound": "agent",
"binding": "action",
},
total=True
)
simple_hierarchy.add_graph("G_prime", g_prime, {"name": "EGFR and Grb2 binding"})
simple_hierarchy.add_typing(
"G_prime", "G",
{
"EGFR": "protein",
"BND_1": "binding",
"SH2": "region",
"Grb2": "protein"
},
total=True
)
print(simple_hierarchy)
plot_graph(simple_hierarchy.node["T"].graph)
pos = plot_graph(simple_hierarchy.node["G"].graph)
plot_graph(simple_hierarchy.node["G_prime"].graph, parent_pos=pos)
Explanation: ReGraph also provides a primitive for testing equality of two graphs in regraph.library.primitives.equal. In our previous example we can see that a graph obtained by application of a rule new_graph (through the Rule interface) and an initial graph object graph after in-place rewriting are equal.
II. Hierarchy of graphs & rewriting
ReGraph allows to create a hierarchy of graphs connected together by means of typing homomorphisms. In the context of hierarchy if there exists a homomorphism $G \rightarrow T$ we say that graph $G$ is typed by a graph $T$. Graph hierarchy is a DAG, where nodes are graphs and edges are typing homomorphisms between graphs.
ReGraph provides two kinds of typing for graphs: partial typing and total typing.
- Total typing ($G \rightarrow T)$ is a homomorphism which maps every node of $G$ to some node in $T$ (a type);
- Partial typing ($G \rightharpoonup T$) is a slight generalisation of total typing, which allows only a subset of nodes from $G$ to be typed by nodes in $T$ (to have types in $T$), whereas the rest of the nodes which do not have a mapping to $T$ are considered as nodes which do not have type in $T$.
Note: Use total typing if you would like to make sure that the nodes of your graphs are always strictly typed by some metamodel.
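To make the distinction concrete, here is a minimal sketch (two throw-away graphs with hypothetical names, using the same add_graph/add_typing API as the examples in this notebook): with total=False, an unmapped node simply has no type in the target graph.
```python
import networkx as nx
tiny_g = nx.DiGraph(); tiny_g.add_nodes_from(["a", "b"])
tiny_t = nx.DiGraph(); tiny_t.add_nodes_from(["A"])
h = Hierarchy()
h.add_graph("tiny_g", tiny_g)
h.add_graph("tiny_t", tiny_t)
h.add_typing("tiny_g", "tiny_t", {"a": "A"}, total=False)  # node "b" stays untyped
```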
1. Example: simple hierarchy
1.1. Initialization of a hierarchy
Consider the following example of a simple graph hierarchy. The two graphs $G$ and $T$ are being created and added to the heirarchy. Afterwards a typing homomorphism (total) between $G$ and $T$ is added, so that every node of $G$ is typed by some node in $T$.
End of explanation
lhs = nx.DiGraph()
add_nodes_from(lhs, [1, 2])
add_edges_from(lhs, [(1, 2)])
p = nx.DiGraph()
add_nodes_from(p, [1, 2])
add_edges_from(p, [])
rhs = nx.DiGraph()
add_nodes_from(rhs, [1, 2, 3])
add_edges_from(rhs, [(3, 1), (3, 2)])
# By default if `p_lhs` and `p_rhs` are not provided
# to a rule, it tries to construct this homomorphisms
# automatically by matching the names. In this case we
# have defined lhs, p and rhs in such a way that that
# the names of the matching nodes correspond
rule = Rule(p, lhs, rhs)
plot_rule(rule)
Explanation: 1.2. Rewriting in the hierarchy
ReGraph implements rewriting of graphs in the hierarchy, this rewriting is more restrictive as application of a rewriting rule cannot violate any typing defined in the hierarchy. The following code illustrates the application of a rewriting rule to the graph in the hierarchy. On the first step we create a Rule object containing a rule we would like to apply.
End of explanation
lhs_typing = {
"G": {
1: "protein",
2: "binding"
}
}
Explanation: Now, we would like to use the rule defined above in the following context: in the graph G_prime we want to find "protien" nodes connected to "binding" nodes and to delete the edge connecting them, after that we would like to add a new intermediary node and connect it to the previous "protein" and "binding".
We can provide this context by specifying a typing of the $LHS$ of the rule, which would indicated that node 1 is a "protein", and node 2 is a "binding". Now the hierarchy will search for a matching of $LHS$ respecting the types of the nodes.
End of explanation
# Find matching of lhs without lhs_typing
instances_untyped = simple_hierarchy.find_matching("G_prime", lhs)
pos = plot_graph(simple_hierarchy.node["G_prime"].graph)
print("Instances found without pattern typing:")
for instance in instances_untyped:
print(instance)
plot_instance(simple_hierarchy.node["G_prime"].graph, lhs, instance, parent_pos=pos)
# Find matching of lhs with lhs_typing
instances = simple_hierarchy.find_matching("G_prime", lhs, lhs_typing)
print("\n\nInstances found with pattern typing:")
for instance in instances:
print(instance)
plot_instance(simple_hierarchy.node["G_prime"].graph, lhs, instance, parent_pos=pos)
Explanation: regraph.library.Hierarchy provides the method find_matching to find matchings of a pattern in a given graph in the hierarchy. The typing of $LHS$ should be provided to the find_matching method.
End of explanation
print("Node types in `G_prime` before rewriting: \n")
for node in simple_hierarchy.node["G_prime"].graph.nodes():
print(node, simple_hierarchy.node_type("G_prime", node))
rhs_typing = {
"G": {
3: "region"
}
}
new_hierarchy, _ = simple_hierarchy.rewrite("G_prime", rule, instances[0], lhs_typing, rhs_typing, inplace=False)
plot_graph(new_hierarchy.node["G_prime"].graph)
plot_graph(new_hierarchy.node["G"].graph)
plot_graph(new_hierarchy.node["T"].graph)
print("Node types in `G_prime` before rewriting: \n")
for node in new_hierarchy.node["G_prime"].graph.nodes():
print(node, new_hierarchy.node_type("G_prime", node))
Explanation: As a rewriting rule can implement addition and merging of some nodes, an appropriate typing of the $RHS$ allows to specify the typing for new nodes.
~~- By default, if a typing of $RHS$ is not provided, all the nodes added and merged will be not typed. Note: If a graph $G$ was totally typed by some graph $T$, and a rewriting rule which transforms $G$ into $G'$ has added/merged some nodes for which there is no typing in $T$ specified, $G'$ will become only partially typed by $T$ and ReGraph will raise a warning.~~
If a typing of a new node is specified in the $RHS$ typing, the node will have this type as long as it is consistent (the homomorphism $G' \rightarrow T$ is valid) with $T$.
If a typing of a merged node is specified in the $RHS$ typing, the node will have this type as long as (a) all the nodes that were merged had this type and (b) the new typing is a consistent homomorphism ($G' \rightarrow T$ is valid).
For our example, we will not specify the type of the new node 3, so that G_prime after rewriting will become only partially typed by G.
End of explanation
newer_hierarchy, _ = simple_hierarchy.rewrite("G_prime", rule, instances[0], lhs_typing, inplace=False, strict=False)
print("Node types in `G_prime` after rewriting: \n")
for node in newer_hierarchy.node["G_prime"].graph.nodes():
print(node, newer_hierarchy.node_type("G_prime", node))
plot_graph(newer_hierarchy.node["G_prime"].graph)
plot_graph(newer_hierarchy.node["G"].graph)
plot_graph(newer_hierarchy.node["T"].graph)
print("Node types in `G_prime` after rewriting: \n")
for node in new_hierarchy.node["G_prime"].graph.nodes():
print(node, new_hierarchy.node_type("G_prime", node))
Explanation: Now, rewriting can be performed using regraph.library.hierarchy.Hierarchy.rewrite method. It takes as an input id of the graph to rewrite, a rule, an instance of the LHS of a rule ($LHS \rightarrow G$), and a typing of $LHS$ and $RHS$.
Note: In case the graph to be rewritten is not typed by any other graph in the hierarchy, the $LHS$ and $RHS$ typings are not required.
End of explanation
simple_hierarchy.remove_graph("G", reconnect=True)
print(simple_hierarchy)
print("New node types in 'G_prime':\n")
for node in simple_hierarchy.node["G_prime"].graph.nodes():
print(node, ": ", simple_hierarchy.node_type("G_prime", node))
Explanation: Later on if a node form $G$ is not typed in $T$, we can specify a typing for this node.
In the example we type the node 3 as a region in G.
It is also possible to remove a graph from the hierarchy using the regraph.library.hierarchy.Hierarchy.remove_graph method. It takes as an input the id of a graph to remove, and if the argument reconnect is set to True, it reconnects all the graphs typed by the graph being removed to the graphs typing it.
In our example if we remove graph G from the hierarchy, G_prime is now directly typed by T.
End of explanation
hierarchy = Hierarchy()
colors = nx.DiGraph()
colors.add_nodes_from([
"green", "red"
])
colors.add_edges_from([
("red", "green"),
("red", "red"),
("green", "green")
])
hierarchy.add_graph("colors", colors, {"id": "https://some_url"})
shapes = nx.DiGraph()
shapes.add_nodes_from(["circle", "square"])
shapes.add_edges_from([
("circle", "square"),
("square", "circle"),
("circle", "circle")
])
hierarchy.add_graph("shapes", shapes)
quality = nx.DiGraph()
quality.add_nodes_from(["good", "bad"])
quality.add_edges_from([
("bad", "bad"),
("bad", "good"),
("good", "good")
])
hierarchy.add_graph("quality", quality)
g1 = nx.DiGraph()
g1.add_nodes_from([
"red_circle",
"red_square",
"some_circle",
])
g1.add_edges_from([
("red_circle", "red_square"),
("red_circle", "red_circle"),
("red_square", "red_circle"),
("some_circle", "red_circle")
])
g1_colors = {
"red_circle": "red",
"red_square": "red",
}
g1_shapes = {
"red_circle": "circle",
"red_square": "square",
"some_circle": "circle"
}
hierarchy.add_graph("g1", g1)
hierarchy.add_typing("g1", "colors", g1_colors, total=False)
hierarchy.add_typing("g1", "shapes", g1_shapes, total=False)
g2 = nx.DiGraph()
g2.add_nodes_from([
"good_circle",
"good_square",
"bad_circle",
"good_guy",
"some_node"
])
g2.add_edges_from([
("good_circle", "good_square"),
("good_square", "good_circle"),
("bad_circle", "good_circle"),
("bad_circle", "bad_circle"),
("some_node", "good_circle"),
("good_guy", "good_square")
])
g2_shapes = {
"good_circle": "circle",
"good_square": "square",
"bad_circle": "circle"
}
g2_quality = {
"good_circle": "good",
"good_square": "good",
"bad_circle": "bad",
"good_guy": "good"
}
hierarchy.add_graph("g2", g2)
hierarchy.add_typing("g2", "shapes", g2_shapes)
hierarchy.add_typing("g2", "quality", g2_quality)
g3 = nx.DiGraph()
g3.add_nodes_from([
"good_red_circle",
"bad_red_circle",
"good_red_square",
"some_circle_node",
"some_strange_node"
])
g3.add_edges_from([
("bad_red_circle", "good_red_circle"),
("good_red_square", "good_red_circle"),
("good_red_circle", "good_red_square")
])
g3_g1 = {
"good_red_circle": "red_circle",
"bad_red_circle": "red_circle",
"good_red_square": "red_square"
}
g3_g2 = {
"good_red_circle": "good_circle",
"bad_red_circle": "bad_circle",
"good_red_square": "good_square",
}
hierarchy.add_graph("g3", g3)
hierarchy.add_typing("g3", "g1", g3_g1)
hierarchy.add_typing("g3", "g2", g3_g2)
lhs = nx.DiGraph()
lhs.add_nodes_from([1, 2])
lhs.add_edges_from([(1, 2)])
p = nx.DiGraph()
p.add_nodes_from([1, 11, 2])
p.add_edges_from([(1, 2)])
rhs = copy.deepcopy(p)
rhs.add_nodes_from([3])
p_lhs = {1: 1, 11: 1, 2: 2}
p_rhs = {1: 1, 11: 11, 2: 2}
r1 = Rule(p, lhs, rhs, p_lhs, p_rhs)
hierarchy.add_rule("r1", r1, {"desc": "Rule 1: typed by two graphs"})
lhs_typing1 = {1: "red_circle", 2: "red_square"}
rhs_typing1 = {3: "red_circle"}
# rhs_typing1 = {1: "red_circle", 11: "red_circle", 2: "red_square"}
lhs_typing2 = {1: "good_circle", 2: "good_square"}
rhs_typing2 = {3: "bad_circle"}
# rhs_typing2 = {1: "good_circle", 11: "good_circle", 2: "good_square"}
hierarchy.add_rule_typing("r1", "g1", lhs_typing1, rhs_typing1)
hierarchy.add_rule_typing("r1", "g2", lhs_typing2, rhs_typing2)
Explanation: 2. Example: advanced hierarchy
The following example illustrates more sophisticaled hierarchy example.
2.1. DAG hierarchy
End of explanation
print("Node types in G3:\n")
for node in hierarchy.node["g3"].graph.nodes():
print(node, hierarchy.node_type("g3", node))
hierarchy.add_node_type("g3", "some_circle_node", {"g1": "red_circle", "g2": "good_circle"})
hierarchy.add_node_type("g3", "some_strange_node", {"g2": "some_node"})
print("Node types in G3:\n")
for node in hierarchy.node["g3"].graph.nodes():
print(node, hierarchy.node_type("g3", node))
Explanation: Some of the graphs in the hierarchy are now typed by multiple graphs, which is reflected in the types of nodes, as in the example below:
End of explanation
print(hierarchy)
print(hierarchy.edge["r1"]["g1"].lhs_mapping)
print(hierarchy.edge["r1"]["g1"].rhs_mapping)
print(hierarchy.edge["r1"]["g2"].lhs_mapping)
print(hierarchy.edge["r1"]["g2"].rhs_mapping)
Explanation: Notice that as G3 is partially typed by both G1 and G2, not all the nodes have types in both G1 and G2. For example, node some_circle_node is typed only by some_circle in G1, but is not typed by any node in G2.
2.2. Rules as nodes of a hierarchy
Having constructed a sophisticated rewriting rule typed by some nodes in the hierarchy one may want to store this rule and to be able to propagate any changes that happen in the hierarchy to the rule as well.
ReGraph's regraph.library.hierarchy.Hierarchy allows to add graph rewriting rules as nodes in the hierarchy. Rules in the hierarchy can be (partially) typed by graphs.
Note: nothing can be typed by a rule in the hierarchy.
In the example below, a rule is added to the previously constructed hierarchy and typed by graphs g1 and g2:
End of explanation
lhs = nx.DiGraph()
lhs.add_nodes_from(["a", "b"])
lhs.add_edges_from([
("a", "b"),
("b", "a")
])
p = nx.DiGraph()
p.add_nodes_from(["a", "a1", "b"])
p.add_edges_from([
("a", "b"),
("a1", "b")
])
rhs = copy.deepcopy(p)
rule = Rule(
p, lhs, rhs,
{"a": "a", "a1": "a", "b": "b"},
{"a": "a", "a1": "a1", "b": "b"},
)
instances = hierarchy.find_matching("shapes", lhs)
print("Instances:")
for instance in instances:
print(instance)
plot_instance(hierarchy.node["shapes"].graph, rule.lhs, instance)
_, m = hierarchy.rewrite("shapes", rule, {"a": "circle", "b": "square"})
print(hierarchy)
sep = "========================================\n\n"
print("Graph 'shapes':\n")
print("===============")
print_graph(hierarchy.node["shapes"].graph)
print(sep)
print("Graph 'g1':\n")
print("===========")
print_graph(hierarchy.node["g1"].graph)
print(sep)
print("Graph 'g2':\n")
print("===========")
print_graph(hierarchy.node["g2"].graph)
print(sep)
print("Graph 'g3':\n")
print("===========")
print_graph(hierarchy.node["g3"].graph)
print(sep)
print("Rule 'r1':\n")
print("===========")
print("\nLHS:")
print_graph(hierarchy.node["r1"].rule.lhs)
print("\nP:")
print_graph(hierarchy.node["r1"].rule.p)
print("\nRHS:")
print_graph(hierarchy.node["r1"].rule.rhs)
Explanation: 2.3. Rewriting and propagation
We now show how graph rewriting can be performed in such a hierarchy. In the previous example we performed graph rewriting on the top level of the hierarchy, meaning that the graph that was rewritten did not type any other graph.
The following example illustrates what happens if we rewrite a graph typing some other graphs. The ReGraph hierarchy is able to propagate the changes made by rewriting on any level to all the graphs (as well as the rules) typed by the one subject to rewriting.
End of explanation
print(hierarchy.rule_lhs_typing["r1"]["g1"])
print(hierarchy.rule_rhs_typing["r1"]["g1"])
print(hierarchy.typing["g3"]["g1"])
instances = hierarchy.find_rule_matching("g3", "r1")
hierarchy.apply_rule(
"g3",
"r1",
instances[0]
)
print_graph(hierarchy.node["g3"].graph)
Explanation: 2.4 Rewriting with the rules in the hierarchy
ReGraph provides utils that allow to apply rules stored in the hierarchy to the graph nodes of the hierarchy.
In the following example the rule r1 is being applied for rewriting of the graph g3.
End of explanation
hierarchy_json = hierarchy.to_json()
new_hierarchy = Hierarchy.from_json(hierarchy_json, directed=True)
new_hierarchy == hierarchy
Explanation: 2.5 Export/load hierarchy
ReGraph provides the following methods for loading and exporting your hierarchy:
regraph.library.hierarchy.Hierarchy.to_json creates a JSON representation of the hierarchy;
regraph.library.hierarchy.Hierarchy.from_json loads a hierarchy from a JSON representation (returns a new Hierarchy object);
regraph.library.hierarchy.Hierarchy.export exports the hierarchy to a file (JSON format);
regraph.library.hierarchy.Hierarchy.load loads a hierarchy from a .json file (also returns a new object); a file round-trip is sketched below.
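Based on the descriptions above, a file round-trip looks like the following sketch (the exact signatures, i.e. a plain filename argument, are an assumption here rather than something taken from the library reference):
```python
# assumption: export/load take a path to a .json file, as described above
hierarchy.export("hierarchy.json")
restored = Hierarchy.load("hierarchy.json")
assert restored == hierarchy
```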
End of explanation
base = nx.DiGraph()
base.add_nodes_from([
("circle", {"a": {1, 2}, "b": {3, 4}}),
("square", {"a": {3, 4}, "b": {1, 2}})
])
base.add_edges_from([
("circle", "circle", {"c": {1, 2}}),
("circle", "square", {"d": {1, 2}}),
])
little_hierarchy = Hierarchy()
little_hierarchy.add_graph("base", base)
graph = nx.DiGraph()
graph.add_nodes_from([
("c1", {"a": {1}}),
("c2", {"a": {2}}),
"s1",
"s2",
("n1", {"x":{1}})
])
graph.add_edges_from([
("c1", "c2", {"c": {1}}),
("c2", "s1"),
("s2", "n1", {"y": {1}})
])
little_hierarchy.add_graph("graph", graph)
little_hierarchy.add_typing(
"graph", "base",
{
"c1": "circle",
"c2": "circle",
"s1": "square",
"s2": "square"
}
)
Explanation: 3. Example: advanced rule and rewriting
By default rewriting requires all the nodes in the result of the rewriting to be totally typed by all the graphs typing the graph subject to rewriting. If parameter total in the rewriting is set to False, rewriting is allowed to produce untyped nodes.
In addition, rewriting is available in these possible configurations:
Strong typing of a rule (default) autocompletes the types of the nodes in a rule with the respective types of the matching.
~~2. Weak typing of a rule: (parameter strong=False) only checks the consistency of the types given explicitly by a rule, and allows to remove node types. If typing of a node in RHS does not contain explicit typing by some typing graph -- this node will be not typed by this graph in the result.~~
~~Note: Weak typing should be used with parameter total set to False, otherwise deletion of node types will be not possible.~~
Examples below illustrate some interesting use-cases of rewriting with different rule examples.
End of explanation
# In this rule we match any pair of nodes and try to add an edge between them
# the rewriting will fail every time the edge is not allowed between two nodes
# by its typing graphs
# define a rule
lhs = nx.DiGraph()
lhs.add_nodes_from([1, 2])
p = copy.deepcopy(lhs)
rhs = copy.deepcopy(lhs)
rhs.add_edges_from([(1, 2)])
rule = Rule(p, lhs, rhs)
instances = little_hierarchy.find_matching(
"graph",
rule.lhs
)
current_hierarchy = copy.deepcopy(little_hierarchy)
for instance in instances:
try:
current_hierarchy.rewrite(
"graph",
rule,
instance
)
print("Instance rewritten: ", instance)
print()
except Exception as e:
print("\nFailed to rewrite an instance: ", instance)
print("Addition of an edge was not allowed, error message received:")
print("Exception type: ", type(e))
print("Message: ", e)
print()
print_graph(current_hierarchy.node["graph"].graph)
print("\n\nTypes of nodes after rewriting:")
for node in current_hierarchy.node["graph"].graph.nodes():
print(node, current_hierarchy.node_type("graph", node))
lhs = nx.DiGraph()
lhs.add_nodes_from([1, 2])
p = copy.deepcopy(lhs)
rhs = nx.DiGraph()
rhs.add_nodes_from([1])
rule = Rule(p, lhs, rhs, p_rhs={1: 1, 2: 1})
instances = little_hierarchy.find_matching(
"graph",
rule.lhs
)
for instance in instances:
try:
current_hierarchy, _ = little_hierarchy.rewrite(
"graph",
rule,
instance,
inplace=False
)
print("Instance rewritten: ", instance)
print_graph(current_hierarchy.node["graph"].graph)
print("\n\nTypes of nodes after rewriting:")
for node in current_hierarchy.node["graph"].graph.nodes():
print(node, current_hierarchy.node_type("graph", node))
print()
except Exception as e:
print("\nFailed to rewrite an instance: ", instance)
print("Merge was not allowed, error message received:")
print("Exception type: ", type(e))
print("Message: ", e)
print()
Explanation: 3.1. Strong typing of a rule
Main idea of strong typing is that the typing of LHS and RHS can be inferred from the matching and autocompleted respectively. It does not allow deletion of types as every node preserved throughout the rewriting will keep its original type.
End of explanation
g1 = nx.DiGraph()
g1.add_node(1)
g2 = copy.deepcopy(g1)
g3 = copy.deepcopy(g1)
g4 = copy.deepcopy(g1)
hierarchy = Hierarchy()
hierarchy.add_graph(1, g1, graph_attrs={"name": {"Main hierarchy"}})
hierarchy.add_graph(2, g2, graph_attrs={"name": {"Base hierarchy"}})
hierarchy.add_graph(3, g3)
hierarchy.add_graph(4, g4)
hierarchy.add_typing(1, 2, {1: 1})
hierarchy.add_typing(1, 4, {1: 1})
hierarchy.add_typing(2, 3, {1: 1})
hierarchy.add_typing(4, 3, {1: 1})
hierarchy1 = copy.deepcopy(hierarchy)
hierarchy2 = copy.deepcopy(hierarchy)
hierarchy3 = copy.deepcopy(hierarchy)
h1 = nx.DiGraph()
h1.add_node(2)
h2 = copy.deepcopy(h1)
h3 = copy.deepcopy(h1)
h4 = copy.deepcopy(h1)
other_hierarchy = Hierarchy()
other_hierarchy.add_graph(1, h1, graph_attrs={"name": {"Main hierarchy"}})
other_hierarchy.add_graph(2, h2, graph_attrs={"name": {"Base hierarchy"}})
other_hierarchy.add_graph(3, h3)
other_hierarchy.add_graph(4, h4)
other_hierarchy.add_typing(1, 2, {2: 2})
other_hierarchy.add_typing(1, 4, {2: 2})
other_hierarchy.add_typing(2, 3, {2: 2})
other_hierarchy.add_typing(4, 3, {2: 2})
hierarchy1.merge_by_id(other_hierarchy)
print(hierarchy1)
Explanation: ~~#### 3.3. Weak typing of a rule~~
~~If rewriting parameter strong_typing is set to False, the weak typing of a rule is applied. All the types of the nodes in the RHS of the rule which do not have explicitly specified types will be removed.~~
4. Merging with a hierarchy
4.1. Example: merging disjoint hierarchies (merge by ids)
End of explanation
# Now we make node 1 in the hierarchies to be the same graph
hierarchy2.node[1].graph.add_node(2)
other_hierarchy.node[1].graph.add_node(1)
hierarchy2.merge_by_id(other_hierarchy)
print(hierarchy2)
# Now make a hierarchies to have two common nodes with an edge between them
hierarchy3.node[1].graph.add_node(2)
other_hierarchy.node[1].graph.add_node(1)
hierarchy3.node[2].graph.add_node(2)
other_hierarchy.node[2].graph.add_node(1)
hierarchy4 = copy.deepcopy(hierarchy3)
hierarchy3.merge_by_id(other_hierarchy)
print(hierarchy3)
print(hierarchy3.edge[1][2].mapping)
hierarchy4.merge_by_attr(other_hierarchy, "name")
print(hierarchy4)
print(hierarchy4.edge['1_1']['2_2'].mapping)
Explanation: 4.2. Example: merging hierarchies with common nodes
End of explanation |
5,462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figure 1. Sketch of a cell (top left) with the horizontal (red) and vertical (green) velocity nodes and the cell-centered node (blue). Definition of the normal vector to "surface" (segment) $S_{i+\frac{1}{2},j}$ and $S_{i,j+\frac{1}{2}}$ (top right). Sketch of uniform grid (bottom).
<h1>Derivation of 1D Transport Equation</h1>
<h2>1D Transport Without Diffusion</h2>
Consider a small control surface (cell) of dimensions $\Delta x\times\Delta y$ within which, we know the velocities on the surfaces $u_{i\pm\frac{1}{2},j}$ and $v_{i,j\pm\frac{1}{2}}$ and a quantity $\phi_{i,j}$ at the center of the cell. This quantity may be temperature, or the concentration of chemical specie. The variation in time of $\phi$ within the cell is equal to the amount of $\phi$ that is flowing in and out of the cell through the boundaries of cell. The velocity vector is defined as
$$
\vec{u}=u\vec{e}_x+v\vec{e}_y
$$
The fluxes of $\phi$ across the right-hand-side and left-hand-side vertical boundaries are, respectively
Step1: The first two lines deal with the ability to show your graphs (generated via matplotlib) within this notebook, the remaining two lines import matplotlib's sublibrary pyplot as <FONT FACE="courier" style="color
Step2: <h3 style="color
Step3: A slower but easier to understand version of this function is shown below. The tag slow is explained shortly after.
Step4: <h3>Step 3
Step5: The choice for the interpolation is obvious
Step6: <h3>Step 4
Step7: Although the plot suggests that the interpolation works, a visual proof can be deceptive. It is best to calculate the error between the exact and interpolated solution. Here we use an $l^2$-norm
Step8: For reasons that will become clearer later, we want to consider other interpolation schemes
Step9: <h3 style="color
Step10: <h3>Step 5
Step11: The discretization of the time derivative is crude. A better discretization is the 2<sup>nd</sup>-order Runge-Kutta | Python Code:
%matplotlib inline
# plots graphs within the notebook
%config InlineBackend.figure_format='svg' # render inline figures as SVG (sharper plots in the notebook)
import matplotlib.pyplot as plt # calls the plotting library, hereafter referred to as plt
import numpy as np
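# note: the plotting cells below pass `fontdict = font` to xlabel/ylabel, but `font`
# is never defined in this notebook; a reasonable definition is added here so those
# calls work (the exact settings below are an assumption -- adjust to taste)
font = {'family': 'serif', 'color': 'black', 'weight': 'normal', 'size': 16}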
Explanation: Figure 1. Sketch of a cell (top left) with the horizontal (red) and vertical (green) velocity nodes and the cell-centered node (blue). Definition of the normal vector to "surface" (segment) $S_{i+\frac{1}{2},j}$ and $S_{i,j+\frac{1}{2}}$ (top right). Sketch of uniform grid (bottom).
<h1>Derivation of 1D Transport Equation</h1>
<h2>1D Transport Without Diffusion</h2>
Consider a small control surface (cell) of dimensions $\Delta x\times\Delta y$ within which we know the velocities on the surfaces $u_{i\pm\frac{1}{2},j}$ and $v_{i,j\pm\frac{1}{2}}$ and a quantity $\phi_{i,j}$ at the center of the cell. This quantity may be temperature, or the concentration of a chemical species. The variation in time of $\phi$ within the cell is equal to the amount of $\phi$ flowing in and out of the cell through the boundaries of the cell. The velocity vector is defined as
$$
\vec{u}=u\vec{e}_x+v\vec{e}_y
$$
The fluxes of $\phi$ across the right-hand-side and left-hand-side vertical boundaries are, respectively:
$$
\int_{S_{i+1/2,j}}\phi(\vec{u}_{i+\frac{1}{2},j}\cdot\vec{n}_{i+\frac{1}{2},j})dy\text{ and }\int_{S_{i-1/2,j}}\phi(\vec{u}_{i-\frac{1}{2},j}\cdot\vec{n}_{i-\frac{1}{2},j})dy
$$
In the configuration depicted in Figure 1, the mass or heat variation is equal to the flux of $\phi$ entering the cell minus the flux exiting the cell, or:
$$
-\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j}\Delta y + \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}\Delta y \text{, when $\Delta y\rightarrow 0$}
$$
Assuming that there is no vertical velocity ($v=0$), this sum is equal to the variation of $\phi$ within the cell,
$$
\frac{\partial}{\partial t}\iint_{V_{i,j}}\phi dxdy\approx\frac{\partial \phi_{i,j}}{\partial t}\Delta x\Delta y \text{, when $\Delta x\rightarrow 0$ and $\Delta y\rightarrow 0$}
$$
yielding
$$
\frac{\partial \phi_{i,j}}{\partial t}\Delta x\Delta y=-\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j}\Delta y + \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}\Delta y\;,
$$
reducing to
$$
\frac{\partial \phi_{i,j}}{\partial t}=-\frac{\phi_{i+\frac{1}{2},j}u_{i+\frac{1}{2},j} - \phi_{i-\frac{1}{2},j}u_{i-\frac{1}{2},j}}{\Delta x}\;.
$$
In the limit of $\Delta x\rightarrow 0$, we obtain the conservative form of the pure advection equation:
<p class='alert alert-danger'>
$$
\frac{\partial \phi}{\partial t}+\frac{\partial u\phi}{\partial x}=0
$$
</p>
<h2>1.2 Coding the Pure Advection Equation</h2>
The following takes you through the steps to solve numerically the pure advection equation with python. The boundary conditions are (all variables are non-dimensional):
<ol>
<li> Length of the domain: $0\leq x\leq L$ and $L=8\pi$ </li>
<li> Constant velocity $u_0=1$
<li> Inlet $x=0$ and outlet $x=L$: zero-flux variation (in space)</li>
<li> Initial condition:
$$\phi(x,t=0)=\begin{cases}
1+\cos\left(x-\frac{L}{2}\right)&,\text{ for }\left\vert x-\frac{L}{2}\right\vert\leq\pi\\
0&,\text{ for }\left\vert x-\frac{L}{2}\right\vert>\pi
\end{cases}
$$
</li>
</ol>
Here you will <b>discretize</b> your domain in $N$ small control volumes, such that the size of each control volume is
<p class='alert alert-danger'>
$$
\Delta x = \frac{L}{N}
$$
</p>
You will simulate the system defined so far for a time $T$, to be decided, discretized by small time-steps
<p class='alert alert-danger'>
$$
\Delta t = \frac{T}{N_t}
$$
</p>
We adopt the following index convention:
<ul>
<li> Each cell is labeled by a unique integer $i$ with $i\in[0,N-1]$. This is a python convention: vectors and matrices start at index 0, instead of 1 as in matlab.</li>
<li> A variable defined at the center of cell $i$ is noted with the subscript $i$: $\phi_i$.</li>
<li> A variable defined at the surface of cell $i$ is noted with the subscript $i\pm1/2$: $\phi_{i\pm 1/2}$</li>
<li> The solution $\phi(x_i,t_n)$, where
$$
x_i = i\Delta x\text{ with $i\in[0,N-1]$, and }t_n=n\Delta t\text{ with $n\in[0,N_t]$,}
$$</li>
is noted $\phi_i^n$.
</ul>
At first we will try to solve the advection equation with the following discretization:
$$
\frac{\phi_i^{n+1}-\phi_i^n}{\Delta t}=-\frac{\phi_{i+\frac{1}{2}}u_{i+\frac{1}{2}} - \phi_{i-\frac{1}{2}}u_{i-\frac{1}{2}}}{\Delta x}
$$
or
<p class='alert alert-info'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(\phi^n_{i+\frac{1}{2}}u_{i+\frac{1}{2}} - \phi^n_{i-\frac{1}{2}}u_{i-\frac{1}{2}}\right)
$$
</p>
The velocity $u$ is constant, therefore defined anywhere in the system (cell center or cell surfaces), however $\phi$ is defined only at the cell center, requiring an interpolation at the cell surface $i\pm 1/2$. For now you will consider a mid-point interpolation:
<p class='alert alert-info'>
$$
\phi^n_{i+\frac{1}{2}} = \frac{\phi^n_{i+1}+\phi^n_i}{2}
$$
</p>
Lastly, our governing equation can be recast with the flux of $\phi$ across the surface $u$:
<p class='alert alert-info'>
$$
F^n_{i\pm\frac{1}{2}}=\phi^n_{i\pm\frac{1}{2}}u_{i\pm\frac{1}{2}}=\frac{\phi^n_{i\pm 1}+\phi^n_i}{2}u_{i\pm\frac{1}{2}}
$$
</p>
yielding the equation you will attempt to solve:
<p class='alert alert-danger'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}\right)
$$
</p>
<h3> Step 1: Import libraries</h3>
Python has a huge collection of libraries containing functions to plot, build matrices, perform mathematical operations, etc. To avoid overloading the CPU and to allow you to choose the best library for your code, you need to first import the libraries you will need, here:
<ul>
<li> <FONT FACE="courier" style="color:blue">matplotlib </FONT>: <a href="http://matplotlib.org">http://matplotlib.org</a> for examples of plots you can make in python.</li>
<li><FONT FACE="courier" style="color:blue">numpy </FONT>: <a href="http://docs.scipy.org/doc/numpy/user/index.html">http://docs.scipy.org/doc/numpy/user/index.html</a> Library for operations on matrices and vectors.</li>
</ul>
Loading a library in python is done by the command <FONT FACE="courier" style="color:blue">import</FONT>. The best practice is to get into the habit of using
<FONT FACE="courier" style="color:blue">import [library] as [library_nickname]</FONT>
For example, the library <FONT FACE="courier" style="color:blue">numpy</FONT> contains vector and matrix operations such as <FONT FACE="courier" style="color:blue">zeros</FONT>, which allocates memory for a vector or a matrix of specified dimensions and sets all components of the vector or matrix to zero. If you import numpy as np,
<FONT FACE="courier" style="color:blue">import numpy as np</FONT>
the allocation of memory for matrix A of dimensions n and m becomes
<FONT FACE="courier" style="color:blue">A = np.zeros((n,m))</FONT>
The following is a standard initialization for the python codes you will write in this course:
End of explanation
L = 8*np.pi
N = 200
dx = L/N
u_0 = 1.
phi = np.zeros(N)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
Explanation: The first two lines deal with the ability to show your graphs (generated via matplotlib) within this notebook; the remaining two lines import matplotlib's sublibrary pyplot as <FONT FACE="courier" style="color:blue">plt</FONT> and numpy as <FONT FACE="courier" style="color:blue">np</FONT>.
<h3>Step 2: Initialization of variables and allocations of memory</h3>
The first real coding task is to define your variables, with the exception of the time-related variables (you will understand why). Note that in our equation, we can store $\phi^n$ in one variable provided that we create a flux variable $F$.
<h3 style="color:red"> Q1: Explain why.</h3>
End of explanation
def init_simulation(x_phi,N):
phi = np.zeros(N)
phi = 1.+np.cos(x_phi-L/2.)
xmask = np.where(np.abs(x_phi-L/2.) > np.pi)
phi[xmask] = 0.
return phi
phi = init_simulation(x_phi,N)
plt.plot(x_phi,phi,lw=2)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: <h3 style="color:red"> Q2: Search numpy function linspace and describe what <FONT FACE="courier">x_phi</FONT> and <FONT FACE="courier">x_u</FONT> define. Why are the dimensions different?</h3>
<h3>Step 3: Initialization</h3>
Now we define a function to initialize our variables. In python, <b>indentation matters!</b> A function is defined by the command <FONT FACE="courier">def</FONT> followed by the name of the function and the argument given to the function. The variables passed as argument in the function are local, meaning they may or may not have the same names as the variables in the core code. Any other variable used within the function needs to be defined in the function or before.
Note that python accepts implicit loops. Here <FONT FACE="courier">phi</FONT> and <FONT FACE="courier">x_phi</FONT> are two vectors of dimension $N$.
End of explanation
def init_simulation_slow(u,phi,x_phi,N):
for i in range(N):
if (np.abs(x_phi[i]-L/2.) > np.pi):
phi[i] = 0.
else:
phi[i] = 1.+np.cos(x_phi[i]-L/2.)
return phi
phi = init_simulation_slow(u,phi,x_phi,N)
plt.plot(x_phi,phi,lw=2)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: A slower but easier to understand version of this function is shown below. The tag slow is explained shortly after.
End of explanation
%%timeit
flux0 = np.zeros(N+1)
for i in range(1,N):
flux0[i] = 0.5*(phi[i-1]+phi[i])*u[i]
%%timeit
flux1 = np.zeros(N+1)
flux1[1:N] = 0.5*(phi[0:N-1]+phi[1:N])*u[1:N]
Explanation: <h3>Step 3: Code your interpolation/derivation subroutine</h3>
Before we can simulate our system, we need to write and test our spatial interpolation and derivative procedure. Below we test the speed of two approaches: the first uses a for loop, whereas the second uses python's array-slicing rules.
End of explanation
def compute_flux(a,v,N):
f=np.zeros(N+1)
f[1:N] = 0.5*(a[0:N-1]+a[1:N])*v[1:N]
f[0] = f[1]
f[N] = f[N-1]
return f
Explanation: The choice for the interpolation is obvious:
End of explanation
F_exact = np.zeros(N+1)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
plt.plot(x_u,F_exact,lw=2,label="exact")
plt.plot(x_u,F,'r--',lw=2,label="interpolated")
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.legend(loc="upper left", bbox_to_anchor=[0, 1],
ncol=1, shadow=True, fancybox=True)
plt.show()
Explanation: <h3>Step 4: Verification</h3>
The interpolation and derivation operations are critical components of the simulation that must be verified. Since the velocity is unity, $F_{i\pm1/2}=\phi_{i\pm1/2}$.
End of explanation
N = 200
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
error = np.sqrt(np.sum(np.power(F-F_exact,2)))
errorx = np.power(F-F_exact,2)
plt.plot(x_u,errorx)
plt.show()
print('error norm L 2= %1.4e' %error)
Nerror = 3
Narray = np.array([10, 100, 200])
delta = L/Narray
error = np.zeros(Nerror)
order = np.zeros(Nerror)
for ierror in range(Nerror):
N = Narray[ierror]
dx = L/N
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
error[ierror] = np.linalg.norm(F-F_exact)
#error[ierror] = np.sqrt(np.sum(np.power(F-F_exact,2)))
print('error norm L 2= %1.4e' %error[ierror])
order = 0.1*delta**(2)
plt.loglog(delta,error,lw=2,label='interpolate')
plt.loglog(delta,order,lw=2,label='$\propto\Delta x^2$')
plt.legend(loc="upper left", bbox_to_anchor=[0, 1],
ncol=1, shadow=True, fancybox=True)
plt.xlabel('$\Delta x$', fontdict = font)
plt.ylabel('$\Vert F\Vert_2$', fontdict = font)
plt.show()
Explanation: Although the plot suggests that the interpolation works, a visual proof can be deceptive. It is best to calculate the error between the exact and interpolated solution. Here we use an $l^2$-norm:
$$
\Vert F\Vert_2=\sqrt{\sum_{i=0}^{N}\left(F_i-F_i^e\right)^2}
$$
where $F_e$ is the exact solution for the flux.
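In numpy this norm can be written explicitly or with the built-in shortcut; the two forms used in the cell above are equivalent:
error = np.sqrt(np.sum(np.power(F-F_exact,2)))  # explicit form of the formula above
error = np.linalg.norm(F-F_exact)               # equivalent numpy shortcut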
End of explanation
Nscheme = 4
Scheme = np.array(['CS','US1','US2','US3'])
g_1 = np.array([1./2.,0.,0.,3./8.])
g_2 = np.array([0.,0.,1./2.,1./8.])
def compute_flux_advanced(a,v,N,num_scheme):
imask = np.where(Scheme == num_scheme)
g1 = g_1[imask]
g2 = g_2[imask]
f=np.zeros(N+1)
f[2:N] = ((1.-g1+g2)*a[1:N-1]+g1*a[2:N]-g2*a[0:N-2])*v[2:N]
if (num_scheme == 'US2') or (num_scheme == 'US3'):
f[1] = ((1.-g1)*a[0]+g1*a[1])*v[1]
f[0] = f[1]
f[N] = f[N-1]
return f
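# ListTable is used just below but never defined in this notebook; a minimal
# stand-in (an assumption, based on the classic "list rendered as an HTML table"
# IPython recipe) is provided here so the cell runs
class ListTable(list):
    def _repr_html_(self):
        html = ["<table>"]
        for row in self:
            html.append("<tr>")
            for col in row:
                html.append("<td>{0}</td>".format(col))
            html.append("</tr>")
        html.append("</table>")
        return ''.join(html)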
table = ListTable()
table.append(['Scheme', '$g_1$', '$g_2$'])
for i in range(4):
table.append([Scheme[i],g_1[i], g_2[i]])
table
Nerror = 3
Narray = np.array([10, 100, 200])
delta = L/Narray
error = np.zeros((Nerror,Nscheme))
order = np.zeros((Nerror,Nscheme))
for ischeme in range(Nscheme):
num_scheme = Scheme[ischeme]
for ierror in range(Nerror):
N = Narray[ierror]
dx = L/N
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux_advanced(phi,u,N,num_scheme)
error[ierror,ischeme] = np.linalg.norm(F-F_exact)
#print('error norm L 2= %1.4e' %error[ierror,ischeme])
for ischeme in range(Nscheme):
plt.loglog(delta,error[:,ischeme],lw=2,label=Scheme[ischeme])
order = 2.0*(delta/delta[0])
plt.loglog(delta,order,'k:',lw=2,label='$\propto\Delta x$')
order = 0.1*(delta/delta[0])**(2)
plt.loglog(delta,order,'k-',lw=2,label='$\propto\Delta x^2$')
order = 0.1*(delta/delta[0])**(3)
plt.loglog(delta,order,'k--',lw=2,label='$\propto\Delta x^3$')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$\Delta x$', fontdict = font)
plt.ylabel('$\Vert F\Vert_2$', fontdict = font)
plt.xlim(L/300,L/9.)
plt.ylim(1e-5,1e2)
plt.show()
Explanation: For reasons that will become clearer later, we want to consider other interpolation schemes:
$$
\phi_{i+\frac{1}{2}}=g_1\phi_{i+1}-g_2\phi_{i-1}+(1-g_1+g_2)\phi_i
$$
The scheme CS is the interpolation scheme we have used so far. Let us test them all; to do so, we first have to modify the interpolation function.
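For instance, reading the $(g_1,g_2)$ pairs from the table built below: CS ($g_1=\frac{1}{2}$, $g_2=0$) recovers the mid-point average $\phi^n_{i+\frac{1}{2}}=\frac{\phi^n_{i+1}+\phi^n_i}{2}$ used above, while US1 ($g_1=0$, $g_2=0$) reduces the formula to the upwind value $\phi^n_{i+\frac{1}{2}}=\phi^n_i$ (for $u>0$).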
End of explanation
def flux_divergence(f,N,dx):
df = np.zeros(N)
df[0:N] = (f[1:N+1]-f[0:N])/dx
return df
Explanation: <h3 style="color:red">Q3: What do you observe? </h3>
<h3 style="color:red">Q4: Write code to verify the divergence subroutine.</h3>
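One possible way to answer Q4 (a sketch, not the official solution): apply <FONT FACE="courier">flux_divergence</FONT> to the interpolated flux of the initial profile and compare with the analytical derivative, which is $-u_0\sin(x-L/2)$ inside the pulse and $0$ outside:
N = 200
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u = u_0*np.ones(N+1)
phi = init_simulation(x_phi,N)
F = compute_flux(phi,u,N)
dFdx = flux_divergence(F,N,dx)
dFdx_exact = -u_0*np.sin(x_phi-L/2.)
dFdx_exact[np.abs(x_phi-L/2.) > np.pi] = 0.
print('divergence error norm L 2= %1.4e' % np.linalg.norm(dFdx-dFdx_exact))
# for a full verification, repeat over several N as in the interpolation study above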
End of explanation
N=200
Simulation_time = 5.
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u_0 = 1.
num_scheme = 'CS'
u = u_0*np.ones(N+1)
phi = np.zeros(N)
flux = np.zeros(N+1)
divflux = np.zeros(N)
phi = init_simulation(x_phi,N)
phi_init = phi.copy()
number_of_iterations = 100
dt = Simulation_time/number_of_iterations
t = 0.
for it in range (number_of_iterations):
flux = compute_flux_advanced(phi,u,N,num_scheme)
divflux = flux_divergence(flux,N,dx)
phi -= dt*divflux
t += dt
plt.plot(x_phi,phi,lw=2,label='simulated')
plt.plot(x_phi,phi_init,lw=2,label='initial')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=2, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: <h3>Step 5: Writing the simulation code</h3>
The first code solves:
<p class='alert alert-info'>
$$
\phi_i^{n+1}=\phi_i^n-\frac{\Delta t}{\Delta x}\left(F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}\right)
$$
</p>
for whatever scheme you choose. Play with the different schemes. Consider that the analytical solution is:
$$
\phi(x,t)=\begin{cases}
1+\cos\left[x-\left(\frac{L}{2}+u_0t\right)\right]&,\text{ for }\left\vert x-\left(\frac{L}{2}+u_0t\right)\right\vert\leq\pi\\
0&,\text{ for }\left\vert x-\left(\frac{L}{2}+u_0t\right)\right\vert>\pi
\end{cases}
$$
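A possible helper to compare the simulated field with this exact solution (a sketch; it follows the $1+\cos$ profile of <FONT FACE="courier">init_simulation</FONT>):
def exact_solution(x,t,L,u_0):
    phi_e = 1.+np.cos(x-(L/2.+u_0*t))
    phi_e[np.abs(x-(L/2.+u_0*t)) > np.pi] = 0.
    return phi_e
# e.g. plt.plot(x_phi,exact_solution(x_phi,Simulation_time,L,u_0),'k:',lw=2,label='exact')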
End of explanation
N=200
Simulation_time = 5.
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u_0 = 1.
num_scheme = 'CS'
u = u_0*np.ones(N+1)
phi = np.zeros(N)
flux = np.zeros(N+1)
divflux = np.zeros(N)
phiold = np.zeros(N)
phi = init_simulation(x_phi,N)
phi_init = phi.copy()
rk_coef = np.array([0.5,1.])
number_of_iterations = 100
dt = Simulation_time/number_of_iterations
t = 0.
for it in range (number_of_iterations):
phiold = phi
for irk in range(2):
flux = compute_flux_advanced(phi,u,N,num_scheme)
divflux = flux_divergence(flux,N,dx)
phi = phiold-rk_coef[irk]*dt*divflux
t += dt
plt.plot(x_phi,phi,lw=2,label='simulated')
plt.plot(x_phi,phi_init,lw=2,label='initial')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=2, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
Explanation: The discretization of the time derivative is crude. A better discretization is the 2<sup>nd</sup>-order Runge-Kutta:
<p class='alert alert-info'>
\begin{eqnarray}
\phi_i^{n+1/2}&=&\phi_i^n-\frac{\Delta t}{2}\frac{F^n_{i+\frac{1}{2}} - F^n_{i-\frac{1}{2}}}{\Delta x}\\
\phi_i^{n+1}&=&\phi_i^n-\Delta t\frac{F^{n+1/2}_{i+\frac{1}{2}} - F^{n+1/2}_{i-\frac{1}{2}}}{\Delta x}
\end{eqnarray}
</p>
End of explanation |
5,463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 1
Step1: (2) Plot of the means $\mathbf{\mu}$ of the learnt mixture
Step2: (3) With only 10 components it is not guaranteed to obtain one center per class, even though there are only 10 different digits: several components can end up representing the same digit (as we can see from the plot), which leaves some digits without a component of their own.
It is possible to avoid this by initializing each component's $\mu_k$ to the mean of the corresponding digit calculated from the labeled dataset. But then it becomes supervised learning, which is not what we want.
Here is the result with this kind of initialization
Step3: (4) For each label we select the subset of the data that corresponds to this label and train a bmm to represent the corresponding class. We then have 10 bmm which, together, form a digit classifier.
Step4: Part 3
Step5: Classification using a GMM with a diagonal covariance matrix
Step6: The results are a little better than with the BMM
Classification using a GMM with a full covariance matrix | Python Code:
# settings
data_path = '/home/data/ml/mnist'
k = 10
# we load pre-calculated k-means
import kmeans as kmeans_
kmeans = kmeans_.load_kmeans('kmeans-20.dat')
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import scipy
import bmm
import visualize
# loading the data
from mnist import load_mnist
train_data, train_labels = load_mnist(dataset='training', path=data_path)
# pre-processing the data (reshape + making it binary)
train_data = np.reshape(train_data, (60000, 784))
train_data_binary = np.where(train_data > 0.5, 1, 0)
# creating our model
model = bmm.bmm(k, n_iter=20, verbose=True)
model.fit(train_data_binary)
Explanation: Part 1: Bernoulli Mixture Model: Theory
To train a Bernoulli Mixture Model, the formulae are:
Expectation step
$$z_{n, k} \leftarrow \frac{\pi_k \prod_{i = 1}^D \mu_{k, i}^{x_{n, i}} (1 - \mu_{k, i})^{1 - x_{n, i}} }{\sum_{m = 1}^K \pi_m \prod_{i = 1}^D \mu_{m, i}^{x_{n, i}} (1 - \mu_{m, i})^{1 - x_{n, i}}}$$
Maximization step
$$\mathbf{\mu_m} \leftarrow \mathbf{\bar{x}_m}$$
$$\pi_m \leftarrow \frac{N_m}{N}$$
where $\mathbf{\bar{x}}_m = \frac{1}{N_m} \sum_{n = 1}^N z_{n, m} \mathbf{x_n}$ and $N_m = \sum_{n = 1}^N z_{n, m}$
Part 2: BMM Implementation
(1) see bmm.py for the complete implementation of the BMM
the source code of this project is available at https://github.com/toogy/mnist-em-bmm-gmm
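For orientation, a minimal numpy sketch of one EM update implementing the formulae above (an illustration written for this note, not the actual code of bmm.py):
def bmm_em_step(X, pi, mu, eps=1e-10):
    # X: (N, D) binary data, pi: (K,) mixing weights, mu: (K, D) Bernoulli means
    # E-step: responsibilities z (N, K), computed in log space for numerical stability
    log_p = (np.log(pi + eps)
             + np.dot(X, np.log(mu + eps).T)
             + np.dot(1 - X, np.log(1 - mu + eps).T))
    log_p -= log_p.max(axis=1, keepdims=True)
    z = np.exp(log_p)
    z /= z.sum(axis=1, keepdims=True)
    # M-step: mu_m <- weighted data mean x_bar_m, pi_m <- N_m / N
    N_m = z.sum(axis=0)
    mu_new = np.dot(z.T, X) / N_m[:, None]
    pi_new = N_m / X.shape[0]
    return pi_new, mu_new, z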
End of explanation
visualize.plot_means(model.means)
Explanation: (2) Plot of the means $\mathbf{\mu}$ of the learnt mixture
End of explanation
model = bmm.bmm(10, verbose=True)
model.fit(train_data_binary, means_init_heuristic='data_classes_mean', labels=train_labels)
visualize.plot_means(model.means)
Explanation: (3) With only 10 components it is not guaranteed to obtain one center per class, even though there are only 10 different digits: several components can end up representing the same digit (as we can see from the plot), which leaves some digits without a component of their own.
It is possible to avoid this by initializing each component's $\mu_k$ to the mean of the corresponding digit calculated from the labeled dataset. But then it becomes supervised learning, which is not what we want.
Here is the result with this kind of initialization:
End of explanation
import classifier
# number of components for each BMM
k = 7
bayesian_classifier = classifier.classifier(k, means_init_heuristic='kmeans',
means=kmeans, model_type='bmm')
bayesian_classifier.fit(train_data_binary, train_labels)
visualize.plot_means(bayesian_classifier.models[3].means)
visualize.plot_means(bayesian_classifier.models[8].means)
test_data, test_labels = load_mnist(dataset='testing', path=data_path)
test_data = np.reshape(test_data, (test_data.shape[0], 784))
test_data_binary = np.where(test_data > 0.5, 1, 0)
label_set = set(train_labels)
predicted_labels = bayesian_classifier.predict(test_data_binary, label_set)
print('accuracy: {}'.format(np.mean(predicted_labels == test_labels)))
Explanation: (4) For each label we select the subset of the data that corresponds to this label and train a bmm to represent the corresponding class. We then have 10 bmm which, together, form a digit classifier.
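A sketch of the decision rule this amounts to (hypothetical helper names -- the real logic lives in the classifier module): score a sample under each per-class Bernoulli mixture and pick the argmax, optionally adding a log class-prior.
def bmm_log_likelihood(x, pi, mu, eps=1e-10):
    # log p(x) under one class-conditional Bernoulli mixture, via log-sum-exp
    logp = (np.log(pi + eps)
            + np.dot(np.log(mu + eps), x)
            + np.dot(np.log(1 - mu + eps), 1 - x))
    m = logp.max()
    return m + np.log(np.sum(np.exp(logp - m)))

def predict_digit(x, class_params):
    # class_params: list of (pi, mu) pairs, one Bernoulli mixture per digit 0..9
    return int(np.argmax([bmm_log_likelihood(x, pi, mu) for pi, mu in class_params]))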
End of explanation
import sklearn.decomposition
d = 40
reducer = sklearn.decomposition.PCA(n_components=d)
reducer.fit(train_data)
train_data_reduced = reducer.transform(train_data)
test_data_reduced = reducer.transform(test_data)
kmeans_reduced = reducer.transform(kmeans)
import gmm
k = 20
model = gmm.gmm(k, verbose=True)
model.fit(train_data_reduced, means_init_heuristic='kmeans', means=kmeans_reduced)
means_projected = reducer.inverse_transform(model.means)
visualize.plot_means(means_projected)
Explanation: Part 3: Gaussian Mixture Models
BMMs are suited to binary images because they work with 0s and 1s, whereas the MNIST data initially lies in the range $[0, 255]$. Binarizing the images throws away information that could make the model more accurate. GMMs can work with real-valued inputs and perform better than BMMs for classifying digits.
The Gaussian mixture distribution can be written as a linear superposition of Gaussians in the form
$$p(\mathbf{x}) = \sum_{k=1}^K \pi_k \mathcal{N}(\mathbf{x}|\mathbf{\mu}_k, \mathbf{\Sigma}_k)$$
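As a concrete illustration (a sketch, not the code of the gmm module), the log-density of a single diagonal-covariance component -- the building block of the sum above -- can be computed as:
def diag_gaussian_logpdf(X, mu, var):
    # X: (n, d), mu: (d,), var: (d,)  ->  (n,) values of log N(x | mu, diag(var))
    return -0.5 * (np.sum(np.log(2. * np.pi * var))
                   + np.sum((X - mu) ** 2 / var, axis=1))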
End of explanation
bayesian_classifier = classifier.classifier(k, model_type='gmm',
means_init_heuristic='kmeans',
means=kmeans_reduced,
covariance_type='diag')
bayesian_classifier.fit(train_data_reduced, train_labels)
means_projected = reducer.inverse_transform(bayesian_classifier.models[4].means)
visualize.plot_means(means_projected)
predicted_labels = bayesian_classifier.predict(test_data_reduced, label_set)
print('accuracy: {}'.format(np.mean(predicted_labels == test_labels)))
Explanation: Classification using a GMM with a diagonal covariance matrix
End of explanation
bayesian_classifier = classifier.classifier(k, model_type='gmm',
means_init_heuristic='kmeans',
means=kmeans_reduced,
covariance_type='full')
bayesian_classifier.fit(train_data_reduced, train_labels)
means_projected = reducer.inverse_transform(bayesian_classifier.models[4].means)
visualize.plot_means(means_projected)
predicted_labels = bayesian_classifier.predict(test_data_reduced, label_set)
print('accuracy: {}'.format(np.mean(predicted_labels == test_labels)))
Explanation: The results are a little better than with the BMM
Classification using a GMM with a full covariance matrix
End of explanation |
5,464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Problem Set 8 Review & Transfer Learning with word2vec
Import various modules that we need for this notebook (now using Keras 1.0.0)
Step1: I. Problem Set 8, Part 1
Let's work through a solution to the first part of problem set 8, where you applied various techniques to the STL-10 dataset.
Step2: And construct a flattened version of it, for the linear model case
Step3: (1) neural network
We now build and evaluate a neural network.
Step4: (2) support vector machine
And now, a basic linear support vector machine.
Step5: (3) penalized logistic model
And finally, an L1 penalized model
Step6: II. Problem Set 8, Part 2
Now, let's read in the Chicago crime dataset and see how well we can get a neural network to perform on it.
Step7: Now, build a neural network for the model
Step8: III. Transfer Learning IMDB Sentiment analysis
Now, let's use the word2vec embeddings on the IMDB sentiment analysis corpus. This will allow us to use a significantly larger vocabulary of words. I'll start by reading in the IMDB corpus again from the raw text.
Step9: I'll fit a significantly larger vocabulary this time, as the embeddings are basically given for us. | Python Code:
%pylab inline
import copy
import numpy as np
import pandas as pd
import sys
import os
import re
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, RMSprop
from keras.layers.normalization import BatchNormalization
from keras.layers.wrappers import TimeDistributed
from keras.preprocessing.text import Tokenizer
from keras.preprocessing import sequence
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import SimpleRNN, LSTM, GRU
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from gensim.models import word2vec
Explanation: Problem Set 8 Review & Transfer Learning with word2vec
Import various modules that we need for this notebook (now using Keras 1.0.0)
End of explanation
dir_in = "../../../class_data/stl10/"
X_train = np.genfromtxt(dir_in + 'X_train_new.csv', delimiter=',')
Y_train = np.genfromtxt(dir_in + 'Y_train.csv', delimiter=',')
X_test = np.genfromtxt(dir_in + 'X_test_new.csv', delimiter=',')
Y_test = np.genfromtxt(dir_in + 'Y_test.csv', delimiter=',')
Explanation: I. Problem Set 8, Part 1
Let's work through a solution to the first part of problem set 8, where you applied various techniques to the STL-10 dataset.
End of explanation
Y_train_flat = np.zeros(Y_train.shape[0])
Y_test_flat = np.zeros(Y_test.shape[0])
for i in range(10):
Y_train_flat[Y_train[:,i] == 1] = i
Y_test_flat[Y_test[:,i] == 1] = i
Explanation: And construct a flattened version of it, for the linear model case:
End of explanation
model = Sequential()
model.add(Dense(1024, input_shape = (X_train.shape[1],)))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))
rms = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rms,
metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=32, nb_epoch=5, verbose=1)
test_rate = model.evaluate(X_test, Y_test)[1]
print("Test classification rate %0.05f" % test_rate)
Explanation: (1) neural network
We now build and evaluate a neural network.
End of explanation
svc_obj = SVC(kernel='linear', C=1)
svc_obj.fit(X_train, Y_train_flat)
pred = svc_obj.predict(X_test)
pd.crosstab(pred, Y_test_flat)
c_rate = sum(pred == Y_test_flat) / len(pred)
print("Test classification rate %0.05f" % c_rate)
Explanation: (2) support vector machine
And now, a basic linear support vector machine.
End of explanation
lr = LogisticRegression(penalty = 'l1')
lr.fit(X_train, Y_train_flat)
pred = lr.predict(X_test)
pd.crosstab(pred, Y_test_flat)
c_rate = sum(pred == Y_test_flat) / len(pred)
print("Test classification rate %0.05f" % c_rate)
Explanation: (3) penalized logistic model
And finally, an L1 penalized model:
End of explanation
dir_in = "../../../class_data/chi_python/"
X_train = np.genfromtxt(dir_in + 'chiCrimeMat_X_train.csv', delimiter=',')
Y_train = np.genfromtxt(dir_in + 'chiCrimeMat_Y_train.csv', delimiter=',')
X_test = np.genfromtxt(dir_in + 'chiCrimeMat_X_test.csv', delimiter=',')
Y_test = np.genfromtxt(dir_in + 'chiCrimeMat_Y_test.csv', delimiter=',')
Explanation: II. Problem Set 8, Part 2
Now, let's read in the Chicago crime dataset and see how well we can get a neural network to perform on it.
End of explanation
model = Sequential()
model.add(Dense(1024, input_shape = (434,)))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(1024))
model.add(Activation("relu"))
model.add(BatchNormalization())
model.add(Dropout(0.2))
model.add(Dense(5))
model.add(Activation('softmax'))
rms = RMSprop()
model.compile(loss='categorical_crossentropy', optimizer=rms,
metrics=['accuracy'])
# downsample, if need be:
num_sample = X_train.shape[0]
model.fit(X_train[:num_sample], Y_train[:num_sample], batch_size=32,
nb_epoch=10, verbose=1)
test_rate = model.evaluate(X_test, Y_test)[1]
print("Test classification rate %0.05f" % test_rate)
Explanation: Now, build a neural network for the model
End of explanation
path = "../../../class_data/aclImdb/"
ff = [path + "train/pos/" + x for x in os.listdir(path + "train/pos")] + \
[path + "train/neg/" + x for x in os.listdir(path + "train/neg")] + \
[path + "test/pos/" + x for x in os.listdir(path + "test/pos")] + \
[path + "test/neg/" + x for x in os.listdir(path + "test/neg")]
TAG_RE = re.compile(r'<[^>]+>')
def remove_tags(text):
return TAG_RE.sub('', text)
input_label = ([1] * 12500 + [0] * 12500) * 2
input_text = []
for f in ff:
with open(f) as fin:
input_text += [remove_tags(" ".join(fin.readlines()))]
Explanation: III. Transfer Learning IMDB Sentiment analysis
Now, let's use the word2vec embeddings on the IMDB sentiment analysis corpus. This will allow us to use a significantly larger vocabulary of words. I'll start by reading in the IMDB corpus again from the raw text.
End of explanation
num_words = 5000
max_len = 400
tok = Tokenizer(num_words)
tok.fit_on_texts(input_text[:25000])
X_train = tok.texts_to_sequences(input_text[:25000])
X_test = tok.texts_to_sequences(input_text[25000:])
y_train = input_label[:25000]
y_test = input_label[25000:]
X_train = sequence.pad_sequences(X_train, maxlen=max_len)
X_test = sequence.pad_sequences(X_test, maxlen=max_len)
words = []
for iter in range(num_words):
words += [key for key,value in tok.word_index.items() if value==iter+1]
loc = "/Users/taylor/files/word2vec_python/GoogleNews-vectors-negative300.bin"
w2v = word2vec.Word2Vec.load_word2vec_format(loc, binary=True)
weights = np.zeros((num_words,300))
for idx, w in enumerate(words):
try:
weights[idx,:] = w2v[w]
except KeyError as e:
pass
model = Sequential()
model.add(Embedding(num_words, 300, input_length=max_len))
model.add(Dropout(0.5))
model.add(GRU(16,activation='relu'))
model.add(Dense(128))
model.add(Dropout(0.5))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.layers[0].set_weights([weights])
model.layers[0].trainable = False
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, nb_epoch=10, verbose=1,
validation_data=(X_test, y_test))
Explanation: I'll fit a significantly larger vocabulary this time, as the embeddings are basically given for us.
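A quick sanity check (a sketch, not part of the original pipeline) is to count how many tokenizer words received no pretrained vector, i.e. whose row in weights stayed all-zero after the KeyError fallback above:
n_missing = int(np.sum(~weights.any(axis=1)))
print("words without a pretrained embedding: %d / %d" % (n_missing, num_words))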
End of explanation |
5,465 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic MEG and EEG data processing
MNE-Python reimplements most of MNE-C's (the original MNE command line utils)
functionality and offers transparent scripting.
On top of that it extends MNE-C's functionality considerably
(customize events, compute contrasts, group statistics, time-frequency
analysis, EEG-sensor space analyses, etc.) It uses the same files as standard
MNE unix commands
Step1: If you'd like to turn information status messages off
Step2: But it's generally a good idea to leave them on
Step3: You can set the default level by setting the environment variable
"MNE_LOGGING_LEVEL", or by having mne-python write preferences to a file
Step4: Note that the location of the mne-python preferences file (for easier manual
editing) can be found using
Step5: By default logging messages print to the console, but look at
Step6: <div class="alert alert-info"><h4>Note</h4><p>The MNE sample dataset should be downloaded automatically but be
patient (approx. 2GB)</p></div>
Read data from file
Step7: Look at the channels in raw
Step8: Read and plot a segment of raw data
Step9: Save a segment of 150s of raw data (MEG only)
Step10: Define and read epochs
^^^^^^^^^^^^^^^^^^^^^^
First extract events
Step11: Note that, by default, we use stim_channel='STI 014'. If you have a different
system (e.g., a newer system that uses channel 'STI101' by default), you can
use the following to set the default stim channel to use for finding events
Step12: Events are stored as a 2D numpy array where the first column is the time
instant and the last one is the event number. It is therefore easy to
manipulate.
Define epochs parameters
Step13: Exclude some channels (original bads + 2 more)
Step14: The variable raw.info['bads'] is just a python list.
Pick the good channels, excluding raw.info['bads']
Step15: Alternatively one can restrict to magnetometers or gradiometers with
Step16: Define the baseline period
Step17: Define peak-to-peak rejection parameters for gradiometers, magnetometers
and EOG
Step18: Read epochs
Step19: Get single epochs for one condition
Step20: epochs_data is a 3D array of dimension (55 epochs, 365 channels, 106 time
instants).
Scipy supports read and write of matlab files. You can save your single
trials with
Step21: or if you want to keep all the information about the data you can save your
epochs in a fif file
Step22: and read them later with
Step23: Compute evoked responses for auditory responses by averaging and plot it
Step24: .. topic
Step25: It is also possible to read evoked data stored in a fif file
Step26: Or another one stored in the same file
Step27: Two evoked objects can be contrasted using
Step28: To do a weighted sum based on the number of averages, which will give
you what you would have gotten from pooling all trials together in
Step29: Instead of dealing with mismatches in the number of averages, we can use
trial-count equalization before computing a contrast, which can have some
benefits in inverse imaging (note that here weights='nave' will
give the same result as weights='equal')
Step30: Time-Frequency
Step31: Compute induced power and phase-locking values and plot gradiometers
Step32: Inverse modeling
Step33: Read the inverse operator
Step34: Define the inverse parameters
Step35: Compute the inverse solution
Step36: Save the source time courses to disk
Step37: Now, let's compute dSPM on a raw file within a label
Step38: Compute inverse solution during the first 15s
Step39: Save result in stc files
Step40: What else can you do?
^^^^^^^^^^^^^^^^^^^^^
- detect heart beat QRS component
- detect eye blinks and EOG artifacts
- compute SSP projections to remove ECG or EOG artifacts
- compute Independent Component Analysis (ICA) to remove artifacts or
select latent sources
- estimate noise covariance matrix from Raw and Epochs
- visualize cross-trial response dynamics using epochs images
- compute forward solutions
- estimate power in the source space
- estimate connectivity in sensor and source space
- morph stc from one brain to another for group studies
- compute mass univariate statistics base on custom contrasts
- visualize source estimates
- export raw, epochs, and evoked data to other python data analysis
libraries e.g. pandas
- and many more things ...
Want to know more ?
^^^^^^^^^^^^^^^^^^^
Browse the examples gallery <auto_examples/index.html>_. | Python Code:
import mne
Explanation: Basic MEG and EEG data processing
MNE-Python reimplements most of MNE-C's (the original MNE command line utils)
functionality and offers transparent scripting.
On top of that it extends MNE-C's functionality considerably
(customize events, compute contrasts, group statistics, time-frequency
analysis, EEG-sensor space analyses, etc.) It uses the same files as standard
MNE unix commands: no need to convert your files to a new system or database.
What you can do with MNE Python
Raw data visualization to visualize recordings, can also use
mne_browse_raw for extended functionality (see ch_browse)
Epoching: Define epochs, baseline correction, handle conditions etc.
Averaging to get Evoked data
Compute SSP projectors to remove ECG and EOG artifacts
Compute ICA to remove artifacts or select latent sources.
Maxwell filtering to remove environmental noise.
Boundary Element Modeling: single and three-layer BEM model
creation and solution computation.
Forward modeling: BEM computation and mesh creation
(see ch_forward)
Linear inverse solvers (MNE, dSPM, sLORETA, eLORETA, LCMV, DICS)
Sparse inverse solvers (L1/L2 mixed norm MxNE, Gamma Map,
Time-Frequency MxNE)
Connectivity estimation in sensor and source space
Visualization of sensor and source space data
Time-frequency analysis with Morlet wavelets (induced power,
intertrial coherence, phase lock value) also in the source space
Spectrum estimation using multi-taper method
Mixed Source Models combining cortical and subcortical structures
Dipole Fitting
Decoding multivariate pattern analysis of M/EEG topographies
Compute contrasts between conditions, between sensors, across
subjects etc.
Non-parametric statistics in time, space and frequency
(including cluster-level)
Scripting (batch and parallel computing)
What you're not supposed to do with MNE Python
- **Brain and head surface segmentation** for use with BEM
models -- use Freesurfer.
<div class="alert alert-info"><h4>Note</h4><p>This package is based on the FIF file format from Neuromag. It
can read and convert CTF, BTI/4D, KIT and various EEG formats to
FIF.</p></div>
Installation of the required materials
See install_python_and_mne_python.
<div class="alert alert-info"><h4>Note</h4><p>The expected location for the MNE-sample data is
``~/mne_data``. If you downloaded data and an example asks
you whether to download it again, make sure
the data reside in the examples directory and you run the script from its
current directory.
From IPython e.g. say::
cd examples/preprocessing
%run plot_find_ecg_artifacts.py</p></div>
From raw data to evoked data
Now, launch ipython_ (Advanced Python shell) using the QT backend, which
is best supported across systems::
$ ipython --matplotlib=qt
First, load the mne package:
<div class="alert alert-info"><h4>Note</h4><p>In IPython, you can press **shift-enter** with a given cell
selected to execute it and advance to the next cell:</p></div>
End of explanation
mne.set_log_level('WARNING')
Explanation: If you'd like to turn information status messages off:
End of explanation
mne.set_log_level('INFO')
Explanation: But it's generally a good idea to leave them on:
End of explanation
mne.set_config('MNE_LOGGING_LEVEL', 'WARNING', set_env=True)
Explanation: You can set the default level by setting the environment variable
"MNE_LOGGING_LEVEL", or by having mne-python write preferences to a file:
End of explanation
mne.get_config_path()
Explanation: Note that the location of the mne-python preferences file (for easier manual
editing) can be found using:
End of explanation
from mne.datasets import sample # noqa
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
print(raw_fname)
Explanation: By default logging messages print to the console, but look at
:func:mne.set_log_file to save output to a file.
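For example (the file name here is arbitrary):
mne.set_log_file('mne_output.log', overwrite=True)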
Access raw data
^^^^^^^^^^^^^^^
End of explanation
raw = mne.io.read_raw_fif(raw_fname)
print(raw)
print(raw.info)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>The MNE sample dataset should be downloaded automatically but be
patient (approx. 2GB)</p></div>
Read data from file:
End of explanation
print(raw.ch_names)
Explanation: Look at the channels in raw:
End of explanation
start, stop = raw.time_as_index([100, 115]) # 100 s to 115 s data segment
data, times = raw[:, start:stop]
print(data.shape)
print(times.shape)
data, times = raw[2:20:3, start:stop] # access underlying data
raw.plot()
Explanation: Read and plot a segment of raw data
End of explanation
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True,
exclude='bads')
raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
Explanation: Save a segment of 150s of raw data (MEG only):
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5])
Explanation: Define and read epochs
^^^^^^^^^^^^^^^^^^^^^^
First extract events:
End of explanation
mne.set_config('MNE_STIM_CHANNEL', 'STI101', set_env=True)
Explanation: Note that, by default, we use stim_channel='STI 014'. If you have a different
system (e.g., a newer system that uses channel 'STI101' by default), you can
use the following to set the default stim channel to use for finding events:
End of explanation
event_id = dict(aud_l=1, aud_r=2) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
Explanation: Events are stored as a 2D numpy array where the first column is the time
instant and the last one is the event number. It is therefore easy to
manipulate.
Define epochs parameters:
End of explanation
raw.info['bads'] += ['MEG 2443', 'EEG 053']
Explanation: Exclude some channels (original bads + 2 more):
End of explanation
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, stim=False,
exclude='bads')
Explanation: The variable raw.info['bads'] is just a python list.
Pick the good channels, excluding raw.info['bads']:
End of explanation
mag_picks = mne.pick_types(raw.info, meg='mag', eog=True, exclude='bads')
grad_picks = mne.pick_types(raw.info, meg='grad', eog=True, exclude='bads')
Explanation: Alternatively one can restrict to magnetometers or gradiometers with:
End of explanation
baseline = (None, 0) # means from the first instant to t = 0
Explanation: Define the baseline period:
End of explanation
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
Explanation: Define peak-to-peak rejection parameters for gradiometers, magnetometers
and EOG:
End of explanation
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=False, reject=reject)
print(epochs)
Explanation: Read epochs:
End of explanation
epochs_data = epochs['aud_l'].get_data()
print(epochs_data.shape)
Explanation: Get single epochs for one condition:
End of explanation
from scipy import io # noqa
io.savemat('epochs_data.mat', dict(epochs_data=epochs_data), oned_as='row')
Explanation: epochs_data is a 3D array of dimension (55 epochs, 365 channels, 106 time
instants).
Scipy supports read and write of matlab files. You can save your single
trials with:
End of explanation
epochs.save('sample-epo.fif')
Explanation: or if you want to keep all the information about the data you can save your
epochs in a fif file:
End of explanation
saved_epochs = mne.read_epochs('sample-epo.fif')
Explanation: and read them later with:
End of explanation
evoked = epochs['aud_l'].average()
print(evoked)
evoked.plot(time_unit='s')
Explanation: Compute evoked responses for auditory responses by averaging and plot it:
End of explanation
max_in_each_epoch = [e.max() for e in epochs['aud_l']] # doctest:+ELLIPSIS
print(max_in_each_epoch[:4]) # doctest:+ELLIPSIS
Explanation: .. topic:: Exercise
Extract the max value of each epoch
End of explanation
evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked1 = mne.read_evokeds(
evoked_fname, condition='Left Auditory', baseline=(None, 0), proj=True)
Explanation: It is also possible to read evoked data stored in a fif file:
End of explanation
evoked2 = mne.read_evokeds(
evoked_fname, condition='Right Auditory', baseline=(None, 0), proj=True)
Explanation: Or another one stored in the same file:
End of explanation
contrast = mne.combine_evoked([evoked1, evoked2], weights=[0.5, -0.5])
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
Explanation: Two evoked objects can be contrasted using :func:mne.combine_evoked.
This function can use weights='equal', which provides a simple
element-by-element subtraction (and sets the
mne.Evoked.nave attribute properly based on the underlying number
of trials) using either equivalent call:
End of explanation
average = mne.combine_evoked([evoked1, evoked2], weights='nave')
print(average)
Explanation: To do a weighted sum based on the number of averages, which will give
you what you would have gotten from pooling all trials together in
:class:mne.Epochs before creating the :class:mne.Evoked instance,
you can use weights='nave':
End of explanation
epochs_eq = epochs.copy().equalize_event_counts(['aud_l', 'aud_r'])[0]
evoked1, evoked2 = epochs_eq['aud_l'].average(), epochs_eq['aud_r'].average()
print(evoked1)
print(evoked2)
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
Explanation: Instead of dealing with mismatches in the number of averages, we can use
trial-count equalization before computing a contrast, which can have some
benefits in inverse imaging (note that here weights='nave' will
give the same result as weights='equal'):
End of explanation
import numpy as np # noqa
n_cycles = 2 # number of cycles in Morlet wavelet
freqs = np.arange(7, 30, 3) # frequencies of interest
Explanation: Time-Frequency: Induced power and inter trial coherence
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Define parameters:
End of explanation
from mne.time_frequency import tfr_morlet # noqa
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
return_itc=True, decim=3, n_jobs=1)
power.plot([power.ch_names.index('MEG 1332')])
Explanation: Compute induced power and phase-locking values and plot gradiometers:
End of explanation
from mne.minimum_norm import apply_inverse, read_inverse_operator # noqa
Explanation: Inverse modeling: MNE and dSPM on evoked and raw data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Import the required functions:
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inverse_operator = read_inverse_operator(fname_inv)
Explanation: Read the inverse operator:
End of explanation
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM"
Explanation: Define the inverse parameters:
End of explanation
stc = apply_inverse(evoked, inverse_operator, lambda2, method)
Explanation: Compute the inverse solution:
End of explanation
stc.save('mne_dSPM_inverse')
Explanation: Save the source time courses to disk:
End of explanation
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
label = mne.read_label(fname_label)
Explanation: Now, let's compute dSPM on a raw file within a label:
End of explanation
from mne.minimum_norm import apply_inverse_raw # noqa
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
stc = apply_inverse_raw(raw, inverse_operator, lambda2, method, label,
start, stop)
Explanation: Compute inverse solution during the first 15s:
End of explanation
stc.save('mne_dSPM_raw_inverse_Aud')
Explanation: Save result in stc files:
End of explanation
print("Done!")
Explanation: What else can you do?
^^^^^^^^^^^^^^^^^^^^^
- detect heart beat QRS component
- detect eye blinks and EOG artifacts
- compute SSP projections to remove ECG or EOG artifacts
- compute Independent Component Analysis (ICA) to remove artifacts or
select latent sources
- estimate noise covariance matrix from Raw and Epochs
- visualize cross-trial response dynamics using epochs images
- compute forward solutions
- estimate power in the source space
- estimate connectivity in sensor and source space
- morph stc from one brain to another for group studies
- compute mass univariate statistics base on custom contrasts
- visualize source estimates
- export raw, epochs, and evoked data to other python data analysis
libraries e.g. pandas
- and many more things ...
Want to know more ?
^^^^^^^^^^^^^^^^^^^
Browse the examples gallery <auto_examples/index.html>_.
End of explanation |
5,466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-2', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-2
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
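Because this property is an ENUM with cardinality 1.N, more than one of the valid choices listed above can be recorded. A hypothetical selection is sketched below, following the notebook's own DOC.set_value comment pattern; the chosen domains are illustrative only and the choice strings are copied verbatim from the list above.
# DOC.set_value("troposhere")   # spelling as given in the valid choices above
# DOC.set_value("stratosphere")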
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework in the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
5,467 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2. Gender Detection
Figuring out genders from names
We're going to use 3 different methods, all of which use a similar philosophy. Essentially, each of these services has built databases from datasets where genders are known or can be identified. For example, national census data and social media profiles.
GenderDetector can be run locally, but only provides "male", "female" or "unknown", and has a limited number of names in the database.
genderize.io and Gender API are web services that allow us to query names and return genders
Each of these services provides a "probability" that the gender is correct (so if "Jamie" shows up 80 times in their data as a female name, and 20 times as a male name, they'll say it's "female" with a probability of 0.8)
They also tell us how certain we can be of that gender by telling us how many times that name shows up (in the above example, the count would be 100). This is useful because some names might only have 1 or 2 entries, in which case a 100% probability of being male would be less reliable than a name that has 1000 entries.
The web APIs have superior data, but the problem is that they are services that require you to pay if you make more than a certain number of queries in a short period of time. The owners of both services have generously provided me with enough queries to do this research for free.
Getting names to query
First, we'll take the names from our pubmed queries and collapse them into sets. We don't really need to query the
name "John" a thousand times - once will do. I'm going to loop through the csv we wrote out in the last section and pull the fourth column, which contains our author name.
Step1: Then we'll convert the list to a set, which is an unordered array of unique values (so it removes duplicates
Step2: Here's a function that does the same thing.
Step3: The set.union() function will merge 2 sets into a single set, so we'll do this with our other datasets.
Step4: Getting genders from names
GenderDetector
First up - GenderDetector. The usage is pretty straightforward
Step5: Output datasets
Step6: Genderize.io
This one is a bit more complicated, since we have to make a call to the web api, and then parse the json that's returned. Happily, someone already wrote a python package to do most of the work. We can query 10 names at a time rather than each one individually, and we'll get back a list of dictionaries, one for each query
Step7: Gender-API
This is a similar service, but I didn't find a python package for it. Thankfully, it's pretty easy too. The following code is for python2, but you can find the python3 code on the website. The value that gets returned comes in the form of a dictionary as well | Python Code:
import os
os.chdir("../data/pubs")
names = []
with open("git.csv") as infile:
for line in infile:
names.append(line.split(",")[3])
Explanation: 2. Gender Detection
Figuring out genders from names
We're going to use 3 different methods, all of which use a similar philosophy. Essentially, each of these services has built databases from datasets where genders are known or can be identified. For example, national census data and social media profiles.
GenderDetector can be run locally, but only provides "male", "female" or "unknown", and has a limited number of names in the database.
genderize.io and Gender API are web services that allow us to query names and return genders
Each of these services provides a "probability" that the gender is correct (so if "Jamie" shows up 80 times in their data as a female name, and 20 times as a male name, they'll say it's "female" with a probability of 0.8)
They also tell us how certain we can be of that gender by telling us how many times that name shows up (in the above example, the count would be 100). This is useful because some names might only have 1 or 2 entries, in which case a 100% probability of being male would be less reliable than a name that has 1000 entries.
The web APIs have superior data, but the problem is that they are services that require you to pay if you make more than a certain number of queries in a short period of time. The owners of both services have generously provided me with enough queries to do this research for free.
Getting names to query
First, we'll take the names from our pubmed queries and collapse them into sets. We don't really need to query the
name "John" a thousand times - once will do. I'm going to loop through the csv we wrote out in the last section and pull the fourth column, which contains our author name.
End of explanation
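As a concrete illustration of how the probability and count fields can be combined later on, a simple reliability filter might look like the sketch below. The thresholds and the is_reliable helper are invented for illustration and are not part of the original analysis.
def is_reliable(record, min_count=10, min_probability=0.75):
    # record is a dict like {"gender": "female", "probability": 0.8, "count": 100}
    if record["gender"] is None:
        return False
    return record["count"] >= min_count and record["probability"] >= min_probability

print(is_reliable({"gender": "female", "probability": 0.8, "count": 100}))  # True
print(is_reliable({"gender": "male", "probability": 1.0, "count": 2}))      # False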
print(len(names))
names = set(names)
print(len(names))
Explanation: Then we'll convert the list to a set, which is an unordered array of unique values (so it removes duplicates
End of explanation
def get_unique_names(csv_file):
names = []
with open(csv_file) as infile:
for line in infile:
names.append(line.split(",")[3])
return set(names)
Explanation: Here's a function that does the same thing.
End of explanation
all_names = names.union(get_unique_names("comp.csv"))
all_names = all_names.union(get_unique_names("bio.csv"))
print(len(all_names))
Explanation: The set.union() function will merge 2 sets into a single set, so we'll do this with our other datasets.
End of explanation
from gender_detector import GenderDetector
detector = GenderDetector('us')
print(detector.guess("kevin"))
print(detector.guess("melanie"))
print(detector.guess("ajasja"))
gender_dict = {}
counter = 0
# for name in all_names:
# try:
# gender = detector.guess(name)
# gender_dict[name] = gender
# except:
# print(name)
print(len(gender_dict))
print(sum([1 for x in gender_dict if gender_dict[x] == 'unknown']))
print(sum([1 for x in gender_dict if gender_dict[x] != 'unknown']))
Explanation: Getting genders from names
GenderDetector
First up - GenderDetector. The usage is pretty straightforward:
End of explanation
import json
with open("GenderDetector_genders.json", "w+") as outfile:
outfile.write(json.dumps(gender_dict, indent=4))
Explanation: Output datasets
End of explanation
from api_keys import genderize_key
from genderize import Genderize
all_names = list(all_names)
genderize = Genderize(
user_agent='Kevin_Bonham',
api_key=genderize_key)
genderize_dict = {}
for i in range(0, len(all_names), 10):
if i % 10000 == 0:
print i
query = all_names[i:i+10]
genders = genderize.get(query)
for gender in genders:
n = gender["name"]
g = gender["gender"]
if g != None:
p = gender["probability"]
c = gender["count"]
else:
p = None
c = 0
genderize_dict[n] = {"gender":g, "probability":p, "count": c}
with open("genderize_genders.json", "w+") as outfile:
outfile.write(json.dumps(genderize_dict, indent=4))
print(len(genderize_dict))
print(sum([1 for x in genderize_dict if genderize_dict[x]["gender"] == 'unknown']))
print(sum([1 for x in genderize_dict if genderize_dict[x]["gender"] != 'unknown']))
Explanation: Genderize.io
This one is a bit more complicated, since we have to make a call to the web api, and then parse the json that's returned. Happily, someone already wrote a python package to do most of the work. We can query 10 names at a time rather than each one individually, and we'll get back a list of dictionaries, one for each query:
[{u'count': 1037, u'gender': u'male', u'name': u'James', u'probability': 0.99},
{u'count': 234, u'gender': u'female', u'name': u'Eva', u'probability': 1.0},
{u'gender': None, u'name': u'Thunderhorse'}]
I will turn that into a dictionary of dictionaries, where the name is the key, and the other elements are stored under them. Eg:
{
u'James':{
u'count': 1037,
u'gender': u'male',
u'probability': 0.99
},
u'Eva':{
u'count': 234,
u'gender': u'female',
u'probability': 1.0
},
u'Thunderhorse':{
u'count: 0,
u'gender': None,
u'probability': None
}
}
Note:
I've got an API key stored in a separate file called api_keys.py (that I'm not putting on git because you can't have my queries!) that looks like this:
genderize_key = "s0m3numb3rsandl3tt3rs"
genderAPI_key = "0th3rnumb3rsandl3tt3rs"
You can get a key from both services for free, but you'll be limited in the number of queries you can make. Just make a similar file, or add them in below in place of the proper variables.
End of explanation
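Once genderize_dict is built, individual names can be looked up directly. A quick sanity check might look like the sketch below; the example names are only assumptions about what appears in all_names.
for example_name in ["James", "Eva"]:
    if example_name in genderize_dict:
        record = genderize_dict[example_name]
        print(example_name, record["gender"], record["probability"], record["count"])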
from api_keys import genderAPI_key
import urllib2
genderAPI_dict = {}
counter = 0
for i in range(counter, len(all_names), 20):
counter += 20
if counter %1000 == 0:
print counter
names = all_names[i:i+20]
query = ";".join(names)
data = json.load(urllib2.urlopen("https://gender-api.com/get?key={}&name={}".format(genderAPI_key, query)))
for r in data['result']:
n = r["name"]
g = r["gender"]
if g != u"unknown":
p = float(r["accuracy"]) / 100
c = r["samples"]
else:
p = None
c = 0
genderAPI_dict[n] = {"gender":g, "probability":p, "count": c}
with open("../data/pubs/genderAPI_genders.json", "w+") as outfile:
outfile.write(json.dumps(genderAPI_dict, indent=4))
Explanation: Gender-API
This is a similar service, but I didn't find a python package for it. Thankfully, it's pretty easy too. The following code is for python2, but you can find the python3 code on the website. The value that gets returned comes in the form of a dictionary as well:
{u'accuracy': 99,
u'duration': u'26ms',
u'gender': u'male',
u'name': u'markus',
u'samples': 26354}
Which I'll convert to the same keys and value types used from genderize above (eg. "probability" instead of "accuracy", "count" instead of "samples", and 0.99 instead of 99),
End of explanation |
5,468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup
Step1: Adapted from http
Step2: Image from https
Step3: Label Encoding
Step4: Multiclass Classification
Data subset (1 feature only)
Step5: Plotting
Step6: Model (Logistic Regression)
https
Step7: Softmax
Converts arbitrary "scores" to normalized probabilities
Step8: Full dataset (all 4 features)
Use all features and split dataset in train and test subsets
Step9: Overfitting | Python Code:
from __future__ import print_function, unicode_literals, absolute_import, division
from six.moves import range, zip, map, reduce, filter
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('whitegrid')
plt.rc('figure', figsize=(7.0, 5.0))
import keras
from keras import backend as K
from keras.models import Sequential, Model
from keras.layers import Input, Dense, Activation
from keras.optimizers import Adam
from keras.callbacks import LambdaCallback
from keras.utils import np_utils
def plot_callback(func,p=20):
def plot_epoch_end(epoch,logs):
if epoch == 0 or (epoch+1) % p == 0:
plt.clf(); func(); plt.title('epoch %d' % (epoch+1))
display.clear_output(wait=True); display.display(plt.gcf())
def clear(*args):
plt.clf()
return LambdaCallback(on_epoch_end=plot_epoch_end,on_train_end=clear)
def plot_loss_acc(hist):
plt.figure(figsize=(15,4));
if len(hist.params['metrics']) == 2:
plt.subplot(121); plt.semilogy(hist.epoch,hist.history['loss'])
plt.xlabel('epoch'); plt.ylabel('loss'); plt.legend(['train'],loc='upper right')
plt.subplot(122); plt.plot(hist.epoch,hist.history['acc']);
plt.xlabel('epoch'); plt.ylabel('accuracy'); plt.legend(['train'],loc='lower right');
else:
plt.subplot(121); plt.semilogy(hist.epoch,hist.history['loss'], hist.epoch,hist.history['val_loss']);
plt.xlabel('epoch'); plt.ylabel('loss'); plt.legend(['train','test'],loc='upper right')
plt.subplot(122); plt.plot(hist.epoch,hist.history['acc'], hist.epoch,hist.history['val_acc'])
plt.xlabel('epoch'); plt.ylabel('accuracy'); plt.legend(['train','test'],loc='lower right');
Explanation: Setup
End of explanation
iris = sns.load_dataset("iris")
iris.sample(10)
Explanation: Adapted from http://blog.fastforwardlabs.com/2016/02/24/hello-world-in-keras-or-scikit-learn-versus.html
Dataset
Iris flower data set
End of explanation
sns.pairplot(iris, hue='species');
Explanation: Image from https://commons.wikimedia.org/wiki/File:Petal-sepal.jpg
End of explanation
def label_encode(arr):
uniques, ids = np.unique(arr, return_inverse=True)
return ids
classes = ('setosa', 'versicolor', 'virginica')
labels = label_encode(classes)
for i,c in enumerate(classes):
print('%10s → %d' % (c, labels[i]))
def onehot_encode(arr):
uniques, ids = np.unique(arr, return_inverse=True)
return np_utils.to_categorical(ids, len(uniques))
classes = ('setosa', 'versicolor', 'virginica')
onehot = onehot_encode(classes)
for i,c in enumerate(classes):
print('%10s → [%d,%d,%d]' % (c, onehot[i,0], onehot[i,1], onehot[i,2]))
Explanation: Label Encoding
End of explanation
data = iris
feature_name = 'petal_length'
data = data[[feature_name,'species']]
X = data.values[:,0]
y = label_encode(data.values[:,1])
y_oh = onehot_encode(data.values[:,1])
N = len(y)
Explanation: Multiclass Classification
Data subset (1 feature only)
End of explanation
R = np.linspace(X.min()-1,X.max()+1,100)
Xp = np.zeros(X.shape[0])-.1
Rp = np.zeros(R.shape[0])-.2
def plot_all(model=None):
plt.scatter(X, Xp, c=y, cmap='jet');
plt.xlabel(feature_name)
if model is not None:
prob = model.predict(R)
yhat = np.argmax(prob,axis=1)
plt.scatter(R, Rp, c=yhat);
plt.plot(R,prob)
leg = plt.legend(map(lambda s:'p("%s")'%s,classes),loc='upper center',frameon=False,ncol=3)
plt.xlim(X.min()-1.5,X.max()+1.5)
plt.ylim(-.4,1.2)
plot_all()
Explanation: Plotting
End of explanation
model = Sequential()
model.add(Dense(16, input_shape=(1,)))
model.add(Activation('tanh'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
Explanation: Model (Logistic Regression)
https://en.wikipedia.org/wiki/Multinomial_logistic_regression
https://en.wikipedia.org/wiki/Softmax_function
https://en.wikipedia.org/wiki/Cross_entropy
End of explanation
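In the notation of those references, the softmax output layer used above implements a multinomial logistic regression on top of the learned hidden features; for an input $x$ and class $k$ the predicted probability is
$P(y = k \mid x) = \frac{\exp(w_k^\top x + b_k)}{\sum_j \exp(w_j^\top x + b_j)}$
where $w_k$ and $b_k$ are the weights and bias of output unit $k$ (here applied to the tanh-layer activations rather than to $x$ directly).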
hist = model.fit(X,y_oh,batch_size=5,epochs=300,verbose=0,
callbacks=[plot_callback(lambda:plot_all(model))]);
plot_loss_acc(hist)
Explanation: Softmax
Converts arbitrary "scores" to normalized probabilities:
$
\large
\sigma(\mathbf{z})_i = \frac{\exp(z_i)}{\sum_j \exp(z_j)}
$
Example: for $\mathbf{z} = [0.451, -0.599, 0.006]$, we get
$\sigma(\mathbf{z}) = [ 0.50232021, 0.1757808 , 0.32189899]$.
Cross entropy
$H(p, q) = \mathrm{E}_p[-\log q] = H(p) + D_{\mathrm{KL}}(p \| q)$
defines the cross entropy for distributions $p$ and $q$, where
- $H(p)$ is the entropy of $p$, and
- $D_{\mathrm{KL}}(p \| q)$ is the Kullback–Leibler divergence of $q$ from $p$.
End of explanation
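The example numbers above can be verified directly with a few lines of NumPy (a small check reusing the imports from the setup cell):
z = np.array([0.451, -0.599, 0.006])
softmax_z = np.exp(z) / np.sum(np.exp(z))
print(softmax_z)                       # ~ [0.502  0.176  0.322]
p = np.array([1.0, 0.0, 0.0])          # a one-hot "true" distribution
print(-np.sum(p * np.log(softmax_z)))  # cross entropy ~ 0.689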
N = iris.shape[0] # number of data points / table rows
data = iris.sample(N,replace=False) # shuffle data
X = data.values[:,0:4]
y_oh = onehot_encode(data.values[:,4])
N_train = N//2 # random 50/50 train/test split
X_train, y_train = X[:N_train], y_oh[:N_train]
X_test, y_test = X[N_train:], y_oh[N_train:]
model = Sequential()
model.add(Dense(16, input_shape=(4,)))
model.add(Activation('tanh'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
hist = model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=200, verbose=0, batch_size=5)
plot_loss_acc(hist)
loss, accuracy = model.evaluate(X_train, y_train, verbose=0)
print('train set: loss = %.5f, accuracy = %.5f' % (loss,accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print('test set: loss = %.5f, accuracy = %.5f' % (loss,accuracy))
Explanation: Full dataset (all 4 features)
Use all features and split dataset in train and test subsets:
End of explanation
N_train = 20 # only 20 of 150 samples for training, rest for testing
X_train, y_train = X[:N_train], y_oh[:N_train]
X_test, y_test = X[N_train:], y_oh[N_train:]
model = Sequential()
model.add(Dense(16, input_shape=(4,)))
model.add(Activation('tanh'))
model.add(Dense(16))
model.add(Activation('tanh'))
model.add(Dense(16))
model.add(Activation('tanh'))
model.add(Dense(3))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
hist = model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=1000, verbose=0, batch_size=5)
plot_loss_acc(hist)
loss, accuracy = model.evaluate(X_train, y_train, verbose=0)
print('train set: loss = %.5f, accuracy = %.5f' % (loss,accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=0)
print('test set: loss = %.5f, accuracy = %.5f' % (loss,accuracy))
Explanation: Overfitting
End of explanation |
5,469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: In the below cell, import pandas and make a DataFrame object using the above poll data and using the dates list as the index.
Then, display the data by printing your DataFrame object.
Hint
Step2: a) For each month, what is the fraction of users who responded with their opinion $P(M^C)$?
Using your DataFrame object created above, compute $P(M^C)$. See pandas.DataFrame.sum to sum rows or columns of the table.
Hint
Step3: b) For each month, what is the probability that a user said they approve, given that they responded to the poll $P(A|M^C)$?
You know the drill
Step4: c) Compute $P(A)$ under the following assumptions | Python Code:
num_respondents = 1156
dates = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'June', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec', 'Jan2020']
data = {}
data['approve'] = [37, 40, 42, 43, 43, 43, 40, 39, 41, 42, 43, 43]
data['disapprove'] = [57, 55, 51, 52, 52, 52, 54, 55, 57, 54, 53, 53]
data['no_opinion'] = [7, 5, 8, 5, 5, 5, 6, 6, 2, 4, 4, 4]
Explanation: <a href="https://colab.research.google.com/github/AnoshZahir/Greengraph/blob/master/Section_2_Notebook_Student_Version.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Section 2 Notebook
In this notebook we will reason about recent presidential approval poll data. We will explore how the concepts of conditional probability, Law of Total Probability and Bayes' Theorem help us better understand a simple survey. Along the way we will learn how the Python data analysis library pandas facilitates easy manipulation of data tables.
Learning Goals:
1. Analyze poll data with conditional probability, Law of Total Probability and Bayes' Theorem
2. Learn some basic pandas skills
Poll Data - Presidential Approval
Problem: You collect data on whether or not people approve of President Trump, a potential candidate in the upcoming election. We have collected real poll data from the last 13 CNN polls, which can be found here (link directly to the CNN poll here).
Let $A$ be the event that a person says they approve of the way President Trump is handling his job as president. Let $M$ be the event that a user answered "No opinion." We are interested in estimating $P(A)$, however that is hard given the small but significant number of users who answered "No opinion".
Note 1: We assume in our model that given enough information the "No opinion" users would make an approve/disapprove decision.
Note 2: The latest CNN poll (Jan 16-19, 2020) had a sample of 1156 respondents. For simplicity we will assume all polls also had this sample size.
End of explanation
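The decomposition used throughout parts (a)-(c) is the Law of Total Probability applied to the response event:
$P(A) = P(A \mid M)\,P(M) + P(A \mid M^C)\,P(M^C)$
so estimating $P(A)$ comes down to what we are willing to assume about $P(A \mid M)$, the approval rate among the "No opinion" respondents.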
import pandas as pd
polldf = None  # TODO
polldf
Explanation: In the below cell, import pandas and make a DataFrame object using the above poll data and using the dates list as the index.
Then, display the data by printing your DataFrame object.
Hint: Instead of using print, try using the DataFrame variable name alone on a single line at the end of the cell. It will look prettier :)
End of explanation
# TODO
Explanation: a) For each month, what is the fraction of users who responded with their opinion $P(M^C)$?
Using your DataFrame object created above, compute $P(M^C)$. See pandas.DataFrame.sum to sum rows or columns of the table.
Hint: Try accessing the DataFrame using its column names and then doing elementwise vector math. For example, use polldf['approve'] / ... instead of for loops.
End of explanation
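One possible way to fill in the cell above, assuming the DataFrame columns keep the names from the data dictionary (a sketch, not the official solution):
p_responded = (polldf['approve'] + polldf['disapprove']) / 100.0
p_responded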
# TODO
Explanation: b) For each month, what is the probability that a user said they approve, given that they responded to the poll $P(A|M^C)$?
You know the drill :)
End of explanation
polldf['P(A) w/ A.1'] = None  # TODO
polldf['P(A) w/ A.2'] = None  # TODO
polldf['P(A) w/ A.3'] = None  # TODO
polldf
Explanation: c) Compute $P(A)$ under the following assumptions:
$P(A|M) = P(A|M^C)$. That is, people with no opinion will have similar approval ratios as the others.
$P(A|M) = 0$. That is, people with no opinion actually disapprove.
$P(A^C|M) = 0$. That is, people with no opinion actually approve.
End of explanation |
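For reference, one way the three assumptions could be translated into column arithmetic is sketched below; the variable names are chosen for clarity and this is not the official answer key.
p_M = polldf['no_opinion'] / 100.0
p_A_given_resp = polldf['approve'] / (polldf['approve'] + polldf['disapprove'])
polldf['P(A) w/ A.1'] = p_A_given_resp                    # P(A|M) = P(A|M^C)
polldf['P(A) w/ A.2'] = p_A_given_resp * (1 - p_M)        # P(A|M) = 0
polldf['P(A) w/ A.3'] = p_A_given_resp * (1 - p_M) + p_M  # P(A^C|M) = 0
polldf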
5,470 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running MSAF
The main MSAF functionality is demonstrated here.
Step1: Single File Mode
This mode analyzes one audio file at a time.
Note
Step2: Using different Algorithms
MSAF includes multiple algorithms both for boundary retrieval and structural grouping (or labeling). In this section we demonstrate how to try them out.
Note
Step3: Using different Features
Some algorithms allow the input of different type of features (e.g., harmonic, timbral). In this section we show how we can input different features to MSAF.
Step4: Using Annotated Beats
MSAF can calculate the beats or use annotated ones. The annotations should be stored in a jams file, for this notebook we used a simple jams example.
Step5: Evaluate Results
The results can be evaluated as long as there is an existing file containing reference annotations. The results are stored in a pandas DataFrame. MSAF has to run these algorithms (using msaf.process described above) before being able to evaluate its results.
Step6: Explore Algorithm Parameters
Now let's modify the configuration of one of the files, and modify it to see how different the results are.
We will use Widgets, which will become handy here.
Step7: Collection Mode
MSAF is able to run and evaluate multiple files using multi-threading. In this section we show this functionality. | Python Code:
from __future__ import print_function
import msaf
import librosa
import seaborn as sns
# and IPython.display for audio output
import IPython.display
# Setup nice plots
sns.set(style="dark")
%matplotlib inline
Explanation: Running MSAF
The main MSAF functionality is demonstrated here.
End of explanation
# Choose an audio file and listen to it
audio_file = "../datasets/Sargon/audio/01-Sargon-Mindless.mp3"
IPython.display.Audio(filename=audio_file)
# Segment the file using the default MSAF parameters
boundaries, labels = msaf.process(audio_file)
print(boundaries)
# Sonify boundaries
sonified_file = "my_boundaries.wav"
sr = 44100
boundaries, labels = msaf.process(audio_file, sonify_bounds=True,
out_bounds=sonified_file, out_sr=sr)
# Listen to results
audio = librosa.load(sonified_file, sr=sr)[0]
IPython.display.Audio(audio, rate=sr)
Explanation: Single File Mode
This mode analyzes one audio file at a time.
Note: Make sure to download the datasets from https://github.com/urinieto/msaf-data/
End of explanation
# First, let's list all the available boundary algorithms
print(msaf.get_all_boundary_algorithms())
# Try one of these boundary algorithms and print results
boundaries, labels = msaf.process(audio_file, boundaries_id="foote", plot=True)
# Let's check all the structural grouping (label) algorithms available
print(msaf.get_all_label_algorithms())
# Try one of these label algorithms
boundaries, labels = msaf.process(audio_file, boundaries_id="foote", labels_id="fmc2d")
print(boundaries)
print(labels)
# If available, you can use previously annotated boundaries and a specific labels algorithm
# Set plot = True to plot the results
boundaries, labels = msaf.process(audio_file, boundaries_id="foote",
labels_id="fmc2d", plot=True)
Explanation: Using different Algorithms
MSAF includes multiple algorithms both for boundary retrieval and structural grouping (or labeling). In this section we demonstrate how to try them out.
Note: more algorithms are available in msaf-gpl.
End of explanation
# Let's check what available features are there in MSAF
print(msaf.features_registry)
# Segment the file using the Foote method for boundaries, C-NMF method for labels, and MFCC features
boundaries, labels = msaf.process(audio_file, feature="mfcc", boundaries_id="gt",
labels_id="fmc2d", plot=True)
Explanation: Using different Features
Some algorithms allow the input of different type of features (e.g., harmonic, timbral). In this section we show how we can input different features to MSAF.
End of explanation
sr = 44100
hop_length = 1024
beats_audio_file = "../datasets/Sargon/audio/02-Sargon-Shattered World.mp3"
audio = librosa.load(beats_audio_file, sr=sr)[0]
audio_harmonic, audio_percussive = librosa.effects.hpss(audio)
# Compute beats
tempo, frames = librosa.beat.beat_track(y=audio_percussive,
sr=sr, hop_length=hop_length)
# To times
beat_times = librosa.frames_to_time(frames, sr=sr,
hop_length=hop_length)
# We will now save or beats to a JAMS file.
import jams
jam = jams.JAMS()
jam.file_metadata.duration = len(audio_file)/sr
beat_a = jams.Annotation(namespace='beat')
beat_a.annotation_metadata = jams.AnnotationMetadata(data_source='librosa beat tracker')
# Add beat timings to the annotation record.
# The beat namespace does not require value or confidence fields,
# so we can leave those blank.
for t in beat_times:
beat_a.append(time=t, duration=0.0)
# Store the new annotation in the jam file. This need to be located on the references folder
# and be named like the audio file except for the jams extension.
jam.annotations.append(beat_a)
jam.save('../datasets/Sargon/references/01-Sargon-Mindless.jams')
# Using the annotated beats then is straight forward.
# Just be sure you don't have a temporary features file in the directory.
boundaries, labels = msaf.process(audio_file, boundaries_id="foote",
annot_beats=True, labels_id="fmc2d", plot=True)
Explanation: Using Annotated Beats
MSAF can calculate the beats or use annotated ones. The annotations should be store in a jams file, for this notebook we used a simple jams example.
End of explanation
# Evaluate the results. It returns a pandas data frame.
evaluations = msaf.eval.process(audio_file, boundaries_id="foote", labels_id="fmc2d")
IPython.display.display(evaluations)
Explanation: Evaluate Results
The results can be evaluated as long as there is an existing file containing reference annotations. The results are stored in a pandas DataFrame. MSAF has to run these algorithms (using msaf.process described above) before being able to evaluate its results.
End of explanation
# First, check which are foote's algorithm parameters:
print(msaf.algorithms.foote.config)
# play around with IPython.Widgets
from ipywidgets import interact
# Obtain the default configuration
bid = "foote" # Boundaries ID
lid = None # Labels ID
feature = "pcp"
config = msaf.io.get_configuration(feature, annot_beats=False, framesync=False,
boundaries_id=bid, labels_id=lid)
# Sweep M_gaussian parameters
@interact(M_gaussian=(50, 500, 25))
def _run_msaf(M_gaussian):
# Set the configuration
config["M_gaussian"] = M_gaussian
# Segment the file using the Foote method, and Pitch Class Profiles for the features
results = msaf.process(audio_file, feature=feature, boundaries_id=bid,
config=config, plot=True)
# Evaluate the results. It returns a pandas data frame.
evaluations = msaf.eval.process(audio_file, feature=feature, boundaries_id=bid,
config=config)
IPython.display.display(evaluations)
Explanation: Explore Algorithm Parameters
Now let's modify the configuration of one of the files, and modify it to see how different the results are.
We will use Widgets, which will become handy here.
End of explanation
dataset = "../datasets/Sargon/"
results = msaf.process(dataset, n_jobs=1, boundaries_id="foote")
# Evaluate in collection mode
evaluations = msaf.eval.process(dataset, n_jobs=4, boundaries_id="foote")
IPython.display.display(evaluations)
IPython.display.display(evaluations.mean())
Explanation: Collection Mode
MSAF is able to run and evaluate mutliple files using multi-threading. In this section we show this functionality.
End of explanation |
5,471 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using ROOT to bind Python and C++
What is PyROOT?
PyROOT is the name of the Python bindings offered by ROOT
All the ROOT C++ functions and classes are accessible from Python via PyROOT
Python façade, C++ performance
But PyROOT is not just for ROOT!
It can also call into user-defined C++ code
How does PyROOT work?
PyROOT is a special type of bindings, since it's automatic and dynamic
No static wrapper generation
Dynamic python proxies are created for C++ entities
Lazy class/variable lookup
Powered by cppyy, the ROOT type system and Cling
Reflection information
JIT C++ compilation and execution
And on top of the automatic bindings
Step1: The ROOT Python module is the entry point for all the ROOT C++ functionality.
For example, we can create a histogram with ROOT using the TH1F C++ class from Python
Step3: Calling user-defined C++ code via PyROOT
We've seen how PyROOT allows to access all the functions and classes that the ROOT C++ libraries define.
In addition, it is possible to make PyROOT call into user-defined C++. For example, it is possible to declare a C++ function, as it is done below by passing its code as a string argument of the ProcessLine function
Step4: and use it right away from Python
Step5: What about code in C++ libraries?
In the example we just saw, the user-defined C++ code is contained in strings in our program, but PyROOT can also load and call into C++ libraries. This enables you to write high-performance C++, compile it and use it from Python.
More information can be found here.
Type conversions
When calling C++ from Python via PyROOT, there needs to be a conversion between the Python arguments we pass and the C++ arguments that the C++ side expects. PyROOT takes care of such conversion automatically, for example from Python integer to C++ integer
Step6: Of course not every conversion is allowed!
Step8: An example of a useful allowed conversion is Python list to std | Python Code:
import ROOT
Explanation: Using ROOT to bind Python and C++
What is PyROOT?
PyROOT is the name of the Python bindings offered by ROOT
All the ROOT C++ functions and classes are accessible from Python via PyROOT
Python façade, C++ performance
But PyROOT is not just for ROOT!
It can also call into user-defined C++ code
How does PyROOT work?
PyROOT is a special type of bindings, since it's automatic and dynamic
No static wrapper generation
Dynamic python proxies are created for C++ entities
Lazy class/variable lookup
Powered by cppyy, the ROOT type system and Cling
Reflection information
JIT C++ compilation and execution
And on top of the automatic bindings: pythonizations
To make the use of C++ from Python simpler, more pythonic
Using ROOT from Python
To start working with PyROOT, we need to import the ROOT module.
End of explanation
h = ROOT.TH1F("my_histo", "Example histogram", 100, -4, 4)
Explanation: The ROOT Python module is the entry point for all the ROOT C++ functionality.
For example, we can create a histogram with ROOT using the TH1F C++ class from Python:
End of explanation
ROOT.gInterpreter.ProcessLine('''
double add(double a, double b) {
    return a + b;
}
''')
Explanation: Calling user-defined C++ code via PyROOT
We've seen how PyROOT allows to access all the functions and classes that the ROOT C++ libraries define.
In addition, it is possible to make PyROOT call into user-defined C++. For example, it is possible to declare a C++ function, as it is done below by passing its code as a string argument of the ProcessLine function:
End of explanation
ROOT.add(3.14, 100)
Explanation: and use it right away from Python:
End of explanation
ROOT.gInterpreter.ProcessLine("void print_integer(int i) { std::cout << i << std::endl; }")
ROOT.print_integer(7)
Explanation: What about code in C++ libraries?
In the example we just saw, the user-defined C++ code is contained in strings in our program, but PyROOT can also load and call into C++ libraries. This enables you to write high-performance C++, compile it and use it from Python.
More information can be found here.
Type conversions
When calling C++ from Python via PyROOT, there needs to be a conversion between the Python arguments we pass and the C++ arguments that the C++ side expects. PyROOT takes care of such conversion automatically, for example from Python integer to C++ integer:
End of explanation
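The compiled-library route mentioned above is not demonstrated in this notebook, so here is a minimal sketch of one plausible way to do it. The header, library and function names (mylib.h, libmylib.so, my_fast_function) are placeholders for illustration only; substitute your own build artifacts.
python
# Hypothetical example: expose the declarations to Cling, then load the compiled library.
ROOT.gInterpreter.Declare('#include "mylib.h"')   # header with the declarations (placeholder name)
ROOT.gSystem.Load("./libmylib.so")                # compiled shared library (placeholder name)
# Anything declared in the header is now reachable through the ROOT module, e.g.:
# result = ROOT.my_fast_function(42)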
ROOT.print_integer([]) # fails with TypeError
Explanation: Of course not every conversion is allowed!
End of explanation
ROOT.gInterpreter.ProcessLine('''
void print_vector(const std::vector<std::string> &v) {
    for (auto &s : v) {
        std::cout << s << std::endl;
    }
}
''')
ROOT.print_vector(['Two', 'Words'])
Explanation: An example of a useful allowed conversion is Python list to std::vector:
End of explanation |
5,472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing basic semantic similarities between GO terms
Adapted from book chapter written by Alex Warwick Vesztrocy and Christophe Dessimoz
In this section we look at how to compute semantic similarity between GO terms. First we need to write a function that calculates the minimum number of branches connecting two GO terms.
Step1: Let's get all the annotations from arabidopsis.
Step2: Now we can calculate the semantic distance and semantic similarity, as so
Step3: Then we can calculate the information content of the single term, <code>GO
Step4: Resnik's similarity measure is defined as the information content of the most informative common ancestor. That is, the most specific common parent-term in the GO. Then we can calculate this as follows
Step5: Lin's similarity measure is defined as | Python Code:
%load_ext autoreload
%autoreload 2
import sys
sys.path.insert(0, "..")
from goatools import obo_parser
go = obo_parser.GODag("../go-basic.obo")
go_id3 = 'GO:0048364'
go_id4 = 'GO:0044707'
print(go[go_id3])
print(go[go_id4])
Explanation: Computing basic semantic similarities between GO terms
Adapted from book chapter written by Alex Warwick Vesztrocy and Christophe Dessimoz
In this section we look at how to compute semantic similarity between GO terms. First we need to write a function that calculates the minimum number of branches connecting two GO terms.
End of explanation
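The helper for the minimum number of branches is not shown in the cells above, so below is a small sketch of how it could be written directly on top of the parsed DAG. It assumes each GOTerm exposes .id and .parents, as recent goatools releases do; goatools also ships ready-made helpers in goatools.semantic, which the later cells use.
python
def ancestor_depths(go_id, godag):
    # Map every ancestor of go_id (including itself) to the minimum edge count needed to reach it.
    depths = {go_id: 0}
    frontier = [go_id]
    while frontier:
        current = frontier.pop()
        for parent in godag[current].parents:
            candidate = depths[current] + 1
            if parent.id not in depths or candidate < depths[parent.id]:
                depths[parent.id] = candidate
                frontier.append(parent.id)
    return depths

def min_branch_length_sketch(go_id1, go_id2, godag):
    # Shortest combined path to a common ancestor, i.e. the minimum number of connecting branches.
    d1 = ancestor_depths(go_id1, godag)
    d2 = ancestor_depths(go_id2, godag)
    common = set(d1) & set(d2)
    return min(d1[c] + d2[c] for c in common) if common else None

print(min_branch_length_sketch(go_id3, go_id4, go))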
from goatools.associations import read_gaf
associations = read_gaf("http://geneontology.org/gene-associations/gene_association.tair.gz")
Explanation: Let's get all the annotations from arabidopsis.
End of explanation
from goatools.semantic import semantic_similarity
sim = semantic_similarity(go_id3, go_id4, go)
print('The semantic similarity between terms {} and {} is {}.'.format(go_id3, go_id4, sim))
Explanation: Now we can calculate the semantic distance and semantic similarity, as so:
End of explanation
from goatools.semantic import TermCounts, get_info_content
# First get the counts of each GO term.
termcounts = TermCounts(go, associations)
# Calculate the information content
go_id = "GO:0048364"
infocontent = get_info_content(go_id, termcounts)
print('Information content ({}) = {}'.format(go_id, infocontent))
Explanation: Then we can calculate the information content of the single term, <code>GO:0048364</code>.
End of explanation
from goatools.semantic import resnik_sim
sim_r = resnik_sim(go_id3, go_id4, go, termcounts)
print('Resnik similarity score ({}, {}) = {}'.format(go_id3, go_id4, sim_r))
Explanation: Resnik's similarity measure is defined as the information content of the most informative common ancestor. That is, the most specific common parent-term in the GO. Then we can calculate this as follows:
End of explanation
from goatools.semantic import lin_sim
sim_l = lin_sim(go_id3, go_id4, go, termcounts)
print('Lin similarity score ({}, {}) = {}'.format(go_id3, go_id4, sim_l))
Explanation: Lin's similarity measure is defined as:
$$ \textrm{sim}_{\textrm{Lin}}(t_{1}, t_{2}) = \frac{2\cdot\textrm{sim}_{\textrm{Resnik}}(t_1, t_2)}{IC(t_1) + IC(t_2)} $$
Then we can calculate this as
End of explanation |
5,473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Download RetinaNet code to notebook instance
Step1: Create API Key on Kaggle
Please go https | Python Code:
!pip install --user --upgrade kaggle
import IPython
IPython.Application.instance().kernel.do_shutdown(True) #automatically restarts kernel
Explanation: Download RetinaNet code to notebook instance
End of explanation
!ls ./kaggle.json
import os
current_dir=!pwd
current_dir=current_dir[0]
os.environ['KAGGLE_CONFIG_DIR']=current_dir
!${HOME}/.local/bin/kaggle datasets download mistag/arthropod-taxonomy-orders-object-detection-dataset
!unzip -q *dataset.zip
Explanation: Create API Key on Kaggle
Please go https://www.kaggle.com/ and go to your Account profile page
Click on "Create New API Token"
Upload the kaggle.json to this directory.
End of explanation |
5,474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing Networks
The following demonstrates basic use of nupic.frameworks.viz.NetworkVisualizer to visualize a network.
Before you begin, you will need to install the otherwise optional dependencies. From the root of nupic repository
Step1: Render with nupic.frameworks.viz.NetworkVisualizer, which takes as input any nupic.engine.Network instance
Step2: That's interesting, but not necessarily useful if you don't understand dot. Let's capture that output and do something else
Step3: outp now contains the rendered output, render to an image with graphviz
Step4: In the example above, each three-columned rectangle is a discrete region, the user-defined name for which is in the middle column. The left-hand and right-hand columns are respective inputs and outputs, the names for which, e.g. "bottumUpIn" and "bottomUpOut", are specific to the region type. The arrows indicate links between outputs from one region to the input of another.
I know what you're thinking. That's a cool trick, but nobody cares about your contrived example. I want to see something real!
Continuing below, I'll instantiate a CLA model and visualize it. In this case, I'll use one of the "hotgym" examples.
Step5: Same deal as before, create a NetworkVisualizer instance, render to a buffer, then to an image, and finally display it inline. | Python Code:
from nupic.engine import Network, Dimensions
# Create Network instance
network = Network()
# Add three TestNode regions to network
network.addRegion("region1", "TestNode", "")
network.addRegion("region2", "TestNode", "")
network.addRegion("region3", "TestNode", "")
# Set dimensions on first region
region1 = network.getRegions().getByName("region1")
region1.setDimensions(Dimensions([1, 1]))
# Link regions
network.link("region1", "region2", "UniformLink", "")
network.link("region2", "region1", "UniformLink", "")
network.link("region1", "region3", "UniformLink", "")
network.link("region2", "region3", "UniformLink", "")
# Initialize network
network.initialize()
Explanation: Visualizing Networks
The following demonstrates basic use of nupic.frameworks.viz.NetworkVisualizer to visualize a network.
Before you begin, you will need to install the otherwise optional dependencies. From the root of nupic repository:
pip install --user .[viz]
Setup a simple network so we have something to work with:
End of explanation
from nupic.frameworks.viz import NetworkVisualizer
# Initialize Network Visualizer
viz = NetworkVisualizer(network)
# Render to dot (stdout)
viz.render()
Explanation: Render with nupic.frameworks.viz.NetworkVisualizer, which takes as input any nupic.engine.Network instance:
End of explanation
from nupic.frameworks.viz import DotRenderer
from io import StringIO
outp = StringIO()
viz.render(renderer=lambda: DotRenderer(outp))
Explanation: That's interesting, but not necessarily useful if you don't understand dot. Let's capture that output and do something else:
End of explanation
# Render dot to image
from graphviz import Source
from IPython.display import Image
Image(Source(outp.getvalue()).pipe("png"))
Explanation: outp now contains the rendered output, render to an image with graphviz:
End of explanation
from nupic.frameworks.opf.modelfactory import ModelFactory
# Note: parameters copied from examples/opf/clients/hotgym/simple/model_params.py
model = ModelFactory.create({'aggregationInfo': {'hours': 1, 'microseconds': 0, 'seconds': 0, 'fields': [('consumption', 'sum')], 'weeks': 0, 'months': 0, 'minutes': 0, 'days': 0, 'milliseconds': 0, 'years': 0}, 'model': 'CLA', 'version': 1, 'predictAheadTime': None, 'modelParams': {'sensorParams': {'verbosity': 0, 'encoders': {'timestamp_timeOfDay': {'type': 'DateEncoder', 'timeOfDay': (21, 1), 'fieldname': u'timestamp', 'name': u'timestamp_timeOfDay'}, u'consumption': {'resolution': 0.88, 'seed': 1, 'fieldname': u'consumption', 'name': u'consumption', 'type': 'RandomDistributedScalarEncoder'}, 'timestamp_weekend': {'type': 'DateEncoder', 'fieldname': u'timestamp', 'name': u'timestamp_weekend', 'weekend': 21}}, 'sensorAutoReset': None}, 'spParams': {'columnCount': 2048, 'spVerbosity': 0, 'spatialImp': 'cpp', 'synPermConnected': 0.1, 'seed': 1956, 'numActiveColumnsPerInhArea': 40, 'globalInhibition': 1, 'inputWidth': 0, 'synPermInactiveDec': 0.005, 'synPermActiveInc': 0.04, 'potentialPct': 0.85, 'boostStrength': 3.0}, 'spEnable': True, 'clParams': {'implementation': 'cpp', 'alpha': 0.1, 'verbosity': 0, 'steps': '1,5', 'regionName': 'SDRClassifierRegion'}, 'inferenceType': 'TemporalMultiStep', 'tpEnable': True, 'tpParams': {'columnCount': 2048, 'activationThreshold': 16, 'pamLength': 1, 'cellsPerColumn': 32, 'permanenceInc': 0.1, 'minThreshold': 12, 'verbosity': 0, 'maxSynapsesPerSegment': 32, 'outputType': 'normal', 'initialPerm': 0.21, 'globalDecay': 0.0, 'maxAge': 0, 'permanenceDec': 0.1, 'seed': 1960, 'newSynapseCount': 20, 'maxSegmentsPerCell': 128, 'temporalImp': 'cpp', 'inputWidth': 2048}, 'trainSPNetOnlyIfRequested': False}})
Explanation: In the example above, each three-columned rectangle is a discrete region, the user-defined name for which is in the middle column. The left-hand and right-hand columns are respective inputs and outputs, the names for which, e.g. "bottumUpIn" and "bottomUpOut", are specific to the region type. The arrows indicate links between outputs from one region to the input of another.
I know what you're thinking. That's a cool trick, but nobody cares about your contrived example. I want to see something real!
Continuing below, I'll instantiate a CLA model and visualize it. In this case, I'll use one of the "hotgym" examples.
End of explanation
# New network, new NetworkVisualizer instance
viz = NetworkVisualizer(model._netInfo.net)
# Render to Dot output to buffer
outp = StringIO()
viz.render(renderer=lambda: DotRenderer(outp))
# Render Dot to image, display inline
Image(Source(outp.getvalue()).pipe("png"))
Explanation: Same deal as before, create a NetworkVisualizer instance, render to a buffer, then to an image, and finally display it inline.
End of explanation |
5,475 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interfaces
In Nipype, interfaces are python modules that allow you to use various external packages (e.g. FSL, SPM or FreeSurfer), even if they themselves are written in another programming language than python. Such an interface knows what sort of options an external program has and how to execute it.
To illustrate why interfaces are so useful, let's have a look at the brain extraction algorithm BET from FSL. Once in its original framework and once in the Nipype framework.
BET in the original framework
Let's take a look at our T1 image on which we want to run BET.
Step1: In its simplest form, you can run BET by just specifying the input image and tell it what to name the output image
Step2: Let's take a look at the results
Step3: Perfect! Exactly what we want. Hmm... what else could we want from BET? Well, it's actually a fairly complicated program. As is the case for all FSL binaries, just call it with no arguments to see all its options.
Step4: We see that BET can also return a binary brain mask as a result of the skull-strip, which can be useful for masking our GLM analyses (among other things). Let's run it again including that option and see the result.
Step5: Now let's look at the BET interface in Nipype. First, we have to import it.
BET in the Nipype framework
So how can we run BET in the Nipype framework?
First things first, we need to import the BET class from Nipype's interfaces module
Step6: Now that we have the BET function accessible, we just have to specify the input and output file. And finally we have to run the command. So exactly like in the original framework.
Step7: If we now look at the results from Nipype, we see that it is exactly the same as before.
Step8: This is not surprising, because Nipype used exactly the same bash code that we were using in the original framework example above. To verify this, we can call the cmdline function of the constructed BET instance.
Step9: Another way to set the inputs on an interface object is to use them as keyword arguments when you construct the interface instance. Let's write the Nipype code from above in this way, but let's also add the option to create a brain mask.
Step10: Now if we plot this, we see again that this worked exactly as before. No surprise there.
Step11: Help Function
But how did we know what the names of the input parameters are? In the original framework we were able to just run BET, without any additional parameters to get an information page. In the Nipype framework we can achieve the same thing by using the help() function on an interface class. For the BET example, this is
Step12: As you can see, we get three different pieces of information. First, a general explanation of the class.
Wraps command **bet**
Use FSL BET command for skull stripping.
For complete details, see the `BET Documentation.
<http
Step13: Interface errors
To execute any interface class we use the run method on that object. For FSL, Freesurfer, and other programs, this will just make a system call with the command line we saw above. For MATLAB-based programs like SPM, it will actually generate a .m file and run a MATLAB process to execute it. All of that is handled in the background.
But what happens if we didn't specify all necessary inputs? For instance, you need to give BET a file to work on. If you try and run it without setting the input in_file, you'll get a Python exception before anything actually gets executed
Step14: Nipype also knows some things about what sort of values should get passed to the inputs, and will raise (hopefully) informative exceptions when they are violated -- before anything gets processed. For example, BET just lets you say "create a mask," it doesn't let you name it. You may forget this, and try to give it a name. In this case, Nipype will raise a TraitError telling you what you did wrong
Step15: Additionally, Nipype knows that, for inputs corresponding to files you are going to process, they should exist in your file system. If you pass a string that doesn't correspond to an existing file, it will error and let you know
Step16: It turns out that for default output files, you don't even need to specify a name. Nipype will know what files are going to be created and will generate a name for you
Step17: Note that it is going to write the output file to the local directory.
What if you just ran this interface and wanted to know what it called the file that was produced? As you might have noticed before, calling the run method returned an object called InterfaceResult that we saved under the variable res. Let's inspect that object
Step18: We see that four possible files can be generated by BET. Here we ran it in the most simple way possible, so it just generated an out_file, which is the skull-stripped image. Let's see what happens when we generate a mask. By the way, you can also set inputs at runtime by including them as arguments to the run method | Python Code:
%pylab inline
from nilearn.plotting import plot_anat
plot_anat('/data/ds102/sub-01/anat/sub-01_T1w.nii.gz', title='original',
display_mode='ortho', dim=-1, draw_cross=False, annotate=False)
Explanation: Interfaces
In Nipype, interfaces are python modules that allow you to use various external packages (e.g. FSL, SPM or FreeSurfer), even if they themselves are written in another programming language than python. Such an interface knows what sort of options an external program has and how to execute it.
To illustrate why interfaces are so useful, let's have a look at the brain extraction algorithm BET from FSL. Once in its original framework and once in the Nipype framework.
BET in the original framework
Let's take a look at our T1 image on which we want to run BET.
End of explanation
%%bash
FILENAME=/data/ds102/sub-01/anat/sub-01_T1w
bet ${FILENAME}.nii.gz ${FILENAME}_bet.nii.gz
Explanation: In its simplest form, you can run BET by just specifying the input image and tell it what to name the output image:
bet <input> <output>
End of explanation
plot_anat('/data/ds102/sub-01/anat/sub-01_T1w_bet.nii.gz', title='original',
display_mode='ortho', dim=-1, draw_cross=False, annotate=False)
Explanation: Let's take a look at the results:
End of explanation
%%bash
bet
Explanation: Perfect! Exactly what we want. Hmm... what else could we want from BET? Well, it's actually a fairly complicated program. As is the case for all FSL binaries, just call it with no arguments to see all its options.
End of explanation
%%bash
FILENAME=/data/ds102/sub-01/anat/sub-01_T1w
bet ${FILENAME}.nii.gz ${FILENAME}_bet.nii.gz -m
plot_anat('/data/ds102/sub-01/anat/sub-01_T1w_bet_mask.nii.gz', title='original',
display_mode='ortho', dim=-1, draw_cross=False, annotate=False)
Explanation: We see that BET can also return a binary brain mask as a result of the skull-strip, which can be useful for masking our GLM analyses (among other things). Let's run it again including that option and see the result.
End of explanation
from nipype.interfaces.fsl import BET
Explanation: Now let's look at the BET interface in Nipype. First, we have to import it.
BET in the Nipype framework
So how can we run BET in the Nipype framework?
First things first, we need to import the BET class from Nipype's interfaces module:
End of explanation
skullstrip = BET()
skullstrip.inputs.in_file = "/data/ds102/sub-01/anat/sub-01_T1w.nii.gz"
skullstrip.inputs.out_file = "/data/ds102/sub-01/anat/T1w_nipype_bet.nii.gz"
res = skullstrip.run()
Explanation: Now that we have the BET function accessible, we just have to specify the input and output file. And finally we have to run the command. So exactly like in the original framework.
End of explanation
plot_anat('/data/ds102/sub-01/anat/T1w_nipype_bet.nii.gz', title='original',
display_mode='ortho', dim=-1, draw_cross=False, annotate=False)
Explanation: If we now look at the results from Nipype, we see that it is exactly the same as before.
End of explanation
print skullstrip.cmdline
Explanation: This is not surprising, because Nipype used exactly the same bash code that we were using in the original framework example above. To verify this, we can call the cmdline function of the constructed BET instance.
End of explanation
skullstrip = BET(in_file="/data/ds102/sub-01/anat/sub-01_T1w.nii.gz",
out_file="/data/ds102/sub-01/anat/T1w_nipype_bet.nii.gz",
mask=True)
res = skullstrip.run()
Explanation: Another way to set the inputs on an interface object is to use them as keyword arguments when you construct the interface instance. Let's write the Nipype code from above in this way, but let's also add the option to create a brain mask.
End of explanation
plot_anat('/data/ds102/sub-01/anat/T1w_nipype_bet_mask.nii.gz', title='original',
display_mode='ortho', dim=-1, draw_cross=False, annotate=False)
Explanation: Now if we plot this, we see again that this worked exactly as before. No surprise there.
End of explanation
BET.help()
Explanation: Help Function
But how did we know what the names of the input parameters are? In the original framework we were able to just run BET, without any additional parameters to get an information page. In the Nipype framework we can achieve the same thing by using the help() function on an interface class. For the BET example, this is:
End of explanation
print res.outputs.mask_file
Explanation: As you can see, we get three different pieces of information. First, a general explanation of the class.
Wraps command **bet**
Use FSL BET command for skull stripping.
For complete details, see the `BET Documentation.
<http://www.fmrib.ox.ac.uk/fsl/bet2/index.html>`_
Examples
--------
>>> from nipype.interfaces import fsl
>>> from nipype.testing import example_data
>>> btr = fsl.BET()
>>> btr.inputs.in_file = example_data('structural.nii')
>>> btr.inputs.frac = 0.7
>>> res = btr.run() # doctest: +SKIP
Second, a list of all possible input parameters.
Inputs::
[Mandatory]
in_file: (an existing file name)
input file to skull strip
flag: %s, position: 0
[Optional]
args: (a string)
Additional parameters to the command
flag: %s
center: (a list of at most 3 items which are an integer (int or
long))
center of gravity in voxels
flag: -c %s
environ: (a dictionary with keys which are a value of type 'str' and
with values which are a value of type 'str', nipype default value:
{})
Environment variables
frac: (a float)
fractional intensity threshold
flag: -f %.2f
functional: (a boolean)
apply to 4D fMRI data
flag: -F
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
ignore_exception: (a boolean, nipype default value: False)
Print an error message instead of throwing an exception in case the
interface fails to run
mask: (a boolean)
create binary mask image
flag: -m
mesh: (a boolean)
generate a vtk mesh brain surface
flag: -e
no_output: (a boolean)
Don't generate segmented output
flag: -n
out_file: (a file name)
name of output skull stripped image
flag: %s, position: 1
outline: (a boolean)
create surface outline image
flag: -o
output_type: ('NIFTI_PAIR' or 'NIFTI_PAIR_GZ' or 'NIFTI_GZ' or
'NIFTI')
FSL output type
padding: (a boolean)
improve BET if FOV is very small in Z (by temporarily padding end
slices)
flag: -Z
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
radius: (an integer (int or long))
head radius
flag: -r %d
reduce_bias: (a boolean)
bias field and neck cleanup
flag: -B
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
remove_eyes: (a boolean)
eye & optic nerve cleanup (can be useful in SIENA)
flag: -S
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
robust: (a boolean)
robust brain centre estimation (iterates BET several times)
flag: -R
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
skull: (a boolean)
create skull image
flag: -s
surfaces: (a boolean)
run bet2 and then betsurf to get additional skull and scalp surfaces
(includes registrations)
flag: -A
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
t2_guided: (a file name)
as with creating surfaces, when also feeding in non-brain-extracted
T2 (includes registrations)
flag: -A2 %s
mutually_exclusive: functional, reduce_bias, robust, padding,
remove_eyes, surfaces, t2_guided
terminal_output: ('stream' or 'allatonce' or 'file' or 'none')
Control terminal output: `stream` - displays to terminal immediately
(default), `allatonce` - waits till command is finished to display
output, `file` - writes output to file, `none` - output is ignored
threshold: (a boolean)
apply thresholding to segmented brain image and mask
flag: -t
vertical_gradient: (a float)
vertical gradient in fractional intensity threshold (-1, 1)
flag: -g %.2f
And third, a list of all possible output parameters.
Outputs::
inskull_mask_file: (a file name)
path/name of inskull mask (if generated)
inskull_mesh_file: (a file name)
path/name of inskull mesh outline (if generated)
mask_file: (a file name)
path/name of binary brain mask (if generated)
meshfile: (a file name)
path/name of vtk mesh file (if generated)
out_file: (a file name)
path/name of skullstripped file (if generated)
outline_file: (a file name)
path/name of outline file (if generated)
outskin_mask_file: (a file name)
path/name of outskin mask (if generated)
outskin_mesh_file: (a file name)
path/name of outskin mesh outline (if generated)
outskull_mask_file: (a file name)
path/name of outskull mask (if generated)
outskull_mesh_file: (a file name)
path/name of outskull mesh outline (if generated)
skull_mask_file: (a file name)
path/name of skull mask (if generated)
So here we see that Nipype also has output parameters. This is very practical because, instead of typing the full path name to the mask volume, we can use the mask_file output parameter directly.
End of explanation
skullstrip2 = BET()
skullstrip2.run()
Explanation: Interface errors
To execute any interface class we use the run method on that object. For FSL, Freesurfer, and other programs, this will just make a system call with the command line we saw above. For MATLAB-based programs like SPM, it will actually generate a .m file and run a MATLAB process to execute it. All of that is handled in the background.
But what happens if we didn't specify all necessary inputs? For instance, you need to give BET a file to work on. If you try and run it without setting the input in_file, you'll get a Python exception before anything actually gets executed:
End of explanation
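If you want to see this behaviour without halting the notebook, you can wrap the same failing call in a try/except; the snippet below is purely illustrative.
python
try:
    BET().run()   # no in_file set, so Nipype refuses to run
except Exception as err:
    print("BET did not run: %s" % err)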
skullstrip.inputs.mask = "mask_file.nii"
Explanation: Nipype also knows some things about what sort of values should get passed to the inputs, and will raise (hopefully) informative exceptions when they are violated -- before anything gets processed. For example, BET just lets you say "create a mask," it doesn't let you name it. You may forget this, and try to give it a name. In this case, Nipype will raise a TraitError telling you what you did wrong:
End of explanation
skullstrip.inputs.in_file = "/data/oops_a_typo.nii"
Explanation: Additionally, Nipype knows that, for inputs corresponding to files you are going to process, they should exist in your file system. If you pass a string that doesn't correspond to an existing file, it will error and let you know:
End of explanation
skullstrip = BET(in_file="/data/ds102/sub-01/anat/sub-01_T1w.nii.gz")
print(skullstrip.cmdline)
Explanation: It turns out that for default output files, you don't even need to specify a name. Nipype will know what files are going to be created and will generate a name for you:
End of explanation
res = skullstrip.run()
print(res.outputs)
Explanation: Note that it is going to write the output file to the local directory.
What if you just ran this interface and wanted to know what it called the file that was produced? As you might have noticed before, calling the run method returned an object called InterfaceResult that we saved under the variable res. Let's inspect that object:
End of explanation
res2 = skullstrip.run(mask=True)
print(res2.outputs)
Explanation: We see that four possible files can be generated by BET. Here we ran it in the most simple way possible, so it just generated an out_file, which is the skull-stripped image. Let's see what happens when we generate a mask. By the way, you can also set inputs at runtime by including them as arguments to the run method:
End of explanation |
5,476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Session 02 - Chromosome $k$-mers <img src="data/JHI_STRAP_Web.png" style="width
Step1: Sequence data
Like Session 01, we will be dealing with sequence data directly, but there are again helper functions for this exercise, in the module ex02.
There is a dictionary stored in the variable ex02.bact_files. This provides a tuple of sequence file names, for any organism name in the list stored in ex02.bacteria.
You can see the contents of this list and dictionary with
python
print(list(ex02.bacteria))
print(ex02.bact_files)
Step2: To choose a particular organism, you can use the square bracket notation for dictionaries
Step3: 1. Counting $k$-mers
A function is provided in the ex02 module to help you
Step4: We can also inspect the .shape attribute to find out how large the returned results are, as this returns a (rows, columns) tuple, using the code below
Step5: 2. Plotting $k$-mer spectra
You can use the built-in .hist() method of Pandas dataframes, that will plot a histogram directly in this notebook. By default, this has quite a wide bin width, but this can be overridden with the bins=n argument, as with
Step6: Exercise 1 (10min)
Step7: Exercise 2 (5min) | Python Code:
%matplotlib inline
from Bio import SeqIO # For working with sequence data files
from Bio.Seq import Seq # Seq object, needed for the last activity
from Bio.Alphabet import generic_dna # sequence alphabet, for the last activity
from bs32010 import ex02 # Local functions and data
Explanation: Session 02 - Chromosome $k$-mers <img src="data/JHI_STRAP_Web.png" style="width: 150px; float: right;">
Learning Outcomes
Read and manipulate prokaryotic genome sequences using Biopython.
Extract bulk genome properties from a genome sequence
Visualisation of bulk genome properties using Python
Introduction
$k$-mers
Empirical frequencies of DNA $k$-mers in whole genome sequences provide an interesting perspective on genomic complexity, and the availability of large segments of genomic sequence from many organisms means that analysis of $k$-mers with non-trivial lengths is now possible, as can be seen in Chor et al. (2009) Genome Biol. 10:R108.
You will visualise the distribution of $k$-mer counts as spectra, as in the image above, using Python.
Python code
We will use the Biopython libraries to interact with and manipulate sequence data, and the Pandas data analysis libraries to manipulate numerical data.
Some code is imported from the local bs32010 module in this directory, to avoid clutter in this notebook. You can inspect this module if you are interested.
End of explanation
# Enter code here
Explanation: Sequence data
Like Session 01, we will be dealing with sequence data directly, but there are again helper functions for this exercise, in the module ex02.
There is a dictionary stored in the variable ex02.bact_files. This provides a tuple of sequence file names, for any organism name in the list stored in ex02.bacteria.
You can see the contents of this list and dictionary with
python
print(list(ex02.bacteria))
print(ex02.bact_files)
End of explanation
# Enter code here
Explanation: To choose a particular organism, you can use the square bracket notation for dictionaries:
python
print(ex02.bact_files['Mycobacterium tuberculosis'])
End of explanation
# Enter code here
Explanation: 1. Counting $k$-mers
A function is provided in the ex02 module to help you:
count_seq_kmers(inseq, k): this counts all subsequences of size $k$ in the sequence inseq
Test the function using the code below, which conducts the analysis for a Pectobacterium chromosome:
python
inseq = SeqIO.read('genome_data/Pectobacterium/GCA_000769535.1.fasta', 'fasta')
kmer_count = ex02.count_seq_kmers(inseq, 6)
kmer_count.head()
The Pandas dataframe that is returned lets us use the .head() method to view the first few rows of the dataframe. This shows a column of six-character strings (the $k$-mers), with a second column showing the number of times that $k$-mer occurs in the genome.
End of explanation
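The body of count_seq_kmers lives in the local ex02 module and is not reproduced here. As a rough sketch (an assumption about how such a helper could work, not its actual source), a sliding window plus collections.Counter is enough:
python
import collections
import pandas as pd

def count_seq_kmers_sketch(inseq, k):
    # Tally every length-k window of the sequence and return a kmer/frequency table.
    seq = str(inseq.seq).upper()
    counts = collections.Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return pd.DataFrame(sorted(counts.items()), columns=['kmer', 'frequency'])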
# Enter code here
Explanation: We can also inspect the .shape attribute to find out how large the returned results are, as this returns a (rows, columns) tuple, using the code below:
python
kmer_count.shape
This tells us that there are 4096 distinct 6-mers in the sequence.
End of explanation
# Enter code here
Explanation: 2. Plotting $k$-mer spectra
You can use the built-in .hist() method of Pandas dataframes, that will plot a histogram directly in this notebook. By default, this has quite a wide bin width, but this can be overridden with the bins=n argument, as with:
python
kmer_count.hist(column='frequency', bins=100)
By default, the .hist() method will display the full range of data, but by specifying maximum and minimum values with the range=(min, max) argument, the extent of data displayed can be controlled.
Use the code below to visualise the 6-mer spectrum
python
kmer_count.hist(column='frequency', bins=100, range=(0, 1000))
End of explanation
# Enter code here
Explanation: Exercise 1 (10min): Recreate the plot in the upper left corner of the figure in the introduction, for one of the *E. coli* genomes.
HINT: Use print(ex02.bact_files['Escherichia coli']) to get a list of E.coli chromosome files.
End of explanation
# Enter code here
Explanation: Exercise 2 (5min): The *E. coli* spectrum is unimodal, but how many modes does the Platypus chromosome 01 have?
The platypus chromosome 01 file can be found in the file genome_data/Platypus/oan_ref_Ornithorhynchus_anatinus_5.0.1_chr1.fa
End of explanation |
5,477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimizing the SVM Classifier
Machine learning models are parameterized so that their behavior can be tuned for a given problem. Models can have many parameters and finding the best combination of parameters can be treated as a search problem. In this notebook, I aim to tune parameters of the SVM Classification model using scikit-learn.
Load Libraries and Data
Step1: Build a predictive model and evaluate with 5-cross validation using support vector classifies (ref NB4) for details
Step2: Importance of optimizing a classifier
We can tune two key parameters of the SVM algorithm
Step3: Decision boundaries of different classifiers
Let's see the decision boundaries produced by the linear, Gaussian and polynomial classifiers. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
#Load libraries for data processing
import pandas as pd #data processing, CSV file I/O (e.g. pd.read_csv)
import numpy as np
from scipy.stats import norm
## Supervised learning.
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.metrics import confusion_matrix
from sklearn import metrics, preprocessing
from sklearn.metrics import classification_report
from sklearn.feature_selection import SelectKBest, f_regression
# visualization
import seaborn as sns
plt.style.use('fivethirtyeight')
sns.set_style("white")
plt.rcParams['figure.figsize'] = (8,4)
#plt.rcParams['axes.titlesize'] = 'large'
Explanation: Optimizing the SVM Classifier
Machine learning models are parameterized so that their behavior can be tuned for a given problem. Models can have many parameters and finding the best combination of parameters can be treated as a search problem. In this notebook, I aim to tune parameters of the SVM Classification model using scikit-learn.
Load Libraries and Data
End of explanation
data = pd.read_csv('data/clean-data.csv', index_col=False)
data.drop('Unnamed: 0',axis=1, inplace=True)
#Assign predictors to a variable of ndarray (matrix) type
array = data.values
X = array[:,1:31]
y = array[:,0]
#transform the class labels from their original string representation (M and B) into integers
le = LabelEncoder()
y = le.fit_transform(y)
# Normalize the data (center around 0 and scale to remove the variance).
scaler =StandardScaler()
Xs = scaler.fit_transform(X)
from sklearn.decomposition import PCA
# feature extraction
pca = PCA(n_components=10)
fit = pca.fit(Xs)
X_pca = pca.transform(Xs)
# 5. Divide records in training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(X_pca, y, test_size=0.3, random_state=2, stratify=y)
# 6. Create an SVM classifier and train it on 70% of the data set.
clf = SVC(probability=True)
clf.fit(X_train, y_train)
#7. Analyze accuracy of predictions on 30% of the holdout test sample.
classifier_score = clf.score(X_test, y_test)
print ('\nThe classifier accuracy score is {:03.2f}\n'.format(classifier_score))
clf2 = make_pipeline(SelectKBest(f_regression, k=3),SVC(probability=True))
scores = cross_val_score(clf2, X_pca, y, cv=3)
# Get average of 5-fold cross-validation score using an SVC estimator.
n_folds = 5
cv_error = np.average(cross_val_score(SVC(), X_pca, y, cv=n_folds))
#print ('\nThe {}-fold cross-validation accuracy score for this classifier is {:.2f}\n'.format(n_folds, cv_error))
y_pred = clf.fit(X_train, y_train).predict(X_test)
cm = metrics.confusion_matrix(y_test, y_pred)
print(classification_report(y_test, y_pred ))
fig, ax = plt.subplots(figsize=(5, 5))
ax.matshow(cm, cmap=plt.cm.Reds, alpha=0.3)
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(x=j, y=i,
s=cm[i, j],
va='center', ha='center')
plt.xlabel('Predicted Values', )
plt.ylabel('Actual Values')
plt.show()
Explanation: Build a predictive model and evaluate with 5-cross validation using support vector classifies (ref NB4) for details
End of explanation
# Train classifiers.
kernel_values = [ 'linear' , 'poly' , 'rbf' , 'sigmoid' ]
param_grid = {'C': np.logspace(-3, 2, 6), 'gamma': np.logspace(-3, 2, 6),'kernel': kernel_values}
grid = GridSearchCV(SVC(), param_grid=param_grid, cv=5)
grid.fit(X_train, y_train)
print("The best parameters are %s with a score of %0.2f"
% (grid.best_params_, grid.best_score_))
grid.best_estimator_.probability = True
clf = grid.best_estimator_
y_pred = clf.fit(X_train, y_train).predict(X_test)
cm = metrics.confusion_matrix(y_test, y_pred)
#print(cm)
print(classification_report(y_test, y_pred ))
fig, ax = plt.subplots(figsize=(5, 5))
ax.matshow(cm, cmap=plt.cm.Reds, alpha=0.3)
for i in range(cm.shape[0]):
for j in range(cm.shape[1]):
ax.text(x=j, y=i,
s=cm[i, j],
va='center', ha='center')
plt.xlabel('Predicted Values', )
plt.ylabel('Actual Values')
plt.show()
Explanation: Importance of optimizing a classifier
We can tune two key parameters of the SVM algorithm:
* the value of C (how much to relax the margin)
* and the type of kernel.
The default for SVM (the SVC class) is to use the Radial Basis Function (RBF) kernel with a C value set to 1.0. Like with KNN, we will perform a grid search using 10-fold cross validation with a standardized copy of the training dataset. We will try a number of simpler kernel types and C values with less bias and more bias (less than and more than 1.0 respectively).
Python scikit-learn provides two simple methods for algorithm parameter tuning:
* Grid Search Parameter Tuning.
* Random Search Parameter Tuning.
End of explanation
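The grid search above covers the first option; the random-search alternative from the list is sketched below. The sampling distributions and n_iter are illustrative choices rather than recommended settings.
python
from scipy.stats import expon
from sklearn.model_selection import RandomizedSearchCV

# Sample C and gamma from continuous distributions instead of sweeping a fixed grid.
param_dist = {'C': expon(scale=10), 'gamma': expon(scale=0.1), 'kernel': kernel_values}
random_search = RandomizedSearchCV(SVC(), param_distributions=param_dist,
                                   n_iter=20, cv=5, random_state=1)
random_search.fit(X_train, y_train)
print("The best parameters are %s with a score of %0.2f"
      % (random_search.best_params_, random_search.best_score_))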
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import svm, datasets
def decision_plot(X_train, y_train, n_neighbors, weights):
h = .02 # step size in the mesh
Xtrain = X_train[:, :2] # we only take the first two features.
#================================================================
# Create color maps
#================================================================
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
#================================================================
# we create an instance of SVM and fit out data.
# We do not scale ourdata since we want to plot the support vectors
#================================================================
C = 1.0 # SVM regularization parameter
svm = SVC(kernel='linear', random_state=0, gamma=0.1, C=C).fit(Xtrain, y_train)
rbf_svc = SVC(kernel='rbf', gamma=0.7, C=C).fit(Xtrain, y_train)
poly_svc = SVC(kernel='poly', degree=3, C=C).fit(Xtrain, y_train)
%matplotlib inline
plt.rcParams['figure.figsize'] = (15, 9)
plt.rcParams['axes.titlesize'] = 'large'
# create a mesh to plot in
x_min, x_max = Xtrain[:, 0].min() - 1, Xtrain[:, 0].max() + 1
y_min, y_max = Xtrain[:, 1].min() - 1, Xtrain[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.1),
np.arange(y_min, y_max, 0.1))
# title for the plots
titles = ['SVC with linear kernel',
'SVC with RBF kernel',
'SVC with polynomial (degree 3) kernel']
for i, clf in enumerate((svm, rbf_svc, poly_svc)):
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
plt.subplot(2, 2, i + 1)
plt.subplots_adjust(wspace=0.4, hspace=0.4)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)
# Plot also the training points
plt.scatter(Xtrain[:, 0], Xtrain[:, 1], c=y_train, cmap=plt.cm.coolwarm)
plt.xlabel('radius_mean')
plt.ylabel('texture_mean')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title(titles[i])
plt.show()
Explanation: Decision boundaries of different classifiers
Let's see the decision boundaries produced by the linear, Gaussian and polynomial classifiers.
End of explanation |
5,478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Win/Loss Betting Model
Same as other one but I now filter by teams that have a ranking
Step1: Obtain results of teams within the past year
Step2: Pymc Model
Determining Binary Win Loss
Step3: Save Model
Step4: Diagnostics
Step5: Moar Plots
Step6: Non-MCMC Model | Python Code:
import pandas as pd
import numpy as np
import datetime as dt
from scipy.stats import norm, bernoulli
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from spcl_case import *
plt.style.use('fivethirtyeight')
Explanation: Win/Loss Betting Model
Same as the other model, but I now filter by teams that have a ranking
End of explanation
h_matches = pd.read_csv('hltv_csv/matchResults.csv')
h_matches['Date'] = pd.to_datetime(h_matches['Date'])
h_teams = pd.read_csv('hltv_csv/teams_w_ranking.csv')
h_teams = fix_teams(h_teams.set_index('ID'))
h_teams = h_teams.dropna()
FILTER_TEAMS = {'eslpl': ['OpTic', 'SK', 'Cloud9', 'Liquid', 'Luminosity', 'Misfits', 'Renegades', 'Immortals',
'Splyce', 'compLexity', 'Rogue', 'Ghost', 'CLG', 'NRG', 'FaZe', 'North',
'BIG', 'LDLC', 'mousesports', 'EnVyUs', 'NiP', 'Virtus.pro',
'Astralis', 'G2', 'GODSENT', 'Heroic', 'fnatic', 'NiP', 'Heroic'],
'mdleu': ['Virtus.pro', 'FlipSid3', 'eXtatus', 'AGO', 'Fragsters', 'Gambit', 'PRIDE', '1337HUANIA',
'VITALIS', 'Epsilon', 'CHAOS', 'Crowns', 'MK', 'Japaleno', 'Not Academy', 'aAa', 'Space Soldiers',
'Singularity', 'Nexus', 'Invictus Aquilas', 'Spirit', 'Kinguin', 'Seed', 'Endpoint', 'iGame.com', 'TEAM5',
'ALTERNATE aTTaX'],
'mdlna': ['Gale Force', 'FRENCH CANADIANS', 'Mythic', 'GX', 'Beacon', 'Torqued', 'Rise Nation', 'Denial', 'subtLe',
'SoaR', 'Muffin Lightning', 'Iceberg', 'ex-Nitrious', 'Adaptation', 'Morior Invictus', 'Naventic', 'CheckSix', 'Good People'
, 'LFAO', 'CLG Academy', 'Ambition', 'Mostly Harmless', 'Gorilla Core', 'ex-Nitrious', 'ANTI ECO'],
'mdlau': ['Grayhound', 'Tainted Minds', 'Kings', 'Chiefs', 'Dark Sided', 'seadoggs', 'Athletico', 'Legacy',
'SIN', 'Noxide', 'Control', 'SYF', 'Corvidae', 'Funkd', 'Masterminds', 'Conspiracy', 'AVANT']
}
MIN_DATE = dt.datetime(2017,1,1)
MAX_DATE = dt.datetime.today()
h_matches = h_matches[(h_matches['Date'] >= MIN_DATE) & (h_matches['Date'] <= MAX_DATE)]
h_matches = h_matches[h_matches['Team 1 ID'].isin(h_teams.index) | h_matches['Team 2 ID'].isin(h_teams.index)]
h_matches['winner'] = h_matches.apply(lambda x: x['Team 1 ID'] if x['Team 1 Score'] > x['Team 2 Score'] else x['Team 2 ID'], axis=1)
h_matches['score_diff'] = h_matches['Team 1 Score'] - h_matches['Team 2 Score']
obs = h_matches[['Map', 'Team 1 ID', 'Team 2 ID', 'score_diff', 'winner']]
obs = obs[obs.Map != 'Default']
obs.head()
teams = np.sort(np.unique(np.concatenate([h_matches['Team 1 ID'], h_matches['Team 2 ID']])))
maps = obs.Map.unique()
tmap = {v:k for k,v in dict(enumerate(teams)).items()}
mmap = {v:k for k,v in dict(enumerate(maps)).items()}
n_teams = len(teams)
n_maps = len(maps)
print('Number of Teams: %i ' % n_teams)
print('Number of Filtered Teams: %i' % len(h_teams))
print('Number of Matches: %i ' % len(h_matches))
print('Number of Maps: %i '% n_maps)
Explanation: Obtain results of teams within the past year
End of explanation
import pymc3 as pm
import theano.tensor as tt
obs_map = obs['Map'].map(mmap).values
obs_team_1 = obs['Team 1 ID'].map(tmap).values
obs_team_2 = obs['Team 2 ID'].map(tmap).values
with pm.Model() as rating_model:
omega = pm.HalfCauchy('omega', 0.5)
tau = pm.HalfCauchy('tau', 0.5)
rating = pm.Normal('rating', 0, omega, shape=n_teams)
theta_tilde = pm.Normal('rate_t', mu=0, sd=1, shape=(n_maps,n_teams))
rating_map = pm.Deterministic('rating | map', rating + tau * theta_tilde)
r = rating_map.flatten()
diff = r[obs_map*n_teams+obs_team_1] - r[obs_map*n_teams+obs_team_2]
p = 0.5*tt.tanh(diff)+0.5
beta = pm.Normal('beta', 0.5, 0.2)
kappa = 16*tt.tanh(beta*diff)
sigma = pm.HalfCauchy('sigma', 0.5)
sc = pm.Normal('observed score diff', kappa, sigma, observed=obs['score_diff'])
wl = pm.Bernoulli('observed wl', p=p, observed=(obs['Team 1 ID'] == obs['winner']).values)
with rating_model:
#start = approx.sample(1)[0]
#trace = pm.sample(5000, init='advi', nuts_kwargs={'target_accept': 0.99}, tune=0)
trace = pm.sample(5000, n_init=20000, init='jitter+adapt_diag', nuts_kwargs={'target_accept': 0.90}, tune=500) # tune=1000, nuts_kwargs={'target_accept': 0.95}
filt = h_teams[h_teams.Name.isin(FILTER_TEAMS['mdleu'])]
sns.set_palette('Paired', n_teams)
f, ax = plt.subplots(figsize=(16,10))
ax.set_ylim(0,5.0)
curr_trace = trace['rating']
[sns.kdeplot(curr_trace[:,tmap[i]], shade=True, alpha=0.55, legend=True, ax=ax, label=v['Name']) for i,v in filt.iterrows()]
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
set(filt.Name).symmetric_difference(FILTER_TEAMS['mdleu'])
get_prior_rating = lambda i: [curr_trace[:, tmap[i]].mean(), curr_trace[:, tmap[i]].std()] if i in tmap else [0,1]
params = {v['Name']: get_prior_rating(i) for i,v in h_teams.iterrows()}
pd.DataFrame(params, index=['mu', 'sig']).T.sort_values('mu', ascending=False)
Explanation: Pymc Model
Determining Binary Win Loss: $wl_{m,i,j}$
$$
\omega, \tau \sim HC(0.5) \\
R_{k} \sim N(0, \omega^2) \\
\tilde{\theta}_{m,k} \sim N(0,1) \\
R_{m,k} = R_{k} + \tau\tilde{\theta}_{m,k} \\
wl_{m,i,j} \sim B(p = \text{Sig}(R_{m,i}-R_{m,j}))
$$
and score difference: $sc_{m,i,j}$
$$
\alpha \sim Gamma(10,5) \\
\kappa_{m,i,j} = 32\,\text{Sig}(\alpha(R_{m,i}-R_{m,j}))-16 \\
\sigma_{m} \sim HC(0.5) \\
sc_{m,i,j} \sim N(\kappa_{m,i,j}, \sigma_{m}^2)
$$
End of explanation
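To read a single matchup off the posterior, the same 0.5*tanh(diff)+0.5 link used in the model can be applied to the rating samples (here ignoring the per-map offsets). The two team IDs are the example pair used again in the plots further down and are assumed to be present in the filtered data.
python
team_a, team_b = 7880, 7924  # example IDs only
diff_samples = trace['rating'][:, tmap[team_a]] - trace['rating'][:, tmap[team_b]]
p_win = 0.5 * np.tanh(diff_samples) + 0.5
print('P(team %i beats team %i) = %.3f +/- %.3f' % (team_a, team_b, p_win.mean(), p_win.std()))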
EVENT_SET = 'all'
pm.backends.text.dump('saved_model/'+EVENT_SET+'/trace', trace)
np.save('saved_model/'+EVENT_SET+'/teams.npy', teams)
np.save('saved_model/'+EVENT_SET+'/maps.npy', maps)
#np.save('saved_model/'+EVENT_SET+'/filter_teams.npy', FILTER_TEAMS[EVENT_SET])
Explanation: Save Model
End of explanation
with rating_model:
approx = pm.fit(15000)
ap_trace = approx.sample(5000)
print('Gelman Rubin: %s' % pm.diagnostics.gelman_rubin(trace))
print('Effective N: %s' % pm.diagnostics.effective_n(trace))
print('Accept Prob: %.4f' % trace.get_sampler_stats('mean_tree_accept').mean())
print('Percentage of Divergent %.5f' % (trace['diverging'].nonzero()[0].size/float(len(trace))))
pm.traceplot(trace, varnames=['beta'])
rating_model.profile(pm.gradient(rating_model.logpt, rating_model.vars), n=100).summary()
rating_model.profile(rating_model.logpt, n=100).summary()
Explanation: Diagnostics
End of explanation
sns.set_palette('Paired', n_teams)
f, ax = plt.subplots(figsize=(16,10))
ax.set_ylim(0,2.0)
[sns.kdeplot(trace['sigma'][:,i], shade=True, alpha=0.55, legend=True, ax=ax, label=m) for i,m in enumerate(maps)]
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
f, axes = plt.subplots(n_maps,1,figsize=(12,34), sharex=True)
for m, ax in enumerate(axes):
ax.set_title(dict(enumerate(maps))[m])
ax.set_ylim(0,2.0)
[sns.kdeplot(trace['rating | map'][:,m,tmap[i]], shade=True, alpha=0.55, legend=False ,
ax=ax, label=v['Name']) for i,v in filt.iterrows()]
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
filt
i = np.where(teams==7880)
j = np.where(teams==7924)
diff = (trace['rating'][:,j] - trace['rating'][:,i]).flatten()
kappa = 32./(1+np.exp(-1.*trace['alpha']*diff))-16.
fig, (ax1,ax2) = plt.subplots(1,2,figsize=(10,6))
sns.kdeplot(kappa, ax=ax2)
sns.kdeplot(diff, ax=ax1)
Explanation: Moar Plots
End of explanation
def vec2dict(s, n_teams):
return {
'mu': np.array(s[:n_teams]),
'sigma': np.array(s[n_teams:n_teams*2]),
'beta': s[-1],
}
def dict2vec(s):
return s['mu'] + s['sigma'] + [s['beta']]
skills_0 = dict2vec({
'mu': [1000]*n_teams,
'sigma': [300]*n_teams,
'beta': 50
})
from scipy.optimize import minimize
def loglike(y,p):
return -1.*(np.sum(y*np.log(p)+(1-y)*np.log(1.-p)))
def obj(skills):
s = vec2dict(skills, n_teams)
mean_diff = s['mu'][obs['Team 1 ID'].map(tmap).values] - s['mu'][obs['Team 2 ID'].map(tmap).values]
var_diff = s['sigma'][obs['Team 1 ID'].map(tmap).values]**2 + s['sigma'][obs['Team 2 ID'].map(tmap).values]**2 + skills[-1]**2
p = 1.-norm.cdf(0., loc=mean_diff, scale = np.sqrt(var_diff))
return loglike((obs['Team 1 ID'] == obs['winner']).values, p)
obj(skills_0)
g = minimize(obj, skills_0)  # minimise the negative log-likelihood to obtain the skill estimates
opt_skill = g.x
print(opt_skill)
plots = norm.rvs(opt_skill[:5], opt_skill[5:-1], size=(2000,5))
f, ax = plt.subplots(figsize=(12,8))
[sns.kdeplot(plots[:,i], shade=True, alpha=0.55, legend=True, ax=ax, label=i) for i in range(5)]
Explanation: Non-MCMC Model
End of explanation |
5,479 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ThreatExchange Data Dashboard
Purpose
The ThreatExchange APIs are designed to make consuming threat intelligence from multiple sources easy. This notebook will walk you through
Step1: Optionally, enable debug level logging
Step2: Search for data in ThreatExchange
Start by running a query against the ThreatExchange APIs to pull down any/all data relevant to you over a specified period of days.
Step3: Next, we execute the query using our search parameters and put the results in a Pandas DataFrame
Step4: Do some data munging for easier analysis and then preview as a sanity check
Step5: Create a Dashboard to Get a High-level View
The raw data is great, but it would be much better if we could take a higher level view of the data. This dashboard will provide more insight into
Step6: Dive A Little Deeper
Take a subset of the data and understand it a little more.
In this example, we presume that we'd like to take phishing related data and study it, to see if we can use it to better defend a corporate network or abuse in a product.
As a simple example, we'll filter down to data labeled MALICIOUS and the word phish in the description, to see if we can make a more detailed conclusion on how to apply the data to our existing internal workflows.
Step7: Extract The High Confidence / Severity Data For Use
With a better understanding of the data, let's filter the MALICIOUS, REVIEWED_MANUALLY labeled data down to a pre-determined threshold for confidence + severity.
You can add more filters, or change the threshold, as you see fit.
Step8: Now, output all of the high value data to a file as CSV or JSON, for consumption in our other systems and workflows. | Python Code:
from pytx.access_token import access_token
from pytx.logger import setup_logger
from pytx.vocabulary import PrivacyType as pt
# Specify the location of your token via one of several ways:
# https://pytx.readthedocs.org/en/latest/pytx.access_token.html
access_token()
Explanation: ThreatExchange Data Dashboard
Purpose
The ThreatExchange APIs are designed to make consuming threat intelligence from multiple sources easy. This notebook will walk you through:
building an initial dashboard for assessing the data visible to your appID;
filtering down to a subset you consider high value; and
exporting the high value data to a file.
What you need
Before getting started, you'll need a few Python packages installed:
Pandas for data manipulation and analysis
Pytx for ThreatExchange access
Seaborn for making charts pretty
All of the python packages mentioned can be installed via
pip install <package_name>
Setup a ThreatExchange access_token
If you don't already have an access_token for your app, use the Facebook Access Token Tool to get one.
End of explanation
# Uncomment this if you want debug logging enabled
#setup_logger(log_file="pytx.log")
Explanation: Optionally, enable debug level logging
End of explanation
# Our basic search parameters, we default to querying over the past 14 days
days_back = 14
search_terms = ['abuse', 'phishing', 'malware', 'exploit', 'apt', 'ddos', 'brute', 'scan', 'cve']
Explanation: Search for data in ThreatExchange
Start by running a query against the ThreatExchange APIs to pull down any/all data relevant to you over a specified period of days.
End of explanation
from datetime import datetime, timedelta
from time import strftime
import pandas as pd
import re
from pytx import ThreatDescriptor
from pytx.vocabulary import ThreatExchange as te
# Define your search string and other params, see
# https://pytx.readthedocs.org/en/latest/pytx.common.html#pytx.common.Common.objects
# for the full list of options
search_params = {
te.FIELDS: ThreatDescriptor._default_fields,
te.LIMIT: 1000,
    te.SINCE: strftime('%Y-%m-%d %H:%M:%S +0000', (datetime.utcnow() + timedelta(days=(-1*days_back))).timetuple()),
te.TEXT: search_terms,
    te.UNTIL: strftime('%Y-%m-%d %H:%M:%S +0000', datetime.utcnow().timetuple()),
te.STRICT_TEXT: False
}
data_frame = None
for search_term in search_terms:
print "Searching for '%s' over -%d days" % (search_term, days_back)
results = ThreatDescriptor.objects(
fields=search_params[te.FIELDS],
limit=search_params[te.LIMIT],
text=search_term,
since=search_params[te.SINCE],
until=search_params[te.UNTIL],
strict_text=search_params[te.STRICT_TEXT]
)
tmp = pd.DataFrame([result.to_dict() for result in results])
tmp['search_term'] = search_term
print "\t... found %d descriptors" % tmp.size
if data_frame is None:
data_frame = tmp
else:
data_frame = data_frame.append(tmp)
print "\nFound %d descriptors in total." % data_frame.size
Explanation: Next, we execute the query using our search parameters and put the results in a Pandas DataFrame
End of explanation
from time import mktime
# Extract a datetime and timestamp, for easier analysis
data_frame['ds'] = pd.to_datetime(data_frame.added_on.str[0:10], format='%Y-%m-%d')
data_frame['ts'] = pd.to_datetime(data_frame.added_on)
# Extract the owner data
owner = data_frame.pop('owner')
owner = owner.apply(pd.Series)
data_frame = pd.concat([data_frame, owner.email, owner.name], axis=1)
# Extract freeform 'tags' in the description
def extract_tags(text):
return re.findall(r'\[([a-zA-Z0-9\:\-\_]+)\]', text)
data_frame['tags'] = data_frame.description.map(lambda x: [] if x is None else extract_tags(x))
data_frame.head(n=5)
Explanation: Do some data munging for easier analysis and then preview as a sanity check
End of explanation
import math
import matplotlib.pyplot as plt
import seaborn as sns
from pytx.vocabulary import ThreatDescriptor as td
%matplotlib inline
# Setup subplots for our dashboard
fig, axes = plt.subplots(nrows=4, ncols=2, figsize=(16,32))
axes[0,0].set_color_cycle(sns.color_palette("coolwarm_r", 15))
# Plot by Type over time
type_over_time = data_frame.groupby(
[pd.Grouper(freq='d', key='ds'), te.TYPE]
).count().unstack(te.TYPE)
type_over_time.added_on.plot(
kind='line',
stacked=True,
title="Indicator Types Per Day (-" + str(days_back) + "d)",
ax=axes[0,0]
)
# Plot by threat_type over time
tt_over_time = data_frame.groupby(
[pd.Grouper(freq='w', key='ds'), 'threat_type']
).count().unstack('threat_type')
tt_over_time.added_on.plot(
kind='bar',
stacked=True,
title="Threat Types Per Week (-" + str(days_back) + "d)",
ax=axes[0,1]
)
# Plot the top 10 tags
tags = pd.DataFrame([item for sublist in data_frame.tags for item in sublist])
tags[0].value_counts().head(10).plot(
kind='bar',
stacked=True,
title="Top 10 Tags (-" + str(days_back) + "d)",
ax=axes[1,0]
)
# Plot by who is sharing
owner_over_time = data_frame.groupby(
[pd.Grouper(freq='w', key='ds'), 'name']
).count().unstack('name')
owner_over_time.added_on.plot(
kind='bar',
stacked=True,
title="Who's Sharing Each Week? (-" + str(days_back) + "d)",
ax=axes[1,1]
)
# Plot the data as a timeseries of when it was published
data_over_time = data_frame.groupby(pd.Grouper(freq='6H', key='ts')).count()
data_over_time.added_on.plot(
kind='line',
title="Data shared over time (-" + str(days_back) + "d)",
ax=axes[2,0]
)
# Plot by status label
data_frame.status.value_counts().plot(
kind='pie',
title="Threat Statuses (-" + str(days_back) + "d)",
ax=axes[2,1]
)
# Heatmap by type / source
owner_and_type = pd.DataFrame(data_frame[['name', 'type']])
owner_and_type['n'] = 1
grouped = owner_and_type.groupby(['name', 'type']).count().unstack('type').fillna(0)
ax = sns.heatmap(
data=grouped['n'],
robust=True,
cmap="YlGnBu",
ax=axes[3,0]
)
# These require a little data munging
# translate a severity enum to a value
# TODO Add this translation to Pytx
def severity_value(severity):
if severity == 'UNKNOWN': return 0
elif severity == 'INFO': return 1
elif severity == 'WARNING': return 3
elif severity == 'SUSPICIOUS': return 5
elif severity == 'SEVERE': return 7
elif severity == 'APOCALYPSE': return 10
return 0
# translate a severity
def value_severity(severity):
if severity >= 9: return 'APOCALYPSE'
elif severity >= 6: return 'SEVERE'
elif severity >= 4: return 'SUSPICIOUS'
elif severity >= 2: return 'WARNING'
elif severity >= 1: return 'INFO'
elif severity >= 0: return 'UNKNOWN'
# Plot by how actionable the data is
# Build a special dataframe and chart it
data_frame['severity_value'] = data_frame.severity.apply(severity_value)
df2 = pd.DataFrame({'count' : data_frame.groupby(['name', 'confidence', 'severity_value']).size()}).reset_index()
ax = df2.plot(
kind='scatter',
x='severity_value', y='confidence',
xlim=(-1,11), ylim=(-10,110),
title='Data by Conf / Sev With Threshold Line',
ax=axes[3,1],
s=df2['count'].apply(lambda x: 1000 * math.log10(x)),
use_index=td.SEVERITY
)
# Draw a threshold line for data we consider likely to be usable for alerts (aka 'high value')
ax.plot([2,10], [100,0], c='red')
Explanation: Create a Dashboard to Get a High-level View
The raw data is great, but it would be much better if we could take a higher level view of the data. This dashboard will provide more insight into:
what data is available
who's sharing it
how it is labeled
how much of it is likely to be directly applicable for alerting
End of explanation
from pytx.vocabulary import Status as s
phish_data = data_frame[(data_frame.status == s.MALICIOUS)
                        & data_frame.description.apply(lambda x: 'phish' in x if x is not None else False)]
# TODO: also filter for attack_type == PHISHING, when Pytx supports it
%matplotlib inline
# Setup subplots for our deeper dive plots
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16,8))
# Heatmap of type / source
owner_and_type = pd.DataFrame(phish_data[['name', 'type']])
owner_and_type['n'] = 1
grouped = owner_and_type.groupby(['name', 'type']).count().unstack('type').fillna(0)
ax = sns.heatmap(
data=grouped['n'],
robust=True,
cmap="YlGnBu",
ax=axes[0]
)
# Tag breakdown of the top 10 tags
tags = pd.DataFrame([item for sublist in phish_data.tags for item in sublist])
tags[0].value_counts().head(10).plot(
kind='pie',
title="Top 10 Tags (-" + str(days_back) + "d)",
ax=axes[1]
)
Explanation: Dive A Little Deeper
Take a subset of the data and understand it a little more.
In this example, we presume that we'd like to take phishing-related data and study it, to see if we can use it to better defend a corporate network or to detect abuse in a product.
As a simple example, we'll filter down to data labeled MALICIOUS and the word phish in the description, to see if we can make a more detailed conclusion on how to apply the data to our existing internal workflows.
End of explanation
from pytx.vocabulary import ReviewStatus as rs
# define our threshold line, which is the same as the red, threshold line in the chart above
sev_min = 2
sev_max = 10
conf_min= 0
conf_max = 100
# build a new series, to indicate if a row passes our confidence + severity threshold
def is_high_value(conf, sev):
return (((sev_max - sev_min) * (conf - conf_max)) - ((conf_min - conf_max) * (sev - sev_min))) > 0
data_frame['is_high_value']= data_frame.apply(lambda x: is_high_value(x.confidence, x.severity_value), axis=1)
# filter down to just the data passing our criteria, you can add more here to filter by type, source, etc.
high_value_data = data_frame[data_frame.is_high_value
& (data_frame.status == s.MALICIOUS)
& (data_frame.review_status == rs.REVIEWED_MANUALLY)].reset_index(drop=True)
# get a count of how much we kept
print "Kept %d of %d data as high value" % (high_value_data.size, data_frame.size)
# ... and preview it
high_value_data.head()
Explanation: Extract The High Confidence / Severity Data For Use
With a better understanding of the data, let's filter the MALICIOUS, REVIEWED_MANUALLY labeled data down to a pre-determined threshold for confidence + severity.
You can add more filters, or change the threshold, as you see fit.
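As an illustration only, one extra filter might restrict the result to network indicators; the type labels below are assumptions -- check the pytx vocabulary for the exact values present in your data:
# Hypothetical extra filter: keep only network-style indicators
network_types = ['DOMAIN', 'IP_ADDRESS', 'URI']  # assumed labels, adjust to your data
network_hits = high_value_data[high_value_data.type.isin(network_types)]
print "Kept %d network indicators out of %d high value rows" % (network_hits.shape[0], high_value_data.shape[0])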
End of explanation
use_csv = False
if use_csv:
file_name = 'threat_exchange_high_value.csv'
high_value_data.to_csv(path_or_buf=file_name)
print "CSV data written to %s" % file_name
else:
file_name = 'threat_exchange_high_value.json'
high_value_data.to_json(path_or_buf=file_name, orient='index')
print "JSON data written to %s" % file_name
Explanation: Now, output all of the high value data to a file as CSV or JSON, for consumption in our other systems and workflows.
End of explanation |
5,480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kaggle's Predicting Red Hat Business Value
This is a first quick & dirty attempt at Kaggle's Predicting Red Hat Business Value competition.
Loading in the data
Step1: Joining together to get dataset
Step6: Building a preprocessing pipeline
Step7: Potential trouble with high dimensionality
Notice that char_10_action, group_1 and others have a ton of unique values; one-hot encoding will result in a dataframe with thousands of columns.
Being lazy and getting as fast as possible to a first attempt, let's skip those and only consider categorical variables with ~20 or fewer unique values. We'll get smarter about dealing with these variables to reinclude them in our model on a subsequent attempt
Step8: Sampling to reduce runtime in training large dataset
If we train models on the entire training dataset provided, it exhausts the memory on my laptop. Again, in the spirit of getting something quick and dirty working, we'll sample the dataset and train on that. We'll then evaluate our model by testing the accuracy on a larger sample.
Step9: Putting together classifiers
Step10: Reporting utilities
Some utilities to make reporting progress easier
Step11: Cross validation and full test set accuracy
We'll cross validate within the training set, and then train on the full training set and see how well it performs on the full test set.
Step12: Preparing the submission
Random forest beat logistic regression, let's start with a submission using that.
But first, let's see what the submission is supposed to look like
Step13: And now let's prepare the submission by fitting on the full provided training set and using it to predict on the provided test set. | Python Code:
import pandas as pd
people = pd.read_csv('people.csv.zip')
people.head(3)
actions = pd.read_csv('act_train.csv.zip')
actions.head(3)
Explanation: Kaggle's Predicting Red Hat Business Value
This is a first quick & dirty attempt at Kaggle's Predicting Red Hat Business Value competition.
Loading in the data
End of explanation
training_data_full = pd.merge(actions, people, how='inner', on='people_id', suffixes=['_action', '_person'], sort=False)
training_data_full.head(5)
(actions.shape, people.shape, training_data_full.shape)
Explanation: Joining together to get dataset
End of explanation
# %load "preprocessing_transforms.py"
from sklearn.base import TransformerMixin, BaseEstimator
import pandas as pd
class BaseTransformer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None, **fit_params):
return self
def transform(self, X, **transform_params):
return self
class ColumnSelector(BaseTransformer):
    """Selects columns from Pandas Dataframe"""
def __init__(self, columns, c_type=None):
self.columns = columns
self.c_type = c_type
def transform(self, X, **transform_params):
cs = X[self.columns]
if self.c_type is None:
return cs
else:
return cs.astype(self.c_type)
class SpreadBinary(BaseTransformer):
def transform(self, X, **transform_params):
return X.applymap(lambda x: 1 if x == 1 else -1)
class DfTransformerAdapter(BaseTransformer):
    """Adapts a scikit-learn Transformer to return a pandas DataFrame"""
def __init__(self, transformer):
self.transformer = transformer
def fit(self, X, y=None, **fit_params):
self.transformer.fit(X, y=y, **fit_params)
return self
def transform(self, X, **transform_params):
raw_result = self.transformer.transform(X, **transform_params)
return pd.DataFrame(raw_result, columns=X.columns, index=X.index)
class DfOneHot(BaseTransformer):
    """Wraps helper method `get_dummies` making sure all columns get one-hot encoded."""
def __init__(self):
self.dummy_columns = []
def fit(self, X, y=None, **fit_params):
self.dummy_columns = pd.get_dummies(
X,
prefix=[c for c in X.columns],
columns=X.columns).columns
return self
def transform(self, X, **transform_params):
return pd.get_dummies(
X,
prefix=[c for c in X.columns],
columns=X.columns).reindex(columns=self.dummy_columns, fill_value=0)
class DfFeatureUnion(BaseTransformer):
    """A dataframe friendly implementation of `FeatureUnion`"""
def __init__(self, transformers):
self.transformers = transformers
def fit(self, X, y=None, **fit_params):
for l, t in self.transformers:
t.fit(X, y=y, **fit_params)
return self
def transform(self, X, **transform_params):
transform_results = [t.transform(X, **transform_params) for l, t in self.transformers]
return pd.concat(transform_results, axis=1)
training_data_full.columns
for col in training_data_full.columns:
print("in {} there are {} unique values".format(col, len(training_data_full[col].unique())))
None
Explanation: Building a preprocessing pipeline
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer, StandardScaler
cat_columns = ['activity_category',
'char_1_action', 'char_2_action', 'char_3_action', 'char_4_action',
'char_5_action', 'char_6_action', 'char_7_action', 'char_8_action',
'char_9_action', 'char_1_person',
'char_2_person', 'char_3_person',
'char_4_person', 'char_5_person', 'char_6_person', 'char_7_person',
'char_8_person', 'char_9_person', 'char_10_person', 'char_11',
'char_12', 'char_13', 'char_14', 'char_15', 'char_16', 'char_17',
'char_18', 'char_19', 'char_20', 'char_21', 'char_22', 'char_23',
'char_24', 'char_25', 'char_26', 'char_27', 'char_28', 'char_29',
'char_30', 'char_31', 'char_32', 'char_33', 'char_34', 'char_35',
'char_36', 'char_37']
q_columns = ['char_38']
preprocessor = Pipeline([
('features', DfFeatureUnion([
('quantitative', Pipeline([
('select-quantitative', ColumnSelector(q_columns, c_type='float')),
('impute-missing', DfTransformerAdapter(Imputer(strategy='median'))),
('scale', DfTransformerAdapter(StandardScaler()))
])),
('categorical', Pipeline([
('select-categorical', ColumnSelector(cat_columns)),
('apply-onehot', DfOneHot()),
('spread-binary', SpreadBinary())
])),
]))
])
Explanation: Potential trouble with high dimensionality
Notice that char_10_action, group_1 and others have a ton of unique values; one-hot encoding will result in a dataframe with thousands of columns.
Being lazy and getting as fast as possible to a first attempt, let's skip those and only consider categorical variables with ~20 or fewer unique values. We'll get smarter about dealing with these variables to reinclude them in our model on a subsequent attempt
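A minimal sketch of how that cutoff could be applied programmatically rather than hand-picking columns (the 20-level cutoff and the skipped column names are assumptions):
# Hypothetical helper: pick categorical columns with a manageable number of levels
max_levels = 20  # assumed cutoff
skip = {'outcome', 'char_38', 'people_id', 'activity_id'}  # target, numeric and id columns (assumed names)
auto_cat_columns = [col for col in training_data_full.columns
                    if col not in skip and training_data_full[col].nunique() <= max_levels]
print(len(auto_cat_columns), 'categorical columns have at most', max_levels, 'unique values')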
End of explanation
from sklearn.cross_validation import train_test_split
training_frac = 0.05
test_frac = 0.8
training_data, the_rest = train_test_split(training_data_full, train_size=training_frac, random_state=0)
test_data = the_rest.sample(frac=test_frac)
training_data.shape
test_data.shape
wrangled = preprocessor.fit_transform(training_data)
wrangled.head()
Explanation: Sampling to reduce runtime in training large dataset
If we train models on the entire training dataset provided, it exhausts the memory on my laptop. Again, in the spirit of getting something quick and dirty working, we'll sample the dataset and train on that. We'll then evaluate our model by testing the accuracy on a larger sample.
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
pipe_lr = Pipeline([
('wrangle', preprocessor),
('lr', LogisticRegression(C=100.0, random_state=0))
])
pipe_rf = Pipeline([
('wrangle', preprocessor),
('rf', RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=0))
])
feature_columns = cat_columns + q_columns
def extract_X_y(df):
return df[feature_columns], df['outcome']
X_train, y_train = extract_X_y(training_data)
X_test, y_test = extract_X_y(test_data)
Explanation: Putting together classifiers
End of explanation
import time
import subprocess
class time_and_log():
def __init__(self, label, *, prefix='', say=False):
self.label = label
self.prefix = prefix
self.say = say
def __enter__(self):
msg = 'Starting {}'.format(self.label)
print('{}{}'.format(self.prefix, msg))
if self.say:
cmd_say(msg)
self.start = time.process_time()
return self
def __exit__(self, *exc):
self.interval = time.process_time() - self.start
msg = 'Finished {} in {:.2f} seconds'.format(self.label, self.interval)
print('{}{}'.format(self.prefix, msg))
if self.say:
cmd_say(msg)
return False
def cmd_say(msg):
subprocess.call("say '{}'".format(msg), shell=True)
Explanation: Reporting utilities
Some utilities to make reporting progress easier
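A usage sketch (the label and the work inside the block are arbitrary):
# Hypothetical example: wrap any slow step to get start/finish messages plus the elapsed time
with time_and_log('a quick demo step', prefix='  _'):
    total = sum(x * x for x in range(10 ** 6))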
End of explanation
from sklearn.metrics import accuracy_score
from sklearn.cross_validation import cross_val_score
import numpy as np
models = [
('logistic regression', pipe_lr),
('random forest', pipe_rf),
]
for label, model in models:
print('Evaluating {}'.format(label))
    cmd_say('Evaluating {}'.format(label))
# with time_and_log('cross validating', say=True, prefix=" _"):
# scores = cross_val_score(estimator=model,
# X=X_train,
# y=y_train,
# cv=5,
# n_jobs=1)
# print(' CV accuracy: {:.3f} +/- {:.3f}'.format(np.mean(scores), np.std(scores)))
with time_and_log('fitting full training set', say=True, prefix=" _"):
model.fit(X_train, y_train)
with time_and_log('evaluating on full test set', say=True, prefix=" _"):
print(" Full test accuracy ({:.2f} of dataset): {:.3f}".format(
test_frac,
accuracy_score(y_test, model.predict(X_test))))
Explanation: Cross validation and full test set accuracy
We'll cross validate within the training set, and then train on the full training set and see how well it performs on the full test set.
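Since the cross-validation call is commented out in the cell above (it is slow on this dataset), here is a minimal sketch of what it would look like for one of the pipelines, reusing the cross_val_score already imported there:
# Hypothetical: 5-fold CV accuracy for the random forest pipeline (can take a while)
scores = cross_val_score(estimator=pipe_rf, X=X_train, y=y_train, cv=5, n_jobs=1)
print('CV accuracy: {:.3f} +/- {:.3f}'.format(np.mean(scores), np.std(scores)))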
End of explanation
pd.read_csv('sample_submission.csv.zip').head(5)
Explanation: Preparing the submission
Random forest beat logistic regression, let's start with a submission using that.
But first, let's see what the submission is supposed to look like:
End of explanation
kaggle_test_df = pd.merge(
pd.read_csv('act_test.csv.zip'),
people,
how='inner', on='people_id', suffixes=['_action', '_person'], sort=False)
kaggle_test_df.head(2)
kaggle_test_df.shape
X_kaggle_train, y_kaggle_train = extract_X_y(training_data_full)
with time_and_log('fitting rf on full kaggle training set', say=True):
pipe_rf.fit(X_kaggle_train, y_kaggle_train)
with time_and_log('preparing kaggle submission', say=True):
submission_df = kaggle_test_df[['activity_id']].copy()
submission_df['outcome'] = pipe_rf.predict(kaggle_test_df)
submission_df.to_csv("predicting-red-hat-business-value_1_rf.csv", index=False)
Explanation: And now let's prepare the submission by fitting on the full provided training set and using it to predict on the provided test set.
End of explanation |
5,481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Step1: Workflow for each analysis type (e.g basic, 1 Dense layer...)
Step2: Linear Model
Step3: Single Dense Layer
Step4: VGG-Style CNN
Step5: Data Augmentation
Step6: Batch Normalization + Data Augmentation
Step7: Batch Normalization + Data Augmentation + Dropout
Step8: Ensembling | Python Code:
%matplotlib inline
import math
import numpy as np
import utils; reload(utils)
from utils import *
from sympy import Symbol
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Lambda, Dense
from matplotlib import pyplot as plt
Explanation: Deep Learning: Mnist Analysis
End of explanation
# We set the "seed" so we make the results a bit more predictable.
np.random.seed(1)
# Let's load the data. Mnist can be loaded really easily with Keras!
(X_train, y_train), (X_test, y_test) = mnist.load_data()
(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
# Keras needs to have at least one channel (color), so we expand the dimensions here.
X_test = np.expand_dims(X_test,1)
X_train = np.expand_dims(X_train,1)
# We would like to have an output in the form: [0, 0, 1, 0...] so we transform the labels with
# "onehot".
y_train = onehot(y_train)
y_test = onehot(y_test)
mean_px = X_train.mean().astype(np.float32)
std_px = X_train.std().astype(np.float32)
# We normalize the inputs so the training is more stable.
def norm_input(x): return (x-mean_px)/std_px
Explanation: Workflow for each analysis type (e.g basic, 1 Dense layer...):
Create model
Train it with the default "Learning Rate" of 0.01 for just 1 epoch, so we can see how quickly the accuracy increases.
Increase the "Learning Rate" to 0.1 and train the model between 4 and 12 epochs.
Decrease the "Learning Rate" to 0.01 and train the model 4 epochs.
Decrease the "Learning Rate" to 0.001 and train the model 2 epochs.
Decrease the "Learning Rate" to 0.0001 and train the model 1 epoch.
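The schedule above could be wrapped in a small helper like the sketch below; it assumes the same Keras 1-style fit_generator signature used throughout this notebook, and the per-step epoch counts are just the defaults listed above:
# Hypothetical wrapper around the manual learning-rate schedule described above
def run_lr_schedule(model, batches, test_batches,
                    schedule=((0.01, 1), (0.1, 4), (0.01, 4), (0.001, 2), (0.0001, 1))):
    for lr, epochs in schedule:
        model.optimizer.lr = lr
        model.fit_generator(batches, batches.N, nb_epoch=epochs,
                            validation_data=test_batches, nb_val_samples=test_batches.N)
    return model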
End of explanation
# Let's start by implementing a really basic Linear Model.
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Flatten(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
# This class creates batches based on images in "array-form". It's also quite powerful
# as it allows us to do Data Augmentation.
gen = image.ImageDataGenerator()
batches = gen.flow(X_train, y_train, batch_size=64)
test_batches = gen.flow(X_test, y_test, batch_size=64)
# We train the model with the batches.
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
# We increase the learning rate until we get overfitting.
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
# We decrease the learning rate as we want it to go slower because our accuracy
# didn't increase too much in the last step.
model.optimizer.lr=0.01
# We train the model with the batches 4 times so we reach overfitting.
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
# We are still underfitting! Our model is clearly not complex enough.
Explanation: Linear Model
End of explanation
# We add a new hidden dense layer and follow the same process as before.
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Flatten(),
Dense(512, activation='softmax'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
# We are clearly overfitting this time!
# Meaning that the accuracy of the training data is much higher than the one in the
# validation set
Explanation: Single Dense Layer
End of explanation
# Now we try out a VGG-style model, with several Convolution2D layers and MaxPooling2D.
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
Convolution2D(64,3,3, activation='relu'),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=8,
validation_data=test_batches, nb_val_samples=test_batches.N)
# This result is incredible! But we are overfitting, let's introduce "Data Augmentation" so
# we can deal with that.
Explanation: VGG-Style CNN
End of explanation
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
Convolution2D(64,3,3, activation='relu'),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
# This command will randomly modify the images (e.g rotation, zoom, ...) so it seems like we have more
# images.
gen = image.ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
height_shift_range=0.08, zoom_range=0.08)
batches = gen.flow(X_train, y_train, batch_size=64)
test_batches = gen.flow(X_test, y_test, batch_size=64)
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=8,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
# Not bad, we are still overfitting but much less! Let's see other techniques that might be
# useful in your analyses.
Explanation: Data Augmentation
End of explanation
# Let's apply now "Batch Normalization" to normalize the different weights in the CNN.
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
Explanation: Batch Normalization + Data Augmentation
End of explanation
# We are overfitting again, let's add a Dropout layer
def get_model_bn_do():
model = Sequential([
Lambda(norm_input, input_shape=(1,28,28)),
Convolution2D(32,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32,3,3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64,3,3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = get_model_bn_do()
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=12,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=1,
validation_data=test_batches, nb_val_samples=test_batches.N)
Explanation: Batch Normalization + Data Augmentation + Dropout
End of explanation
# Let's try finally with "Ensembling"
def fit_model():
model = get_model_bn_do()
model.fit_generator(batches, batches.N, nb_epoch=1, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.1
model.fit_generator(batches, batches.N, nb_epoch=4, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.01
model.fit_generator(batches, batches.N, nb_epoch=12, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.N, nb_epoch=18, verbose=0,
validation_data=test_batches, nb_val_samples=test_batches.N)
return model
models = [fit_model() for i in range(6)]
path = "data/mnist/"
model_path = path + 'models/'
for i,m in enumerate(models):
m.save_weights(model_path+'cnn-mnist23-'+str(i)+'.pkl')
evals = np.array([m.evaluate(X_test, y_test, batch_size=256) for m in models])
evals.mean(axis=0)
all_preds = np.stack([m.predict(X_test, batch_size=256) for m in models])
all_preds.shape
avg_preds = all_preds.mean(axis=0)
keras.metrics.categorical_accuracy(y_test, avg_preds).eval()
Explanation: Ensembling
End of explanation |
5,482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Enter Team Member Names here (double click to edit)
Step1: <a id="linearnumpy"></a>
<a href="#top">Back to Top</a>
Using Linear Regression
In the videos, we derived the formula for calculating the optimal values of the regression weights (you must be connected to the internet for this equation to show up properly)
Step2: Exercise 1
Step3: <a id="sklearn"></a>
<a href="#top">Back to Top</a>
Start of Live Session Coding
Exercise 2
Step4: Recall that to predict the output from our model, $\hat{y}$, from $w$ and $X$ we need to use the following formula
Step5: <a id="classification"></a>
<a href="#top">Back to Top</a>
Using Linear Classification
Now let's use the code you created to make a classifier with linear boundaries. Run the following code in order to load the iris dataset.
Step6: Exercise 4
Step7: Exercise 5 | Python Code:
from sklearn.datasets import load_diabetes
import numpy as np
from __future__ import print_function
ds = load_diabetes()
# this holds the continuous feature data
# because ds.data is a matrix, there are some special properties we can access (like 'shape')
print('features shape:', ds.data.shape, 'format is:', ('rows','columns')) # there are 442 instances and 10 features per instance
print('range of target:', np.min(ds.target),np.max(ds.target))
from pprint import pprint
# we can set the fields inside of ds and set them to new variables in python
pprint(ds.data) # prints out elements of the matrix
pprint(ds.target) # prints the vector (all 442 items)
Explanation: Enter Team Member Names here (double click to edit):
Name 1: Ian Johnson
Name 2: Derek Phanekham
Name 3: Travis Siems
In Class Assignment One
In the following assignment you will be asked to fill in python code and derivations for a number of different problems. Please read all instructions carefully and turn in the rendered notebook (or HTML of the rendered notebook) before the end of class (or right after class). The initial portion of this notebook is given before class and the remainder is given during class. Please answer the initial questions before class, to the best of your ability. Once class has started you may rework your answers as a team for the initial part of the assignment.
<a id="top"></a>
Contents
<a href="#Loading">Loading the Data</a>
<a href="#linearnumpy">Linear Regression</a>
<a href="#sklearn">Using Scikit Learn for Regression</a>
<a href="#classification">Linear Classification</a>
<a id="Loading"></a>
<a href="#top">Back to Top</a>
Loading the Data
Please run the following code to read in the "diabetes" dataset from sklearn's data loading module.
This will load the data into the variable ds. ds is a bunch object with fields like ds.data and ds.target. The field ds.data is a numpy matrix of the continuous features in the dataset. The object is not a pandas dataframe. It is a numpy matrix. Each row is a set of observed instances, each column is a different feature. It also has a field called ds.target that is a continuous value we are trying to predict. Each entry in ds.target is a label for each row of the ds.data matrix.
End of explanation
# Enter your answer here (or write code to calculate it)
# 11
Explanation: <a id="linearnumpy"></a>
<a href="#top">Back to Top</a>
Using Linear Regression
In the videos, we derived the formula for calculating the optimal values of the regression weights (you must be connected to the internet for this equation to show up properly):
$$ w = (X^TX)^{-1}X^Ty $$
where $X$ is the matrix of values with a bias column of ones appended onto it. For the diabetes dataset one could construct this $X$ matrix by stacking a column of ones onto the ds.data matrix.
$$ X=\begin{bmatrix}
& \vdots & & 1 \\
\dotsb & \text{ds.data} & \dotsb & \vdots \\
& \vdots & & 1 \\
\end{bmatrix}
$$
Question 1: For the diabetes dataset, how many elements will the vector $w$ contain?
End of explanation
# Write your code here, print the values of the regression weights using the 'print()' function in python
X = np.hstack((ds.data, np.ones((len(ds.target),1))))
w = np.linalg.inv(X.T @ X) @ X.T @ ds.target
print(w)
Explanation: Exercise 1: In the following empty cell, use the given equation above (using numpy matrix operations) to find the values of the optimal vector $w$. You will need to be sure $X$ and $y$ are created like the instructor talked about in the video. Don't forget to include any modifications to $X$ to account for the bias term in $w$. You might be interested in the following functions:
import numpy as np
np.hstack((mat1,mat2)) stack two matrices horizontally, to create a new matrix
np.ones((rows,cols)) create a matrix full of ones
my_mat.T takes transpose of numpy matrix named my_mat
np.dot(mat1,mat2) or mat1 @ mat2 is matrix multiplication for two matrices
np.linalg.inv(mat) gets the inverse of the variable mat
End of explanation
from sklearn.linear_model import LinearRegression
# write your code here, print the values of model by accessing
# its properties that you looked up from the API
reg = LinearRegression()
reg.fit(ds.data, ds.target)
print('model coefficients are:', reg.coef_)
print('model intercept is', reg.intercept_)
print('Answer to question is', 'YES.')
Explanation: <a id="sklearn"></a>
<a href="#top">Back to Top</a>
Start of Live Session Coding
Exercise 2: Scikit-learn also has a linear regression fitting implementation. Look at the scikit learn API and learn to use the linear regression method. The API is here:
API Reference: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
Use the sklearn LinearRegression module to check your results from the previous question.
Question 2: Did you get the same parameters?
End of explanation
# Use this block to answer the questions
w = w.reshape((len(w),1)) # make w a column vector
y_hat_numpy = w.T @ X.T
y_hat_skl = reg.predict(ds.data)
MSE_numpy = np.mean((y_hat_numpy - ds.target)**2)
MSE_skl = np.mean((y_hat_skl - ds.target)**2)
print('MSE Sklearn is:', MSE_skl)
print('MSE Numpy is:', MSE_numpy)
Explanation: Recall that to predict the output from our model, $\hat{y}$, from $w$ and $X$ we need to use the following formula:
$\hat{y}=w^TX^T$
Where $X$ is a matrix with example instances in each row of the matrix (and the bias term).
Exercise 3:
- Part A: Use matrix multiplication to predict output using numpy, $\hat{y}_{numpy}$.
- Note: you may need to make the regression weights a column vector using the following code: w = w.reshape((len(w),1)) This assumes your weights vector is assigned to the variable named w.
- Part B: Use the sklearn API to get the values for $\hat{y}_{sklearn}$ (hint: use the .predict function of the API).
- Part C: Calculate the mean squared error between your prediction from numpy and the target, $\frac{1}{M}\sum_i(y-\hat{y}_{numpy})^2$.
- Part D: Calculate the mean squared error between your sklearn prediction and the target, $\frac{1}{M}\sum_i(y-\hat{y}_{sklearn})^2$.
- Note: parts C and D can each be completed in one line of code using numpy. There is no need to write a for loop.
End of explanation
from sklearn.datasets import load_iris
import numpy as np
# this will overwrite the diabetes dataset
ds = load_iris()
print('features shape:', ds.data.shape) # there are 150 instances and 4 features per instance
print('original number of classes:', len(np.unique(ds.target)))
# now let's make this a binary classification task
ds.target = ds.target>1
print ('new number of classes:', len(np.unique(ds.target)))
Explanation: <a id="classification"></a>
<a href="#top">Back to Top</a>
Using Linear Classification
Now let's use the code you created to make a classifier with linear boundaries. Run the following code in order to load the iris dataset.
End of explanation
# write your code here and print the values of the weights
reg = LinearRegression()
reg.fit(ds.data, ds.target)
# Print the weights of the linear classifier.
print('model coefficients are:', reg.coef_)
print('model intercept is', reg.intercept_)
Explanation: Exercise 4: Now use linear regression to come up with a set of weights, w, that predict the class value. You can use numpy or sklearn, whichever you prefer. This is exactly like you did before for the diabetes dataset. However, instead of regressing to continuous values, you are just regressing to the integer value of the class (0 or 1), like we talked about in the video (using the hard limit function).
- Note: If you are using numpy, remember to account for the bias term when constructing the feature matrix, X.
End of explanation
# use this box to predict the classification output
y_hat = reg.predict(ds.data)
#Populate list of possible alpha values
alphas = [n/100.0 for n in range(-100, 100)]
ls = [ (float(sum((y_hat > a)==ds.target)) / len(ds.target), a) for a in alphas]
print('Percentage accuracy:', max(ls)[0])
print('Alpha value: ', max(ls)[1])
Explanation: Exercise 5: Finally, use a hard decision function on the output of the linear regression to make this a binary classifier. This is just like we talked about in the video, where the output of the linear regression passes through a function:
$\hat{y}=g(w^TX^T)$ where
$g(w^TX^T)$ for $w^TX^T < \alpha$ maps the predicted class to 0
$g(w^TX^T)$ for $w^TX^T \geq \alpha$ maps the predicted class to 1.
Here, alpha is a threshold for deciding the class.
Question 3: What value for $\alpha$ makes the most sense? What is the accuracy of the classifier given the $\alpha$ you chose?
Note: You can calculate the accuracy with the following code: accuracy = float(sum(yhat==y)) / len(y) assuming you choose variable names y and yhat for the target and prediction, respectively.
End of explanation |
5,483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Latent Semantic Indexing
Here, we apply the technique Latent Semantic Indexing to capture the similarity of words. We are given a list of words and their frequencies in 9 documents, found on GitHub.
Step1: Now as per part (a), we compute the SVD and use the first two singular values. Recall the model is that
\begin{equation}
\mathbf{x} \sim \mathcal{N}\left(W\mathbf{z},\Psi\right),
\end{equation}
where $\Psi$ is diagonal. If the SVD is $X = UDV^\intercal,$ $W$ will be the first two columns of $V$.
Step2: In this way, we let $Z = UD$, so $X = ZV^\intercal$. Now, let $\tilde{Z}$ be the approximation from using 2 singular values, so $\tilde{X} = \tilde{Z}W^\intercal$, so $\tilde{Z} = \tilde{U}\tilde{D}$. For some reason, the textbook chooses not to scale by $\tilde{D}$, so we just have $\tilde{U}$. Recall that all the variables are messed up because we used the tranpose.
Step3: Now, let's plot these results.
Step4: I, respectfully, disagree with the book for this reason. The optimal latent representation $Z = XW$ (observations are rows here), should be chosen such that
\begin{equation}
J(W,Z) = \frac{1}{N}\left\lVert X - ZW^\intercal\right\rVert^2
\end{equation}
is minimized, where $W$ is orthonormal.
Step5: By section 12.2.3 of the book, $W$ is the first $2$ columns of $V$. Thus, our actual plot should be below.
Step6: Note that this is very similar, with the $y$-axis flipped. That part does not actually matter. What matters is the scaling by the eigenvalues when computing distances. Before that scaling, the proximity of points may not mean much if the corresponding eigenvalue is actually very large.
Now, the second part asks us to see if we can properly identify documents related to abductions by using a document with the single word abducted as a probe.
Step7: Note that despite the first document being about abductions, it doesn't contain the word abducted.
Let's look at the latent variable representation. We'll use cosine similarity to account for the difference in magnitude. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
plt.rcParams['font.size'] = 16
words_list = list()
with open('lsiWords.txt') as f:
for line in f:
words_list.append(line.strip())
words = pd.Series(words_list, name="words")
word_frequencies = pd.read_csv('lsiMatrix.txt', sep=' ', index_col=False,
header=None, names=words)
word_frequencies.T.head(20)
Explanation: Latent Semantic Indexing
Here, we apply the technique Latent Semantic Indexing to capture the similarity of words. We are given a list of words and their frequencies in 9 documents, found on GitHub.
End of explanation
X = word_frequencies.as_matrix().astype(np.float64)
U, D, V = np.linalg.svd(X.T) # in matlab the matrix is read in as its transpose
Explanation: Now as per part (a), we compute the SVD and use the first two singular values. Recall the model is that
\begin{equation}
\mathbf{x} \sim \mathcal{N}\left(W\mathbf{z},\Psi\right),
\end{equation}
where $\Psi$ is diagonal. If the SVD is $X = UDV^\intercal,$ $W$ will be the first two columns of $V$.
End of explanation
Z = V.T[:,:2]
Z
Explanation: In this way, we let $Z = UD$, so $X = ZV^\intercal$. Now, let $\tilde{Z}$ be the approximation from using 2 singular values, so $\tilde{X} = \tilde{Z}W^\intercal$, so $\tilde{Z} = \tilde{U}\tilde{D}$. For some reason, the textbook chooses not to scale by $\tilde{D}$, so we just have $\tilde{U}$. Recall that all the variables are messed up because we used the tranpose.
End of explanation
plt.figure(figsize=(8,8))
def plot_latent_variables(Z, ax=None):
if ax == None:
ax = plt.gca()
ax.plot(Z[:,0], Z[:,1], 'o', markerfacecolor='none')
for i in range(len(Z)):
ax.text(Z[i,0] + 0.005, Z[i,1], i,
verticalalignment='center')
ax.set_xlabel('$z_1$')
ax.set_ylabel('$z_2$')
ax.set_title('PCA with $L = 2$ for Alien Documents')
ax.grid(True)
plot_latent_variables(Z)
plt.show()
Explanation: Now, let's plot these results.
End of explanation
U, D, V = np.linalg.svd(X)
V = V.T # np.linalg.svd returns V already transposed (X = U D V^T), so transpose back to get the right singular vectors as columns
Explanation: I, respectfully, disagree with the book for this reason. The optimal latent representation $Z = XW$ (observations are rows here), should be chosen such that
\begin{equation}
J(W,Z) = \frac{1}{N}\left\lVert X - ZW^\intercal\right\rVert^2
\end{equation}
is minimized, where $W$ is orthonormal.
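As a quick numerical check of that objective for the rank-2 choice (a sketch using the X and V already computed above; the names W2, Z2 and J2 are just for illustration):
# Sketch: evaluate J for W = first two right singular vectors
W2 = V[:, :2]
Z2 = np.dot(X, W2)
J2 = np.mean(np.sum((X - np.dot(Z2, W2.T)) ** 2, axis=1))
print('Average reconstruction error J(W, Z) with L = 2:', J2)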
End of explanation
W = V[:,:2]
Z = np.dot(X, W)
plt.figure(figsize=(8,8))
ax = plt.gca();
plot_latent_variables(Z, ax=ax)
ax.set_aspect('equal')
plt.show()
Explanation: By section 12.2.3 of the book, $W$ is the first $2$ columns of $V$. Thus, our actual plot should be below.
End of explanation
probe_document = np.zeros_like(words, dtype=np.float64)
abducted_idx = (words=='abducted').as_matrix()
probe_document[abducted_idx] = 1
X[0:3,abducted_idx]
Explanation: Note that this is very similar, with the $y$-axis flipped. That part does not actually matter. What matters is the scaling by the eigenvalues when computing distances. Before that scaling, the proximity of points may not mean much if the corresponding eigenvalue is actually very large.
Now, the second part asks us to see if we can properly identify documents related to abductions by using a document with the single word abducted as a probe.
End of explanation
from scipy.spatial import distance
z = np.dot(probe_document, W)
similarities = list(map(lambda i : (i, 1 - distance.cosine(z,Z[i,:])), range(len(Z))))
similarities.sort(key=lambda similarity_tuple : -similarity_tuple[1])
similarities
Explanation: Note that despite the first document being about abductions, it doesn't contain the word abducted.
Let's look at the latent variable representation. We'll use cosine similarity to account for the difference in magnitude.
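For reference, the quantity computed in the code above for the probe's latent vector $z$ and each document's latent vector $z_i$ is
\begin{equation}
\cos\theta_i = \frac{z \cdot z_i}{\lVert z\rVert\,\lVert z_i\rVert},
\end{equation}
and since scipy's distance.cosine returns $1-\cos\theta_i$, the code takes $1 - \text{distance.cosine}(z, z_i)$ to recover the similarity.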
End of explanation |
5,484 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python's Built-in Sorting Methods
Python provides two built-in ways to sort: list.sort(), an in-place method available only on lists, and sorted(), a non-in-place function that works on any iterable.
In-place sorting means the list object being sorted is modified immediately, just like append()/pop() and similar methods:
Step1: sorted() is not limited to lists; it builds and returns a new sorted list, leaving the original object untouched:
Step2: Although it does not sort in place, passing in a generator still consumes it:
Step3: Key
Sorting a simple iterable only requires pulling out the elements one by one and comparing them. If you want to transform the elements before comparing, you can pass a key function via the key parameter. This key works much like the functions accepted by map/filter mentioned in 0x02 Functional Programming, except that the key function only preprocesses each element for the comparison and never changes the element's original value. For example, let's sort a group of integers by (key can be read as "by") their absolute value:
Step4: Alternatively, when the elements of the iterable are more complex, you can sort by just some of their attributes:
Step5: Python's operator standard library provides some operator-related helpers that make it easier to fetch an element's items or attributes:
Step6: After the key transformation, the two elements are compared with the < operator. In Python 2.7, sorted() also accepts a cmp argument that takes over the < comparison. Python 3.5 has dropped this entirely, including the cmp parameter of sorted() and the __cmp__ comparison on objects; you would only need this in Python 3.5 for backwards compatibility, and the replacement is: | Python Code:
from random import randrange
lst = [randrange(1, 100) for _ in range(10)]
print(lst)
lst.sort()
print(lst)
Explanation: Python's Built-in Sorting Methods
Python provides two built-in ways to sort: list.sort(), an in-place method available only on lists, and sorted(), a non-in-place function that works on any iterable.
In-place sorting means the list object being sorted is modified immediately, just like append()/pop() and similar methods:
End of explanation
lst = [randrange(1, 100) for _ in range(10)]
tup = tuple(lst)
print(sorted(tup)) # return List
print(tup)
Explanation: sorted() is not limited to lists; it builds and returns a new sorted list, and the original object is not affected:
End of explanation
tup = (randrange(1, 100) for _ in range(10))
print(sorted(tup))
for i in tup:
print(i)
Explanation: Although it does not sort in place, passing in a generator still consumes it:
End of explanation
lst = [randrange(-10, 10) for _ in range(10)]
print(lst)
print(sorted(lst, key=abs))
Explanation: Key
Sorting a simple iterable only requires pulling out the elements one by one and comparing them. If you want to transform the elements before comparing, you can pass a key function via the key parameter. This key works much like the functions accepted by map/filter mentioned in 0x02 Functional Programming, except that the key function only preprocesses each element for the comparison and never changes the element's original value. For example, let's sort a group of integers by (key can be read as "by") their absolute value:
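A tiny extra illustration that key only affects the comparison and never the stored values -- sorting words by their length:
words = "python sorts things by key".split()
print(sorted(words, key=len))  # compared via len(word), but the original strings are returned unchanged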
End of explanation
lst = list(zip("hello world hail python".split(), [randrange(1, 10) for _ in range(4)]))
print(lst)
print(sorted(lst, key=lambda item: item[1]))
Explanation: Alternatively, when the elements of the iterable are more complex, you can sort by just some of their attributes:
End of explanation
from operator import itemgetter, attrgetter
print(lst)
print(sorted(lst, key=itemgetter(1)))
# everything is just a function
fitemgetter = lambda ind: lambda item: item[ind]
print(sorted(lst, key=fitemgetter(1)))
class P(object):
def __init__(self, w, n):
self.w = w
self.n = n
def __repr__(self):
return "{}=>{}".format(self.w, self.n)
ps = [P(i[0], i[1]) for i in lst]
print(sorted(ps, key=attrgetter('n')))
Explanation: Python's operator standard library provides some operator-related helpers that make it easier to fetch an element's items or attributes:
End of explanation
from functools import cmp_to_key as new_cmp_to_key
# new_cmp_to_key works like this
def cmp_to_key(mycmp):
'Convert a cmp= function into a key= function'
class K:
def __init__(self, obj, *args):
self.obj = obj
def __lt__(self, other):
return mycmp(self.obj, other.obj) < 0
return K
def reverse_cmp(x, y):
return y[1] - x[1]
sorted(lst, key=cmp_to_key(reverse_cmp))
Explanation: After the key transformation, the two elements are compared with the < operator. In Python 2.7, sorted() also accepts a cmp argument that takes over the < comparison. Python 3.5 has dropped this entirely, including the cmp parameter of sorted() and the __cmp__ comparison on objects; you would only need this in Python 3.5 for backwards compatibility, and the replacement is:
End of explanation |
5,485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building on the two-dimensional Vector2d class defined in chapter 9, this chapter takes a big step forward and defines a Vector class representing multi-dimensional vectors. The class behaves like Python's standard immutable flat sequences. The elements of a Vector instance are floats, and by the end of the chapter the class will support the following
the basic sequence protocol -- __len__ and __getitem__
a sensible representation for instances with many elements
proper slicing support, producing new Vector instances
an aggregate hash value computed from the element values
a custom format language extension
In addition, we will implement dynamic attribute access with __getattr__, replacing the read-only properties used in Vector2d -- although sequence types usually do not do this
Between the large chunks of code we will also discuss a concept: treating protocols as formal interfaces. We will explain how protocols relate to duck typing, and what that means for our user-defined types
Vector take #1: compatible with Vector2d
The Vector class should be as compatible as possible with the Vector2d class from the previous chapter. To allow code such as Vector(3, 4) and Vector(3, 4, 5) we could make __init__ accept arbitrary arguments (via *args), but the best practice for a sequence constructor is to take an iterable argument, because that is what all the built-in sequence types do. Here is the first version of our Vector code
Step1: The way we use reprlib.repr deserves some explanation. That function produces safe representations of large or recursive structures by limiting the length of the output string and marking the truncated part with '...'. We also want the representation of a Vector to read Vector([3.0, 4.0, 5.0]) rather than Vector(array('d', [3.0, 4.0, 5.0])), because the array inside a Vector is an implementation detail. Since both constructor calls build identical Vector objects, I chose the simpler syntax with a list argument
When writing __repr__ we could have produced the simplified components display with reprlib.repr(list(self._components)), but that would be wasteful: it copies every element of self._components into a list just to use the list representation. Instead, I pass self._components straight to reprlib.repr and then strip the characters outside the [ ].
repr() exists for debugging, so it must never raise an exception. If there is a problem in your __repr__ implementation, you must handle it and do your best to produce useful output that lets the user identify the object
Note that __str__, __eq__ and __bool__ are unchanged from Vector2d, and frombytes only loses a *. That is one of the benefits of Vector2d being iterable
By the way, we could have subclassed Vector2d with Vector, but chose not to for two reasons. First, the incompatible constructors make subclassing inadvisable (this could be worked around with careful handling in __init__). The second, more important reason is that I want Vector to stand alone as an example implementing the sequence protocol. That is what we do next, after discussing the term protocol
Protocols and duck typing
Back in chapter 1 we saw that you don't need inheritance to create a fully functional sequence type in Python -- you just implement the methods that fulfill the sequence protocol
In object-oriented programming, a protocol is an informal interface, defined only in documentation and not in code. For example, Python's sequence protocol only requires the __len__ and __getitem__ methods. Any class (say, Spam) that implements them with the standard signature and semantics can be used wherever a sequence is expected. Whether Spam is a subclass of anything is irrelevant; all that matters is that it provides the necessary methods. We saw an example in chapter 1; here is that code again:
Step2: The FrenchDeck class takes advantage of many Python facilities because it implements the sequence protocol, even though none of that is declared anywhere in the code. Any experienced Python programmer will look at it and understand that it is a sequence, even though it subclasses object. We say it is a sequence because it behaves like one, and that is what matters
As Alex Martelli put it, don't check whether it is-a duck: check whether it quacks-like-a duck, walks-like-a duck, and so on. Types handled this way are known as duck typing
Because protocols are informal and unenforced, you can often get away with implementing just part of a protocol if you know the specific context in which a class will be used. For example, to support iteration, only __getitem__ is required; there is no need to provide __len__
Next we will implement the sequence protocol in Vector, initially without proper support for slicing -- we'll add that later
Vector take #2: a sliceable sequence
As the FrenchDeck class shows, supporting the sequence protocol is really easy if you can delegate to a sequence attribute in your object, like the self._components array. These one-line __len__ and __getitem__ methods are a good start
Step3: You can see that even slicing is supported now, but not very well: it would be better if a slice of a Vector were also a Vector instance and not an array. The FrenchDeck class earlier has a similar problem: slicing it produces a list. In the case of Vector, a lot of functionality is lost when slicing produces a plain array
Consider the built-in sequence types: every one of them, when sliced, produces a new instance of its own type, not of some other type. To make Vector slices produce Vector instances we cannot simply hand the slicing off to the array; we need to analyze the arguments that __getitem__ receives and handle them appropriately
Let's see how Python turns the syntax my_seq[1:3] into the argument passed to my_seq.__getitem__(...)
Step4: Now let's look at the slice class itself:
Step5: The indices attribute used above is extremely useful, but little known.
Step6: Given a sequence of length len, it computes the start and stop indexes and the stride of the extended slice described by S, with out-of-bounds indexes truncated, exactly as regular slicing does
In other words, indices exposes the tricky logic that the built-in sequences use to gracefully handle missing and negative indexes, as well as slices longer than the target sequence. It "normalizes" the tuple so that start, stop and stride are all nonnegative and fall within the bounds of the sequence. Here is an example with a sequence of length 5:
Step7: We don't need slice.indices() in our Vector class, because when we get a slice argument we delegate its handling to the _components array. But if you have no underlying sequence type to lean on, this method can save you a lot of time
Now that we know how to handle slices, let's look at the improved implementation of Vector.__getitem__
Vector take #2: a __getitem__ that handles slices
Step8: Heavy use of isinstance may be a sign of bad object-oriented design, but using it to handle slices in __getitem__ is justified. Note that the example uses numbers.Integral, an abstract base class (ABC). Using ABCs in isinstance tests makes an API more flexible and easier to evolve -- we'll see why in the next chapter. Unfortunately, there is no ABC for slice in the Python 3.4 standard library
The TypeError is also borrowed from string slicing: slicing a string with an invalid index raises TypeError, and the error message was copied from it as well. To create Pythonic objects, we mimic Python's own built-in objects
Step9: Vector take #3: dynamic attribute access
In the evolution from Vector2d to Vector we lost the ability to access vector components by name (such as v.x and v.y). We are now dealing with vectors that may have a large number of components. Still, it would be convenient to access the first few components with shortcut letters, for example x, y and z instead of v[0], v[1] and v[2]
In Vector2d we used the @property decorator to mark x and y as read-only properties. We could write four properties in Vector, but that would be tedious. The __getattr__ special method provides a better way.
The __getattr__ method is invoked by the interpreter when attribute lookup fails. In simple terms, given the expression my_obj.x, Python checks whether the my_obj instance has an attribute named x; if not, the search goes to the class (my_obj.__class__), and then up the inheritance graph. If the attribute is still not found, the __getattr__ method defined in the class of my_obj is called with self and the attribute name as a string (e.g. 'x')
The attribute lookup machinery is actually much more complicated; we'll cover it in more detail later
The Vector __getattr__ method defined below is simple: it checks whether the attribute being looked up is one of the letters xyzt and, if so, returns the corresponding component
Step10: The reason the vector component above did not change is the way __getattr__ works: Python only calls that method when the object does not have the named attribute -- it is a fallback mechanism. But after an assignment such as v.x = 10, the v object now has an x attribute, so reading v.x no longer triggers __getattr__; the interpreter simply returns the value bound to v.x, i.e. 10. Meanwhile, our __getattr__ implementation pays no attention to instance attributes other than self._components, from which it retrieves the "virtual attributes" listed in shortcut_names
To avoid this inconsistency, we need to change the logic for setting attributes in the Vector class.
Recall that in the last Vector2d example of the previous chapter, assigning to the .x or .y instance attributes raised AttributeError. To avoid ambiguity, in Vector we want the same exception for any assignment to a single-letter lowercase attribute name. To do that, we implement __setattr__ as shown below:
Step11: The super() function provides dynamic access to methods of superclasses -- a necessity in a dynamic language with multiple inheritance like Python. Programmers often use it to delegate some task from a method in a subclass to a suitable method in a superclass, as shown in the example above. We will explore super() further in chapter 12
To choose the error message for AttributeError, the author checked the behaviour of the complex type: trying to modify one of its read-only attributes raises AttributeError with the message "can't set attribute", and our error messages follow that example
Note that we are not forbidding assignment to all attributes, only to single-letter lowercase names, to avoid confusion with the read-only attributes x, y, z and t
We know that declaring __slots__ in a class prevents new instance attributes, but we did not do that here: __slots__ should only be used when you are seriously short of memory, so don't abuse it.
Even though this example does not support assigning to Vector components, there is one point worth special attention: very often, if you implement __getattr__ you should also define __setattr__ so that the object's behaviour stays consistent
If we wanted to allow changing components we could implement __setitem__ to enable v[0] = 1.1 and/or __setattr__ to enable v.x = 1.1. But we want Vector to stay immutable, because in the next section we make it hashable.
Vector take #4: hashing and a faster ==
We want to implement __hash__ again; together with the existing __eq__, this will make Vector instances hashable,
The previous __hash__ simply computed hash(self.x) ^ hash(self.y). This time we want to apply the xor operator to the hashes of every component, like this: v[0] ^ v[1] ^ v[2] ... Here are a few convenient ways to do that:
Step12: Of the three approaches I like the last one best, followed by the for loop
To compute the hash the way we prefer, we import the functools and operator modules and write __hash__ as shown below:
Step13: When using reduce it is good practice to provide the third argument, reduce(function, iterable, initializer), to avoid this exception: "TypeError: reduce() of empty sequence with no initial value" (a great message -- it states the problem and how to solve it). If the sequence is empty, initializer is the value returned; otherwise it is used as the first argument in the reduction, so it should be the identity value: 0 for +, | and ^, and 1 for * and &
The __hash__ implemented above is a map-reduce computation: apply a function to every element to produce a new sequence (map), then compute an aggregate value (reduce)
The mapping step computes the hash of each component, and the reduce step aggregates all the hashes with xor. Replacing the generator expression with the map built-in makes the mapping step more explicit:
Step14: In Python 2 using map would be less efficient, because map builds a list with the results, but in Python 3 map is lazy: it creates a generator that yields results on demand, saving memory -- which is exactly how the generator expression in the __hash__ definition above works
While we are on the subject of reducing functions, let's also revise the quick-and-dirty __eq__ from before so that it takes less time and memory, at least for large vectors. The earlier __eq__ was beautifully concise:
def __eq__(self, other): return tuple(self) == tuple(other)
Step15: zip produces a generator of tuples whose items come from each of the iterables passed as arguments. The length comparison up front is necessary because zip stops producing values, without warning, as soon as one of the inputs is exhausted
The code above is efficient, but the for loop that accumulates the comparison results can be replaced with a single call to all: the result is True only if every component comparison is True, and all returns False as soon as one comparison is False. Here is __eq__ written with all:
Step16: Here are a few examples of zip in use:
Step17: To avoid managing an index variable by hand in for loops, the built-in enumerate generator function is also commonly used -- it is covered in chapter 14
Vector take #5: formatting
Vector's __format__ is similar to Vector2d's, but instead of polar coordinates it uses spherical coordinates, because Vector supports n dimensions and beyond four dimensions a sphere becomes a "hypersphere". Accordingly we change the custom format suffix from 'p' to 'h'
For example, for a Vector in 4-dimensional space (len(v) == 4), the 'h' code produces output like <r, th1, th2, th3>, where r is the magnitude and the remaining three numbers are angular coordinates
Before the small change to __format__ we need two helper methods: angle(n), which computes one of the angular coordinates, and angles(), which returns an iterable of all the angular coordinates. We won't go over the maths here; the curious can look it up on Wikipedia.
Here is the complete code
Step18: Note the itertools.chain function
chain(iter1, iter2, ..., iterN):
Given a group of iterators (iter1, iter2, ..., iterN), this function creates a new iterator that chains them all together: it yields items from iter1 until it is exhausted, then from iter2, and so on, until all the items of iterN have been consumed. A simple example follows:
Step19: Here are some tests of the Vector class: | Python Code:
from array import array
import reprlib
import math
class Vector:
typecode = 'd'
def __init__(self, components):
        self._components = array(self.typecode, components) # store the Vector components in an array ('d' means double-precision floats)
def __iter__(self):
return iter(self._components)
def __repr__(self):
        # use reprlib.repr() to get a length-limited representation of self._components, e.g. array('d', [0.0, 1.0, 2.0, 3.0, 4.0, ...])
components = reprlib.repr(self._components)
        components = components[components.find('['):-1] # keep only the [...] part, dropping the leading "array('d'," and the trailing ")"
return 'Vector({})'.format(components)
def __str__(self):
return str(tuple(self))
def __bytes__(self):
return (bytes([ord(self.typecode)]) +
bytes(self._components))
def __eq__(self, other):
return tuple(self) == tuple(other)
def __abs__(self):
        return math.sqrt(sum(x * x for x in self)) # compute the magnitude by iterating over the components
def __bool__(self):
return bool(abs(self))
@classmethod
def frombytes(cls, octets):
typecode = chr(octets[0])
memv = memoryview(octets[1:]).cast(typecode)
        return cls(memv) # only this line changes: pass the memoryview straight to the constructor, no * unpacking needed
Vector([3.1, 4.2])
Vector((3, 4, 5))
Vector(range(10)) # reprlib.repr limits the length of the output
Explanation: Building on the two-dimensional Vector2d class defined in chapter 9, this chapter takes a big step forward and defines a Vector class representing multi-dimensional vectors. The class behaves like Python's standard immutable flat sequences. The elements of a Vector instance are floats, and by the end of the chapter the class will support the following
the basic sequence protocol -- __len__ and __getitem__
a sensible representation for instances with many elements
proper slicing support, producing new Vector instances
an aggregate hash value computed from the element values
a custom format language extension
In addition, we will implement dynamic attribute access with __getattr__, replacing the read-only properties used in Vector2d -- although sequence types usually do not do this
Between the large chunks of code we will also discuss a concept: treating protocols as formal interfaces. We will explain how protocols relate to duck typing, and what that means for our user-defined types
Vector take #1: compatible with Vector2d
The Vector class should be as compatible as possible with the Vector2d class from the previous chapter. To allow code such as Vector(3, 4) and Vector(3, 4, 5) we could make __init__ accept arbitrary arguments (via *args), but the best practice for a sequence constructor is to take an iterable argument, because that is what all the built-in sequence types do. Here is the first version of our Vector code
End of explanation
import collections
Card = collections.namedtuple('Card', ['rank', 'suit'])
class FrenchDeck:
ranks = [str(n) for n in range(2, 11)] + list('JQKA')
suits = 'spades diamonds clubs hearts'.split()
def __init__(self):
self._cards = [Card(rank, suit) for suit in self.suits
for rank in self.ranks]
def __len__(self):
return len(self._cards)
def __getitem__(self, position):
return self._cards[position]
Explanation: The way we use reprlib.repr deserves some explanation. That function produces safe representations of large or recursive structures by limiting the length of the output string and marking the truncated part with '...'. We also want the representation of a Vector instance to read Vector([3.0, 4.0, 5.0]) rather than Vector(array('d', [3.0, 4.0, 5.0])), because the array inside a Vector is an implementation detail. Since both constructor calls build identical Vector objects, I chose the simpler syntax with a list argument.
When coding __repr__, we could have produced the simplified components display with the expression reprlib.repr(list(self._components)). However, this would be wasteful: it copies every item of self._components into a list just to use the list repr. Instead, I passed self._components directly to reprlib.repr and then stripped the characters outside the [ ].
Because repr() exists for debugging, it should never raise an exception; if there is a problem in your __repr__ implementation, you must handle it and do your best to produce some serviceable output that lets the user identify the object.
Note that __str__, __eq__ and __bool__ are unchanged from Vector2d, and frombytes differs only by dropping the *. This is one benefit of Vector2d being iterable.
By the way, we could have made Vector a subclass of Vector2d, but chose not to, for two reasons. First, the incompatible constructors make subclassing inadvisable (this could be worked around with careful handling of __init__). Second, and more important: I want Vector to stand alone as an example of a class implementing the sequence protocol. Next we discuss the term protocol, and then we implement the sequence protocol.
Protocols and duck typing
As we saw in chapter 1, you don't need inheritance to create a fully functional sequence type in Python; you just need to implement the methods that fulfill the sequence protocol.
In object-oriented programming, a protocol is an informal interface, defined only in documentation and not in code. For example, the sequence protocol in Python needs only the __len__ and __getitem__ methods. Any class (say, Spam) that implements those methods with the standard signature and semantics can be used wherever a sequence is expected. Whether or not Spam subclasses anything is irrelevant; all that matters is that it provides the necessary methods. We saw an example in chapter 1; here is that code again:
End of explanation
from array import array
import reprlib
import math
class Vector:
typecode = 'd'
def __init__(self, components):
self._components = array(self.typecode, components)
def __iter__(self):
return iter(self._components)
def __repr__(self):
components = reprlib.repr(self._components)
components = components[components.find('['):-1]
return 'Vector({})'.format(components)
def __str__(self):
return str(tuple(self))
def __bytes__(self):
return (bytes([ord(self.typecode)]) +
bytes(self._components))
def __eq__(self, other):
return tuple(self) == tuple(other)
def __abs__(self):
return math.sqrt(sum(x * x for x in self))
def __bool__(self):
return bool(abs(self))
@classmethod
def frombytes(cls, octets):
typecode = chr(octets[0])
memv = memoryview(octets[1:]).cast(typecode)
return cls(memv)
# everything above is the same as before
def __len__(self):
return len(self._components)
def __getitem__(self, index):
return self._components[index]
v1 = Vector([3, 4, 5])
len(v1)
v1[0], v1[1]
v7 = Vector(range(7))
v7[1: 4]
Explanation: The FrenchDeck class takes advantage of many Python facilities because it implements the sequence protocol, even though that is declared nowhere in the code. Any experienced Python programmer will look at it and understand that it is a sequence, even though it subclasses object. We say it is a sequence because it behaves like one, and that is what matters.
As Alex Martelli put it: don't check whether it is-a duck, check whether it quacks-like-a duck, walks-like-a duck, and so on. This style became known as duck typing.
Because protocols are informal and unenforced, you can often get away with implementing just part of a protocol, if you know the specific context in which a class will be used. For example, to support iteration, only __getitem__ is required; there is no need to provide __len__.
Next we will implement the sequence protocol in Vector, initially without proper support for slicing; that will be added later.
Vector version 2: a sliceable sequence
As the FrenchDeck class shows, supporting the sequence protocol is really easy if you can delegate to a sequence attribute in your object, such as the self._components array. These one-line __len__ and __getitem__ methods are a good start:
End of explanation
class MySeq:
def __getitem__(self, index):
return index
s = MySeq()
s[1]
s[1:4]
s[1:4:2]
s[1:4:2, 9] # if there are commas inside the [], __getitem__ receives a tuple
s[1:4:2, 7:9] # the tuple may even hold several slice objects
Explanation: As you can see, even slicing is supported now -- but not very well. It would be better if a slice of a Vector were also a Vector instance and not an array. The earlier FrenchDeck class has a similar problem: its slices are lists. For Vector, a lot of functionality is lost when slicing produces plain arrays.
Consider the built-in sequence types: every one of them, when sliced, produces a new instance of its own type, not of some other type. To make slices of Vector instances be Vector instances, we cannot simply delegate the slicing to the array; we need to analyze the arguments that __getitem__ receives and handle them appropriately.
Let's see how Python turns the syntax my_seq[1:3] into arguments for my_seq.__getitem__(...).
End of explanation
slice # slice is a built-in type
dir(slice) # it has the data attributes start, stop and step, and an indices method
Explanation: Now let's take a look at the slice class itself:
End of explanation
help(slice.indices)
Explanation: The indices method shown above is very useful, yet little known.
End of explanation
slice(None, 10, 2).indices(5)
slice(-3, None, None).indices(5)
Explanation: Given a sequence of length len, it computes the start and stop indices and the stride (step) of the extended slice described by S. Out-of-bounds indices are clipped, just as in regular slice handling.
In other words, indices exposes the tricky logic that built-in sequences use to gracefully handle missing and negative indices, as well as slices longer than the target sequence. It "normalizes" the tuple so that start, stop and stride are all non-negative and fall within the bounds. For example, given a sequence of length 5:
End of explanation
from array import array
import reprlib
import math
import numbers
class Vector:
typecode = 'd'
def __init__(self, components):
self._components = array(self.typecode, components)
def __iter__(self):
return iter(self._components)
def __repr__(self):
components = reprlib.repr(self._components)
components = components[components.find('['):-1]
return 'Vector({})'.format(components)
def __str__(self):
return str(tuple(self))
def __bytes__(self):
return (bytes([ord(self.typecode)]) +
bytes(self._components))
def __eq__(self, other):
return tuple(self) == tuple(other)
def __abs__(self):
return math.sqrt(sum(x * x for x in self))
def __bool__(self):
return bool(abs(self))
@classmethod
def frombytes(cls, octets):
typecode = chr(octets[0])
memv = memoryview(octets[1:]).cast(typecode)
return cls(memv)
# everything above is the same as before
def __len__(self):
return len(self._components)
def __getitem__(self, index):
cls = type(self) # get the class of the instance
if isinstance(index, slice):
return cls(self._components[index])
elif isinstance(index, numbers.Integral): # index is an int or some other integer type
return self._components[index]
else:
msg = '{cls.__name__} indices must be integers'
raise TypeError(msg.format(cls=cls))
Explanation: We don't need the slice.indices() method in the Vector class, because when we get a slice argument we delegate its handling to the _components array. But if you don't have an underlying sequence type to lean on, using this method can save you a lot of time.
Now that we know how to handle slices, let's look at the improved implementation of Vector.__getitem__.
Vector version 2: a slice-aware __getitem__
End of explanation
v7 = Vector(range(7))
v7[-1]
v7[1:4] # slicing now behaves correctly
v7[-1:]
v7[1, 2] # Vector does not support multidimensional indexing, so a tuple of indices or slices raises an error
Explanation: Excessive use of isinstance may be a sign of bad object-oriented design, but using it to handle slices in __getitem__ is justified. Note that the example tests against numbers.Integral, an Abstract Base Class (ABC). Using ABCs in isinstance tests makes an API more flexible and easier to update, for reasons explained in the next chapter. Unfortunately, the Python 3.4 standard library has no ABC for slice.
The TypeError was also borrowed from string slicing: slicing a str with an invalid index raises TypeError, and we even copied the error message. To create Pythonic objects, we mimic Python's built-in objects.
End of explanation
from array import array
import reprlib
import math
import numbers
class Vector:
typecode = 'd'
def __init__(self, components):
self._components = array(self.typecode, components)
def __iter__(self):
return iter(self._components)
def __repr__(self):
components = reprlib.repr(self._components)
components = components[components.find('['):-1]
return 'Vector({})'.format(components)
def __str__(self):
return str(tuple(self))
def __bytes__(self):
return (bytes([ord(self.typecode)]) +
bytes(self._components))
def __eq__(self, other):
return tuple(self) == tuple(other)
def __abs__(self):
return math.sqrt(sum(x * x for x in self))
def __bool__(self):
return bool(abs(self))
@classmethod
def frombytes(cls, octets):
typecode = chr(octets[0])
memv = memoryview(octets[1:]).cast(typecode)
return cls(memv)
# everything above is the same as before
def __len__(self):
return len(self._components)
def __getitem__(self, index):
cls = type(self) # get the class of the instance
if isinstance(index, slice):
return cls(self._components[index])
elif isinstance(index, numbers.Integral): # index is an int or some other integer type
return self._components[index]
else:
msg = '{cls.__name__} indices must be integers'
raise TypeError(msg.format(cls=cls))
shortcut_names = 'xyzt'
def __getattr__(self, name):
cls = type(self)
if len(name) == 1:
pos = cls.shortcut_names.find(name)
if 0 <= pos < len(self._components):
return self._components[pos]
msg = '{.__name__!r} object has no attribute {!r}'
raise AttributeError(msg.format(cls, name))
v = Vector(range(5))
v
v.x
v.x = 10
v.x # reading x now returns the new value
v # but the components of the vector did not change
Explanation: Vector version 3: dynamic attribute access
In going from Vector2d to Vector we lost the ability to access vector components by name (e.g. v.x, v.y). We are now dealing with vectors that may have a large number of components. Still, it is convenient to access the first few components with single-letter shortcuts, using x, y and z instead of v[0], v[1] and v[2].
In Vector2d we used the @property decorator to make x and y read-only properties. We could write four properties in Vector, but that would be tedious. The __getattr__ special method provides a better way.
The __getattr__ method is invoked by the interpreter when attribute lookup fails. In simple terms, given the expression my_obj.x, Python checks whether the my_obj instance has an attribute named x; if not, the search goes to the class (my_obj.__class__), and then up the inheritance graph. If x is still not found, the __getattr__ method defined in the class of my_obj is called with self and the name of the attribute as a string (e.g. 'x').
The real attribute-lookup machinery is much more complicated; we will cover the details later.
Below, the Vector class defines __getattr__. Its implementation is simple: it checks whether the attribute being looked up is one of the letters xyzt, and if so returns the corresponding component.
End of explanation
from array import array
import reprlib
import math
import numbers
class Vector:
typecode = 'd'
def __init__(self, components):
self._components = array(self.typecode, components)
def __iter__(self):
return iter(self._components)
def __repr__(self):
components = reprlib.repr(self._components)
components = components[components.find('['):-1]
return 'Vector({})'.format(components)
def __str__(self):
return str(tuple(self))
def __bytes__(self):
return (bytes([ord(self.typecode)]) +
bytes(self._components))
def __eq__(self, other):
return tuple(self) == tuple(other)
def __abs__(self):
return math.sqrt(sum(x * x for x in self))
def __bool__(self):
return bool(abs(self))
@classmethod
def frombytes(cls, octets):
typecode = chr(octets[0])
memv = memoryview(octets[1:]).cast(typecode)
return cls(memv)
# everything above is the same as before
def __len__(self):
return len(self._components)
def __getitem__(self, index):
cls = type(self) # get the class of the instance
if isinstance(index, slice):
return cls(self._components[index])
elif isinstance(index, numbers.Integral): # index is an int or some other integer type
return self._components[index]
else:
msg = '{cls.__name__} indices must be integers'
raise TypeError(msg.format(cls=cls))
shortcut_names = 'xyzt'
def __getattr__(self, name):
cls = type(self)
if len(name) == 1:
pos = cls.shortcut_names.find(name)
if 0 <= pos < len(self._components):
return self._components[pos]
msg = '{.__name__!r} object has no attribute {!r}'
raise AttributeError(msg.format(cls, name))
def __setattr__(self, name, value):
cls = type(self)
if len(name) == 1:
if name in cls.shortcut_names:
error = 'readonly attribute {attr_name!r}'
elif name.islower():
error = "can't set attributes 'a' to 'z' in {cls_name!r}"
else:
error = ''
if error:
msg = error.format(cls_name = cls.__name__, attr_name = name) # whichever error applies, the right values are filled in here
raise AttributeError(msg)
super().__setattr__(name, value) # default case: call __setattr__ on the superclass for the standard behavior
Explanation: The reason the vector components above did not change lies in the way __getattr__ works: Python only calls that method as a fall-back, when the object does not have an attribute with the given name. But after an assignment like v.x = 10, the v object does have an x attribute, so retrieving v.x no longer triggers __getattr__ -- the interpreter simply returns the value bound to v.x, namely 10. On the other hand, our __getattr__ implementation pays no attention to instance attributes other than self._components, from which it fetches the "virtual attributes" listed in shortcut_names.
To avoid this inconsistency, we need to change the logic for setting attributes in the Vector class.
Recall that in the last Vector2d example of the previous chapter, assigning to the .x or .y instance attributes raised AttributeError. To avoid ambiguity, we want the same exception in Vector for any assignment to a single lowercase-letter attribute name. To do that, we implement __setattr__ as shown below:
End of explanation
n = 0
for i in range(1, 6): n ^= i
n
import functools
functools.reduce(lambda a, b: a ^ b, range(6))
import operator
functools.reduce(operator.xor, range(6))
Explanation: The super() function provides a way to access methods of superclasses dynamically, a necessity in a dynamic language that supports multiple inheritance, like Python. Programmers often use it to delegate some task from a method in a subclass to a suitable method in a superclass, as in the example above. We will explore super() further in chapter 12.
To choose the error message for AttributeError, the author checked the behavior of the built-in complex type: trying to modify one of its read-only attributes raises AttributeError, and our messages are modeled on the one it produces ("can't set attribute").
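As a quick illustration (a minimal sketch; the exact wording of the message varies between Python versions), trying to overwrite one of those read-only attributes fails:
c = complex(3, 4)
try:
    c.real = 5  # complex exposes read-only data attributes real and imag
except AttributeError as exc:
    print(exc)  # e.g. "readonly attribute" or "can't set attribute", depending on the version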
Note that we are not forbidding assignment to all attributes, only to single lowercase-letter names, to avoid confusion with the read-only attributes x, y, z and t.
We know that declaring __slots__ at the class level prevents the creation of new instance attributes, but we don't do that here: __slots__ should be reserved for situations of severe memory shortage, not abused as a way to block attribute creation.
Even though this example does not support assigning to Vector components, it highlights an important point: very often, when you implement __getattr__ you need to code __setattr__ as well, to avoid inconsistent behavior in your objects.
If we wanted to allow components to be modified, we could implement the __setitem__ method to support assignments like v[0] = 1.1, and/or implement __setattr__ to support assignments like v.x = 1.1. However, we keep Vector immutable, because in the next section we will make it hashable.
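For illustration only, a mutable variant could look roughly like this (an assumption, not part of the class developed in this chapter; a mutable Vector should then not also define __hash__):
def __setitem__(self, index, value):  # hypothetical: would allow v[0] = 1.1
    self._components[index] = value

def __setattr__(self, name, value):  # hypothetical: would allow v.x = 1.1
    cls = type(self)
    if len(name) == 1 and name in cls.shortcut_names:
        self._components[cls.shortcut_names.find(name)] = value
    else:
        super().__setattr__(name, value)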
Vector version 4: hashing and a faster equality test
We will implement __hash__ once more; together with the existing __eq__ method, this turns Vector instances into hashable objects.
The previous __hash__ simply computed hash(self.x) ^ hash(self.y). This time we want to apply the xor operator to the hash of every component in turn, like this: v[0] ^ v[1] ^ v[2] ... There are several convenient ways to do this:
End of explanation
from array import array
import reprlib
import math
import numbers
import functools
import operator
class Vector:
typecode = 'd'
def __init__(self, components):
self._components = array(self.typecode, components)
def __iter__(self):
return iter(self._components)
def __repr__(self):
components = reprlib.repr(self._components)
components = components[components.find('['):-1]
return 'Vector({})'.format(components)
def __str__(self):
return str(tuple(self))
def __bytes__(self):
return (bytes([ord(self.typecode)]) +
bytes(self._components))
def __eq__(self, other):
return tuple(self) == tuple(other)
def __abs__(self):
return math.sqrt(sum(x * x for x in self))
def __bool__(self):
return bool(abs(self))
@classmethod
def frombytes(cls, octets):
typecode = chr(octets[0])
memv = memoryview(octets[1:]).cast(typecode)
return cls(memv)
# everything above is the same as before
def __len__(self):
return len(self._components)
def __getitem__(self, index):
cls = type(self) # get the class of the instance
if isinstance(index, slice):
return cls(self._components[index])
elif isinstance(index, numbers.Integral): # index is an int or some other integer type
return self._components[index]
else:
msg = '{cls.__name__} indices must be integers'
raise TypeError(msg.format(cls=cls))
shortcut_names = 'xyzt'
def __getattr__(self, name):
cls = type(self)
if len(name) == 1:
pos = cls.shortcut_names.find(name)
if 0 <= pos < len(self._components):
return self._components[pos]
msg = '{.__name__!r} object has no attribute {!r}'
raise AttributeError(msg.format(cls, name))
def __setattr__(self, name, value):
cls = type(self)
if len(name) == 1:
if name in cls.shortcut_names:
error = 'readonly attribute {attr_name!r}'
elif name.islower():
error = "can't set attributes 'a' to 'z' in {cls_name!r}"
else:
error = ''
if error:
msg = error.format(cls_name = cls.__name__, attr_name = name) # whichever error applies, the right values are filled in here
raise AttributeError(msg)
super().__setattr__(name, value) # default case: call __setattr__ on the superclass for the standard behavior
def __hash__(self):
hashs = (hash(x) for x in self._components) # note: a generator expression, not a list comprehension, to save memory
return functools.reduce(operator.xor, hashs)
Explanation: Of these three approaches, I like the last one best, and the for loop second best.
To compute the hash the way we prefer, we import the functools and operator modules and write __hash__ as follows:
End of explanation
def __hash__(self):
hashes = map(hash, self._components)
return functools.reduce(operator.xor, hashes)
Explanation: When using reduce it is good practice to provide the third argument, reduce(function, iterable, initializer), to avoid this exception: "TypeError: reduce() of empty sequence with no initial value" (an excellent error message: it states the problem and suggests the fix). If the sequence is empty, initializer is the result returned; otherwise it is used as the first argument in the reduction, so it should be the identity value -- for +, | and ^ the initializer should be 0, while for * and & it should be 1.
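Applied to our method, that advice would look like this (a one-line change to the body shown above):
def __hash__(self):
    hashes = (hash(x) for x in self._components)
    return functools.reduce(operator.xor, hashes, 0)  # 0 is the identity value for ^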
The __hash__ implemented above is a map-reduce computation: apply a function to every element to produce a new series (map), then compute an aggregate value (reduce).
The mapping step computes the hash of each component, and the reduce step aggregates all the hashes with the xor operator. Replacing the generator expression with the map function makes the mapping step even more visible:
End of explanation
def __eq__(self, other):
if len(self) != len(other):
return False
for a, b in zip(self, other):
if a != b:
return False
return True
Explanation: In Python 2, using map would be less efficient, because the map function builds a list with the results. In Python 3, map is lazy: it creates a generator that yields results on demand and thus saves memory -- just like the generator expression used to define __hash__ in the example above.
While we are on the subject of reducing functions, let's revise the hastily implemented __eq__ method to reduce processing time and memory usage -- at least for large vectors. The previous __eq__ was admirably concise:
def __eq__(self, other):
return tuple(self) == tuple(other)
This works between Vector2d and Vector instances, and it even considers Vector([1, 2]) equal to (1, 2). That may be a problem, but we ignore it for now. For Vector instances with thousands of components, however, it is very inefficient: it builds two tuples, copying both operands in full, just to use the __eq__ of the tuple type. For Vector2d (with only two components) that is a nice shortcut, but not for a vector with many dimensions. The following way of comparing two Vector instances (or a Vector and any iterable) is better.
End of explanation
def __eq__(self, other):
return len(self) == len(other) and all(a == b for a, b in zip(self, other))
Explanation: zip produces a generator of tuples whose items come from each of the iterables passed as arguments. The length check up front is necessary, because zip stops producing values without warning as soon as one of the inputs is exhausted.
The code above is efficient, but the for loop that computes the aggregate value can be replaced by a one-line call to all: if every comparison between components is True, the result is True; as soon as one comparison is False, all returns False. Here is __eq__ implemented with all:
End of explanation
zip(range(3), 'ABC')
list(zip(range(3), 'ABC'))
list(zip(range(3), 'ABC', [0.0, 1.0, 2.0, 3.0])) # zip stops without warning as soon as one iterable is exhausted
from itertools import zip_longest
# zip_longest uses an optional fillvalue (None by default) to fill in missing values, so it keeps going until the longest iterable is exhausted
list(zip_longest(range(3), 'ABC', [0.0, 1.0, 2.0, 3.0]))
Explanation: Here are some usage examples of the zip function:
End of explanation
from array import array
import reprlib
import math
import numbers
import functools
import operator
import itertools
class Vector:
typecode = 'd'
def __init__(self, components):
self._components = array(self.typecode, components)
def __iter__(self):
return iter(self._components)
def __repr__(self):
components = reprlib.repr(self._components)
components = components[components.find('['):-1]
return 'Vector({})'.format(components)
def __str__(self):
return str(tuple(self))
def __bytes__(self):
return (bytes([ord(self.typecode)]) +
bytes(self._components))
def __eq__(self, other):
return len(self) == len(other) and all(a == b for a, b in zip(self, other))
def __abs__(self):
return math.sqrt(sum(x * x for x in self))
def __bool__(self):
return bool(abs(self))
@classmethod
def frombytes(cls, octets):
typecode = chr(octets[0])
memv = memoryview(octets[1:]).cast(typecode)
return cls(memv)
# everything above is the same as before
def __len__(self):
return len(self._components)
def __getitem__(self, index):
cls = type(self) # get the class of the instance
if isinstance(index, slice):
return cls(self._components[index])
elif isinstance(index, numbers.Integral): # index is an int or some other integer type
return self._components[index]
else:
msg = '{cls.__name__} indices must be integers'
raise TypeError(msg.format(cls=cls))
shortcut_names = 'xyzt'
def __getattr__(self, name):
cls = type(self)
if len(name) == 1:
pos = cls.shortcut_names.find(name)
if 0 <= pos < len(self._components):
return self._components[pos]
msg = '{.__name__!r} object has no attribute {!r}'
raise AttributeError(msg.format(cls, name))
def __setattr__(self, name, value):
cls = type(self)
if len(name) == 1:
if name in cls.shortcut_names:
error = 'readonly attribute {attr_name!r}'
elif name.islower():
error = "can't set attributes 'a' to 'z' in {cls_name!r}"
else:
error = ''
if error:
msg = error.format(cls_name = cls.__name__, attr_name = name) # whichever error applies, the right values are filled in here
raise AttributeError(msg)
super().__setattr__(name, value) # default case: call __setattr__ on the superclass for the standard behavior
def __hash__(self):
hashs = (hash(x) for x in self._components) # note: a generator expression, not a list comprehension, to save memory
return functools.reduce(operator.xor, hashs)
def angle(self, n):
r = math.sqrt(sum(x * x for x in self[n:]))
a = math.atan2(r, self[n-1])
if (n == len(self) - 1) and (self[-1] < 0):
return math.pi * 2 - a
else:
return a
def angles(self):
return (self.angle(n) for n in range(1, len(self)))
def __format__(self, fmt_spec=''):
if fmt_spec.endswith('h'):
fmt_spec = fmt_spec[:-1]
coords = itertools.chain([abs(self)], # use chain to iterate seamlessly over the magnitude and the angular coordinates
self.angles())
outer_fmt = '<{}>' # spherical coordinates
else:
coords = self
outer_fmt = '({})' # Cartesian coordinates
components = (format(c, fmt_spec) for c in coords)
return outer_fmt.format(', '.join(components))
Explanation: To avoid handling the index variable by hand in for loops, the built-in enumerate generator function is also commonly used; it is covered in chapter 14.
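For example, enumerate yields (index, item) pairs directly, with no manual counter:
v = Vector([3.0, 4.0, 5.0])
for i, component in enumerate(v):
    print(i, component)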
Vector version 5: formatting
The __format__ method of Vector resembles that of Vector2d, but instead of polar coordinates it uses spherical coordinates, because Vector supports n dimensions, and beyond four dimensions spheres become "hyperspheres". Accordingly we change the custom format suffix from 'p' to 'h'.
For example, for a Vector object in 4-dimensional space (len(v) == 4), the 'h' code produces output like <r, th1, th2, th3>, where r is the magnitude and the remaining three numbers are the angular coordinates.
Before making the small change to __format__, we define two helper methods: angle(n), which computes one of the angular coordinates, and angles(), which returns an iterable of all angular coordinates. We won't explain the math here; the curious can look it up on Wikipedia.
Here is the complete code:
End of explanation
test = itertools.chain('abc', 'de', 'f')
test
for i in test:
print(i)
Explanation: Note the itertools.chain function.
chain(iter1, iter2, ..., iterN):
Given a group of iterators (iter1, iter2, ..., iterN), this function creates a new iterator that chains them all together: the returned iterator yields items from iter1 until it is exhausted, then from iter2, and so on until every item of iterN has been consumed. Here is a simple example:
End of explanation
Vector([3.1, 4.2])
Vector([3.0, 4.0, 5.0])
Vector(range(10))
v1 = Vector([3, 4])
x, y = v1
x, y
v1
v1_clone = eval(repr(v1))
v1 == v1_clone
print(v1)
octets = bytes(v1)
octets
abs(v1)
bool(v1), bool(Vector([0, 0]))
v1_clone = Vector.frombytes(bytes(v1))
v1 == v1_clone
v1 = Vector([3, 4, 5])
x, y, z = v1
x, y, z
v1
v1_clone = eval(repr(v1))
v1_clone == v1
print(v1)
abs(v1)
bool(v1), bool(Vector([0, 0, 0]))
v7 = Vector(range(7))
v7
abs(v7)
v1 = Vector([3, 4, 5])
v1_clone = Vector.frombytes(bytes(v1))
v1 == v1_clone
v1 = Vector([3, 4, 5])
len(v1)
v1[0], v1[len(v1)-1], v1[-1]
v7 = Vector(range(7))
v7[-1]
v7[1:4]
v7[-1:]
v7[1,2]
v7.x
v7.y, v7.z, v7.t
v7.k
v3 = Vector(range(3))
v3.t
v3.spam
v1 = Vector([3, 4])
v2 = Vector([3.1, 4.2])
v3 = Vector([3, 4, 5])
v6 = Vector(range(6))
hash(v1), hash(v3), hash(v6)
import sys
hash(v2) == (384307168202284039 if sys.maxsize > 2 ** 32 else 357915986)
v1 = Vector([3, 4])
format(v1)
format(v1, '.2f')
format(v1, '.3e')
v3 = Vector([3, 4, 5])
format(v3)
format(Vector(range(7)))
format(Vector([1, 1]), 'h')
format(Vector([1, 1]), '.3eh')
format(Vector([1, 1]), '0.5fh')
format(Vector([1, 1, 1]), 'h')
format(Vector([2, 2, 2]), '.3eh')
format(Vector([0, 0, 0]), '0.5fh')
format(Vector([-1, -1, -1, -1]), 'h')
format(Vector([2, 2, 2, 2]), '.3eh')
format(Vector([0, 1, 0, 0]), '.05fh')
Explanation: Here are some tests of the Vector class:
End of explanation |
5,486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
automaton.is_coaccessible
Whether all its states are coaccessible, i.e., its transposed automaton is accessible; in other words, all its states can reach a final state.
Preconditions
Step1: State 3 of the following automaton cannot reach a final state.
Step2: Calling accessible returns a copy of the automaton without non-accessible states | Python Code:
import vcsn
Explanation: automaton.is_coaccessible
Whether all its states are coaccessible, i.e., its transposed automaton is accessible; in other words, all its states can reach a final state.
Preconditions:
- None
See also:
- automaton.coaccessible
- automaton.is_accessible
- automaton.trim
Examples
End of explanation
%%automaton a
context = "lal_char(abc), b"
$ -> 0
0 -> 1 a
1 -> $
2 -> 0 a
1 -> 3 a
a.is_coaccessible()
Explanation: State 3 of the following automaton cannot reach a final state.
End of explanation
a.coaccessible()
a.coaccessible().is_coaccessible()
Explanation: Calling coaccessible returns a copy of the automaton without non-coaccessible states:
End of explanation |
5,487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiple Stripe Analysis (MSA) for Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis using a suite of ground motion records scaled to multiple stripes of intensity measure. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates the results of a Multiple Stripe Analysis, from which the fragility function is built.
<img src="../../../../figures/MSA_example.jpg" width="500" align="middle">
Note: To run the code in a cell, click on the cell to select it and press SHIFT+ENTER (or the play button in the toolbar).
Step1: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the User wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file where the hysteretic parameters are contained, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If instead default parameters want to be assumed, please set the sdof_hysteresis variable to "Default"
Step2: Load ground motion records
Regarding the ground motions to be used in the Multiple Stripe Analysis, the following inputs are required
Step3: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide spectral displacement, capacity curve dependent and interstorey drift damage model type.
If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed.
Step4: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix
Step5: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above
Step6: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above
Step7: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above
Step8: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions
Step9: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above | Python Code:
import numpy as np
from rmtk.vulnerability.common import utils
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import MSA_on_SDOF
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF import MSA_utils
from rmtk.vulnerability.derivation_fragility.NLTHA_on_SDOF.read_pinching_parameters import read_parameters
%matplotlib inline
Explanation: Multiple Stripe Analysis (MSA) for Single Degree of Freedom (SDOF) Oscillators
In this method, a single degree of freedom (SDOF) model of each structure is subjected to non-linear time history analysis using a suite of ground motion records scaled to multiple stripes of intensity measure. The displacements of the SDOF due to each ground motion record are used as input to determine the distribution of buildings in each damage state for each level of ground motion intensity. A regression algorithm is then applied to derive the fragility model.
The figure below illustrates the results of a Multiple Stripe Analysis, from which the fragility function is built.
<img src="../../../../figures/MSA_example.jpg" width="500" align="middle">
Note: To run the code in a cell:
Click on the cell to select it.
Press SHIFT+ENTER on your keyboard or press the play button in the toolbar above.
End of explanation
capacity_curves_file = '/Users/chiaracasotto/GitHub/rmtk_data/capacity_curves_sdof_first_mode.csv'
sdof_hysteresis = "/Users/chiaracasotto/GitHub/rmtk_data/pinching_parameters.csv"
capacity_curves = utils.read_capacity_curves(capacity_curves_file)
capacity_curves = utils.check_SDOF_curves(capacity_curves)
utils.plot_capacity_curves(capacity_curves)
hysteresis = read_parameters(sdof_hysteresis)
Explanation: Load capacity curves
In order to use this methodology, it is necessary to provide one (or a group) of capacity curves, defined according to the format described in the RMTK manual.
Please provide the location of the file containing the capacity curves using the parameter capacity_curves_file.
If the User wants to specify the cyclic hysteretic behaviour of the SDOF system, please input the path of the file where the hysteretic parameters are contained, using the variable sdof_hysteresis. The parameters should be defined according to the format described in the RMTK manual. If instead default parameters want to be assumed, please set the sdof_hysteresis variable to "Default"
End of explanation
gmrs_folder = "../../../../../rmtk_data/MSA_records"
minT, maxT = 0.1, 2.0
no_bins = 2
no_rec_bin = 10
record_scaled_folder = "../../../../../rmtk_data/Scaling_factors"
gmrs = utils.read_gmrs(gmrs_folder)
#utils.plot_response_spectra(gmrs, minT, maxT)
Explanation: Load ground motion records
Regarding the ground motions to be used in the Multiple Stripe Analysis, the following inputs are required:
1. gmrs_folder: path to the folder containing the ground motion records to be used in the analysis. Each accelerogram needs to be in a separate CSV file as described in the RMTK manual.
2. record_scaled_folder. In this folder there should be a csv file for each Intensity Measure bin selected for the MSA, containing the names of the records that should be scaled to that IM bin, and the corresponding scaling factors. An example of this type of file is provided in the RMTK manual.
3. no_bins: number of Intensity Measure bins.
4. no_rec_bin: number of records per bin
If the user wants to plot acceleration, displacement and velocity response spectra, the function utils.plot_response_spectra(gmrs, minT, maxT) should be un-commented. The parameters minT and maxT are used to define the period bounds when plotting the spectra for the provided ground motion fields.
End of explanation
damage_model_file = "../../../../../rmtk_data/damage_model_Sd.csv"
damage_model = utils.read_damage_model(damage_model_file)
Explanation: Load damage state thresholds
Please provide the path to your damage model file using the parameter damage_model_file in the cell below.
Currently the user can provide spectral displacement, capacity curve dependent and interstorey drift damage model type.
If the damage model type is interstorey drift the user has to input interstorey drift values of the MDOF system. The user can then provide the pushover curve in terms of Vb-dfloor to be able to convert interstorey drift limit states to roof displacements and spectral displacements of the SDOF system, otherwise a linear relationship is assumed.
End of explanation
damping_ratio = 0.05
degradation = False
msa = {}; msa['n. bins']=no_bins; msa['records per bin']=no_rec_bin; msa['input folder']=record_scaled_folder
PDM, Sds, IML_info = MSA_on_SDOF.calculate_fragility(capacity_curves, hysteresis, msa, gmrs,
damage_model, damping_ratio, degradation)
Explanation: Obtain the damage probability matrix
The following parameters need to be defined in the cell below in order to calculate the damage probability matrix:
1. damping_ratio: This parameter defines the damping ratio for the structure.
2. degradation: This boolean parameter should be set to True or False to specify whether structural degradation should be considered in the analysis or not.
End of explanation
IMT = "Sa"
T = 0.47
#T = np.arange(0.4,1.91,0.01)
regression_method = "least squares"
fragility_model = MSA_utils.calculate_fragility_model(PDM,gmrs,IML_info,IMT,msa,damage_model,
T,damping_ratio, regression_method)
Explanation: Fit lognormal CDF fragility curves
The following parameters need to be defined in the cell below in order to fit lognormal CDF fragility curves to the damage probability matrix obtained above:
1. IMT: This parameter specifies the intensity measure type to be used. Currently supported options are "PGA", "Sa","Sd" and "HI" (Housner Intensity).
2. period: This parameter defines the period for which a spectral intensity measure should be computed. If Housner Intensity is selected as intensity measure a range of periods should be defined instead (for example T=np.arange(0.3,3.61,0.01)).
3. regression_method: This parameter defines the regression method to be used for estimating the parameters of the fragility functions. The valid options are "least squares" and "max likelihood".
End of explanation
minIML, maxIML = 0.01, 4
utils.plot_fragility_model(fragility_model, minIML, maxIML)
print(fragility_model['damage_states'][0:])
Explanation: Plot fragility functions
The following parameters need to be defined in the cell below in order to plot the lognormal CDF fragility curves obtained above:
* minIML and maxIML: These parameters define the limits of the intensity measure level for plotting the functions
End of explanation
taxonomy = "HI_Intact_v4_lq"
minIML, maxIML = 0.01, 3.00
output_type = "csv"
output_path = "../../../../../phd_thesis/results/damping_0.39/"
utils.save_mean_fragility(taxonomy, fragility_model, minIML, maxIML, output_type, output_path)
Explanation: Save fragility functions
The derived parametric fragility functions can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.
2. minIML and maxIML: These parameters define the bounds of applicability of the functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation
cons_model_file = "../../../../../rmtk_data/cons_model.csv"
imls = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50,
0.60, 0.70, 0.80, 0.90, 1.00, 1.20, 1.40, 1.60, 1.80, 2.00,
2.20, 2.40, 2.60, 2.80, 3.00, 3.20, 3.40, 3.60, 3.80, 4.00]
distribution_type = "lognormal"
cons_model = utils.read_consequence_model(cons_model_file)
vulnerability_model = utils.convert_fragility_vulnerability(fragility_model, cons_model,
imls, distribution_type)
utils.plot_vulnerability_model(vulnerability_model)
Explanation: Obtain vulnerability function
A vulnerability model can be derived by combining the set of fragility functions obtained above with a consequence model. In this process, the fractions of buildings in each damage state are multiplied by the associated damage ratio from the consequence model, in order to obtain a distribution of loss ratio for each intensity measure level.
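The idea behind that combination can be sketched in a few lines (an illustration of the concept only, not the RMTK implementation; the numbers below are made up):
import numpy as np
poe = np.array([0.9, 0.6, 0.2])                 # P(exceeding each damage state) at one intensity level
frac = np.append(-np.diff(poe), poe[-1])        # fraction of buildings in each damage state
frac = np.append(1.0 - poe[0], frac)            # add the fraction with no damage
damage_ratios = np.array([0.0, 0.1, 0.4, 0.8])  # hypothetical consequence model (loss ratio per state)
mean_loss_ratio = np.sum(frac * damage_ratios)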
The following parameters need to be defined in the cell below in order to calculate vulnerability functions using the above derived fragility functions:
1. cons_model_file: This parameter specifies the path of the consequence model file.
2. imls: This parameter specifies a list of intensity measure levels in increasing order at which the distribution of loss ratios are required to be calculated.
3. distribution_type: This parameter specifies the type of distribution to be used for calculating the vulnerability function. The distribution types currently supported are "lognormal", "beta", and "PMF".
End of explanation
taxonomy = "RC"
output_type = "csv"
output_path = "../../../../../rmtk_data/output/"
utils.save_vulnerability(taxonomy, vulnerability_model, output_type, output_path)
Explanation: Save vulnerability function
The derived parametric or nonparametric vulnerability function can be saved to a file in either CSV format or in the NRML format that is used by all OpenQuake input models. The following parameters need to be defined in the cell below in order to save the lognormal CDF fragility curves obtained above:
1. taxonomy: This parameter specifies a taxonomy string for the the fragility functions.
3. output_type: This parameter specifies the file format to be used for saving the functions. Currently, the formats supported are "csv" and "nrml".
End of explanation |
5,488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Checking the model with superclass hierarchy with no augmentation. (It was manually switched off in the .json file)
Step1: Run the modification of check_test_score.py so that it can work with superclass representation.
Step2: Check which core is free.
Step3: Give the path to .json.
Step4: Best .pkl scores as
Step5: Recent .pkl scores as
Step6: Check the same model with 8 augmentation.
Step7: Best .pkl scored as
Step8: Strange. Not as good as we hoped. Is there a problem with augmentation?
Let's plot the nll.
Step9: Looks like it's pretty stable at 4 and had this random strange glitch which gave the best result.
Look at the best pkl of the non-aug model again
Step10: It was. Annoying. Let's plot the nll too | Python Code:
cd ..
Explanation: Checking the model with superclass hierarchy with no augmentation. (It was manually switched off in the .json file)
End of explanation
import numpy as np
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import sklearn.metrics
import argparse
import os
import pylearn2.config.yaml_parse
Explanation: Run the modification of check_test_score.py so that it can work with superclass representation.
End of explanation
%env THEANO_FLAGS = 'device=gpu3,floatX=float32,base_compiledir=~/.theano/stonesoup3'
verbose = False
augment = 1
settings = neukrill_net.utils.Settings("settings.json")
Explanation: Check which core is free.
End of explanation
run_settings = neukrill_net.utils.load_run_settings('run_settings/alexnet_based_extra_convlayer_with_superclasses.json',
settings, force=True)
model = pylearn2.utils.serial.load(run_settings['pickle abspath'])
# format the YAML
yaml_string = neukrill_net.utils.format_yaml(run_settings, settings)
# load proxied objects
proxied = pylearn2.config.yaml_parse.load(yaml_string, instantiate=False)
# pull out proxied dataset
proxdata = proxied.keywords['dataset']
# force loading of dataset and switch to test dataset
proxdata.keywords['force'] = True
proxdata.keywords['training_set_mode'] = 'test'
proxdata.keywords['verbose'] = False
# then instantiate the dataset
dataset = pylearn2.config.yaml_parse._instantiate(proxdata)
if hasattr(dataset.X, 'shape'):
N_examples = dataset.X.shape[0]
else:
N_examples = len(dataset.X)
batch_size = 500
while N_examples%batch_size != 0:
batch_size += 1
n_batches = int(N_examples/batch_size)
model.set_batch_size(batch_size)
X = model.get_input_space().make_batch_theano()
Y = model.fprop(X)
f = theano.function([X],Y)
import neukrill_net.encoding as enc
hier = enc.get_hierarchy()
lengths = sum([len(array) for array in hier])
y = np.zeros((N_examples*augment,lengths))
# get the data specs from the cost function using the model
pcost = proxied.keywords['algorithm'].keywords['cost']
cost = pylearn2.config.yaml_parse._instantiate(pcost)
data_specs = cost.get_data_specs(model)
i = 0
for _ in range(augment):
# make sequential iterator
iterator = dataset.iterator(batch_size=batch_size,num_batches=n_batches,
mode='even_sequential', data_specs=data_specs)
for batch in iterator:
if verbose:
print(" Batch {0} of {1}".format(i+1,n_batches*augment))
y[i*batch_size:(i+1)*batch_size,:] = f(batch[0])
i += 1
Explanation: Give the path to .json.
End of explanation
logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)])
print("Log loss: {0}".format(logloss))
Explanation: Best .pkl scores as:
End of explanation
logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)])
print("Log loss: {0}".format(logloss))
%env THEANO_FLAGS = device=gpu2,floatX=float32,base_compiledir=~/.theano/stonesoup2
%env
Explanation: Recent .pkl scores as: (rerun relevant cells with a different path)
End of explanation
import numpy as np
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import sklearn.metrics
import argparse
import os
import pylearn2.config.yaml_parse
verbose = False
augment = 1
settings = neukrill_net.utils.Settings("settings.json")
run_settings = neukrill_net.utils.load_run_settings('run_settings/alexnet_based_extra_convlayer_with_superclasses_aug.json',
settings, force=True)
model = pylearn2.utils.serial.load(run_settings['pickle abspath'])
# format the YAML
yaml_string = neukrill_net.utils.format_yaml(run_settings, settings)
# load proxied objects
proxied = pylearn2.config.yaml_parse.load(yaml_string, instantiate=False)
# pull out proxied dataset
proxdata = proxied.keywords['dataset']
# force loading of dataset and switch to test dataset
proxdata.keywords['force'] = True
proxdata.keywords['training_set_mode'] = 'test'
proxdata.keywords['verbose'] = False
# then instantiate the dataset
dataset = pylearn2.config.yaml_parse._instantiate(proxdata)
if hasattr(dataset.X, 'shape'):
N_examples = dataset.X.shape[0]
else:
N_examples = len(dataset.X)
batch_size = 500
while N_examples%batch_size != 0:
batch_size += 1
n_batches = int(N_examples/batch_size)
model.set_batch_size(batch_size)
X = model.get_input_space().make_batch_theano()
Y = model.fprop(X)
f = theano.function([X],Y)
import neukrill_net.encoding as enc
hier = enc.get_hierarchy()
lengths = sum([len(array) for array in hier])
y = np.zeros((N_examples*augment,lengths))
# get the data specs from the cost function using the model
pcost = proxied.keywords['algorithm'].keywords['cost']
cost = pylearn2.config.yaml_parse._instantiate(pcost)
data_specs = cost.get_data_specs(model)
i = 0
for _ in range(augment):
# make sequential iterator
iterator = dataset.iterator(batch_size=batch_size,num_batches=n_batches,
mode='even_sequential', data_specs=data_specs)
for batch in iterator:
if verbose:
print(" Batch {0} of {1}".format(i+1,n_batches*augment))
y[i*batch_size:(i+1)*batch_size,:] = f(batch[0])
i += 1
Explanation: Check the same model with 8 augmentation.
End of explanation
logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)])
print("Log loss: {0}".format(logloss))
Explanation: Best .pkl scored as:
End of explanation
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
#import holoviews as hl
#load_ext holoviews.ipython
import sklearn.metrics
m = pylearn2.utils.serial.load(
"/disk/scratch/neuroglycerin/models/alexnet_based_extra_convlayer_with_superclasses_aug_recent.pkl")
channel = m.monitor.channels["valid_y_y_1_nll"]
plt.plot(channel.example_record,channel.val_record)
Explanation: Strange. Not as good as we hoped. Is there a problem with augmentation?
Let's plot the nll.
End of explanation
import numpy as np
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import sklearn.metrics
import argparse
import os
import pylearn2.config.yaml_parse
verbose = False
augment = 1
settings = neukrill_net.utils.Settings("settings.json")
run_settings = neukrill_net.utils.load_run_settings('run_settings/alexnet_based_extra_convlayer_with_superclasses.json',
settings, force=True)
model = pylearn2.utils.serial.load(run_settings['pickle abspath'])
# format the YAML
yaml_string = neukrill_net.utils.format_yaml(run_settings, settings)
# load proxied objects
proxied = pylearn2.config.yaml_parse.load(yaml_string, instantiate=False)
# pull out proxied dataset
proxdata = proxied.keywords['dataset']
# force loading of dataset and switch to test dataset
proxdata.keywords['force'] = True
proxdata.keywords['training_set_mode'] = 'test'
proxdata.keywords['verbose'] = False
# then instantiate the dataset
dataset = pylearn2.config.yaml_parse._instantiate(proxdata)
if hasattr(dataset.X, 'shape'):
N_examples = dataset.X.shape[0]
else:
N_examples = len(dataset.X)
batch_size = 500
while N_examples%batch_size != 0:
batch_size += 1
n_batches = int(N_examples/batch_size)
model.set_batch_size(batch_size)
X = model.get_input_space().make_batch_theano()
Y = model.fprop(X)
f = theano.function([X],Y)
import neukrill_net.encoding as enc
hier = enc.get_hierarchy()
lengths = sum([len(array) for array in hier])
y = np.zeros((N_examples*augment,lengths))
# get the data specs from the cost function using the model
pcost = proxied.keywords['algorithm'].keywords['cost']
cost = pylearn2.config.yaml_parse._instantiate(pcost)
data_specs = cost.get_data_specs(model)
i = 0
for _ in range(augment):
# make sequential iterator
iterator = dataset.iterator(batch_size=batch_size,num_batches=n_batches,
mode='even_sequential', data_specs=data_specs)
for batch in iterator:
if verbose:
print(" Batch {0} of {1}".format(i+1,n_batches*augment))
y[i*batch_size:(i+1)*batch_size,:] = f(batch[0])
i += 1
logloss = sklearn.metrics.log_loss(dataset.y[:, :len(settings.classes)], y[:, :len(settings.classes)])
print("Log loss: {0}".format(logloss))
Explanation: Looks like it's pretty stable at 4 and had this random strange glitch which gave the best result.
Look at the best pkl of the non-aug model again: (just to confirm that it was indeed good)
End of explanation
m = pylearn2.utils.serial.load(
"/disk/scratch/neuroglycerin/models/alexnet_based_extra_convlayer_with_superclasses.pkl")
channel = m.monitor.channels["valid_y_y_1_nll"]
plt.plot(channel.example_record,channel.val_record)
Explanation: It was. Annoying. Let's plot the nll too:
End of explanation |
5,489 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Turing machine computation
Tape
We will represent the tape as a list of tape symbols and we will represent tape symbols as Python strings.
The string ' ' represents the blank symbol.
The string '|>' represents the start symbol, which indicates the beginning of the tape.
States
We will also encode states as Python strings.
The string 'start' represents the start state.
The strings 'accept', 'reject', and 'halt' represent final states of the machine, which indicate acceptance, rejection, and halting, respectively.
Simulation
The following function simulates a given Turing machine for a given number of steps on a given input
Step2: The following function checks that the transition functions satisfies some simple syntactic requirements (don't move to the left of the start symbol, don't remove or add start symbols, don't change state after accepting, rejecting, or halting.)
Step3: Examples
Copy machine
The following Turing machine copies its input, i.e., it computes the function $f(x)=xx$.
The actual implementation uses different versions of the '0' and '1' symbol (called '0-read', '0-write' and '1-read', '1-write') in the two copies of the string $x$.
We could replace those by regular '0' and '1' symbols by sweeping once more over the tape before the end of the computation.
Step4: Here is the full transitions function table of the machine
Step5: Here is an interactive simulation of the copy Turing machine (requires that ipython notebook is run locally).
You can either click on the simulate button to view the computation during a given range of steps or you can drag the current step slider to view the configuration of the machine at a particular step. (If you click on the current step slider, you can also change it using the arrow keys.)
Step6: Power-of-2 machine
The following Turing machine determines if the input is the unary encoding of a power of 2.
Furthermore, given any string $1^n$, it outputs a string of the form ${0,1}^n2^i$, where $i$ is the largest number such that $2^i$ divides $n$.
Step7: Here is the full transition function table of the Turing machine
Step8: Here is an interactive simulation of the power Turing machine (requires that ipython notebook is run locally).
You can either click on the simulate button to view the computation during a given range of steps or you can drag the current step slider to view the configuration of the machine at a particular step.
(If you click on the current step slider, you can also change it using the arrow keys.) | Python Code:
def run(transitions, input, steps):
    """Simulate the Turing machine for the given number of steps on the given input."""
# convert input from string to list of symbols
# we use '|>' as a symbol to indicate the beginning of the tape
input = ['|>'] + list(input) + [' ']
# sanitize transitions for 'accept' and 'reject' states and for symbol '|>'
transitions = sanitize_transitions(transitions)
# create initial configuration
c = Configuration(state='start', head=1, tape=input)
for i in range(0, steps):
# read tape content under head
current = c.state
read = c.tape[c.head]
# lookup transition based on state and read symbol
next, write, move = transitions(current, read)
# update configuration
c.state = next
c.tape[c.head] = write
c.head += move
if c.head >= len(c.tape):
c.tape += [' ']
# return final configuration
return c
Explanation: Turing machine computation
Tape
We will represent the tape as a list of tape symbols and we will represent tape symbols as Python strings.
The string ' ' represents the blank symbol.
The string '|>' represents the start symbol, which indicates the beginning of the tape.
States
We will also encode states as Python strings.
The string 'start' represents the start state.
The strings 'accept', 'reject', and 'halt' represent final states of the machine, which indicate acceptance, rejection, and halting, respectively.
Simulation
The following function simulates a given Turing machine for a given number of steps on a given input
End of explanation
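The helpers Configuration and sanitize_transitions used by run are assumed to be defined elsewhere in the notebook; a minimal sketch of what they might look like (an assumption for illustration, not the original code):
class Configuration:
    """Hypothetical stand-in: holds the current state, head position and tape."""
    def __init__(self, state, head, tape):
        self.state = state
        self.head = head
        self.tape = tape

def sanitize_transitions(transitions):
    """Hypothetical wrapper: final states loop on themselves and '|>' is never overwritten."""
    def sanitized(current, read):
        if current in ['accept', 'reject', 'halt']:
            return current, read, 1 if read == '|>' else 0
        if read == '|>':
            return current, '|>', 1
        return transitions(current, read)
    return sanitized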
def check_transitions(transitions, states, alphabet):
transitions = sanitize_transitions(transitions)
for current in states:
for read in alphabet:
next, write, move = transitions(current, read)
# we either stay in place or move one position
# to the left or right
assert(move in [-1,0,1])
# if we read the begin symbol,
if read == '|>':
# we need to write it back
assert(write == '|>')
# we need to move to the right
assert(move == 1)
else:
# we cannot write the begin symbol
assert(write != '|>')
# if we are in one of the final states
if current in ['accept', 'reject', 'halt']:
# we cannot change to a different state
assert(next == current)
print("transition checks passed")
Explanation: The following function checks that the transition function satisfies some simple syntactic requirements (don't move to the left of the start symbol, don't remove or add start symbols, don't change state after accepting, rejecting, or halting.)
End of explanation
def transitions_copy(current, read):
if read == '|>':
return 'start', read, 1
elif current == 'start':
if 'write' not in read:
return read + '-write', read + '-read', 1
else:
return 'accept', read, 1
elif 'write' in current:
if read != ' ':
return current, read, 1
else:
return 'rewind', current, -1
elif current == 'rewind':
if 'read' not in read:
return current, read, -1
else:
return 'start', read, 1
Explanation: Examples
Copy machine
The following Turing machine copies its input, i.e., it computes the function $f(x)=xx$.
The actual implementation uses different versions of the '0' and '1' symbol (called '0-read', '0-write' and '1-read', '1-write') in the two copies of the string $x$.
We could replace those by regular '0' and '1' symbols by sweeping once more over the tape before the end of the computation.
End of explanation
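Before building the table below, one could sanity-check the transition function with the helper defined earlier; a usage sketch (the state and alphabet lists here are chosen by hand, and the outcome depends on the sanitize helper described above):
check_transitions(transitions_copy,
                  ['start', '0-write', '1-write', 'rewind', 'accept', 'reject', 'halt'],
                  ['0', '1', '0-read', '1-read', '0-write', '1-write', ' ', '|>'])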
transitions_table(transitions_copy,
['start', '0-write', '1-write', 'rewind'],
['0', '1', '0-read', '1-read', '0-write', '1-write'])
Explanation: Here is the full transitions function table of the machine:
End of explanation
simulate(transitions_copy, input='10011', unary=False)
Explanation: Here is an interactive simulation of the copy Turing machine (requires that ipython notebook is run locally).
You can either click on the simulate button to view the computation during a given range of steps or you can drag the current step slider to view the configuration of the machine at a particular step. (If you click on the current step slider, you can also change it using the arrow keys.)
End of explanation
def transitions_power(current,read):
if read == '|>':
return 'start', read, 1;
elif current == 'rewind':
return current, read, -1
elif read == 'x':
return current, read, 1
elif current == 'start':
if read != '1':
return 'reject', read, 1
else:
return 'start-even', read, 1
elif 'even' in current and read == '1':
return 'odd', 'x', 1
elif current == 'odd' and read == '1':
return 'even', read, 1
elif current == 'odd':
if read == ' ':
return 'rewind', '2', -1
else:
return current, read, 1
elif current == 'start-even' and read != '1':
return 'accept', read, -1
elif current == 'even' and read != '1':
return 'reject', read, -1
Explanation: Power-of-2 machine
The following Turing machine determines if the input is the unary encoding of a power of 2.
Furthermore, given any string $1^n$, it outputs a string of the form ${0,1}^n2^i$, where $i$ is the largest number such that $2^i$ divides $n$.
End of explanation
transitions_table(transitions_power,
['start', 'start-even', 'even', 'odd', 'rewind'],
['0', '1', 'x', ' ', '|>'])
Explanation: Here is the full transition function table of the Turing machine:
End of explanation
simulate(transitions_power, input_unary=16, step_to=200, unary=True)
Explanation: Here is an interactive simulation of the power Turing machine (requires that ipython notebook is run locally).
You can either click on the simulate button to view the computation during a given range of steps or you can drag the current step slider to view the configuration of the machine at a particular step.
(If you click on the current step slider, you can also change it using the arrow keys.)
End of explanation |
5,490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Arxiv summary auto translation
Set up
import modules.
Step1: Set credentials.<br>
Need to prepare the credentials file from the GCP console.
Step2: Set the dates.<br>
Calling it with no arguments sets today as the reference date.<br>
Alternatively, you can pass a reference date such as '20150417' and the number of days to go back.
Step3: Category list.<br>
Set the category key you would like to check.<br>
You can add categories by following https://arxiv.org/help/api/user-manual#subject_classifications.
Step4: Set the query.<br>
How to make a query: https://arxiv.org/help/api/index#about
Step5: Get bulk data from arXiv.
Step6: Set target language and create the instance.<br>
You can select {'ja','de','es','fr','ko','pt','tr','zh-CN'} as of 20/3/2017.
Step7: Execute translations
nmt = True | Python Code:
import os
from modules.DataArxiv import get_date
from modules.DataArxiv import execute_query
from modules.Translate import Translate
Explanation: Arxiv summary auto translation
Set up
import modules.
End of explanation
CREDENTIALS_JSON = "credentials.json"
CREDENTIALS_PATH = os.path.normpath(
os.path.join(os.getcwd(), CREDENTIALS_JSON)
)
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = CREDENTIALS_PATH
Explanation: Set credentials.<br>
Need to prepare the credentials file from the GCP console.
End of explanation
REF_DATE, PREV_DATE = get_date()
#REF_DATE, PREV_DATE = get_date(baseday='20170420', beforeNdays=7)
Explanation: Set the dates.<br>
Calling it with no arguments sets today as the reference date.<br>
Alternatively, you can pass a reference date such as '20150417' and the number of days to go back.
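A rough sketch of what a helper like get_date might do (an assumption for illustration; the real modules.DataArxiv implementation may differ):
from datetime import datetime, timedelta

def get_date(baseday=None, beforeNdays=1):
    ref = datetime.strptime(baseday, '%Y%m%d') if baseday else datetime.today()
    prev = ref - timedelta(days=beforeNdays)
    return ref.strftime('%Y%m%d'), prev.strftime('%Y%m%d')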
End of explanation
CATEGORY_LIST = {
'ml' : ["cat:stat.ML","cat:cs.AI","cat:cs.CC","cat:cs.CE","cat:cs.CG","cat:cs.CV","cat:cs.DC","cat:cs.IR","cat:cs.IT","cat:cs.NE"]
, 'ph' : ["hep-ph"]
, 'th' : ["hep-th"]
}
CATEGORY_KEY = 'ml'
Explanation: Category list.<br>
Set the category key you would like to check.<br>
You can add categories by following https://arxiv.org/help/api/user-manual#subject_classifications.
End of explanation
CATEGORY = "+OR+".join(CATEGORY_LIST[CATEGORY_KEY])
QUERY = '''({})+AND+submittedDate:[{}0000+TO+{}0000]'''.format(
CATEGORY,PREV_DATE,REF_DATE
)
Explanation: Set the query.<br>
How to make a query : https://arxiv.org/help/api/index#about
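For reference, a search_query built this way ends up in a request to the arXiv API endpoint that looks roughly like this (illustrative URL, abbreviated category list):
# http://export.arxiv.org/api/query?search_query=(cat:stat.ML+OR+cat:cs.AI)+AND+submittedDate:[201704130000+TO+201704200000]&start=0&max_results=200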
End of explanation
BULK = execute_query(QUERY, prune=True, start=0, max_results=200)
Explanation: Get bulk data from arXiv.
End of explanation
TARGET_LANG = 'ja'
TRANSLATE_CLIENT = Translate(TARGET_LANG)
Explanation: Set target language and create the instance.<br>
You can select {'ja','de','es','fr','ko','pt','tr','zh-CN'} as of 20/3/2017.
End of explanation
TRANSLATE_CLIENT.check_arxiv(BULK, nmt=True)
Explanation: Execute translations
nmt = True : neural machine translation<br>
nmt = False : previous version
End of explanation |
5,491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Processing
Table of Contents
Introduction -- Preparations (Get fulltext, Segment source text, Read segments into a variable, Tokenising, Stemming / Lemmatising, Eliminate Stopwords) -- Characterise passages
Step1: Segment source text<a name="SegmentSourceText"></a>
Next, as mentioned above, we want to associate information with only passages of the text, not the text as a whole. Therefore, the text has to be segmented. The one big single file is being split into meaningful smaller chunks. What exactly constitutes a meaningful chunk -- a chapter, an article, a paragraph etc. -- cannot be known independently of the text in question and of the research questions. Therefore, a typical approach is that the scholar either splits the text manually or inserts some symbols that otherwise do not appear in the text. This is what we have here. Then, processing tools can find these symbols and split the file accordingly. For keeping things neat and orderly, the resulting files are saved in a directory of their own...
(Note here and in the following that in most cases, when the program is counting, it does so beginning with zero. Which means that if we end up with 20 segments, they are going to be called segment.0.txt, segment.1.txt, ..., segment.19.txt. There is not going to be a segment bearing the number twenty, although we do have twenty segments. The first one has the number zero and the twentieth one has the number nineteen. Even for more experienced coders, this sometimes leads to mistakes, called "off-by-one errors".)
Step2: Read segments into a variable <a name="ReadSegmentsIntoVariable"></a>
From the segments just created, we rebuild our corpus, iterating through them and reading them into another variable (which now stores, technically speaking, not just one long string of characters, as the variable input in the first code snippet did, but a list of strings, one for each segment).
Step3: Now we should have 20 strings in the variable corpus to play around with
Step4: For a quick impression, let's see the opening 500 characters of an arbitrary one of them; in this case, we take the fourth segment, i.e. the one at position '3' (remember that counting starts at 0)
Step5: Tokenising <a name="Tokenising"></a>
"Tokenising" means splitting the long lines of the input into single words. Since we are dealing with plain latin, we can use the default split method which relies on spaces to identify word boundaries. (In languages like Japanese or scripts like Arabic, this is more difficult.) Note that we do not compensate for words that are hyphenated/split across lines here! That is something that should be catered for in the transcription itself.
Step6: Now, instead of corpus, we can use tokenised for our subsequent routines
Step7: Already, we can have a first go at finding the most frequent words for a segment. (For this we use a simple library of functions that we import by the name of 'collections'.)
Step8: Nicer layout
Step9: Looks better now, doesn't it?
(The bold number in the very first column is the id as it were of the respective lemma. You see that 'hoc' has the id '0' - because it was the first word that occurred at all -, and 'ut' has the id '5' because it was the sixth word in our segment. Most probably, currently we are not interested in the position of the word and can ignore the first column.)
Stemming / Lemmatising <a name="StemmingLemmatising"></a>
Next, since we prefer to count different word forms as one and the same "lemma", we have to do a step called "lemmatisation". In languages that are not strongly inflected, like English, one can get away with "stemming", i.e. just eliminating the ending of words
Step10: So, we again build a dictionary of key-value pairs associating all the lemmata ("values") with their wordforms ("keys"). And afterwards, we can quickly look up the value under a given key
Step11: Again, a quick test
Step12: Now we can use this dictionary to build a new list of words, where only lemmatised forms occur
Step13: Again, let's see the first 50 words from the fourth segment, and compare them with the "tokenised" variant above
Step14: As you can see, the original text is lost now from the data that we are currently working with (unless we add another dimension to our lemmatised variable which can keep the original word form). But let us see if something in the 10 most frequent words has changed
Step15: Yes, things have changed
Step16: Now let's try and suppress the stopwords in the segments (and see what the "reduced" fourth segment gives)...
Step17: With this, we can already create a kind of first "profile" of, say, our first six segments, listing the most frequent words in each of them
Step18: Yay, look here, we have our words "indis", "tributum", "pensum" from the top ten above again, but this time the non-significant (for our present purposes) words in-between have been eliminated. Instead, new words like "numerata", "operis" etc. have made it into the top ten.
Step19: <div class="alert alertbox alert-success">So far our initial analyses, then. There are several ways in which we can continue now. We see that there are still words (like 'damnatione', 'tributorum' in the first or 'statuunt' in the second segment) that are not covered by our lemmatisation process. Also, abbreviations (like 'iur' in the second segment) could be expanded either in the transcription or by adding an appropriate line to our list of lemmata. Words like 'dom' in the fifth segment could maybe be added to the list of stopwords? Anyway, the need to review these two lists (lemmata/stopwords) is explained further below, and that is something that should definitely be done - after all, they were taken from the context of quite another project, and a scholar should check closely what is being suppressed and what is being replaced in the text at hand.</div>
<div class="alert alertbox alert-success">But we could also do more sophisticated things with the list. We could e.g. use either our lemma list or our stopwords list to filter out certain words, like all non-substantives. Or we could reduce all mentions of a certain name or literary work to a specific form (that would be easily recognizable in all the places).</div>
However, we can already observe that meaningful words like "indios/indis" are maybe not so helpful in characterising individual passages of this work, since they occur all over the place. After all, the work is called "De Indiarum Iure" and deals with various questions all related to indigenous people. Also, we would like to give some weight to the fact that a passage may consist of all stopwords and perhaps one or two substantial words, whereas another might be full of substantial words and few stopwords only (think e.g. of an abstract or an opening chapter describing the rest of the work). Or, since we have text segments of varying length, we would like our figures to reflect the fact that a tenfold occurrence in a very short passage may be more significant than a tenfold occurrence in a very, very, very long passage.
These phenomena are treated with more mathematical tools, so let's say that our preparatory work is done ...
Characterise passages
Step20: You can see how our corpus of four thousand "tokens" actually contains only one and a half thousand different words (plus stopwords, but these are at maximum 384). And, in contrast to simpler numbers that have been filtered out by our stopwords filter, I have left years like "1610" in place.
Calculate Terms' Text Frequencies (TF) <a name="CalculateTF"/>
However, our "vocab" object contains more than just all the unique words in our corpus. Let's get some information about it
Step21: It is actually a table with 20 rows (the number of our segments) and 1.672 columns (the number of unique words in the corpus). So what we do have is a table where for each segment the amount of occurrences of every "possible" (in the sense of used somewhere in the corpus) word is listed.
("Sparse" means that the majority of fields is zero. And 2.142 fields are populated, which is more than the number of unique words in the corpus (1.672, see above) - that's obviously because some words occur in multiple segments = rows. Not much of a surprise, actually.)
Here is the whole table
Step22: Each row of this table is a kind of fingerprint of a segment
Step23: Now we have seen above that "indis" is occurring in all of the segments, because, as the title indicates, the whole work is about issues related to the Indies and to indigenous people. When we want to characterize a segment by referring to some of its words, is there a way to weigh down words like "indis" a little bit? Not filter them out completely, as we do with stopwords, but give them just a little less weight than words not appearing all over the place? Yes there is...
## Inverse Document Frequencies (IDF) and TF-IDF <a name="CalculateTFIDF"/>
There is a measure called "text frequency / (inverse) document frequency" that combines a local measure (how frequently a word appears in a segment, in comparison to the other words appearing in the same segment, viz. the table above), with a global measure (how frequently the word appears throughout the whole corpus). Roughly speaking, we have to add to the table above a new, global, element
Step24: Now let's print a more qualified "top 10" words for each segment
Step25: <div class="alert alertbox alert-success">You can see that, in the fourth segment, pensum and tributum have moved up while indis has fallen from the first to the third place. But in other segments you can also see that abbreviations like "fol", "gl" or "hom" still are a major nuisance, and so are spanish passages. It would surely help to improve our stopwords and lemma lists.</div>
<div class="alert alertbox alert-success">Of course, having more text would also help
Step26: Extending the dimensions <a name="AddDimensions"/>
Of course, there is no reason why the dimensions should be restricted to or identical with the vocabulary (or the occurring n-grams, for that matter). In fact, in the examples above, we have dropped some of the words already by using our list of stopwords. <font color="green">We could also add other dimensions that are of interest for our current research question. We could add a dimension for the year in which the texts have been written, for their citing a certain author, or merely for their position in the encompassing work...</font>
Since in our examples, the position is represented in the "row number" and counting citations of a particular author require some more normalisations (e.g. with the lemmatisation dictionary above), let's add a dimension for the length of the respective segment (in characters) and another one for the number of occurrences of "_" (in our sample transcriptions, this character had been used to mark citations, although admittedly not all of them), just so you get the idea
Step27: You may notice that the segment with most occurrences of "_" (taken with a grain of salt, that's likely the segment with most citations), is not a particularly long one. If we had systematic markup of citations or author names in our transcription, we could be more certain or add even more columns/"dimensions" to our table.
If you bear with me for a final example, here is adding the labels that you could see in our initial one "big source file"
Step28: Word Clouds <a name="WordClouds"/>
We can use a library that takes word frequencies like above, calculates corresponding relative sizes of words and creates nice wordcloud images for our sections (again, taking the fourth segment as an example) like this
Step32: In order to have a nicer overview over the many segments than is possible in this notebook, let's create a new html file listing some of the characteristics that we have found so far...
Step33: This should have created a nice html file which we can open here.
Similarity <a name="DocumentSimilarity"/>
Also, once we have a representation of our text as a vector - which we can imagine as an arrow that goes a certain distance in one direction, another distance in another direction and so on - we can compare the different arrows. Do they go the same distance in a particular direction? And maybe almost the same in another direction? This would mean that one of the terms of our vocabulary has the same weight in both texts. Comparing the weight of our many, many dimensions, we can develop a measure for the similarity of the texts.
(Probably, similarity in words that are occurring all over the place in the corpus should not count so much, and in fact it is attenuated by our arrows being made up of tf/idf weights.)
Comparing arrows means calculating with angles and technically, what we are computing is the "cosine similarity" of texts. Again, there is a library ready for us to use (but you can find some documentation here, here and here.)
Step34: <div class="alert alertbox alert-success">Of course, in every set of documents, we will always find two that are similar in the sense of them being more similar to each other than to the other ones. Whether or not this actually *means* anything in terms of content is still up to scholarly interpretation. But at least it means that a scholar can look at the two documents, and when she determines that they are not so similar after all, then perhaps there is something interesting to say about similar vocabulary used for different purposes. Or the other way round
Step41: <div class="alert alertbox alert-success">Our spanish wordfiles ([lemmata list](Solorzano/wordforms-es.txt) and [stopwords list](Solorzano/stopwords-es.txt)) are quite large and generous - they spare us some work of resolving quite a lot of abbreviations. However, since they actually originate from a completely different project, it is very unlikely that this comes without mistakes. Also, some lemmata (like "de+el" in the eighth segment) are not really such. So we urgently need to clean our wordlist and adapt it to the current text material!</div>
Now imagine how we would bring the two documents together in a vector space. We would generate dimensions for all the words of our spanish vocabulary and would end up with a common space of roughly twice as many dimensions as before - and the latin work would be only in the first half of the dimensions and the spanish work only in the second half. The respective other half would be populated with only zeroes. So in effect, we would not really have a common space or something on the basis of which we could compare the two works.
Step42: Again, the resulting file can be opened here.
Translations?
Maybe there is an approach to inter-lingual comparison after all. Here is the API documentation of conceptnet.io, which we can use to lookup synonyms, related terms and translations. Like with such a URI | Python Code:
# This is the path to our file
bigsourcefile = 'Solorzano/Sections_I.1_TA.txt'
# We use a variable 'input' for keeping its contents.
input = open(bigsourcefile, encoding='utf-8').readlines()
# Just for information, let's see the first 10 lines of the file.
input[0:10] # actually, since python starts counting with '0', we get 11 lines.
# and since there is no line wrapping in the source file,
# a line can be quite long.
# You can see the lines ending with a "newline" character "\n" in the output.
Explanation: Text Processing
Table of Contents
<p><div class="lev1 toc-item"><a href="#Text-Processing" data-toc-modified-id="Text-Processing-1"><span class="toc-item-num">1 </span>Text Processing</a></div><div class="lev2 toc-item"><a href="#Introduction" data-toc-modified-id="Introduction-11"><span class="toc-item-num">1.1 </span>Introduction</a></div><div class="lev1 toc-item"><a href="#Preparations" data-toc-modified-id="Preparations-2"><span class="toc-item-num">2 </span>Preparations</a></div><div class="lev2 toc-item"><a href="#Get-fulltext-" data-toc-modified-id="Get-fulltext--21"><span class="toc-item-num">2.1 </span>Get fulltext </a></div><div class="lev2 toc-item"><a href="#Segment-source-text" data-toc-modified-id="Segment-source-text-22"><span class="toc-item-num">2.2 </span>Segment source text</a></div><div class="lev2 toc-item"><a href="#Read-segments-into-a-variable-" data-toc-modified-id="Read-segments-into-a-variable--23"><span class="toc-item-num">2.3 </span>Read segments into a variable </a></div><div class="lev2 toc-item"><a href="#Tokenising-" data-toc-modified-id="Tokenising--24"><span class="toc-item-num">2.4 </span>Tokenising </a></div><div class="lev2 toc-item"><a href="#Stemming-/-Lemmatising-" data-toc-modified-id="Stemming-/-Lemmatising--25"><span class="toc-item-num">2.5 </span>Stemming / Lemmatising </a></div><div class="lev2 toc-item"><a href="#Eliminate-Stopwords-" data-toc-modified-id="Eliminate-Stopwords--26"><span class="toc-item-num">2.6 </span>Eliminate Stopwords </a></div><div class="lev1 toc-item"><a href="#Characterise-passages:-TF/IDF" data-toc-modified-id="Characterise-passages:-TF/IDF-3"><span class="toc-item-num">3 </span>Characterise passages: TF/IDF</a></div><div class="lev2 toc-item"><a href="#Build-vocabulary-" data-toc-modified-id="Build-vocabulary--31"><span class="toc-item-num">3.1 </span>Build vocabulary </a></div><div class="lev2 toc-item"><a href="#Calculate-Terms'-Text-Frequencies-(TF)-" data-toc-modified-id="Calculate-Terms'-Text-Frequencies-(TF)--32"><span class="toc-item-num">3.2 </span>Calculate Terms' Text Frequencies (TF) </a></div><div class="lev2 toc-item"><a href="#Normalise-TF-" data-toc-modified-id="Normalise-TF--33"><span class="toc-item-num">3.3 </span>Normalise TF </a></div><div class="lev2 toc-item"><a href="#Inverse-Document-Frequencies-(IDF)-and-TF-IDF-" data-toc-modified-id="Inverse-Document-Frequencies-(IDF)-and-TF-IDF--34"><span class="toc-item-num">3.4 </span>Inverse Document Frequencies (IDF) and TF-IDF </a></div><div class="lev1 toc-item"><a href="#Vector-Space-Model-of-the-text-" data-toc-modified-id="Vector-Space-Model-of-the-text--4"><span class="toc-item-num">4 </span>Vector Space Model of the text </a></div><div class="lev2 toc-item"><a href="#Another-method-to-generate-the-dimensions:-n-grams-" data-toc-modified-id="Another-method-to-generate-the-dimensions:-n-grams--41"><span class="toc-item-num">4.1 </span>Another method to generate the dimensions: n-grams </a></div><div class="lev2 toc-item"><a href="#Extending-the-dimensions-" data-toc-modified-id="Extending-the-dimensions--42"><span class="toc-item-num">4.2 </span>Extending the dimensions </a></div><div class="lev2 toc-item"><a href="#Word-Clouds-" data-toc-modified-id="Word-Clouds--43"><span class="toc-item-num">4.3 </span>Word Clouds </a></div><div class="lev2 toc-item"><a href="#Similarity-" data-toc-modified-id="Similarity--44"><span class="toc-item-num">4.4 </span>Similarity </a></div><div class="lev2 toc-item"><a href="#Clustering-" data-toc-modified-id="Clustering--45"><span 
class="toc-item-num">4.5 </span>Clustering </a></div><div class="lev1 toc-item"><a href="#Working-with-several-languages" data-toc-modified-id="Working-with-several-languages-5"><span class="toc-item-num">5 </span>Working with several languages</a></div><div class="lev2 toc-item"><a href="#Translations?" data-toc-modified-id="Translations?-51"><span class="toc-item-num">5.1 </span>Translations?</a></div><div class="lev1 toc-item"><a href="#Graph-based-NLP" data-toc-modified-id="Graph-based-NLP-6"><span class="toc-item-num">6 </span>Graph-based NLP</a></div><div class="lev1 toc-item"><a href="#Topic-Modelling" data-toc-modified-id="Topic-Modelling-7"><span class="toc-item-num">7 </span>Topic Modelling</a></div><div class="lev1 toc-item"><a href="#Manual-Annotation" data-toc-modified-id="Manual-Annotation-8"><span class="toc-item-num">8 </span>Manual Annotation</a></div><div class="lev1 toc-item"><a href="#Further-information" data-toc-modified-id="Further-information-9"><span class="toc-item-num">9 </span>Further information</a></div>
## Introduction
This is an introduction to some algorithms used in text analysis. While I cannot define **what questions** a scholar can ask, I can and do describe here **what kind of information** about text some popular methods can deliver. From this, you need to draw on your own research interests and creativity...
I will describe methods of finding words that are characteristic for a certain passage ("tf/tdf"), constructing fingerprints or "wordclouds" for passages that go beyond the most significant words ("word vectors"). Of course, an important resource in text analysis is the hermeneutic interpretation of the scholar herself, so I will present a method of adding manual annotations to the text, and finally I will also say something about possible approaches to working across languages.
At the moment the following topics are still waiting to be discussed: grouping passages according to their similarity ("clustering"), and forming an idea about different contexts being treated in a passage ("topic modelling"). Some more prominent approaches in the areas that have been mentioned so far are "collocation" analyses and the "word2vec" tool; I would like add discussions of these at a later moment.
"Natural language processing" in the strict sense, i.e. analyses that have an understanding of how a language works, with its grammar, different modes, times, cases and the like, are *not* going to be covered; this implies "stylometric" analyses. Nor are there any discussions of "artificial intelligence" approaches. Maybe these can be discussed at another occasion and on another page.
For many of the steps discussed on this page there are ready-made tools and libraries, often with easy interfaces. But first, it is important to understand what these tools are **actually doing** and how their results are affected by the **selection of parameters** (that one can or cannot modify).
And second, most of these tools expect the **input to be in some particular format**, say, a series of plaintext files in their own directory, a list of word/number)-pairs, a table or a series of integer (or floating point) numbers, etc. So, by understanding the process, you should be better prepared to provide your text to the tools in the most productive way.
Finally, it is important to be aware of what information is **lost** at which point in the process. If the research requires so, one can then either look for a different tool or approach to this step (e.g. using an additional dimension in the list of words to keep both original and regularized word forms, or to remember the position of the current token in the original text), or one can compensate for the data loss (e.g. offering a lemmatised search to find occurrences after the analysis returns only normalised word forms)...
The programming language used in the following examples is called "python" and the tool used to get prose discussion and code samples together is called "jupyter". In jupyter, you have a "notebook" that you can populate with text or code and a program that pipes a nice rendering of the notebook to a web browser. In this notebook, in many places, the output that the code samples produce is printed right below the code itself. Sometimes this can be quite a lot of output and depending on your viewing environment you might have to scroll quite some way to get to the continuation of the discussion. You can save your notebook online (the current one is [here at github](https://github.com/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Solorzano.ipynb)) and there is an online service, nbviewer, able to render any notebook that it can access online. So chances are you are reading this present notebook at the web address [https://nbviewer.jupyter.org/github/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Solorzano.ipynb](https://nbviewer.jupyter.org/github/awagner-mainz/notebooks/blob/master/gallery/TextProcessing_Solorzano.ipynb).
A final word about the elements of this notebook:
<div class="alert alertbox alert-success">At some points I am mentioning things I consider to be important decisions or take-away messages for scholarly readers. E.g. whether or not to insert certain artefacts into the very transcription of your text, what the methodological ramifications of a certain approach or parameter are, what the implications of an example solution are, or what a possible interpretation of a certain result might be. I am highlighting these things in a block like this one here or at least in <font color="green">**green bold font**</font>.</div>
<div class="alert alertbox alert-danger">**NOTE:** As I have continued improving the notebook on the side of the source text, wordlists and other parameters, I could (for lack of time) not keep the prose description in synch. So while the actual descriptions still apply, the numbers that are mentioned in the prose (as where we have e.g. a "table with 20 rows and 1.672 columns"), they might no longer reflect the latest state of the sources, auxiliary files and parameters. I will try to update these as I get to it, but for now, you should take such numbers with a grain of salt and rely rather on the actual code and its diagnostic output. I apologize for the inconsistency.</div>
# Preparations
As indicated above, before doing maths, language processing tools normally expect their input to be in a certain format. First of all, you have to have an input in the first place: Therefore, a scholar wishing to experiment with such methods should avail herself of the text that should be studied, as a full transcription. This can be done by transcribing it herself, using transcriptions that are available from elsewhere, or even from OCR. (Although in the latter case, the results depend of course on the quality of the OCR output.) Second, many tools get tripped up when formatting or bibliographical metainformation is included in their input. And since the approaches presented here are not concerned with a digital edition or any other form of true representation of the source, *markup* (e.g. for bold font, heading or note elements) should be *suppressed*. (Other tools accept marked up text and strip the formatting internally.) So you should try to get a copy of the text(s) you are working with in **plaintext** format.
For another detail regarding these plain text files, we have to make a short excursus, because even with plain text, there are some important aspects to consider: As you surely know, computers understand number only and as you probably also know, the first standards to encode alphanumeric characters, like ASCII, in numbers were designed for teleprinters and the reduced character set of the english language. When more extraordinary characters, like *Umlauts* or *accents* were to be encoded, one had to rely on extra rules, of which - unfortunately - there have been quite a lot. These are called "**encodings**" and one of the more important set of such rules are the windows encodings (e.g. CP-1252), another one is called Latin-9/ISO 8859-15 (it differs from the older Latin-1 encoding among others by including the Euro sign). Maybe you have seen web pages with garbled *Umlauts* or other special characters, then that was probably because your browser interpreted the numbers according to an encoding different from the one that the webpage author used. Anyway, the point here is that there is another standard encompassing virtually all the special signs from all languages and for a few years now, it is also supported quite well by operating systems, programming languages and linguistic tools. This standard is called "Unicode" and the encoding you want to use is called **utf-8**. So when you export or import your texts, try to make sure that this is what is used. ([Here](https://unicode-table.com/) is a webpage with the complete unicode table - it is loaded incrementally, so make sure to scroll down in order to get an impression of what signs this standard covers. But on the other hand, it is so extensive that you don't want to scroll through all the table...)
Especially when you are coming from a windows operating system, you might have to do some searching about how to export your text to utf-8 (at one point I could make a unicode plaintext export in wordpad, only to find out after some time of desperate debugging that it was utf-*16* that I had been given. Maybe you can still find the traces of my own conversion of such files to utf-8 below).
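If you do end up with such a file, the re-encoding itself is a one-off. A minimal sketch (the file names here are hypothetical and not part of this notebook's data):
# Minimal sketch: re-save a UTF-16 export as UTF-8 (hypothetical file names)
with open('export_from_wordpad.txt', encoding='utf-16') as source:
    text = source.read()
with open('export_utf8.txt', mode='w', encoding='utf-8') as target:
    target.write(text)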
<div class="alert alertbox alert-success">Also, you should consider whether or not you can replace *abbreviations* with their expanded versions in your transcription. While at some points (e.g. when lemmatising), you can associate expansions to abbreviations, the whole processing is easier when words in the text are indeed words, and periods are rather sentence punctuation than abbreviation signs. Of course, this also depends on the effort you can spend on the text...</div>
This section now describes how the plaintext can further be prepared for analyses: E.g. if you want to process the *distribution* of words in the text, the processing method has to have some notion of different places in the text -- normally you don't want to manage words according to their absolute position in the whole work (say, the 6.349th word and the 3.100th one), but according to their occurrence in a particular section (say, in the third chapter, without caring too much whether it is in the 13th or in the 643th position in this chapter). So, you partition the text into meaningful segments which you can then label, compare etc.
Other preparatory work includes suppressing stopwords (like "the", "is", "of" in english) or making the tools manage different forms of the same word or different historical writings identically. Here is what falls under this category:
1. [Get fulltext](#GetFulltext)
2. [Segment source text](#SegmentSourceText)
3. [Read segments into Variable/List](#ReadSegmentsIntoVariable)
4. [Tokenising](#Tokenising)
5. [Stemming/Lemmatising](#StemmingLemmatising)
6. [Eliminate stopwords](#EliminateStopwords)
## Get fulltext <a name="GetFulltext"></a>
For the examples given on this page, I am using a transcription of Juan de Solorzano's *De Indiarum Iure*, provided by Angela Ballone. Angela has inserted a special sequence of characters - "€€€ - [<Label for the section>]" - at places where she felt that a new section or argument is beginning, so that we can segment the big source file into different sections each dealing with one particular argument. (Our first task.) But first, let's have a look at our big source file; it is in the folder "Solorzano" and is called **Sections_I.1_TA.txt**.
End of explanation
# folder for the several segment files:
outputBase = 'Solorzano/segment'
# initialise some variables:
at = -1
dest = None # this later takes our destination files
# Now, for every line, if it starts with our special string,
# do nothing with the line,
# but close the current and open the next destination file;
# if it does not,
# append it to whatever is the current destination file
# (stripping leading and trailing whitespace).
for line in input:
if line[0:3] == '€€€':
# if there is a file open, then close it
if dest:
dest.close()
at += 1
# open the next destination file for writing
# (It's filename is build from our outputBase variable,
# the current position in the sequence of fragments,
# and a ".txt" ending)
dest = open(outputBase + '.' + str(at) + '.txt',
encoding='utf-8',
mode='w')
else:
# write the line (after it has been stripped of leading and closing whitespace)
dest.write(line.strip())
dest.close()
at += 1
# How many segments/files do we then have?
print(str(at) + ' files written.')
Explanation: Segment source text<a name="SegmentSourceText"></a>
Next, as mentioned above, we want to associate information with only passages of the text, not the text as a whole. Therefore, the text has to be segmented. The one big single file is being split into meaningful smaller chunks. What exactly constitutes a meaningful chunk -- a chapter, an article, a paragraph etc. -- cannot be known independently of the text in question and of the research questions. Therefore, a typical approach is that the scholar either splits the text manually or inserts some symbols that otherwise do not appear in the text. This is what we have here. Then, processing tools can find these symbols and split the file accordingly. For keeping things neat and orderly, the resulting files are saved in a directory of their own...
(Note here and in the following that in most cases, when the program is counting, it does so beginning with zero. Which means that if we end up with 20 segments, they are going to be called segment.0.txt, segment.1.txt, ..., segment.19.txt. There is not going to be a segment bearing the number twenty, although we do have twenty segments. The first one has the number zero and the twentieth one has the number nineteen. Even for more experienced coders, this sometimes leads to mistakes, called "off-by-one errors".)
End of explanation
path = 'Solorzano'
filename = 'segment.'
suffix = '.txt'
corpus = [] # This is our new variable. It will be populated below.
for i in range(0, at):
with open(path + '/' + filename + str(i) + suffix, encoding='utf-8') as f:
corpus.append(f.read()) # Here, a new element is added to our corpus.
# Its content is read from the file 'f' opened above
f.close()
Explanation: Read segments into a variable <a name="ReadSegmentsIntoVariable"></a>
From the segments just created, we rebuild our corpus, iterating through them and reading them into another variable (which now stores, technically speaking, not just one long string of characters, as the variable input in the first code snippet did, but a list of strings, one for each segment).
End of explanation
len(corpus)
Explanation: Now we should have 20 strings in the variable corpus to play around with:
End of explanation
corpus[3][0:500]
Explanation: For a quick impression, let's see the opening 500 characters of an arbitrary one of them; in this case, we take the fourth segment, i.e. the one at position '3' (remember that counting starts at 0):
End of explanation
# We need a python library, because we want to use a "regular expression"
import re
tokenised = [] # A new variable again
# Every segment, initially a long string of characters, is now split into a list of words,
# based on non-word characters (whitespace, punctuation, parentheses and others - that's
# what we need the regular expression library for).
# Also, we make everything lower-case.
for segment in corpus:
    tokenised.append(list(filter(None, (word.lower() for word in re.split(r'\W+', segment)))))
print('We now have ' + str(sum(len(x) for x in tokenised)) + ' wordforms or "tokens" in our corpus of ' + str(len(tokenised)) + ' segments.')
Explanation: Tokenising <a name="Tokenising"></a>
"Tokenising" means splitting the long lines of the input into single words. Since we are dealing with plain latin, we can use the default split method which relies on spaces to identify word boundaries. (In languages like Japanese or scripts like Arabic, this is more difficult.) Note that we do not compensate for words that are hyphenated/split across lines here! That is something that should be catered for in the transcription itself.
End of explanation
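If one did want to repair such line-break hyphenation programmatically rather than in the transcription, a rough sketch (applied to the raw source text before segmenting, and assuming hyphenated words end a line with '-') could be:
# Sketch only - not used in the rest of this notebook.
raw_text = open(bigsourcefile, encoding='utf-8').read()
dehyphenated = re.sub(r'-\s*\n\s*', '', raw_text)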
print(tokenised[3][0:50])
Explanation: Now, instead of corpus, we can use tokenised for our subsequent routines: a variable which, at 20 positions, contains the list of words of the corresponding segment. In order to see the difference in structure to the corpus variable above, let's have a look at (the first 50 words of) the fourth segment again:
End of explanation
import collections
counter = collections.Counter(tokenised[3]) # Again, consider the fourth segment
print(counter.most_common(10)) # Making a counter 'object' of our segment,
# this now has a 'method' calles most_common,
# offering us the object's most common elements.
# More 'methods' can be found in the documentation:
# https://docs.python.org/3/library/collections.html#collections.Counter
Explanation: Already, we can have a first go at finding the most frequent words for a segment. (For this we use a simple library of functions that we import by the name of 'collections'.):
End of explanation
import pandas as pd
df1 = pd.DataFrame.from_dict(counter, orient='index').reset_index() # from our counter object,
# we now make a DataFrame object
df2 = df1.rename(columns={'index':'lemma',0:'count'}) # and we name our columns
df2.sort_values(by='count', ascending=False)[:10]
Explanation: Nicer layout: tables instead of lists of tuples
Perhaps now is a good opportunity for another small excursus. What we have printed in the last code is a series of pairs: Words associated to their number of occurrences, sorted by the latter. This is called a "dictionary" in python. However, the display looks a bit ugly. With another library called "pandas" (for "python data analysis"), we can make this look more intuitive. (Of course, your system must have this library installed in the first place so that we can import it in our code.):
End of explanation
wordfile_path = 'Solorzano/wordforms-lat-full.txt'
wordfile = open(wordfile_path, encoding='utf-8')
print(wordfile.read()[:64]) # in such from-to addresses, one can just skip the zero
wordfile.close()
Explanation: Looks better now, doesn't it?
(The bold number in the very first column is the id as it were of the respective lemma. You see that 'hoc' has the id '0' - because it was the first word that occurred at all -, and 'ut' has the id '5' because it was the sixth word in our segment. Most probably, currently we are not interested in the position of the word and can ignore the first column.)
Stemming / Lemmatising <a name="StemmingLemmatising"></a>
Next, since we prefer to count different word forms as one and the same "lemma", we have to do a step called "lemmatisation". In languages that are not strongly inflected, like English, one can get away with "stemming", i.e. just eliminating the ending of words: "wish", "wished", "wishing", "wishes" all can count as instances of "wish*". With Latin this is not so easy: we want to count occurrences of "legum", "leges", "lex" as one and the same word, but if we truncate after "le", we get too many hits that have nothing to do with lex at all. There are a couple of "lemmatising" tools available, although with classical languages (or even early modern ones), it's a bit more difficult. Anyway, we do our own, using a dictionary approach...
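Just to make the contrast concrete, here is a sketch of a naive suffix-stripping "stemmer" of the kind that works tolerably for English - and a demonstration of why it does not help with "legum"/"leges"/"lex":
# Sketch: naive suffix stripping (workable for English, not for Latin)
def naive_stem(word, suffixes=('ing', 'ed', 'es', 's')):
    for suffix in suffixes:
        if word.endswith(suffix):
            return word[:-len(suffix)]
    return word

print(naive_stem('wished'), naive_stem('wishes'))                   # both become 'wish'
print(naive_stem('legum'), naive_stem('leges'), naive_stem('lex'))  # 'legum', 'leg', 'lex' - no common stem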
First, we have to have a dictionary which associates all known word forms to their lemma. This can also help us with historical orthography. Suppose that, from some other context, we have a file "wordforms-lat-full.txt" at our disposal in the "Solorzano" folder. Its content looks like this:
End of explanation
lemma = {} # we build a so-called dictionary for the lookups
tempdict = []
# open the wordfile (defined above) for reading
wordfile = open(wordfile_path, encoding='utf-8')
for line in wordfile.readlines():
tempdict.append(tuple(line.split('>'))) # we split each line by ">" and append a tuple to a
# temporary list.
lemma = {k.strip(): v.strip() for k, v in tempdict} # for every tuple in the list,
# we strip whitespace and make a key-value
# pair, appending it to our "lemma" dictionary
wordfile.close()
print(str(len(lemma)) + ' wordforms known to the system.')
Explanation: So, we again build a dictionary of key-value pairs associating all the lemmata ("values") with their wordforms ("keys"). And afterwards, we can quickly look up the value under a given key:
End of explanation
lemma['fidem']
Explanation: Again, a quick test: Let's see with which "lemma"/basic word a particular wordform such as "fidem" is associated, or, in other words, what value our lemma variable returns when we query for the key "fidem":
End of explanation
# For each segment, and for each word in it, add the lemma to our new "lemmatised"
# list, or, if we cannot find a lemma, add the actual word from from the tokenised list.
lemmatised = [[lemma[word] if word in lemma else word for word in segment]
for segment in tokenised]
Explanation: Now we can use this dictionary to build a new list of words, where only lemmatised forms occur:
End of explanation
print(lemmatised[3][:50])
Explanation: Again, let's see the first 50 words from the fourth segment, and compare them with the "tokenised" variant above:
End of explanation
counter2 = collections.Counter(lemmatised[3])
df1 = pd.DataFrame.from_dict(counter2, orient='index').reset_index()
df2 = df1.rename(columns={'index':'lemma',0:'count'})
df2.sort_values(by='count', ascending=False)[:10]
Explanation: As you can see, the original text is lost now from the data that we are currently working with (unless we add another dimension to our lemmatised variable which can keep the original word form). But let us see if something in the 10 most frequent words has changed:
End of explanation
stopwords_path = 'Solorzano/stopwords-lat.txt'
stopwords = open(stopwords_path, encoding='utf-8').read().splitlines()
print(str(len(stopwords)) + ' stopwords known to the system, e.g.: ' + str(stopwords[95:170]))
Explanation: Yes, things have changed: "tributum" has moved one place up, "non" is now counted as "nolo" (I am not sure this makes sense, but such is the dictionary of wordforms we have used) and "pensum" has now made it on the list!
Eliminate Stopwords <a name="EliminateStopwords"></a>
Probably "et", "in", "de", "qui", "ad", "sum/esse", "non/nolo" and many of the most frequent words are not really very telling words. They are what one calls stopwords, and we have another list of such words that we would rather want to ignore:
End of explanation
# For each segment, and for each word in it,
# add it to a new list called "stopped",
# but only if it is not listed in the list of stopwords.
stopped = [[item for item in lemmatised_segment if item not in stopwords] \
for lemmatised_segment in lemmatised]
print(stopped[3][:50])
Explanation: Now let's try and suppress the stopwords in the segments (and see what the "reduced" fourth segment gives)...
End of explanation
counter3 = collections.Counter(stopped[0])
df0_1 = pd.DataFrame.from_dict(counter3, orient='index').reset_index()
df0_2 = df0_1.rename(columns={'index':'lemma',0:'count'})
print(' Most frequent lemmata in the first text segment (segment number zero):')
df0_2.sort_values(by='count',axis=0,ascending=False)[:10]
counter4 = collections.Counter(stopped[1])
df1_1 = pd.DataFrame.from_dict(counter4, orient='index').reset_index()
df1_2 = df1_1.rename(columns={'index':'lemma',0:'count'})
print(' Most frequent lemmata in the second text segment (segment number one):')
df1_2.sort_values(by='count',axis=0,ascending=False)[:10]
counter5 = collections.Counter(stopped[2])
df2_1 = pd.DataFrame.from_dict(counter5, orient='index').reset_index()
df2_2 = df2_1.rename(columns={'index':'lemma',0:'count'})
print(' Most frequent lemmata in the third text segment:')
df2_2.sort_values(by='count',axis=0,ascending=False)[:10]
counter6 = collections.Counter(stopped[3])
df3_1 = pd.DataFrame.from_dict(counter6, orient='index').reset_index()
df3_2 = df3_1.rename(columns={'index':'lemma',0:'count'})
print(' Most frequent lemmata in the fourth text segment:')
df3_2.sort_values(by='count',axis=0,ascending=False)[:10]
Explanation: With this, we can already create a kind of first "profile" of, say, our first six segments, listing the most frequent words in each of them:
End of explanation
counter7 = collections.Counter(stopped[4])
df4_1 = pd.DataFrame.from_dict(counter7, orient='index').reset_index()
df4_2 = df4_1.rename(columns={'index':'lemma',0:'count'})
print(' Most frequent lemmata in the fifth text segment:')
df4_2.sort_values(by='count',axis=0,ascending=False)[:10]
counter8 = collections.Counter(stopped[5])
df5_1 = pd.DataFrame.from_dict(counter8, orient='index').reset_index()
df5_2 = df5_1.rename(columns={'index':'lemma',0:'count'})
print(' Most frequent lemmata in the sixth text segment:')
df5_2.sort_values(by='count',axis=0,ascending=False)[:10]
Explanation: Yay, look here, we have our words "indis", "tributum", "pensum" from the top ten above again, but this time the non-significant (for our present purposes) words in-between have been eliminated. Instead, new words like "numerata", "operis" etc. have made it into the top ten.
End of explanation
# We can use a library function for this
from sklearn.feature_extraction.text import CountVectorizer
# Since the library function can do all of the above (splitting, tokenising, lemmatising),
# and since it is providing hooks for us to feed our own tokenising, lemmatising and stopwords
# resources or functions to it,
# we use it and work on our rather raw "corpus" variable from way above again.
# So first we build a tokenising and lemmatising function to work as an input filter
# to the CountVectorizer function
def ourLemmatiser(str_input):
    wordforms = re.split(r'\W+', str_input)
    return [lemma[wordform].lower().strip() if wordform in lemma else wordform.lower().strip() for wordform in wordforms]
# Then we initialize the CountVectorizer function to use our stopwords and lemmatising fct.
count_vectorizer = CountVectorizer(tokenizer=ourLemmatiser, stop_words=stopwords)
# Finally, we feed our corpus to the function, building a new "vocab" object
vocab = count_vectorizer.fit_transform(corpus)
# Print some results
print(str(len(count_vectorizer.get_feature_names())) + ' distinct words in the corpus:')
print(count_vectorizer.get_feature_names()[0:100])
Explanation: <div class="alert alertbox alert-success">So far our initial analyses, then. There are several ways in which we can continue now. We see that there are still words (like 'damnatione', 'tributorum' in the first or 'statuunt' in the second segment) that are not covered by our lemmatisation process. Also, abbreviations (like 'iur' in the second segment) could be expanded either in the transcription or by adding an appropriate line to our list of lemmata (or by patching the dictionary in code, as sketched below). Words like 'dom' in the fifth segment could maybe be added to the list of stopwords? Anyway, the need to review these two lists (lemmata/stopwords) is explained further below, and that is something that should definitely be done - after all, they were taken from the context of quite another project, and a scholar should check closely what is being suppressed and what is being replaced in the text at hand.</div>
<div class="alert alertbox alert-success">But we could also do more sophisticated things with the list. We could e.g. use either our lemma list or our stopwords list to filter out certain words, like all non-substantives. Or we could reduce all mentions of a certain name or literary work to a specific form (that would be easily recognizable in all the places).</div>
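To show the mechanics of such adjustments: the lemma dictionary can also be patched in code instead of editing the wordforms file. The mappings below are only examples (they would have to be verified by the scholar) before re-running the lemmatising and stopword steps:
# Sketch: add custom mappings for abbreviations and uncovered word forms (example values only)
custom_lemmata = {'iur': 'ius', 'tributorum': 'tributum', 'damnatione': 'damnatio'}
lemma.update(custom_lemmata)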
However, we can already observe that meaningful words like "indios/indis" are maybe not so helpful in characterising individual passages of this work, since they occur all over the place. After all, the work is called "De Indiarum Iure" and deals with various questions all related to indigenous people. Also, we would like to give some weight to the fact that a passage may consist of all stopwords and perhaps one or two substantial words, whereas another might be full of substantial words and few stopwords only (think e.g. of an abstract or an opening chapter describing the rest of the work). Or, since we have text segments of varying length, we would like our figures to reflect the fact that a tenfold occurrence in a very short passage may be more significant than a tenfold occurrence in a very, very, very long passage.
These phenomena are treated with more mathematical tools, so let's say that our preparatory work is done ...
Characterise passages: TF/IDF
As described, we are now going to delve a wee bit deeper into mathematics in order to get more precise characterizations of our text segments. The approach we are going to use is called "TF/IDF" and is a simple, yet powerful method that is very popular in text mining and search engine discussions.
Build vocabulary
Calculate Terms' Text Frequencies (TF)
Normalise TF
Inverse Document Frequencies (IDF) and TF-IDF
Build vocabulary <a name="BuildVocabulary"/>
Since maths works best with numbers, let's first of all build a list of all the words (in their basic form) that occur anywhere in the text, and give each one of those words an ID (say, the position of its first occurrence in the work):
End of explanation
vocab
Explanation: You can see how our corpus of four thousand "tokens" actually contains only one and a half thousand different words (plus stopwords, but these are at maximum 384). And, in contrast to simpler numbers that have been filtered out by our stopwords filter, I have left years like "1610" in place.
Calculate Terms' Text Frequencies (TF) <a name="CalculateTF"/>
However, our "vocab" object contains more than just all the unique words in our corpus. Let's get some information about it:
End of explanation
pd.DataFrame(vocab.toarray(), columns=count_vectorizer.get_feature_names())
Explanation: It is actually a table with 20 rows (the number of our segments) and 1.672 columns (the number of unique words in the corpus). So what we do have is a table where for each segment the amount of occurrences of every "possible" (in the sense of used somewhere in the corpus) word is listed.
("Sparse" means that the majority of fields is zero. And 2.142 fields are populated, which is more than the number of unique words in the corpus (1.672, see above) - that's obviously because some words occur in multiple segments = rows. Not much of a surprise, actually.)
Here is the whole table:
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
# Initialize the library's function
tfidf_vectorizer = TfidfVectorizer(stop_words=stopwords, use_idf=False, tokenizer=ourLemmatiser, norm='l1')
# Finally, we feed our corpus to the function to build a new "tf_matrix" object
tf_matrix = tfidf_vectorizer.fit_transform(corpus)
# Print some results
pd.DataFrame(tf_matrix.toarray(), columns=tfidf_vectorizer.get_feature_names())
Explanation: Each row of this table is a kind of fingerprint of a segment: We don't know the order of words in the segment - for us, it is just a "bag of words" - but we know which words occur in the segment and how often they do. But as of now, it is a rather bad fingerprint, because how significant a certain number of occurrences of a word in a segment is depends on the actual length of the segment. Ignorant as we are (per assumption) of the role and meaning of those words, still, if a word occurs twice in a short paragraph, that should prima facie count as more characteristic of the paragraph than if it occurs twice in a multi-volume work.
Normalise TF <a name="NormaliseTF"/>
We can reflect this if we divide the number of occurrences of a word by the number of tokens in the segment. Obviously the numbers will then be quite small - but what counts are the relations between the cells, and we can account for scaling and normalizing later...
We're almost there and we are switching from the CountVectorizer function to another one, that does the division just mentioned and will do more later on...
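For comparison, here is a sketch of the same normalisation done by hand for the fourth segment; up to small differences in tokenisation it should match the corresponding row of the vectorizer's table:
# Sketch: l1-normalised term frequencies for the fourth segment, computed by hand
seg_counts = collections.Counter(stopped[3])
seg_total = sum(seg_counts.values())
manual_tf = {word: count / seg_total for word, count in seg_counts.items()}
print(sorted(manual_tf.items(), key=lambda kv: kv[1], reverse=True)[:5])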
End of explanation
# Initialize the library's function
tfidf_vectorizer = TfidfVectorizer(stop_words=stopwords, use_idf=True, tokenizer=ourLemmatiser, norm='l2')
# Finally, we feed our corpus to the function to build a new "tfidf_matrix" object
tfidf_matrix = tfidf_vectorizer.fit_transform(corpus)
# Print some results
tfidf_matrix_frame = pd.DataFrame(tfidf_matrix.toarray(), columns=tfidf_vectorizer.get_feature_names())
tfidf_matrix_frame
Explanation: Now we have seen above that "indis" is occurring in all of the segments, because, as the title indicates, the whole work is about issues related to the Indies and to indigenous people. When we want to characterize a segment by referring to some of its words, is there a way to weigh down words like "indis" a little bit? Not filter them out completely, as we do with stopwords, but give them just a little less weight than words not appearing all over the place? Yes there is...
## Inverse Document Frequencies (IDF) and TF-IDF <a name="CalculateTFIDF"/>
There is a measure called "text frequency / (inverse) document frequency" that combines a local measure (how frequently a word appears in a segment, in comparison to the other words appearing in the same segment, viz. the table above) with a global measure (how frequently the word appears throughout the whole corpus). Roughly speaking, we have to add to the table above a new, global element: the number of documents the term appears in divided by the number of all documents in the corpus - or, rather, the other way round (that's why it is the "inverse" document frequency): the number of documents in the corpus divided by the number of documents the current term occurs in. (As with our local measure above, there is also some normalization, i.e. compensation for different lengths of documents and attenuation of high values, going on by using a logarithm on the quotient.)
When you multiply the term frequency (from above) by this inverse document frequency, you get a formula which "rewards" frequent occurrences in one segment and rare occurrences over the whole corpus. (For more of the mathematical background, see this tutorial.)
Again, we do not have to implement all the counting, division and logarithm ourselves but can rely on SciKit-learn's TfidfVectorizer function to generate a matrix of our corpus in just a few lines of code:
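To see the global part of the formula in isolation, here is a quick hand computation of the idf of "indis", which occurs in every segment. (Note that scikit-learn by default uses a smoothed variant, log((1+n)/(1+df)) + 1, so that such omnipresent words are dampened rather than zeroed out.)
# Sketch: raw vs. smoothed idf for a word that occurs in (nearly) every segment
import math
n_docs = len(stopped)
df_indis = sum(1 for segment in stopped if 'indis' in segment)
print(math.log(n_docs / df_indis))                  # 0.0 if the word really occurs in every segment
print(math.log((1 + n_docs) / (1 + df_indis)) + 1)  # scikit-learn's smoothed variant stays positive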
End of explanation
# convert your matrix to an array to loop over it
mx_array = tfidf_matrix.toarray()
# get your feature names
fn = tfidf_vectorizer.get_feature_names()
pos = 0
for l in mx_array:
print(' ')
print(' Most significant words segment ' + str(pos) + ':')
print(pd.DataFrame.rename(pd.DataFrame.from_dict([(fn[x], l[x]) for x in (l*-1).argsort()][:20]), columns={0:'lemma',1:'tf/idf value'}))
pos += 1
Explanation: Now let's print a more qualified "top 10" words for each segment:
End of explanation
ngram_size_high = 3
ngram_size_low = 2
top_n = 5
# Initialize the TfidfVectorizer function from above
# (again using our lemmatising fct. but no stopwords this time)
vectorizer = CountVectorizer(ngram_range=(ngram_size_low, ngram_size_high), tokenizer=ourLemmatiser)
ngrams = vectorizer.fit_transform(corpus)
print('Most frequent 2-/3-grams')
print('========================')
print(' ')
ngrams_dict = []
df = []
df_2 = []
for i in range(0, len(corpus)):
# (probably that's way too complicated here...)
ngrams_dict.append(dict(zip(vectorizer.get_feature_names(), ngrams.toarray()[i])))
df.append(pd.DataFrame.from_dict(ngrams_dict[i], orient='index').reset_index())
df_2.append(df[i].rename(columns={'index':'n-gram',0:'count'}))
print('Segment ' + str(i) + ':')
if df_2[i]['count'].max() > 1:
print(df_2[i].sort_values(by='count',axis=0,ascending=False)[:top_n])
print(' ')
else:
print(' This segment has no bi- or 3-gram occurring more than just once.')
print(' ')
ngrams_corpus = pd.DataFrame(ngrams.todense(), columns=vectorizer.get_feature_names())
ngrams_total = ngrams_corpus.cumsum()
print(' ')
print("The 10 most frequent n-grams in the whole corpus")
print("================================================")
ngrams_total.tail(1).T.rename(columns={19:'count'}).nlargest(10, 'count')
Explanation: <div class="alert alertbox alert-success">You can see that, in the fourth segment, pensum and tributum have moved up while indis has fallen from the first to the third place. But in other segments you can also see that abbreviations like "fol", "gl" or "hom" still are a major nuisance, and so are spanish passages. It would surely help to improve our stopwords and lemma lists.</div>
<div class="alert alertbox alert-success">Of course, having more text would also help: The *idf* can kick in only when there are many documents... Also, you could play around with the segmentation. Make fewer but bigger segments or smaller ones...</div>
<div class="alert alertbox alert-success">And you can notice that in many segments, the lemmata at around rank 5 have the exact same value. Most certainly that's because they only occur one single time in the segment. (That those values differ from segment to segment has to do with the relation of the segment to the corpus as a whole.) And when four or fourteen of those words occur only once anyway, we should really not think that there is a meaningful sorting order between them (or that there is a good reason the 8th one is in the top ten list and the thirteenth one isn't). But in those areas where there _is_ variation in the tf/idf values, that is indeed telling.</div>
Due to the way that they have been encoded in our sample texts, we can also observe some references to other literature by the underscore (e.g. "de_oper", "de_iur", "et_cur" etc.), which makes you wonder if it would be worthwhile marking all the references in some way so that we could either concentrate on them or filter them out altogether (a quick sketch of this follows below). But other than that, it's in fact almost meaningful. Apart from making such lists, what can we do with this?
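As a quick aside before we move on, isolating those underscore-marked tokens is itself only a few lines (a sketch, reusing the stopped lists from above):
# Sketch: collect the underscore-marked reference tokens over all segments
citation_tokens = [word for segment in stopped for word in segment if '_' in word]
print(collections.Counter(citation_tokens).most_common(10))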
Vector Space Model of the text <a name="#VectorSpaceModel"/>
First, let us recapitulate in more general terms what we have done so far, since a good part of it is extensible and applicable to many other methods: We have used a representation of each "document" (in our case, all those "documents" have been segments of one and the same text) as a series of values that indicated the document's relevance in particular "dimensions".
For example, the various values in the "alea dimension" indicate how characteristic this word, "alea", is for the present document. (By hypothesis, this also works the other way round, as an indication of which documents are the most relevant ones in matters of "alea". In fact, this is how search engines work.)
Many words did not occur at all in most of the documents and the series of values (matrix rows) contained many zeroes. Other words were stopwords which we would not want to affect our documents' scores - they did not yield a salient "dimension" and were dropped from the series of values (matrix columns). The values work independently and can be combined (when a document is relevant in one and in another dimension).
Each document is thus characterised by a so-called "vector" (a series of independent, combinable values) and is mapped in a "space" constituted by the dimensions of those vectors (matrix columns, series of values). In our case the dimensions have been derived from the corpus's vocabulary. Hence, the representation of all the documents is called their vector space model. You can really think of it as similar to a three-dimensional space: Document A goes quite some way in the x-direction, it goes not at all in the y-direction and it goes just a little bit in the z-direction. Document B goes quite some way, perhaps even further than A did, in both the y- and z-directions, but only a wee bit in the x-direction. Etc. etc. Only with many many more independent dimensions instead of just the three spatial dimensions we are used to.
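To make this more tangible, here is a minimal, self-contained toy sketch (the two mini "documents" below are invented and have nothing to do with our corpus): it shows how the shared vocabulary spans the dimensions and how each document becomes one row of numbers in that space.
from sklearn.feature_extraction.text import CountVectorizer
toy_docs = ["alea iacta est", "alea ludus est vita"]   # two invented toy documents
toy_vectorizer = CountVectorizer()
toy_vectors = toy_vectorizer.fit_transform(toy_docs)
# each row is one document, each column one vocabulary "dimension"
print(pd.DataFrame(toy_vectors.toarray(), columns=toy_vectorizer.get_feature_names()))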
The following sections will discuss ways of manipulating the vector space -- using alternative or additional dimensions -- and also ways of leveraging the VSM representation of our text to make various analyses...
Alternative vector space constitution: n-grams
Extending the space's dimensions
Word clouds
Similarity
Clustering
Another method to generate the dimensions: n-grams <a name="N-Grams"/>
Instead of relying on the (either lemmatized or un-lemmatized) vocabulary of words occurring in your documents, you could also use other methods to generate a vector for them. A very popular such method is called n-grams and shall be presented just shortly:
Imagine a moving window which captures, say, three words and slides over your text word by word. The first capture would get the first three words, the second one the words two to four, the third one the words three to five, and so on up to the last three words of your document. This procedure would generate all the "3-grams" contained in your text - not all the possible combinations of the words present in the vocabulary but just the triples that happen to occur in the text. The meaningfulness of this method depends to a certain extent on how strongly the respective language inflects its words and on how freely it orders its sentences' parts (a sociolect or literary genre might constrain or enhance the potential variance of the language). Less variance here means that the same ideas tend to be (!) presented in the same formulations more than in languages with more variance on this syntactic level. To a certain extent, you could play around with lemmatization and stopwords and with the size of your window. But in general, there are more 3-grams repeated in human language than one would expect. Even more so if we imagine our window encompassing only two words, resulting in 2-grams or, rather, bigrams.
As a quick example, let's list the top bi- or 3-grams of our text segments, together with the respective number of occurrences, and the 10 most frequent n-grams in the whole corpus:
End of explanation
print("Original matrix of tf/idf values (rightmost columns):")
tfidf_matrix_frame.iloc[ :, -5:]
length = []
for i in range(0, len(corpus)):
length.append(len(tokenised[i]))
citnum = []
for i in range(0, len(corpus)):
citnum.append(corpus[i].count('_'))
print("New matrix extended with segment length and number of occurrences of '_':")
new_matrix = tfidf_matrix_frame.assign(seg_length = length).assign(cit_count = citnum)
new_matrix.iloc[ :, -6:]
Explanation: Extending the dimensions <a name="AddDimensions"/>
Of course, there is no reason why the dimensions should be restricted to or identical with the vocabulary (or the occurring n-grams, for that matter). In fact, in the examples above, we have dropped some of the words already by using our list of stopwords. <font color="green">We could also add other dimensions that are of interest for our current research question. We could add a dimension for the year in which the texts have been written, for their citing a certain author, or merely for their position in the encompassing work...</font>
Since in our examples, the position is represented in the "row number" and counting citations of a particular author require some more normalisations (e.g. with the lemmatisation dictionary above), let's add a dimension for the length of the respective segment (in characters) and another one for the number of occurrences of "_" (in our sample transcriptions, this character had been used to mark citations, although admittedly not all of them), just so you get the idea:
End of explanation
# input should still have a handle on our source file.
label = []
# Now, for every line, revisit the special string and extract just the lines marked by it
for line in input:
if line[0:3] == '€€€':
label.append(line[6:].strip())
# How many segments/files do we then have?
print(str(len(label)) + ' labels read.')
print("New matrix extended with segment length, number of occurrences of '_' and label:")
yet_another_matrix = new_matrix.assign(seg_length = length).assign(label = label)
yet_another_matrix.iloc[ :, -6:]
Explanation: You may notice that the segment with most occurrences of "_" (taken with a grain of salt, that's likely the segment with most citations), is not a particularly long one. If we had systematic markup of citations or author names in our transcription, we could be more certain or add even more columns/"dimensions" to our table.
If you bear with me for a final example, here is how we add the labels that you could see in our initial "big source file":
End of explanation
from wordcloud import WordCloud
import matplotlib.pyplot as plt
# We make tuples of (lemma, tf/idf score) for one of our segments
# But we have to convert our tf/idf weights to pseudo-frequencies (i.e. integer numbers)
frq = [ int(round(x * 100000, 0)) for x in mx_array[3]]
freq = dict(zip(fn, frq))
wc = WordCloud(background_color=None, mode="RGBA", max_font_size=40, relative_scaling=1).fit_words(freq)
# Now show/plot the wordcloud
plt.figure()
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
Explanation: Word Clouds <a name="WordClouds"/>
We can use a library that takes word frequencies like above, calculates corresponding relative sizes of words and creates nice wordcloud images for our sections (again, taking the fourth segment as an example) like this:
End of explanation
outputDir = "Solorzano"
htmlfile = open(outputDir + '/Overview.html', encoding='utf-8', mode='w')
# Write the html header and the opening of a layout table
htmlfile.write("""<!DOCTYPE html>
<html>
<head>
<title>Section Characteristics</title>
<meta charset="utf-8"/>
</head>
<body>
<table>
""")
a = [[]]
a.clear()
dicts = []
w = []
# For each segment, create a wordcloud and write it along with label and
# other information into a new row of the html table
for i in range(0, len(mx_array)):
# this is like above in the single-segment example...
a.append([ int(round(x * 100000, 0)) for x in mx_array[i]])
dicts.append(dict(zip(fn, a[i])))
w.append(WordCloud(background_color=None, mode="RGBA", \
max_font_size=40, min_font_size=10, \
max_words=60, relative_scaling=0.8).fit_words(dicts[i]))
# We write the wordcloud image to a file
w[i].to_file(outputDir + '/wc_' + str(i) + '.png')
# Finally we write the column row
    htmlfile.write("""
<tr>
<td>
<head>Section {a}: <b>{b}</b></head><br/>
<img src="./wc_{a}.png"/><br/>
<small><i>length: {c} words</i></small>
</td>
</tr>
<tr><td> </td></tr>
""".format(a = str(i), b = label[i], c = len(tokenised[i])))
# And then we write the end of the html file.
htmlfile.write("""
</table>
</body>
</html>
""")
htmlfile.close()
Explanation: In order to have a nicer overview over the many segments than is possible in this notebook, let's create a new html file listing some of the characteristics that we have found so far...
End of explanation
from sklearn.metrics.pairwise import cosine_similarity
similarities = pd.DataFrame(cosine_similarity(tfidf_matrix))
similarities[round(similarities, 0) == 1] = 0 # Suppress a document's similarity to itself
print("Pairwise similarities:")
print(similarities)
print("The two most similar segments in the corpus are")
print("segments", \
similarities[similarities == similarities.values.max()].idxmax(axis=0).idxmax(axis=1), \
"and", \
similarities[similarities == similarities.values.max()].idxmax(axis=0)[ similarities[similarities == similarities.values.max()].idxmax(axis=0).idxmax(axis=1) ].astype(int), \
".")
print("They have a similarity score of")
print(similarities.values.max())
Explanation: This should have created a nice html file which we can open here.
Similarity <a name="DocumentSimilarity"/>
Also, once we have a representation of our text as a vector - which we can imagine as an arrow that goes a certain distance in one direction, another distance in another direction and so on - we can compare the different arrows. Do they go the same distance in a particular direction? And maybe almost the same in another direction? This would mean that one of the terms of our vocabulary has the same weight in both texts. Comparing the weight of our many, many dimensions, we can develop a measure for the similarity of the texts.
(Probably, similarity in words that are occurring all over the place in the corpus should not count so much, and in fact it is attenuated by our arrows being made up of tf/idf weights.)
Comparing arrows means calculating with angles and technically, what we are computing is the "cosine similarity" of texts. Again, there is a library ready for us to use (but you can find some documentation here, here and here.)
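As a short reminder of what is being computed here: for two document vectors $a$ and $b$ the cosine similarity is
$$\mathrm{sim}(a, b) = \frac{\sum_i a_i b_i}{\sqrt{\sum_i a_i^2}\;\sqrt{\sum_i b_i^2}},$$
i.e. the dot product of the two tf/idf vectors divided by the product of their lengths, which is 1 for vectors pointing in the same direction and 0 for documents that share no weighted terms at all.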
End of explanation
bigspanishfile = 'Solorzano/Sections_II.2_PI.txt'
spInput = open(bigspanishfile, encoding='utf-8').readlines()
spAt = -1
spDest = None
for line in spInput:
if line[0:3] == '€€€':
if spDest:
spDest.close()
spAt += 1
spDest = open(outputBase + '.' + str(spAt) +
'.spanish.txt', encoding='utf-8', mode='w')
else:
spDest.write(line.strip())
spAt += 1
spDest.close()
print(str(spAt) + ' files written.')
spSuffix = '.spanish.txt'
spCorpus = []
for i in range(0, spAt):
try:
with open(path + '/' + filename + str(i) + spSuffix, encoding='utf-8') as f:
spCorpus.append(f.read())
f.close()
except IOError as exc:
if exc.errno != errno.EISDIR: # Do not fail if a directory is found, just ignore it.
raise # Propagate other kinds of IOError.
print(str(len(spCorpus)) + ' files read.')
# Labels
spLabel = []
i = 0
for spLine in spInput:
if spLine[0:3] == '€€€':
spLabel.append(spLine[6:].strip())
        i += 1
print(str(len(spLabel)) + ' labels found.')
# Tokens
spTokenised = []
for spSegment in spCorpus:
spTokenised.append(list(filter(None, (spWord.lower()
for spWord in re.split('\W+', spSegment)))))
# Lemmata
spLemma = {}
spTempdict = []
spWordfile_path = 'Solorzano/wordforms-es.txt'
spWordfile = open(spWordfile_path, encoding='utf-8')
for spLine in spWordfile.readlines():
spTempdict.append(tuple(spLine.split('>')))
spLemma = {k.strip(): v.strip() for k, v in spTempdict}
spWordfile.close()
print(str(len(spLemma)) + ' spanish wordforms known to the system.')
# Stopwords
spStopwords_path = 'Solorzano/stopwords-es.txt'
spStopwords = open(spStopwords_path, encoding='utf-8').read().splitlines()
print(str(len(spStopwords)) + ' spanish stopwords known to the system.')
print(' ')
print('Significant words in the spanish text:')
# tokenising and lemmatising function
def spOurLemmatiser(str_input):
spWordforms = re.split('\W+', str_input)
return [spLemma[spWordform].lower() if spWordform in spLemma else spWordform.lower() for spWordform in spWordforms ]
spTfidf_vectorizer = TfidfVectorizer(stop_words=spStopwords, use_idf=True, tokenizer=spOurLemmatiser, norm='l2')
spTfidf_matrix = spTfidf_vectorizer.fit_transform(spCorpus)
spMx_array = spTfidf_matrix.toarray()
spFn = spTfidf_vectorizer.get_feature_names()
pos = 1
for l in spMx_array:
print(' ')
print(' Most significant words in the ' + str(pos) + '. segment:')
print(pd.DataFrame.rename(pd.DataFrame.from_dict([(spFn[x], l[x]) for x in (l*-1).argsort()][:10]), columns={0:'lemma',1:'tf/idf value'}))
pos += 1
Explanation: <div class="alert alertbox alert-success">Of course, in every set of documents, we will always find two that are similar in the sense of them being more similar to each other than to the other ones. Whether or not this actually *means* anything in terms of content is still up to scholarly interpretation. But at least it means that a scholar can look at the two documents and when she determines that they are not so similar after all, then perhaps there is something interesting to say about similar vocabulary used for different purposes. Or the other way round: When the scholar knows that two passages are similar, but they have a low "similarity score", shouldn't that say something about the texts' rhetoric?</div>
Clustering <a name="DocumentClustering"/>
Clustering is a method to find ways of grouping data into subsets, so that these do have some cohesion. Sentences that are more similar to a particular "paradigm" sentence than to another one are grouped with the first one, others are grouped with their respective "paradigm" sentence. Of course, one of the challenges is finding sentences that work well as such paradigm sentences. So we have two (or even three) stages: Find paradigms, group data accordingly. (And learn how many groups there are.)<img src="http://practicalcryptography.com/media/miscellaneous/files/k_mean_send.gif"/>
I hope to be able to add a discussion of this subject soon. For now, here are nice tutorials for the process:
- http://brandonrose.org/clustering
- https://datasciencelab.wordpress.com/2013/12/12/clustering-with-k-means-in-python/
- https://de.dariah.eu/tatom/working_with_text.html
- http://jonathansoma.com/lede/foundations/classes/text%20processing/tf-idf/
Find good measure (word vectors, authorities cited, style, ...)
Find starting centroids
Find good K value
K-Means clustering
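To make the K-Means step above a little more concrete, here is a minimal, untuned sketch (only an illustration: it assumes the tfidf_matrix from the tf/idf section is still in memory and it simply guesses a cluster count of three):
from sklearn.cluster import KMeans
num_clusters = 3   # a pure guess; finding a good K is exactly one of the open steps listed above
km = KMeans(n_clusters=num_clusters, random_state=42)
km.fit(tfidf_matrix)
# which segment ends up in which cluster?
for seg_no, cluster_no in enumerate(km.labels_):
    print('Segment ' + str(seg_no) + ' belongs to cluster ' + str(cluster_no))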
Working with several languages
Let us prepare a second text, this time in Spanish, and see how they compare...
End of explanation
htmlfile2 = open(outputDir + '/Synopsis.html', encoding='utf-8', mode='w')
htmlfile2.write("""<!DOCTYPE html>
<html>
<head>
<title>Section Characteristics, parallel view</title>
<meta charset="utf-8"/>
</head>
<body>
<table>
""")
spA = [[]]
spA.clear()
spDicts = []
spW = []
for i in range(0, max(len(mx_array), len(spMx_array))):
if (i > len(mx_array) - 1):
        htmlfile2.write("""
<tr>
<td>
<head>Section {a}: n/a</head>
</td>""".format(a = str(i)))
else:
        htmlfile2.write("""
<tr>
<td>
<head>Section {a}: <b>{b}</b></head><br/>
<img src="./wc_{a}.png"/><br/>
<small><i>length: {c} words</i></small>
</td>""".format(a = str(i), b = label[i], c = len(tokenised[i])))
if (i > len(spMx_array) - 1):
        htmlfile2.write("""
<td>
<head>Section {a}: n/a</head>
</td>
</tr><tr><td> </td></tr>""".format(a = str(i)))
else:
spA.append([ int(round(x * 100000, 0)) for x in spMx_array[i]])
spDicts.append(dict(zip(spFn, spA[i])))
spW.append(WordCloud(background_color=None, mode="RGBA", \
max_font_size=40, min_font_size=10, \
max_words=60, relative_scaling=0.8).fit_words(spDicts[i]))
spW[i].to_file(outputDir + '/wc_' + str(i) + '_sp.png')
        htmlfile2.write("""
<td>
<head>Section {d}: <b>{e}</b></head><br/>
<img src="./wc_{d}_sp.png"/><br/>
<small><i>length: {f} words</i></small>
</td>
</tr>
<tr><td> </td></tr>""".format(d = str(i), e = spLabel[i], f = len(spTokenised[i])))
htmlfile2.write("""
</table>
</body>
</html>
""")
htmlfile2.close()
Explanation: <div class="alert alertbox alert-success">Our spanish wordfiles ([lemmata list](Solorzano/wordforms-es.txt) and [stopwords list](Solorzano/stopwords-es.txt)) are quite large and generous - they spare us some work of resolving quite a lot of abbreviations. However, since they are actually originating from a completely different project, it is very unlikely, that this goes without mistakes. Also some lemmata (like "de+el" in the eighth segment) are not really such. So we need to clean our wordlist and adapt it to the current text material urgently!</div>
Now imagine how we would bring the two documents together in a vector space. We would generate dimensions for all the words of our spanish vocabulary and would end up with a common space of roughly twice as many dimensions as before - and the latin work would be only in the first half of the dimensions and the spanish work only in the second half. The respective other half would be populated with only zeroes. So in effect, we would not really have a common space or something on the basis of which we could compare the two works. :-(
What might be an interesting perspective, however - since in this case, the second text is a translation of the first one - is a parallel, synoptic overview of both texts. So, let's at least add the second text to our html overview with the wordclouds:
End of explanation
import urllib
import json
from collections import defaultdict
segment_no = 6
spSegment_no = 8
print("Comparing words from segments " + str(segment_no) + " (latin) and " + str(spSegment_no) + " (spanish)...")
print(" ")
# Build List of most significant words for a segment
top10a = []
top10a = ([fn[x] for x in (mx_array[segment_no]*-1).argsort()][:12])
print("Most significant words in the latin text:")
print(top10a)
print(" ")
# Build lists of possible translations (the 15 most closely related ones)
top10a_possible_translations = defaultdict(list)
for word in top10a:
concepts_uri = "http://api.conceptnet.io/related/c/la/" + word + "?filter=/c/es"
response = urllib.request.urlopen(concepts_uri)
concepts = json.loads(response.read().decode(response.info().get_param('charset') or 'utf-8'))
for rel in concepts["related"][0:15]:
top10a_possible_translations[word].append(rel.get("@id").split('/')[-1])
print(" ")
print("For each of the latin words, here are possible translations:")
for word in top10a_possible_translations:
print(word + ":")
print(', '.join(trans for trans in top10a_possible_translations[word]))
print(" ")
print(" ")
# Build list of 10 most significant words in the second language
top10b = []
top10b = ([spFn[x] for x in (spMx_array[spSegment_no]*-1).argsort()][:12])
print("Most significant words in the spanish text:")
print(top10b)
# calculate number of overlapping terms
print(" ")
print(" ")
print("Overlaps:")
for word in top10a_possible_translations:
print(', '.join(trans for trans in top10a_possible_translations[word] if (trans in top10b or trans == word)))
# do a nifty ranking
Explanation: Again, the resulting file can be opened here.
Translations?
Maybe there is an approach to inter-lingual comparison after all. Here is the API documentation of conceptnet.io, which we can use to lookup synonyms, related terms and translations. Like with such a URI:
http://api.conceptnet.io/related/c/la/rex?filter=/c/es
We can get an identifier for a word and many possible translations for this word. So, we could - this remains to be tested in practice - look up our ten (or so) most frequent words in one language and collect all possible translations in the second language. Then we could compare these with what we actually find in the second work. How much overlap there is going to be and how univocal it is going to be remains to be seen, however...
For example, with a single segment, we could do something like this:
End of explanation |
5,492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of Liftingline Analysis
Step1: Creating wing defintion
Step2: Lift calculation using LiftAnalysis object
The LiftAnalysis object calculates base lift distributions (e.q. for aerodynamical twist, control surfaces and so on) and only superposes those, when calculations are invoked.
Step3: Lift calculation using calculate function
The calculate function does only calculate those distributions needed and does not cache results. Furhtermore it allows for calculation of moment coefficent regarding x axis (flight direction). This coefficient is defined as follows | Python Code:
# numpy and matplotlib imports
import numpy as np
from matplotlib import pyplot as plt
# import of wingstructure submodels
from wingstructure import data, aero
Explanation: Example of Liftingline Analysis
End of explanation
# create wing object
wing = data.Wing()
# add sections to wing
# leading edge position, chord length, twist
wing.append((0.0, 0.0, 0.0), 1.0, 0.0)
wing.append((0.05, 4.25, 0.0), 0.7, 0.0)
wing.append((0.1, 7.75, 0.0), 0.35, 0.0)
# define spoiler position
wing.add_controlsurface('BK', 1.5, 2.9, 0.5, 0.5, 'airbrake')
# define control-surfaces
wing.add_controlsurface('flap', 1, 2.8, 0.3, 0.3, 'aileron')
wing.add_controlsurface('flap2', 4.25, 7, 0.3, 0.2, 'aileron')
wing.plot()
Explanation: Creating wing defintion
End of explanation
liftana = aero.LiftAnalysis.generate(wing)
span_pos = liftana.ys
α, distribution, C_Dib, C_Mxb = liftana.calculate(C_L=0.8, all_results=True)
α_qr, distribution_q, C_Dia, C_Mxa = liftana.calculate(C_L=0.8,
controls={'flap2': [5, -5]}, all_results=True)
α_ab, distribution_ab, C_Di, C_Mx = liftana.calculate(C_L=0.8, airbrake=True,
all_results=True)
plt.figure(figsize=(8,5))
plt.plot(span_pos, distribution, label='clean')
plt.plot(span_pos, distribution_ab, '--', label='airbrakes')
plt.plot(span_pos, distribution_q, '-.', label='flaps')
plt.xlabel('wing span in m')
plt.ylabel('local lift coefficient $c_l$')
plt.title('Lift distribution for $C_L = 0,8$')
plt.grid()
plt.legend()
plt.savefig('Liftdistribution.png')
plt.savefig('Liftdistribution.pdf')
Explanation: Lift calculation using LiftAnalysis object
The LiftAnalysis object calculates base lift distributions (e.g. for aerodynamic twist, control surfaces and so on) and only superposes those when calculations are invoked.
End of explanation
aero.calculate(wing, target=1.0, controls={'flap':(5,-5)}, calc_cmx=True)
Explanation: Lift calculation using calculate function
The calculate function only calculates those distributions that are needed and does not cache results. Furthermore, it allows for calculation of the moment coefficient about the x axis (flight direction). This coefficient is defined as follows:
$$ C_\mathrm{Mx} = \frac{M_\mathrm{x}}{q S b}.$$
$q$ - dynamic pressure
$S$ - wing surface
$b$ - wing span
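To illustrate the definition with purely made-up numbers (the values below are assumptions for the sake of the example, not output of wingstructure):
M_x = 50.0    # rolling moment in N*m (assumed)
q = 200.0     # dynamic pressure in Pa (assumed)
S = 10.0      # wing surface in m^2 (assumed)
b = 15.0      # wing span in m (assumed)
print(M_x / (q * S * b))   # ~0.0017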
End of explanation |
5,493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework #1
This notebook contains the first homework for this class, and is due on Friday, October 23rd, 2016 at 11
Step1: Answers to questions (based on this model)
Step2: Note
Step3: Figure 1
Step5: Figure 2 | Python Code:
# write any code you need here!
# Create additional cells if you need them by using the
# 'Insert' menu at the top of the browser window.
import numpy as np
C_gas = [2.0, 3.0, 4.0, 5.0]
M_drive = [1.0e+5, 2.0e+5, 3.0e+5, 4.0e+5, 5.0e+5]
M_pg = [8,15,25,35,45,60]
V1g = 0.003785 # in m^3
M1g = 2.9 # in kg
gals_gas = []
cost_gas = []
for Md in M_drive:
for Mpg in M_pg:
gals_gas.append(Md/Mpg)
for Cg in C_gas:
cost_gas.append(Cg*Md/Mpg)
gals_gas = np.array(gals_gas)
cost_gas = np.array(cost_gas)
vol_gas = gals_gas * V1g
mass_gas = gals_gas * M1g
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(vol_gas,bins=20)
plt.title("Volume of gas")
plt.xlabel("Volume of gas [cubic meters]")
plt.ylabel("number")
plt.hist(mass_gas,bins=20)
plt.title("Mass of gas")
plt.xlabel("Mass [kg]")
plt.ylabel("number")
plt.hist(cost_gas,bins=20)
plt.title("Cost of gas")
plt.xlabel("Cost [dollars]")
plt.ylabel("number")
Explanation: Homework #1
This notebook contains the first homework for this class, and is due on Friday, October 23rd, 2016 at 11:59 p.m.. Please make sure to get started early, and come by the instructors' office hours if you have any questions. Office hours and locations can be found in the course syllabus. IMPORTANT: While it's fine if you talk to other people in class about this homework - and in fact we encourage it! - you are responsible for creating the solutions for this homework on your own, and each student must submit their own homework assignment.
Some links that you may find helpful:
Markdown tutorial
The matplotlib website
The matplotlib figure gallery (this is particularly helpful for getting ideas!)
The Pyplot tutorial
The Pandas tutorial
All CMSE 201 YouTube videos
Your name
Put your name here!
Section 1: Find me a model, any model
Look around online and find a model that you think is interesting. This model can be of anything that you are intrigued by, and as simple or complicated as you want. Write a paragraph or two describing the model, and identifying the components of the model that we talked about in class - the model's inputs, inner workings, outputs, and how one might decide if this is a good model or not. Make sure to cite your sources by providing links to the web pages that you looked at to create this description. You can either just paste the URL into the cell below, or do something prettier, like this: google. The syntax for that second one is [google](http://google.com).
ANSWER: There's not really a "solution" to this problem, per se. We want to see that students have found an actual model online (as opposed to something else that may have inputs and outputs and so on, but which is not a model). The model can be descriptive (i.e., carbon cycle), predictive (black hole mergers and gravitational waves), or some sort of statistical model (i.e., fantasy baseball-type modeling of player performance).
Section 2: Car conundrum
Part 1. Consider this: What volume of gasoline does a typical automobile (car, SUV, or pickup) use during its entire lifetime? How does the weight of the total fuel consumed compare to the weight of the car? How about the price of the fuel compared to the price of the car?
Come up with a simple order-of-magnitude approximation for each of those three questions, and in the cell below this one write a paragraph or two addressing each of the questions above. What are the factors you need to consider? What range of values might they have? In what way is your estimate limited? (Also, to add a twist: does it matter what type of car you choose?)
Note: if you use a Google search or two to figure out what range of values you might want to use, include links to the relevant web page(s). As described above, you can either just paste the URL, or do something prettier, like this: google. The syntax for that second one is [google](http://google.com).
ANSWER:
We want to figure out how much gas is consumed by a typical car during its lifetime, and then answer some questions relating to that. To figure this out, we need to know the volume of a gallon of gas ($V_{1 gal}$), the mass of a gallon of gas ($M_{1 gal}$), the cost of a gallon of gas ($C_{1 gal}$), the fuel efficiency of the car in question ($M_{PG}$), and the number of miles that a car is tyipcally driven ($M_{drive}$). In that case, the model would be:
$V_{gas} = V_{1 gal} * N_{gals} = V_{1 gal} * \frac{M_{drive}}{M_{PG}}$
and for the total mass of gas:
$M_{gas} = M_{1 gal} * N_{gals} = M_{1 gal} * \frac{M_{drive}}{M_{PG}}$
and the cost for all of the gas:
$C_{gas} = C_{1 gal} * N_{gals} = C_{1 gal} * \frac{M_{drive}}{M_{PG}}$
The various quantities we need are:
$V_{1 gal}$ - 3785 cubic cm (via google)
$M_{1 gal}$ - 2.9 kg (via Google)
$C_{1 gal}$ - between \$2 - \$5 depending on where you live and when it is
$M_{drive}$ - between $10^5$ and $5 \times 10^5$ miles
$M_{PG}$ - between $8-60$ miles per gallon depending on the type of car you drive
Mass of car: between 1000-4000 kg depending on the type of car you drive (cars with poorer fuel efficiency tend to be more massive)
This estimate is limited because the cost of gasoline varies over time and fuel efficiency also varies over time with weather and also with the age and maintenance of the car. It's also somewhat hard to tell how long a given car is driven, because cars are often resold.
Part 2. In the space below, write a Python program to model the answer to all three of those questions, and keep track of the answers in a numpy array. Plot your answers to both questions in some convenient way (probably not a scatter plot - look at the matplotlib gallery for inspiration!). Do the answers you get make sense to you?
End of explanation
# THIS CELL READS IN THE FLINT DATASET - DO NOT CHANGE ANYTHING!
# Make plots inline
%matplotlib inline
# Make inline plots vector graphics instead of raster graphics
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'svg')
# import modules for plotting and data analysis
import matplotlib.pyplot # as plt
import pandas
import numpy as np
import functools
def add_bottle_id_column(data, key_name):
data['bottleID'] = np.repeat(key_name, data.shape[0])
return data
'''
Loads the flint water quality dataset from the spreadsheet.
This manipulation is necessary because (1) the data is in a spreadsheet
rather than a CSV file or something else, and (2) the data is spread out
across multiple sheets in the spreadsheet.
'''
def load_flint_water_data():
flint_water_data = pandas.read_excel(
# NOTE: uncomment the following line and comment out the one after that if
# you have problems getting this to run on a Windows machine.
        #io = "https://github.com/ComputationalModeling/flint-water-data/raw/f6093bba145b1745b68bac2964b341fa30f3a08a/Flint%20Lead%20Kits%20ICP%20Data.xlsx",
io = "Flint_Lead_Kits_ICP_Data.xlsx",
sheetname = [
"Sub_B1-8.15",
"Sub_B2-8.15",
"Sub_B3-8.15",
"Sub_B1-3.16",
"Sub_B2-3.16",
"Sub_B3-3.16",
"Sub_B1-7.16",
"Sub_B2-7.16",
"Sub_B3-7.16"],
header = 0,
skiprows = 3,
names = [
"Sample",
"208Pb",
"",
"23Na",
"25Mg",
"27Al",
"28Si",
"31P",
"PO4",
"34S",
"35Cl",
"39K",
"43Ca",
"47Ti",
"51V",
"52Cr",
"54Fe",
"55Mn",
"59Co",
"60Ni",
"65Cu",
"66Zn",
"75As",
"78Se",
"88Sr",
"95Mo",
"107Ag",
"111Cd",
"112Sn",
"137Ba",
"238U"
]
)
data_with_id = [
add_bottle_id_column(value, key)
for key, value
in flint_water_data.items()]
# collapse dataframes into one long dataframe
flint_water_data = functools.reduce(lambda x,y: x.append(y), data_with_id)
return flint_water_data
def add_date_and_bottle_number(flint_data):
flint_data['bottle_number'] = flint_data['bottleID'].apply(lambda x: x.split('-')[0])
flint_data['date_collected'] = flint_data['bottleID'].apply(lambda x: x.split('-')[1])
return(flint_data)
bottle_map = {
'Sub_B1': 'bottle1',
'Sub_B2': 'bottle2',
'Sub_B3': 'bottle3'
}
date_map = {
'8.15': '2015-08-01',
'3.16': '2016-03-01',
'7.16': '2016-07-01'
}
flint_data = load_flint_water_data()
flint_data = add_date_and_bottle_number(flint_data)
flint_data = flint_data.replace(
{'bottle_number': bottle_map,
'date_collected': date_map })
flint_data['date_collected'] = pandas.DatetimeIndex(flint_data['date_collected'])
flint_data = flint_data.drop('bottleID', axis = 1)
# the end result is that you have a data frame called "flint_data"
flint_data.columns
# break data into individual days
aug15 = flint_data[flint_data['date_collected'] == '2015-08-01']
mar16 = flint_data[flint_data['date_collected'] == '2016-03-01']
jul16 = flint_data[flint_data['date_collected'] == '2016-07-01']
# break data into individual bottles (for each day)
# note: data is in ppb, and we want to think about it in mg/L.
# 1 part per million = 1 mg/L, so 1000 ppb = 1 mg/L.
# The EPA action limit is 0.015 mg/L, or 15 ppb.
aug15_b1 = aug15[aug15['bottle_number']=='bottle1']
aug15_b2 = aug15[aug15['bottle_number']=='bottle2']
aug15_b3 = aug15[aug15['bottle_number']=='bottle3']
mar16_b1 = mar16[mar16['bottle_number']=='bottle1']
mar16_b2 = mar16[mar16['bottle_number']=='bottle2']
mar16_b3 = mar16[mar16['bottle_number']=='bottle3']
jul16_b1 = jul16[jul16['bottle_number']=='bottle1']
jul16_b2 = jul16[jul16['bottle_number']=='bottle2']
jul16_b3 = jul16[jul16['bottle_number']=='bottle3']
Explanation: Answers to questions (based on this model):
The volume of gas ranges from a few cubic meters to 200+, with a median value around 40-50 cubic meters. This is typically much more than the volome of the car itself.
The mass of gas ranges from a few thousand kg to 150,000+ kg, with a median value around 25,000 kg. This is typically much more than the mass of the car.
The cost of gas ranges from a few tens of thousands of dollars to much more than 300,000, with a median of somewhere in the 30-40,000 range. The price of the fuel is comparable to the price of the car, on average.
It does matter what type of car I choose - smaller, more fuel efficient cars tend to have better mileage, which radically affects the other values.
Section 3: Get the Lead Out (continued from your in-class assignment)
You're going to make a Jupyter Notebook. We'll feature our class's work on the CMSE Homepage
This is real data. And, you're some of the first people with the chance to analyze it and make your results public.
We want you to create a new Jupyter notebook to answer this question (which will be uploaded separately, as a second document to this notebook) that we can post publicly, to the world, on the CMSE Department Homepage.
Your Notebook Presentation Should Answer These Questions:
Your presentation should try to answer the following questions:
How bad was the lead level situation in August, 2015 when the first lead levels were taken?
How has the lead situation changed since August, 2015?
Is it getting better? If so, show your readers and convince them
Is it getting worse? If so, show your readers and convince them
How you answer the questions is up to you. But, remember to:
State your positions clearly.
Justify your positions with graphics, calculations, and written analysis to explain why you think what you think.
Consider counterarguments. Could someone try to use the same data to arrive at a different conclusion than yours? If they could, explain that conclusion and (if appropriate) why you think that position is flawed.
Do your best. Write as clearly as you can, use what you know, and don't confuse sizzle with quality. You don't need fancy pants visual and graphical animations to be persuasive. The humble scatterplot and its cousins the bar chart and histogram are extraordinarily powerful when you craft them carefully. And all of them are built-in to pandas.
Lastly, This is real data and you really do have a chance to be featured on the CMSE webpage. So:
The conclusions you draw matter. These are Flint resident's actual living conditions.
Any numerical conclusions you draw should be backed up by your code. If you say the average lead level was below EPA limits, you'll need to be able to back up that claim in your notebook either with graphical evidence or numerical evidence (calculations). So, make sure to show your (computational) work!
Your analysis is a check on the scientific community. The more eyes we have looking at this data and offering reproducible analyses (which Jupyter Notebooks are), the more confidence we can have in the data.
You may find other results online, but you still have to do your own analysis to decide whether you agree with their results.
End of explanation
aug15_b1['208Pb'].describe()
# list containing all 3 days
days = [1,2,3]
'''
What I do here is a very pithy way of describing things; what I'm doing is counting
the number of lead samples above 15 ppb for a given bottle on a given day, and then
dividing by the total number of lead samples for a given bottle on a given day.
'''
frac_aug15_b1 = aug15_b1['208Pb'][aug15_b1['208Pb'] > 15.0].count()/aug15_b1['208Pb'].count()
frac_aug15_b2 = aug15_b2['208Pb'][aug15_b2['208Pb'] > 15.0].count()/aug15_b2['208Pb'].count()
frac_aug15_b3 = aug15_b3['208Pb'][aug15_b3['208Pb'] > 15.0].count()/aug15_b3['208Pb'].count()
frac_mar16_b1 = mar16_b1['208Pb'][mar16_b1['208Pb'] > 15.0].count()/mar16_b1['208Pb'].count()
frac_mar16_b2 = mar16_b2['208Pb'][mar16_b2['208Pb'] > 15.0].count()/mar16_b2['208Pb'].count()
frac_mar16_b3 = mar16_b3['208Pb'][mar16_b3['208Pb'] > 15.0].count()/mar16_b3['208Pb'].count()
frac_jul16_b1 = jul16_b1['208Pb'][jul16_b1['208Pb'] > 15.0].count()/jul16_b1['208Pb'].count()
frac_jul16_b2 = jul16_b2['208Pb'][jul16_b2['208Pb'] > 15.0].count()/jul16_b2['208Pb'].count()
frac_jul16_b3 = jul16_b3['208Pb'][jul16_b3['208Pb'] > 15.0].count()/jul16_b3['208Pb'].count()
# empty lists for bottle 1, 2, 3 evolution:
b1_evol = [frac_aug15_b1,frac_mar16_b1,frac_jul16_b1]
b2_evol = [frac_aug15_b2,frac_mar16_b2,frac_jul16_b2]
b3_evol = [frac_aug15_b3,frac_mar16_b3,frac_jul16_b3]
b1, = plt.plot(days,b1_evol,'bo--')
b2, = plt.plot(days,b2_evol,'ro--')
b3, = plt.plot(days,b3_evol,'go--')
key, = plt.plot([0.9,3.1],[0.1,0.1],'k-')
#plt.xlim(0.5,3.5)
plt.ylim(0.0,0.2)
plt.xlabel("Sample date")
plt.ylabel("Fraction of samples above limit")
plt.title("Fraction of samples above action limit")
plt.legend([b1,b2,b3,key],['Bottle 1', 'Bottle 2', 'Bottle 3', 'EPA cutoff'])
plt.xticks([1,2,3],['2015-08-01','2016-03-01','2016-07-01'])
plt.margins(0.3)
#plt.legend()
Explanation: Note: The raw data is in ppb, and we want to think about it in mg/L.
1 part per million = 1 mg/L, so 1000 ppb = 1 mg/L.
The EPA action limit is 0.015 mg/L, or 15 ppb.
Our strategy is to figure out what fraction of the houses are above the action limit for each day and for each bottle, and make a line plot of that showing the temporal evolution of the fraction of samples above the action limit.
End of explanation
aug15_b1_000 = aug15_b1['208Pb'].quantile(0.0)/1000
aug15_b1_025 = aug15_b1['208Pb'].quantile(0.25)/1000
aug15_b1_050 = aug15_b1['208Pb'].quantile(0.5)/1000
aug15_b1_075 = aug15_b1['208Pb'].quantile(0.75)/1000
aug15_b1_090 = aug15_b1['208Pb'].quantile(0.90)/1000
aug15_b1_095 = aug15_b1['208Pb'].quantile(0.95)/1000
aug15_b1_100 = aug15_b1['208Pb'].quantile(1.0)/1000
mar16_b1_000 = mar16_b1['208Pb'].quantile(0.0)/1000
mar16_b1_025 = mar16_b1['208Pb'].quantile(0.25)/1000
mar16_b1_050 = mar16_b1['208Pb'].quantile(0.5)/1000
mar16_b1_075 = mar16_b1['208Pb'].quantile(0.75)/1000
mar16_b1_090 = mar16_b1['208Pb'].quantile(0.90)/1000
mar16_b1_095 = mar16_b1['208Pb'].quantile(0.95)/1000
mar16_b1_100 = mar16_b1['208Pb'].quantile(1.0)/1000
jul16_b1_000 = jul16_b1['208Pb'].quantile(0.0)/1000
jul16_b1_025 = jul16_b1['208Pb'].quantile(0.25)/1000
jul16_b1_050 = jul16_b1['208Pb'].quantile(0.5)/1000
jul16_b1_075 = jul16_b1['208Pb'].quantile(0.75)/1000
jul16_b1_090 = jul16_b1['208Pb'].quantile(0.90)/1000
jul16_b1_095 = jul16_b1['208Pb'].quantile(0.95)/1000
jul16_b1_100 = jul16_b1['208Pb'].quantile(1.0)/1000
b1_000 = [aug15_b1_000, mar16_b1_000, jul16_b1_000]
b1_025 = [aug15_b1_025, mar16_b1_025, jul16_b1_025]
b1_050 = [aug15_b1_050, mar16_b1_050, jul16_b1_050]
b1_075 = [aug15_b1_075, mar16_b1_075, jul16_b1_075]
b1_090 = [aug15_b1_090, mar16_b1_090, jul16_b1_090]
b1_095 = [aug15_b1_095, mar16_b1_095, jul16_b1_095]
b1_100 = [aug15_b1_100, mar16_b1_100, jul16_b1_100]
b000, = plt.plot(days,b1_000,linewidth=2)
b025, = plt.plot(days,b1_025,linewidth=2)
b050, = plt.plot(days,b1_050,linewidth=2)
b075, = plt.plot(days,b1_075,linewidth=2)
b090, = plt.plot(days,b1_090,linewidth=2)
b095, = plt.plot(days,b1_095,linewidth=2)
b100, = plt.plot(days,b1_100,linewidth=2)
key, = plt.plot([0.9,3.1],[0.015,0.015],'k--')
plt.yscale('log')
plt.xlim(0.5,4.0)
plt.ylim(1.0e-4,3.0)
plt.xlabel("Sample date")
plt.ylabel("Lead content [mg/L]")
plt.title("Bottle 1 lead content (various data percentiles)")
plt.legend([b000,b025,b050,b075,b090,b095,b100,key],['min','25%','50%','75%','90%','95%','max','action'],loc='upper right')
plt.xticks([1,2,3],['2015-08-01','2016-03-01','2016-07-01'])
plt.margins(0.3)
Explanation: Figure 1: (above). This figure shows the fraction of samples above the EPA's "action limit" of 0.015 mg/L (15 ppb) for each of the three days of sampling, for all three bottles sampled from each house. Bottle 1 typically has the highest lead value, bottle 2 the second-highest, and bottle 3 the third highest, which is what one would expect. On the first two dates, Bottle 1 has more than 10% of samples above the action limit, and on the third date it is slightly below the 10% cutoff. Overall, this indicates that the problem is getting slightly better over time, but is still worrisome.
End of explanation
from IPython.display import HTML
HTML("""
<iframe
src="https://docs.google.com/forms/d/e/1FAIpQLSd0yvuDR2XP5QhWHJZTZHgsSi84QAZU7x-C9NEA40y6NnArAA/viewform?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
""")
Explanation: Figure 2: (above). This figure shows the lead content in water samples from Bottle 1 at various percentiles in each of the three dates, which effectively shows how the distributions change over time. The various lines show where the minimum sample value is, 25th percentile, etc., all the way up to the maximum sample value. This shows that the maximum lead measurements for Bottle 1 are always really high, but that the situation overall seems to be improving (as indicated by the values for houses between the 50th and 95th percentile generally decreasing over time). Close to 10% of the houses (probably around 8%) are still over the action limit as of July 2016.
Summary of results: Figures 1 and 2 (above) show that things were bad in August 2015 - around 16% of houses had lead limits above the EPA max limit of 0.015 mg/L for the first sample taken from each house, and quite a few of them vastly exceeded the EPA action limit (10% were at least 2x that limit, and 5% were 4x that limit). Over time the situation has gotten better - slightly less than 10% of the samples were over the limit as of July 2016 for the first sample taken from each house - but quite a few of the houses are still substantially over the EPA limit.
One could take the two figures shown to say that the problem is getting better and that nothing further should be done, because 10% of the houses are no longer above the EPA's action limit for lead. This is a reasonable argument, but doesn't consider the whole story - some houses still have extraordinarily high lead levels, and almost 10% of the houses DO have lead levels above the EPA action limit.
Section 4: Feedback (required!)
End of explanation |
5,494 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Having difficulty generating a tridiagonal matrix from numpy arrays. I managed to replicate the results given here, but I'm not able to apply these techniques to my problem. I may also be misunderstanding the application of scipy.sparse.diag. | Problem:
from scipy import sparse
import numpy as np
# each row of this array will be placed on one diagonal of the result
matrix = np.array([[3.5, 13. , 28.5, 50. , 77.5],
                   [-5. , -23. , -53. , -95. , -149. ],
                   [2.5, 11. , 25.5, 46. , 72.5]])
# spdiags maps the rows onto the diagonals with offsets (1, 0, -1); the transpose
# flips the upper/lower diagonals, and .A converts the sparse result to a dense ndarray
result = sparse.spdiags(matrix, (1, 0, -1), 5, 5).T.A
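A quick optional sanity check: spdiags clips each row of matrix at the edges of the 5x5 result, and the final .T swaps which rows end up on the upper and lower diagonal.
print(result.shape)   # (5, 5)
print(result)         # tridiagonal: only the main, upper and lower diagonals are populated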
5,495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: 3
Step2: 4
Step3: 5
Step4: 6 | Python Code:
# %sh
# wget https://raw.githubusercontent.com/fivethirtyeight/data/master/avengers/avengers.csv
# ls -l
Explanation: https://www.dataquest.io/mission/114/challenge-cleaning-data/
2: Life And Death Of Avengers
The Avengers are a well-known and widely loved team of superheroes in the Marvel universe that were introduced in the 1960's in the original comic book series. They've since become popularized again through the recent Disney movies as part of the new Marvel Cinematic Universe.
The team at FiveThirtyEight wanted to dissect the deaths of the Avengers in the comics over the years. The writers were known to kill off and revive many of the superheroes so they were curious to know what data they could grab from the Marvel Wikia site, a fan-driven community site, to explore further. To learn how they collected their data, which is available on their Github repo, read the writeup they published on their site.
End of explanation
import pandas as pd
avengers = pd.read_csv("avengers.csv")
avengers.head(5)
Explanation: 3: Exploring The Data
While the FiveThirtyEight team has done a wonderful job acquiring this data, the data still has some inconsistencies. Your mission, if you choose to accept it, is to clean up their dataset so it can be more useful for analysis in Pandas. Read our dataset into Pandas as a DataFrame and preview the first 5 rows to get a better sense of our data.
End of explanation
true_avengers = avengers[avengers['Year'] >= 1960]
print('All: ' + str(len(avengers.index)))
print('After 1960: ' + str(len(true_avengers.index)))
Explanation: 4: Filter Out The Bad Years
Since the data was collected from a community site, where most of the contributions came from individual users, there's room for errors to surface in the dataset. If you plot a histogram of the values in the Year column, which describes the year each Avenger was introduced, you'll immediately notice some oddities. There are quite a few Avengers who look like they were introduced in 1900, which we know is a little fishy. The Avengers weren't introduced in the comic series until the 1960's!
This is obviously a mistake in the data and you should remove all Avengers before 1960 from the DataFrame.
End of explanation
columns = ['Death1', 'Death2', 'Death3', 'Death4', 'Death5']
def death_count(row):
death = 0
for column in columns:
if row[column] == 'YES':
death += 1
return death
true_avengers['Deaths'] = true_avengers[columns].apply(death_count, axis=1)
true_avengers['Deaths'].head()
# true_avengers[columns].head()
Explanation: 5: Consolidating Deaths
We are interested in the number of total deaths each character experienced and we'd like a field containing that distilled information. Right now, there are 5 fields (Death1 to Death5) that each contain a binary value representing if a superhero experienced that death or not. For example, a superhero can experience Death1, then Death2, etc. until they were no longer brought back to life by the writers.
We'd like to coalesce that information into just one field so we can do numerical analysis more easily.
End of explanation
joined_accuracy_count = len(true_avengers[true_avengers['Year'] + true_avengers['Years since joining'] == 2015])
print('Total number of rows: ' + str(len(true_avengers.index)))
print('Accurate rows: ' + str(joined_accuracy_count))
Explanation: 6: Years Since Joining
For the final task, we want to know if the Years since joining field accurately reflects the Year column. If an Avenger was introduced in Year 1960, is the Years since joining value for that Avenger 55?
End of explanation |
5,496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This example will setup the required electronic structures for usage in TBtrans.
You will also learn the importance of perform $k$-point convergence tests for systems using TBtrans.
We will continue with the graphene nearest neighbour tight-binding model and perform simple transport calculations using TBtrans.
Our example will again concentrate on graphene
Step1: Note that the above call of the graphene lattice is different from TB 1. In this example we will create an orthogonal graphene lattice, i.e. the lattice vectors are orthogonal to each other, unlike the minimal graphene lattice.
The minimal orthogonal graphene lattice consists of 4 Carbon atoms.
Assert that we have 16 non zero elements
Step2: The Hamiltonian we have thus far created will be our electrode. Lets write it to a TBtrans readable file
Step3: Now a file ELEC.nc file exists in the folder and it contains all the information (and more) that TBtrans requires to construct the self-energies for the electrode.
All that is required is now the device region.
An important aspect of any transport setup is that the electrodes must not have matrix elements crossing the device region. I.e. there must not be matrix elements between any of the electrodes. This restriction is easily accommodated in tight-binding setups, but for DFT systems it is less transparent.
In this tight-binding setup it simlpy means a repetition of the electrode 3 times; 1) left electrode, 2) scattering region, 3) right electrode.
1. Creating the device, Geometry; Hamiltonian; Hamiltonian.construct
Here we tile the orthogonal graphene lattice 3 times along the second lattice vector (Python is 0-based) and subsequently construct it using the same parameters.
This method of specifying all matrix elements is the most usable and easy scheme that is available in sisl.
Step4: 2. Creating the device, Hamiltonian $\to$ Hamiltonian
The Geometry.tile function is an explicit method to create bigger lattices from a smaller reference latice. Howewer, the tile routine is also available to the Hamiltonian object. Not only is it much easier to use, it also presents these advantages
Step5: For more information you may execute the following lines to view the
Step6: Sometimes it may be convenient to plot the entries of the matrix to assert the symmetry and structure. The second line asserts that it is indeed a Hermitian matrix
Step7: First run of TBtrans
You should first run tbtrans like this (the RUN.fdf file is already prepared with enough input options for a successfull run)
Step8: After calculating the transport properties of the transport problem you may also use sisl to interact with the TBtrans output (in the *.TBT.nc file)
Step9: There are several function calls present in the above code | Python Code:
graphene = sisl.geom.graphene(orthogonal=True)
H = sisl.Hamiltonian(graphene)
H.construct([[0.1, 1.43], [0., -2.7]])
Explanation: This example will setup the required electronic structures for usage in TBtrans.
You will also learn the importance of perform $k$-point convergence tests for systems using TBtrans.
We will continue with the graphene nearest neighbour tight-binding model and perform simple transport calculations using TBtrans.
Our example will again concentrate on graphene:
End of explanation
print(H)
Explanation: Note that the above call of the graphene lattice is different from TB 1. In this example we will create an orthogonal graphene lattice, i.e. the lattice vectors are orthogonal to each other, unlike the minimal graphene lattice.
The minimal orthogonal graphene lattice consists of 4 Carbon atoms.
Assert that we have 16 non zero elements:
End of explanation
H.write('ELEC.nc')
Explanation: The Hamiltonian we have thus far created will be our electrode. Lets write it to a TBtrans readable file:
End of explanation
device = graphene.tile(3, axis=1)
H_device = sisl.Hamiltonian(device)
H_device.construct([[0.1, 1.43], [0, -2.7]])
print(H_device)
Explanation: Now a file ELEC.nc file exists in the folder and it contains all the information (and more) that TBtrans requires to construct the self-energies for the electrode.
All that is required is now the device region.
An important aspect of any transport setup is that the electrodes must not have matrix elements crossing the device region. I.e. there must not be matrix elements between any of the electrodes. This restriction is easily accommodated in tight-binding setups, but for DFT systems it is less transparent.
In this tight-binding setup it simlpy means a repetition of the electrode 3 times; 1) left electrode, 2) scattering region, 3) right electrode.
1. Creating the device, Geometry; Hamiltonian; Hamiltonian.construct
Here we tile the orthogonal graphene lattice 3 times along the second lattice vector (Python is 0-based) and subsequently construct it using the same parameters.
This method of specifying all matrix elements is the most usable and easy scheme that is available in sisl.
End of explanation
H_device = H.tile(3, axis=1)
print(H_device)
Explanation: 2. Creating the device, Hamiltonian $\to$ Hamiltonian
The Geometry.tile function is an explicit method to create bigger lattices from a smaller reference lattice. However, the tile routine is also available to the Hamiltonian object. Not only is it much easier to use, it also presents these advantages:
It guarentees that the matrix elements are the same as the reference Hamiltonian, i.e. you need not specify the parameters to construct twice,
It is much faster when creating $>500,000$ samples from smaller reference systems,
It also requires less code which increases readability and is less prone to errors.
End of explanation
H_device.write('DEVICE.nc')
Explanation: For more information you may execute the following lines to view the :
help(Geometry.tile)
help(Hamiltonian.tile)
Now we have created the device electronic structure. The final step is to store it in a TBtrans readable format:
End of explanation
plt.spy(H_device.Hk());
print('Hermitian deviation: ',np.amax(np.abs(H.Hk() - H.Hk().T.conj())))
Explanation: Sometimes it may be convenient to plot the entries of the matrix to assert the symmetry and structure. The second line asserts that it is indeed a Hermitian matrix:
End of explanation
tbt = sisl.get_sile('siesta.TBT.nc')
Explanation: First run of TBtrans
You should first run tbtrans like this (the RUN.fdf file is already prepared with enough input options for a successfull run):
tbtrans RUN.fdf
After TBtrans is complete a number of files will be present:
siesta.TBT.nc
The main data-file of TBtrans, this contains all calculated quantities, and everything that can be orbital resolved is orbital resolved, such as density of states.
siesta.TBT.CC
The energy points at which TBtrans has calculated physical quantities.
siesta.TBT.KP
Used $k$-points and their corresponding weights for integrating the Brillouin zone.
siesta.TBT.TRANS_Left-Right
The $k$-resolved transmission from Left to the Right electrode. This is a consecutive list of transmissions for all energy points. Each $k$-point transmission is separated with a description of the $k$-point and its weight.
siesta.TBT.AVTRANS_Left-Right
The $k$-averaged transmission from Left to the Right electrode. This is the $k$-averaged equivalent of siesta.TBT.TRANS_Left-Right.
End of explanation
plt.plot(tbt.E, tbt.transmission(), label='k-averaged');
plt.plot(tbt.E, tbt.transmission(kavg=tbt.kindex([0, 0, 0])), label=r'$\Gamma$');
plt.xlabel('Energy [eV]'); plt.ylabel('Transmission'); plt.ylim([0, None]); plt.legend();
Explanation: After calculating the transport properties of the transport problem you may also use sisl to interact with the TBtrans output (in the *.TBT.nc file):
End of explanation
plt.plot(tbt.E, tbt.DOS(), label='DOS');
plt.plot(tbt.E, tbt.ADOS(), label='ADOS');
plt.xlabel('Energy [eV]'); plt.ylabel('DOS [1/eV]'); plt.ylim([0, None]); plt.legend();
Explanation: There are several function calls present in the above code:
get_sile
is a sisl function to read and parse any file that is enabled through sisl. You can check the documentation to find the available files. Here we use it to make tbt be an object with all the information that is present in the siesta.TBT.nc file.
tbt.transmission
is a function that retrieves the transmission function from the file. It has three optional arguments, the first being the origin electrode and the second the absorbing electrode; they default to the first and second electrode.
tbt.transmission
takes a third and optional argument, if True, or not specified, it returns the k averaged transmission, else one may provide an array of integers that represent the internal k-points. I.e. the code above searches for the k-index of the $\Gamma$ point, and requests only that sampled transmission function.
You will see a very crude step-like transmission function.
Why is it not smooth, V-shaped (as it should be)? Can you change something to obtain a smooth transmission function?
Why is the $\Gamma$ transmission a fixed non zero value? Should it be zero somewhere?
HINT: checkout the energies used for evaluating the transmission function.
The siesta.TBT.nc file also contains two different density-of-states quantities. How do they differ?
End of explanation |
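To get started on these questions, here is a small inspection sketch (an addition, not part of the tutorial) that only uses calls already shown above:
# Inspect the energy grid used by TBtrans and compare the two density-of-states quantities
E = tbt.E
print('Number of energy points:', len(E))
print('Energy window [eV]:', E[0], 'to', E[-1])
plt.plot(tbt.E, tbt.DOS() - tbt.ADOS());
plt.xlabel('Energy [eV]'); plt.ylabel('DOS - ADOS [1/eV]');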
5,497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP example
Step1: Two-level system
Step2: Harmonic oscillator
Step3: Zero temperature
Step4: Finite temperature
Step5: Storing states instead of expectation values
Step6: Atom-Cavity
Step7: Weak coupling
Step8: In the weak coupling regime there is no significant difference between the Lindblad master equation and the Bloch-Redfield master equation.
Strong coupling
Step9: In the strong coupling regime there are some corrections to the Lindblad master equation, due to the fact that the system eigenstates are hybridized states with both atomic and cavity contributions.
Versions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
Explanation: QuTiP example: Bloch-Redfield Master Equation
End of explanation
delta = 0.0 * 2 * np.pi
epsilon = 0.5 * 2 * np.pi
gamma = 0.25
times = np.linspace(0, 10, 100)
H = delta/2 * sigmax() + epsilon/2 * sigmaz()
H
psi0 = (2 * basis(2, 0) + basis(2, 1)).unit()
c_ops = [np.sqrt(gamma) * sigmam()]
a_ops = [sigmax()]
e_ops = [sigmax(), sigmay(), sigmaz()]
result_me = mesolve(H, psi0, times, c_ops, e_ops)
result_brme = brmesolve(H, psi0, times, a_ops, e_ops, spectra_cb=[lambda w : gamma * (w > 0)])
plot_expectation_values([result_me, result_brme]);
b = Bloch()
b.add_points(result_me.expect, meth='l')
b.add_points(result_brme.expect, meth='l')
b.make_sphere()
Explanation: Two-level system
End of explanation
N = 10
w0 = 1.0 * 2 * np.pi
g = 0.05 * w0
kappa = 0.15
times = np.linspace(0, 25, 1000)
a = destroy(N)
H = w0 * a.dag() * a + g * (a + a.dag())
# start in a superposition state
psi0 = ket2dm((basis(N, 4) + basis(N, 2) + basis(N,0)).unit())
c_ops = [np.sqrt(kappa) * a]
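# For brmesolve: the bath coupling operator paired with its noise-power spectrum (flat for w > 0, zero for w <= 0)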
a_ops = [[a + a.dag(),lambda w : kappa * (w > 0)]]
e_ops = [a.dag() * a, a + a.dag()]
Explanation: Harmonic oscillator
End of explanation
result_me = mesolve(H, psi0, times, c_ops, e_ops)
result_brme = brmesolve(H, psi0, times, a_ops, e_ops)
plot_expectation_values([result_me, result_brme]);
Explanation: Zero temperature
End of explanation
times = np.linspace(0, 25, 250)
n_th = 1.5
c_ops = [np.sqrt(kappa * (n_th + 1)) * a, np.sqrt(kappa * n_th) * a.dag()]
result_me = mesolve(H, psi0, times, c_ops, e_ops)
w_th = w0/np.log(1 + 1/n_th)
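# Bath noise-power spectrum below; the w < 0 branch enforces detailed balance, S(-w) = exp(-w/w_th) * S(w)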
def S_w(w):
if w >= 0:
return (n_th + 1) * kappa
else:
return (n_th + 1) * kappa * np.exp(w / w_th)
a_ops = [[a + a.dag(),S_w]]
result_brme = brmesolve(H, psi0, times, a_ops, e_ops)
plot_expectation_values([result_me, result_brme]);
Explanation: Finite temperature
End of explanation
result_me = mesolve(H, psi0, times, c_ops, [])
result_brme = brmesolve(H, psi0, times, a_ops, [])
n_me = expect(a.dag() * a, result_me.states)
n_brme = expect(a.dag() * a, result_brme.states)
fig, ax = plt.subplots()
ax.plot(times, n_me, label='me')
ax.plot(times, n_brme, label='brme')
ax.legend()
ax.set_xlabel("t");
Explanation: Storing states instead of expectation values
End of explanation
N = 10
a = tensor(destroy(N), identity(2))
sm = tensor(identity(N), destroy(2))
psi0 = ket2dm(tensor(basis(N, 1), basis(2, 0)))
e_ops = [a.dag() * a, sm.dag() * sm]
Explanation: Atom-Cavity
End of explanation
w0 = 1.0 * 2 * np.pi
g = 0.05 * 2 * np.pi
kappa = 0.05
times = np.linspace(0, 5 * 2 * np.pi / g, 1000)
a_ops = [[(a + a.dag()),lambda w : kappa*(w > 0)]]
c_ops = [np.sqrt(kappa) * a]
H = w0 * a.dag() * a + w0 * sm.dag() * sm + g * (a + a.dag()) * (sm + sm.dag())
result_me = mesolve(H, psi0, times, c_ops, e_ops)
result_brme = brmesolve(H, psi0, times, a_ops, e_ops)
plot_expectation_values([result_me, result_brme]);
Explanation: Weak coupling
End of explanation
w0 = 1.0 * 2 * np.pi
g = 0.75 * 2 * np.pi
kappa = 0.05
times = np.linspace(0, 5 * 2 * np.pi / g, 1000)
c_ops = [np.sqrt(kappa) * a]
H = w0 * a.dag() * a + w0 * sm.dag() * sm + g * (a + a.dag()) * (sm + sm.dag())
result_me = mesolve(H, psi0, times, c_ops, e_ops)
result_brme = brmesolve(H, psi0, times, a_ops, e_ops)
plot_expectation_values([result_me, result_brme]);
Explanation: In the weak coupling regime there is no significant difference between the Lindblad master equation and the Bloch-Redfield master equation.
Strong coupling
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: In the strong coupling regime there are some corrections to the Lindblad master equation, due to the fact that the system eigenstates are hybridized states with both atomic and cavity contributions.
Versions
End of explanation |
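As a follow-up, here is a minimal sketch (an addition, not part of the original example, reusing the strong-coupling H and N defined above) that quantifies this hybridization by projecting the lowest dressed eigenstates onto the bare single-excitation states:
# How much atom / cavity character do the lowest dressed states carry?
evals, ekets = H.eigenstates()
bare_cavity = tensor(basis(N, 1), basis(2, 0))  # one photon, atom in its ground state
bare_atom = tensor(basis(N, 0), basis(2, 1))    # zero photons, atom excited
for k in range(1, 4):
    p_cav = abs(bare_cavity.overlap(ekets[k]))**2
    p_atom = abs(bare_atom.overlap(ekets[k]))**2
    print("eigenstate {}: cavity weight {:.3f}, atom weight {:.3f}".format(k, p_cav, p_atom))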
5,498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https
Step1: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
Step2: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code)
Step3: Below I'm running images through the VGG network in batches.
Exercise
Step4: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
Step5: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise
Step6: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so
Step7: If you did it right, you should see these sizes for the training sets
Step9: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
Step10: Training
Here, we'll train the network.
Exercise
Step11: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
Step12: Below, feel free to choose images and see how the trained classifier predicts the flowers in them. | Python Code:
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
vgg_dir = 'tensorflow_vgg/'
# Make sure vgg exists
if not isdir(vgg_dir):
raise Exception("VGG directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(vgg_dir + "vgg16.npy"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:
urlretrieve(
'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',
vgg_dir + 'vgg16.npy',
pbar.hook)
else:
print("Parameter file already exists!")
Explanation: Transfer Learning
Most of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.
<img src="assets/cnnarchitecture.jpg" width=700px>
VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.
You can read more about transfer learning from the CS231n course notes.
Pretrained VGGNet
We'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. This code is already included in the 'tensorflow_vgg' directory, so you don't have to clone it.
This is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. Download the parameter file using the next cell.
End of explanation
import tarfile
dataset_folder_path = 'flower_photos'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('flower_photos.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:
urlretrieve(
'http://download.tensorflow.org/example_images/flower_photos.tgz',
'flower_photos.tar.gz',
pbar.hook)
if not isdir(dataset_folder_path):
with tarfile.open('flower_photos.tar.gz') as tar:
tar.extractall()
tar.close()
Explanation: Flower power
Here we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.
End of explanation
import os
import numpy as np
import tensorflow as tf
from tensorflow_vgg import vgg16
from tensorflow_vgg import utils
data_dir = 'flower_photos/'
contents = os.listdir(data_dir)
classes = [each for each in contents if os.path.isdir(data_dir + each)]
Explanation: ConvNet Codes
Below, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.
Here we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \times 224 \times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):
```
self.conv1_1 = self.conv_layer(bgr, "conv1_1")
self.conv1_2 = self.conv_layer(self.conv1_1, "conv1_2")
self.pool1 = self.max_pool(self.conv1_2, 'pool1')
self.conv2_1 = self.conv_layer(self.pool1, "conv2_1")
self.conv2_2 = self.conv_layer(self.conv2_1, "conv2_2")
self.pool2 = self.max_pool(self.conv2_2, 'pool2')
self.conv3_1 = self.conv_layer(self.pool2, "conv3_1")
self.conv3_2 = self.conv_layer(self.conv3_1, "conv3_2")
self.conv3_3 = self.conv_layer(self.conv3_2, "conv3_3")
self.pool3 = self.max_pool(self.conv3_3, 'pool3')
self.conv4_1 = self.conv_layer(self.pool3, "conv4_1")
self.conv4_2 = self.conv_layer(self.conv4_1, "conv4_2")
self.conv4_3 = self.conv_layer(self.conv4_2, "conv4_3")
self.pool4 = self.max_pool(self.conv4_3, 'pool4')
self.conv5_1 = self.conv_layer(self.pool4, "conv5_1")
self.conv5_2 = self.conv_layer(self.conv5_1, "conv5_2")
self.conv5_3 = self.conv_layer(self.conv5_2, "conv5_3")
self.pool5 = self.max_pool(self.conv5_3, 'pool5')
self.fc6 = self.fc_layer(self.pool5, "fc6")
self.relu6 = tf.nn.relu(self.fc6)
```
So what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
This creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,
feed_dict = {input_: images}
codes = sess.run(vgg.relu6, feed_dict=feed_dict)
End of explanation
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
#import gc
with tf.Session() as sess:
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
#gc.collect()
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'wb') as f:  # binary mode, since tofile writes raw bytes
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels)
Explanation: Below I'm running images through the VGG network in batches.
Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
End of explanation
# read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes', 'rb') as f:  # binary mode, matching how the codes were written
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1))
Explanation: Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
End of explanation
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)# Your one-hot encoded labels array here
Explanation: Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!
Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
End of explanation
from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, val_idx = next(ss.split(codes, labels))
half = len(val_idx)//2
val_idx, test_idx = val_idx[:half], val_idx[half:]
train_x, train_y = codes[train_idx], labels_vecs[train_idx]
val_x, val_y = codes[val_idx], labels_vecs[val_idx]
test_x, test_y = codes[test_idx], labels_vecs[test_idx]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape)
Explanation: Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.
You can create the splitter like so:
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
Then split the data with
splitter = ss.split(x, y)
ss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.
Exercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.
End of explanation
inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
# TODO: Classifier layers and operations
hidden_units = 256
fc = tf.contrib.layers.fully_connected(inputs_, hidden_units)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: If you did it right, you should see these sizes for the training sets:
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.
Exercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.
End of explanation
def get_batches(x, y, n_batches=10):
    """Return a generator that yields batches from arrays x and y."""
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y
Explanation: Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.
End of explanation
epochs = 20
batch_n = 0
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for batch_x, batch_y in get_batches(train_x, train_y):
feed = {inputs_: batch_x, labels_: batch_y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(batch_n),
"Training loss: {:.5f}".format(loss))
if batch_n%5 == 0:
feed = {inputs_: val_x, labels_: val_y}
acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(batch_n),
"Validation accuracy: {:.5f}".format(acc))
batch_n += 1
saver.save(sess, "checkpoints/flowers.ckpt")
Explanation: Training
Here, we'll train the network.
Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
Explanation: Testing
Below you see the test accuracy. You can also see the predictions returned for images.
End of explanation
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_)
Explanation: Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
End of explanation |
5,499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
You are a bird conservation expert and want to understand migration patterns of purple martins. In your research, you discover that these birds typically spend the summer breeding season in the eastern United States, and then migrate to South America for the winter. But since this bird is under threat of endangerment, you'd like to take a closer look at the locations that these birds are more likely to visit.
<center>
<img src="https
Step1: Exercises
1) Load the data.
Run the next code cell (without changes) to load the GPS data into a pandas DataFrame birds_df.
Step2: There are 11 birds in the dataset, where each bird is identified by a unique value in the "tag-local-identifier" column. Each bird has several measurements, collected at different times of the year.
Use the next code cell to create a GeoDataFrame birds.
- birds should have all of the columns from birds_df, along with a "geometry" column that contains Point objects with (longitude, latitude) locations.
- Set the CRS of birds to {'init'
Step3: 2) Plot the data.
Next, we load in the 'naturalearth_lowres' dataset from GeoPandas, and set americas to a GeoDataFrame containing the boundaries of all countries in the Americas (both North and South America). Run the next code cell without changes.
Step4: Use the next code cell to create a single plot that shows both
Step5: 3) Where does each bird start and end its journey? (Part 1)
Now, we're ready to look more closely at each bird's path. Run the next code cell to create two GeoDataFrames
Step6: Use the next code cell to create a GeoDataFrame end_gdf containing the final location of each bird.
- The format should be identical to that of start_gdf, with two columns ("tag-local-identifier" and "geometry"), where the "geometry" column contains Point objects.
- Set the CRS of end_gdf to {'init'
Step7: 4) Where does each bird start and end its journey? (Part 2)
Use the GeoDataFrames from the question above (path_gdf, start_gdf, and end_gdf) to visualize the paths of all birds on a single map. You may also want to use the americas GeoDataFrame.
Step8: 5) Where are the protected areas in South America? (Part 1)
It looks like all of the birds end up somewhere in South America. But are they going to protected areas?
In the next code cell, you'll create a GeoDataFrame protected_areas containing the locations of all of the protected areas in South America. The corresponding shapefile is located at filepath protected_filepath.
Step9: 6) Where are the protected areas in South America? (Part 2)
Create a plot that uses the protected_areas GeoDataFrame to show the locations of the protected areas in South America. (You'll notice that some protected areas are on land, while others are in marine waters.)
Step10: 7) What percentage of South America is protected?
You're interested in determining what percentage of South America is protected, so that you know how much of South America is suitable for the birds.
As a first step, you calculate the total area of all protected lands in South America (not including marine area). To do this, you use the "REP_AREA" and "REP_M_AREA" columns, which contain the total area and total marine area, respectively, in square kilometers.
Run the code cell below without changes.
Step11: Then, to finish the calculation, you'll use the south_america GeoDataFrame.
Step12: Calculate the total area of South America by following these steps
Step13: Run the code cell below to calculate the percentage of South America that is protected.
Step14: 8) Where are the birds in South America?
So, are the birds in protected areas?
Create a plot that shows for all birds, all of the locations where they were discovered in South America. Also plot the locations of all protected areas in South America.
To exclude protected areas that are purely marine areas (with no land component), you can use the "MARINE" column (and plot only the rows in protected_areas[protected_areas['MARINE']!='2'], instead of every row in the protected_areas GeoDataFrame). | Python Code:
import pandas as pd
import geopandas as gpd
from shapely.geometry import LineString
from learntools.core import binder
binder.bind(globals())
from learntools.geospatial.ex2 import *
Explanation: Introduction
You are a bird conservation expert and want to understand migration patterns of purple martins. In your research, you discover that these birds typically spend the summer breeding season in the eastern United States, and then migrate to South America for the winter. But since this bird is under threat of endangerment, you'd like to take a closer look at the locations that these birds are more likely to visit.
<center>
<img src="https://i.imgur.com/qQcS0KM.png" width="1000"><br/>
</center>
There are several protected areas in South America, which operate under special regulations to ensure that species that migrate (or live) there have the best opportunity to thrive. You'd like to know if purple martins tend to visit these areas. To answer this question, you'll use some recently collected data that tracks the year-round location of eleven different birds.
Before you get started, run the code cell below to set everything up.
End of explanation
# Load the data and print the first 5 rows
birds_df = pd.read_csv("../input/geospatial-learn-course-data/purple_martin.csv", parse_dates=['timestamp'])
print("There are {} different birds in the dataset.".format(birds_df["tag-local-identifier"].nunique()))
birds_df.head()
Explanation: Exercises
1) Load the data.
Run the next code cell (without changes) to load the GPS data into a pandas DataFrame birds_df.
End of explanation
# Your code here: Create the GeoDataFrame
birds = ____
# Your code here: Set the CRS to {'init': 'epsg:4326'}
birds.crs = ____
# Check your answer
q_1.check()
#%%RM_IF(PROD)%%
# Create the GeoDataFrame
birds = gpd.GeoDataFrame(birds_df, geometry=gpd.points_from_xy(birds_df["location-long"], birds_df["location-lat"]))
# Set the CRS to {'init': 'epsg:4326'}
birds.crs = {'init' :'epsg:4326'}
q_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
Explanation: There are 11 birds in the dataset, where each bird is identified by a unique value in the "tag-local-identifier" column. Each bird has several measurements, collected at different times of the year.
Use the next code cell to create a GeoDataFrame birds.
- birds should have all of the columns from birds_df, along with a "geometry" column that contains Point objects with (longitude, latitude) locations.
- Set the CRS of birds to {'init': 'epsg:4326'}.
End of explanation
# Load a GeoDataFrame with country boundaries in North/South America, print the first 5 rows
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
americas = world.loc[world['continent'].isin(['North America', 'South America'])]
americas.head()
Explanation: 2) Plot the data.
Next, we load in the 'naturalearth_lowres' dataset from GeoPandas, and set americas to a GeoDataFrame containing the boundaries of all countries in the Americas (both North and South America). Run the next code cell without changes.
End of explanation
# Your code here
____
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_2.hint()
#%%RM_IF(PROD)%%
ax = americas.plot(figsize=(10,10), color='white', linestyle=':', edgecolor='gray')
birds.plot(ax=ax, markersize=10)
# Uncomment to zoom in
#ax.set_xlim([-110, -30])
#ax.set_ylim([-30, 60])
# Get credit for your work after you have created a map
q_2.check()
# Uncomment to see our solution (your code may look different!)
#_COMMENT_IF(PROD)_
#q_2.solution()
Explanation: Use the next code cell to create a single plot that shows both: (1) the country boundaries in the americas GeoDataFrame, and (2) all of the points in the birds_gdf GeoDataFrame.
Don't worry about any special styling here; just create a preliminary plot, as a quick sanity check that all of the data was loaded properly. In particular, you don't have to worry about color-coding the points to differentiate between birds, and you don't have to differentiate starting points from ending points. We'll do that in the next part of the exercise.
End of explanation
# GeoDataFrame showing path for each bird
path_df = birds.groupby("tag-local-identifier")['geometry'].apply(list).apply(lambda x: LineString(x)).reset_index()
path_gdf = gpd.GeoDataFrame(path_df, geometry=path_df.geometry)
path_gdf.crs = {'init' :'epsg:4326'}
# GeoDataFrame showing starting point for each bird
start_df = birds.groupby("tag-local-identifier")['geometry'].apply(list).apply(lambda x: x[0]).reset_index()
start_gdf = gpd.GeoDataFrame(start_df, geometry=start_df.geometry)
start_gdf.crs = {'init' :'epsg:4326'}
# Show first five rows of GeoDataFrame
start_gdf.head()
Explanation: 3) Where does each bird start and end its journey? (Part 1)
Now, we're ready to look more closely at each bird's path. Run the next code cell to create two GeoDataFrames:
- path_gdf contains LineString objects that show the path of each bird. It uses the LineString() method to create a LineString object from a list of Point objects.
- start_gdf contains the starting points for each bird.
End of explanation
# Your code here
end_gdf = ____
# Check your answer
q_3.check()
#%%RM_IF(PROD)%%
end_df = birds.groupby("tag-local-identifier")['geometry'].apply(list).apply(lambda x: x[-1]).reset_index()
end_gdf = gpd.GeoDataFrame(end_df, geometry=end_df.geometry)
end_gdf.crs = {'init': 'epsg:4326'}
q_3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_3.hint()
#_COMMENT_IF(PROD)_
q_3.solution()
Explanation: Use the next code cell to create a GeoDataFrame end_gdf containing the final location of each bird.
- The format should be identical to that of start_gdf, with two columns ("tag-local-identifier" and "geometry"), where the "geometry" column contains Point objects.
- Set the CRS of end_gdf to {'init': 'epsg:4326'}.
End of explanation
# Your code here
____
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_4.hint()
#%%RM_IF(PROD)%%
ax = americas.plot(figsize=(10, 10), color='white', linestyle=':', edgecolor='gray')
start_gdf.plot(ax=ax, color='red', markersize=30)
path_gdf.plot(ax=ax, cmap='tab20b', linestyle='-', linewidth=1, zorder=1)
end_gdf.plot(ax=ax, color='black', markersize=30)
# Uncomment to zoom in
#ax.set_xlim([-110, -30])
#ax.set_ylim([-30, 60])
# Get credit for your work after you have created a map
q_4.check()
# Uncomment to see our solution (your code may look different!)
#_COMMENT_IF(PROD)_
q_4.solution()
Explanation: 4) Where does each bird start and end its journey? (Part 2)
Use the GeoDataFrames from the question above (path_gdf, start_gdf, and end_gdf) to visualize the paths of all birds on a single map. You may also want to use the americas GeoDataFrame.
End of explanation
# Path of the shapefile to load
protected_filepath = "../input/geospatial-learn-course-data/SAPA_Aug2019-shapefile/SAPA_Aug2019-shapefile/SAPA_Aug2019-shapefile-polygons.shp"
# Your code here
protected_areas = ____
# Check your answer
q_5.check()
#%%RM_IF(PROD)%%
protected_areas = gpd.read_file(protected_filepath)
q_5.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_5.hint()
#_COMMENT_IF(PROD)_
q_5.solution()
Explanation: 5) Where are the protected areas in South America? (Part 1)
It looks like all of the birds end up somewhere in South America. But are they going to protected areas?
In the next code cell, you'll create a GeoDataFrame protected_areas containing the locations of all of the protected areas in South America. The corresponding shapefile is located at filepath protected_filepath.
End of explanation
# Country boundaries in South America
south_america = americas.loc[americas['continent']=='South America']
# Your code here: plot protected areas in South America
____
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_6.hint()
#%%RM_IF(PROD)%%
# Plot protected areas in South America
ax = south_america.plot(figsize=(10,10), color='white', edgecolor='gray')
protected_areas.plot(ax=ax, alpha=0.4)
# Get credit for your work after you have created a map
q_6.check()
# Uncomment to see our solution (your code may look different!)
#_COMMENT_IF(PROD)_
q_6.solution()
Explanation: 6) Where are the protected areas in South America? (Part 2)
Create a plot that uses the protected_areas GeoDataFrame to show the locations of the protected areas in South America. (You'll notice that some protected areas are on land, while others are in marine waters.)
End of explanation
P_Area = sum(protected_areas['REP_AREA']-protected_areas['REP_M_AREA'])
print("South America has {} square kilometers of protected areas.".format(P_Area))
Explanation: 7) What percentage of South America is protected?
You're interested in determining what percentage of South America is protected, so that you know how much of South America is suitable for the birds.
As a first step, you calculate the total area of all protected lands in South America (not including marine area). To do this, you use the "REP_AREA" and "REP_M_AREA" columns, which contain the total area and total marine area, respectively, in square kilometers.
Run the code cell below without changes.
End of explanation
south_america.head()
Explanation: Then, to finish the calculation, you'll use the south_america GeoDataFrame.
End of explanation
# Your code here: Calculate the total area of South America (in square kilometers)
totalArea = ____
# Check your answer
q_7.check()
#%%RM_IF(PROD)%%
# Calculate the total area of South America (in square kilometers)
totalArea = sum(south_america.geometry.to_crs(epsg=3035).area) / 10**6
q_7.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_7.hint()
#_COMMENT_IF(PROD)_
q_7.solution()
Explanation: Calculate the total area of South America by following these steps:
- Calculate the area of each country using the area attribute of each polygon (with EPSG 3035 as the CRS), and add up the results. The calculated area will be in units of square meters.
- Convert your answer to have units of square kilometers.
End of explanation
# What percentage of South America is protected?
percentage_protected = P_Area/totalArea
print('Approximately {}% of South America is protected.'.format(round(percentage_protected*100, 2)))
Explanation: Run the code cell below to calculate the percentage of South America that is protected.
End of explanation
# Your code here
____
# Uncomment to see a hint
#_COMMENT_IF(PROD)_
q_8.hint()
#%%RM_IF(PROD)%%
ax = south_america.plot(figsize=(10,10), color='white', edgecolor='gray')
protected_areas[protected_areas['MARINE']!='2'].plot(ax=ax, alpha=0.4, zorder=1)
birds[birds.geometry.y < 0].plot(ax=ax, color='red', alpha=0.6, markersize=10, zorder=2)
# Uncomment to zoom in
#ax.set_xlim([-70, -45])
#ax.set_ylim([-20, 0])
# Get credit for your work after you have created a map
q_8.check()
# Uncomment to see our solution (your code may look different!)
#_COMMENT_IF(PROD)_
q_8.solution()
Explanation: 8) Where are the birds in South America?
So, are the birds in protected areas?
Create a plot that shows for all birds, all of the locations where they were discovered in South America. Also plot the locations of all protected areas in South America.
To exclude protected areas that are purely marine areas (with no land component), you can use the "MARINE" column (and plot only the rows in protected_areas[protected_areas['MARINE']!='2'], instead of every row in the protected_areas GeoDataFrame).
End of explanation |