Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
---|---|---|
13,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling dynamics of FS Peptide
This example shows a typical, basic usage of the MSMBuilder command line to model dynamics of a protein system.
Step1: Get example data
Step2: Featurization
The raw (x, y, z) coordinates from the simulation do not respect the translational and rotational symmetry of our problem. A Featurizer transforms cartesian coordinates into other representations. Here we use the DihedralFeaturizer to turn our data into phi and psi dihedral angles. Observe that the 264*3-dimensional space is reduced to 84 dimensions.
Step3: Preprocessing
Since the range of values in our raw data can vary widely from feature to feature, we can scale values to reduce bias. Here we use the RobustScaler to center and scale our dihedral angles by their respective interquartile ranges.
Step4: Intermediate kinetic model
Step5: tICA Histogram
We can histogram our data projecting along the two slowest degrees of freedom (as found by tICA). You have to do this in a Python script.
Step6: Clustering
Conformations need to be clustered into states (sometimes written as microstates). We cluster based on the tICA projections to group conformations that interconvert rapidly. Note that we transform our trajectories from the 4-dimensional tICA space into a 1-dimensional cluster index.
Step7: MSM
We can construct an MSM from the labeled trajectories
Step8: Plot Free Energy Landscape
Subsequent plotting and analysis should be done from Python | Python Code:
# Work in a temporary directory
import tempfile
import os
os.chdir(tempfile.mkdtemp())
# Since this is running from an IPython notebook,
# we prefix all our commands with "!"
# When running on the command line, omit the leading "!"
! msmb -h
Explanation: Modeling dynamics of FS Peptide
This example shows a typical, basic usage of the MSMBuilder command line to model dynamics of a protein system.
End of explanation
! msmb FsPeptide --data_home ./
! tree
Explanation: Get example data
End of explanation
# Remember '\' is the line-continuation marker
# You can enter this command on one line
! msmb DihedralFeaturizer \
--out featurizer.pkl \
--transformed diheds \
--top fs_peptide/fs-peptide.pdb \
--trjs "fs_peptide/*.xtc" \
--stride 10
Explanation: Featurization
The raw (x, y, z) coordinates from the simulation do not respect the translational and rotational symmetry of our problem. A Featurizer transforms cartesian coordinates into other representations. Here we use the DihedralFeaturizer to turn our data into phi and psi dihedral angles. Observe that the 264*3-dimensional space is reduced to 84 dimensions.
End of explanation
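For reference, the same featurization step can also be driven from the Python API rather than the command line; a minimal sketch (the trajectory filename here is illustrative, and mdtraj is assumed to be available as an MSMBuilder dependency):
from msmbuilder.featurizer import DihedralFeaturizer
import mdtraj as md
traj = md.load("fs_peptide/trajectory-1.xtc", top="fs_peptide/fs-peptide.pdb", stride=10)
featurizer = DihedralFeaturizer(types=['phi', 'psi'])
diheds = featurizer.fit_transform([traj])  # list with one (n_frames, n_features) array
print(diheds[0].shape)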
! msmb RobustScaler \
-i diheds \
--transformed scaled_diheds.h5
Explanation: Preprocessing
Since the range of values in our raw data can vary widely from feature to feature, we can scale values to reduce bias. Here we use the RobustScaler to center and scale our dihedral angles by their respective interquartile ranges.
End of explanation
! msmb tICA -i scaled_diheds.h5 \
--out tica_model.pkl \
--transformed tica_trajs.h5 \
--n_components 4 \
--lag_time 2
Explanation: Intermediate kinetic model: tICA
tICA is similar to principal component analysis (see "tICA vs. PCA" example). Note that the 84-dimensional space is reduced to 4 dimensions.
End of explanation
from msmbuilder.dataset import dataset
ds = dataset('tica_trajs.h5')
%matplotlib inline
import msmexplorer as msme
import numpy as np
txx = np.concatenate(ds)
msme.plot_histogram(txx)
Explanation: tICA Histogram
We can histogram our data projecting along the two slowest degrees of freedom (as found by tICA). You have to do this in a Python script.
End of explanation
! msmb MiniBatchKMeans -i tica_trajs.h5 \
--transformed labeled_trajs.h5 \
--out clusterer.pkl \
--n_clusters 100 \
--random_state 42
Explanation: Clustering
Conformations need to be clustered into states (sometimes written as microstates). We cluster based on the tICA projections to group conformations that interconvert rapidly. Note that we transform our trajectories from the 4-dimensional tICA space into a 1-dimensional cluster index.
End of explanation
! msmb MarkovStateModel -i labeled_trajs.h5 \
--out msm.pkl \
--lag_time 2
Explanation: MSM
We can construct an MSM from the labeled trajectories
End of explanation
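As an optional check once the MSM is fitted, the model exposes its implied timescales and stationary distribution from Python; a hedged sketch reusing the files written above:
from msmbuilder.utils import load
msm = load('msm.pkl')
print(msm.timescales_)         # implied relaxation timescales of the MSM
print(msm.populations_.sum())  # stationary distribution sums to ~1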
from msmbuilder.utils import load
msm = load('msm.pkl')
clusterer = load('clusterer.pkl')
assignments = clusterer.partial_transform(txx)
assignments = msm.partial_transform(assignments)
from matplotlib import pyplot as plt
msme.plot_free_energy(txx, obs=(0, 1), n_samples=10000,
pi=msm.populations_[assignments],
xlabel='tIC 1', ylabel='tIC 2')
plt.scatter(clusterer.cluster_centers_[msm.state_labels_, 0],
clusterer.cluster_centers_[msm.state_labels_, 1],
s=1e4 * msm.populations_, # size by population
c=msm.left_eigenvectors_[:, 1], # color by eigenvector
cmap="coolwarm",
zorder=3
)
plt.colorbar(label='First dynamical eigenvector')
plt.tight_layout()
Explanation: Plot Free Energy Landscape
Subsequent plotting and analysis should be done from Python
End of explanation |
13,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification with Neural Decision Forests
Author
Step1: Prepare the data
Step2: Remove the first record (because it is not a valid data example) and a trailing
'dot' in the class labels.
Step3: We store the training and test data splits locally as CSV files.
Step4: Define dataset metadata
Here, we define the metadata of the dataset that will be useful for reading and parsing
and encoding input features.
Step5: Create tf.data.Dataset objects for training and validation
We create an input function to read and parse the file, and convert features and labels
into a tf.data.Dataset
for training and validation. We also preprocess the input by mapping the target label
to an index.
Step6: Create model inputs
Step7: Encode input features
Step8: Deep Neural Decision Tree
A neural decision tree model has two sets of weights to learn. The first set is pi,
which represents the probability distribution of the classes in the tree leaves.
The second set is the weights of the routing layer decision_fn, which represents the probability
of going to each leaf. The forward pass of the model works as follows
Step9: Deep Neural Decision Forest
The neural decision forest model consists of a set of neural decision trees that are
trained simultaneously. The output of the forest model is the average outputs of its trees.
Step10: Finally, let's set up the code that will train and evaluate the model.
Step11: Experiment 1
Step12: Experiment 2 | Python Code:
import tensorflow as tf
import numpy as np
import pandas as pd
from tensorflow import keras
from tensorflow.keras import layers
import math
Explanation: Classification with Neural Decision Forests
Author: Khalid Salama
Date created: 2021/01/15
Last modified: 2021/01/15
Description: How to train differentiable decision trees for end-to-end learning in deep neural networks.
Introduction
This example provides an implementation of the
Deep Neural Decision Forest
model introduced by P. Kontschieder et al. for structured data classification.
It demonstrates how to build a stochastic and differentiable decision tree model,
train it end-to-end, and unify decision trees with deep representation learning.
The dataset
This example uses the
United States Census Income Dataset
provided by the
UC Irvine Machine Learning Repository.
The task is binary classification
to predict whether a person is likely to be making over USD 50,000 a year.
The dataset includes 48,842 instances with 14 input features (such as age, work class, education, occupation, and so on): 5 numerical features
and 9 categorical features.
Setup
End of explanation
CSV_HEADER = [
"age",
"workclass",
"fnlwgt",
"education",
"education_num",
"marital_status",
"occupation",
"relationship",
"race",
"gender",
"capital_gain",
"capital_loss",
"hours_per_week",
"native_country",
"income_bracket",
]
train_data_url = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
)
train_data = pd.read_csv(train_data_url, header=None, names=CSV_HEADER)
test_data_url = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test"
)
test_data = pd.read_csv(test_data_url, header=None, names=CSV_HEADER)
print(f"Train dataset shape: {train_data.shape}")
print(f"Test dataset shape: {test_data.shape}")
Explanation: Prepare the data
End of explanation
test_data = test_data[1:]
test_data.income_bracket = test_data.income_bracket.apply(
lambda value: value.replace(".", "")
)
Explanation: Remove the first record (because it is not a valid data example) and a trailing
'dot' in the class labels.
End of explanation
train_data_file = "train_data.csv"
test_data_file = "test_data.csv"
train_data.to_csv(train_data_file, index=False, header=False)
test_data.to_csv(test_data_file, index=False, header=False)
Explanation: We store the training and test data splits locally as CSV files.
End of explanation
# A list of the numerical feature names.
NUMERIC_FEATURE_NAMES = [
"age",
"education_num",
"capital_gain",
"capital_loss",
"hours_per_week",
]
# A dictionary of the categorical features and their vocabulary.
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
"workclass": sorted(list(train_data["workclass"].unique())),
"education": sorted(list(train_data["education"].unique())),
"marital_status": sorted(list(train_data["marital_status"].unique())),
"occupation": sorted(list(train_data["occupation"].unique())),
"relationship": sorted(list(train_data["relationship"].unique())),
"race": sorted(list(train_data["race"].unique())),
"gender": sorted(list(train_data["gender"].unique())),
"native_country": sorted(list(train_data["native_country"].unique())),
}
# A list of the columns to ignore from the dataset.
IGNORE_COLUMN_NAMES = ["fnlwgt"]
# A list of the categorical feature names.
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys())
# A list of all the input features.
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
# A list of column default values for each feature.
COLUMN_DEFAULTS = [
[0.0] if feature_name in NUMERIC_FEATURE_NAMES + IGNORE_COLUMN_NAMES else ["NA"]
for feature_name in CSV_HEADER
]
# The name of the target feature.
TARGET_FEATURE_NAME = "income_bracket"
# A list of the labels of the target features.
TARGET_LABELS = [" <=50K", " >50K"]
Explanation: Define dataset metadata
Here, we define the metadata of the dataset that will be useful for reading and parsing
and encoding input features.
End of explanation
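As a quick optional check of the metadata defined above, the vocabulary sizes can be printed; these later drive the embedding dimensions:
for feature_name, vocabulary in CATEGORICAL_FEATURES_WITH_VOCABULARY.items():
    print(feature_name, len(vocabulary))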
from tensorflow.keras.layers import StringLookup
target_label_lookup = StringLookup(
vocabulary=TARGET_LABELS, mask_token=None, num_oov_indices=0
)
def get_dataset_from_csv(csv_file_path, shuffle=False, batch_size=128):
dataset = tf.data.experimental.make_csv_dataset(
csv_file_path,
batch_size=batch_size,
column_names=CSV_HEADER,
column_defaults=COLUMN_DEFAULTS,
label_name=TARGET_FEATURE_NAME,
num_epochs=1,
header=False,
na_value="?",
shuffle=shuffle,
).map(lambda features, target: (features, target_label_lookup(target)))
return dataset.cache()
Explanation: Create tf.data.Dataset objects for training and validation
We create an input function to read and parse the file, and convert features and labels
into a tf.data.Dataset
for training and validation. We also preprocess the input by mapping the target label
to an index.
End of explanation
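As an optional sanity check (not part of the original example), one batch can be pulled eagerly to inspect the feature shapes and the integer-encoded labels:
example_ds = get_dataset_from_csv(train_data_file, shuffle=True, batch_size=4)
for features, labels in example_ds.take(1):
    print({name: tensor.shape for name, tensor in features.items()})
    print(labels.numpy())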
def create_model_inputs():
inputs = {}
for feature_name in FEATURE_NAMES:
if feature_name in NUMERIC_FEATURE_NAMES:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype=tf.float32
)
else:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype=tf.string
)
return inputs
Explanation: Create model inputs
End of explanation
def encode_inputs(inputs):
encoded_features = []
for feature_name in inputs:
if feature_name in CATEGORICAL_FEATURE_NAMES:
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
# Create a lookup to convert string values to integer indices.
# Since we are not using a mask token, nor expecting any out of vocabulary
# (oov) token, we set mask_token to None and num_oov_indices to 0.
lookup = StringLookup(
vocabulary=vocabulary, mask_token=None, num_oov_indices=0
)
# Convert the string input values into integer indices.
value_index = lookup(inputs[feature_name])
embedding_dims = int(math.sqrt(lookup.vocabulary_size()))
# Create an embedding layer with the specified dimensions.
embedding = layers.Embedding(
input_dim=lookup.vocabulary_size(), output_dim=embedding_dims
)
# Convert the index values to embedding representations.
encoded_feature = embedding(value_index)
else:
# Use the numerical features as-is.
encoded_feature = inputs[feature_name]
if inputs[feature_name].shape[-1] is None:
encoded_feature = tf.expand_dims(encoded_feature, -1)
encoded_features.append(encoded_feature)
encoded_features = layers.concatenate(encoded_features)
return encoded_features
Explanation: Encode input features
End of explanation
class NeuralDecisionTree(keras.Model):
def __init__(self, depth, num_features, used_features_rate, num_classes):
super(NeuralDecisionTree, self).__init__()
self.depth = depth
self.num_leaves = 2 ** depth
self.num_classes = num_classes
# Create a mask for the randomly selected features.
num_used_features = int(num_features * used_features_rate)
one_hot = np.eye(num_features)
sampled_feature_indices = np.random.choice(
np.arange(num_features), num_used_features, replace=False
)
self.used_features_mask = one_hot[sampled_feature_indices]
# Initialize the weights of the classes in leaves.
self.pi = tf.Variable(
initial_value=tf.random_normal_initializer()(
shape=[self.num_leaves, self.num_classes]
),
dtype="float32",
trainable=True,
)
# Initialize the stochastic routing layer.
self.decision_fn = layers.Dense(
units=self.num_leaves, activation="sigmoid", name="decision"
)
def call(self, features):
batch_size = tf.shape(features)[0]
# Apply the feature mask to the input features.
features = tf.matmul(
features, self.used_features_mask, transpose_b=True
) # [batch_size, num_used_features]
# Compute the routing probabilities.
decisions = tf.expand_dims(
self.decision_fn(features), axis=2
) # [batch_size, num_leaves, 1]
# Concatenate the routing probabilities with their complements.
decisions = layers.concatenate(
[decisions, 1 - decisions], axis=2
) # [batch_size, num_leaves, 2]
mu = tf.ones([batch_size, 1, 1])
begin_idx = 1
end_idx = 2
# Traverse the tree in breadth-first order.
for level in range(self.depth):
mu = tf.reshape(mu, [batch_size, -1, 1]) # [batch_size, 2 ** level, 1]
mu = tf.tile(mu, (1, 1, 2)) # [batch_size, 2 ** level, 2]
level_decisions = decisions[
:, begin_idx:end_idx, :
] # [batch_size, 2 ** level, 2]
mu = mu * level_decisions # [batch_size, 2**level, 2]
begin_idx = end_idx
end_idx = begin_idx + 2 ** (level + 1)
mu = tf.reshape(mu, [batch_size, self.num_leaves]) # [batch_size, num_leaves]
probabilities = keras.activations.softmax(self.pi) # [num_leaves, num_classes]
outputs = tf.matmul(mu, probabilities) # [batch_size, num_classes]
return outputs
Explanation: Deep Neural Decision Tree
A neural decision tree model has two sets of weights to learn. The first set is pi,
which represents the probability distribution of the classes in the tree leaves.
The second set is the weights of the routing layer decision_fn, which represents the probability
of going to each leaf. The forward pass of the model works as follows:
The model expects input features as a single vector encoding all the features of an instance
in the batch. This vector can be generated from a Convolutional Neural Network (CNN) applied to images
or dense transformations applied to structured data features.
The model first applies a used_features_mask to randomly select a subset of input features to use.
Then, the model computes the probabilities (mu) for the input instances to reach the tree leaves
by iteratively performing a stochastic routing throughout the tree levels.
Finally, the probabilities of reaching the leaves are combined by the class probabilities at the
leaves to produce the final outputs.
End of explanation
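A toy numpy illustration (separate from the model code) of the stochastic routing described above: for a depth-2 tree, the three internal sigmoid outputs multiply into four leaf-reaching probabilities that sum to 1, and combining them with per-leaf class distributions yields valid class probabilities. The numbers here are made up.
import numpy as np
d_root, d_left, d_right = 0.7, 0.4, 0.9  # hypothetical sigmoid outputs at the 3 internal nodes
mu = np.array([
    d_root * d_left,               # leaf 0
    d_root * (1 - d_left),         # leaf 1
    (1 - d_root) * d_right,        # leaf 2
    (1 - d_root) * (1 - d_right),  # leaf 3
])
pi = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.5, 0.5]])  # per-leaf class distributions
print(mu.sum())  # 1.0
print(mu @ pi)   # final class probabilities, also summing to 1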
class NeuralDecisionForest(keras.Model):
def __init__(self, num_trees, depth, num_features, used_features_rate, num_classes):
super(NeuralDecisionForest, self).__init__()
self.ensemble = []
# Initialize the ensemble by adding NeuralDecisionTree instances.
# Each tree will have its own randomly selected input features to use.
for _ in range(num_trees):
self.ensemble.append(
NeuralDecisionTree(depth, num_features, used_features_rate, num_classes)
)
def call(self, inputs):
# Initialize the outputs: a [batch_size, num_classes] matrix of zeros.
batch_size = tf.shape(inputs)[0]
outputs = tf.zeros([batch_size, num_classes])
# Aggregate the outputs of trees in the ensemble.
for tree in self.ensemble:
outputs += tree(inputs)
# Divide the outputs by the ensemble size to get the average.
outputs /= len(self.ensemble)
return outputs
Explanation: Deep Neural Decision Forest
The neural decision forest model consists of a set of neural decision trees that are
trained simultaneously. The output of the forest model is the average outputs of its trees.
End of explanation
learning_rate = 0.01
batch_size = 265
num_epochs = 10
hidden_units = [64, 64]
def run_experiment(model):
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
print("Start training the model...")
train_dataset = get_dataset_from_csv(
train_data_file, shuffle=True, batch_size=batch_size
)
model.fit(train_dataset, epochs=num_epochs)
print("Model training finished")
print("Evaluating the model on the test data...")
test_dataset = get_dataset_from_csv(test_data_file, batch_size=batch_size)
_, accuracy = model.evaluate(test_dataset)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
Explanation: Finally, let's set up the code that will train and evaluate the model.
End of explanation
num_trees = 10
depth = 10
used_features_rate = 1.0
num_classes = len(TARGET_LABELS)
def create_tree_model():
inputs = create_model_inputs()
features = encode_inputs(inputs)
features = layers.BatchNormalization()(features)
num_features = features.shape[1]
tree = NeuralDecisionTree(depth, num_features, used_features_rate, num_classes)
outputs = tree(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
tree_model = create_tree_model()
run_experiment(tree_model)
Explanation: Experiment 1: train a decision tree model
In this experiment, we train a single neural decision tree model
where we use all input features.
End of explanation
num_trees = 25
depth = 5
used_features_rate = 0.5
def create_forest_model():
inputs = create_model_inputs()
features = encode_inputs(inputs)
features = layers.BatchNormalization()(features)
num_features = features.shape[1]
forest_model = NeuralDecisionForest(
num_trees, depth, num_features, used_features_rate, num_classes
)
outputs = forest_model(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
forest_model = create_forest_model()
run_experiment(forest_model)
Explanation: Experiment 2: train a forest model
In this experiment, we train a neural decision forest with num_trees trees
where each tree uses randomly selected 50% of the input features. You can control the number
of features to be used in each tree by setting the used_features_rate variable.
In addition, we set the depth to 5 instead of 10 compared to the previous experiment.
End of explanation |
13,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear regression and stochastic gradient descent
The assignment is based on the lecture materials on linear regression and gradient descent. You will predict a company's revenue as a function of its level of investment in TV, newspaper and radio advertising.
You will learn
Step1: 1. Load the data from the file advertising.csv into a pandas DataFrame object. Data source.
Step2: Look at the first 5 records and at the feature statistics in this dataset.
Step3: Create NumPy arrays: X from the columns TV, Radio and Newspaper, and y from the column Sales. Use the values attribute of the pandas DataFrame object.
Step4: Scale the columns of the matrix X by subtracting the mean of the corresponding column from each value and dividing the result by the standard deviation. For definiteness, use the mean and std methods of NumPy arrays (the std implementation in Pandas may differ). Note that in numpy, calling .mean() without parameters returns the mean over all elements of the array, not per column as in pandas. To compute the statistics per column, you need to pass the axis parameter.
Step5: Add a column of ones to the matrix X using the NumPy methods hstack, ones and reshape. The vector of ones is needed so that the intercept coefficient $w_0$ of the linear regression does not have to be handled separately.
Step6: 2. Implement the function mserror, the mean squared error of a prediction. It takes two arguments: the Series objects y (target feature values) and y_pred (predicted values). Do not use loops in this function, otherwise it will be computationally inefficient.
What is the mean squared error of predicting Sales if we always predict the median value of Sales over the original sample? Write the answer to the file '1.txt'.
Step7: 3. Implement the function normal_equation, which for the given matrices (NumPy arrays) X and y computes the weight vector $w$ according to the normal equation of linear regression.
What sales are predicted by the linear model with the weights found via the normal equation, in the case of average advertising investments in TV, radio and newspapers (that is, with zero values of the scaled features TV, Radio and Newspaper)? Write the answer to the file '2.txt'.
Step8: 4. Write the function linear_prediction, which takes the matrix X and the weight vector w of the linear model as input and returns the vector of predictions as a linear combination of the columns of X with weights w.
What is the mean squared error of predicting Sales with the linear model whose weights were found via the normal equation? Write the answer to the file '3.txt'.
Step9: 5. Write the function stochastic_gradient_step, implementing one step of stochastic gradient descent for linear regression. The function must take the matrix X, the vectors y and w, the number train_ind (the index of the training-set object, i.e. the row of X, used to compute the weight update), and the number $\eta$ (eta), the gradient descent step size (eta=0.01 by default). The result is the vector of updated weights. Our implementation is written explicitly for data with 3 features, but it is easy to modify it for any number of features; feel free to do so.
Step10: 6. Write the function stochastic_gradient_descent, implementing stochastic gradient descent for linear regression. The function takes the following arguments
Step11: Run $10^5$ iterations of stochastic gradient descent. Set the initial weight vector w_init to all zeros. Leave the parameters eta and seed at their default values (eta=0.01, seed=42; this is important for checking the answers).
Let us look at the error over the first 50 iterations of stochastic gradient descent. We see that the error does not necessarily decrease at every iteration.
Now let us look at the error as a function of the iteration number over $10^5$ iterations of stochastic gradient descent. We see that the algorithm converges.
Let us look at the weight vector to which the method converged.
Let us look at the mean squared error at the last iteration.
What is the mean squared error of predicting Sales with the linear model whose weights were found via gradient descent? Write the answer to the file '4.txt'. | Python Code:
def write_answer_to_file(answer, filename):
with open(filename, 'w') as f_out:
f_out.write(str(round(answer, 3)))
Explanation: Linear regression and stochastic gradient descent
The assignment is based on the lecture materials on linear regression and gradient descent. You will predict a company's revenue as a function of its level of investment in TV, newspaper and radio advertising.
You will learn to:
solve the linear regression estimation problem
implement stochastic gradient descent to fit it
solve the linear regression problem analytically
Introduction
Linear regression is one of the most well-studied machine learning methods; it predicts the values of a quantitative feature as a linear combination of the other features with parameters, the model weights. The optimal parameters of a linear regression (optimal in the sense of minimizing some error functional) can be found analytically with the normal equation or numerically with optimization methods.
Linear regression uses a simple quality functional, the mean squared error. We will work with a sample containing 3 features. To fit the model parameters (weights), the following problem is solved:
$$\Large \frac{1}{\ell}\sum_{i=1}^\ell{{((w_0 + w_1x_{i1} + w_2x_{i2} + w_3x_{i3}) - y_i)}^2} \rightarrow \min_{w_0, w_1, w_2, w_3},$$
where $x_{i1}, x_{i2}, x_{i3}$ are the feature values of the $i$-th object, $y_i$ is the target value of the $i$-th object, and $\ell$ is the number of objects in the training sample.
Gradient descent
The parameters $w_0, w_1, w_2, w_3$ over which the mean squared error is minimized can be found numerically with gradient descent.
The gradient step for the weights looks as follows:
$$\Large w_0 \leftarrow w_0 - \frac{2\eta}{\ell} \sum_{i=1}^\ell{{((w_0 + w_1x_{i1} + w_2x_{i2} + w_3x_{i3}) - y_i)}}$$
$$\Large w_j \leftarrow w_j - \frac{2\eta}{\ell} \sum_{i=1}^\ell{{x_{ij}((w_0 + w_1x_{i1} + w_2x_{i2} + w_3x_{i3}) - y_i)}},\ j \in \{1,2,3\}$$
Here $\eta$ is a parameter, the gradient descent step size.
Stochastic gradient descent
The problem with the gradient descent described above is that on large samples, computing the gradient over all available data at every step can be computationally very expensive.
In the stochastic variant of gradient descent, the weight corrections are computed using only one randomly chosen object of the training sample:
$$\Large w_0 \leftarrow w_0 - \frac{2\eta}{\ell} {((w_0 + w_1x_{k1} + w_2x_{k2} + w_3x_{k3}) - y_k)}$$
$$\Large w_j \leftarrow w_j - \frac{2\eta}{\ell} {x_{kj}((w_0 + w_1x_{k1} + w_2x_{k2} + w_3x_{k3}) - y_k)},\ j \in \{1,2,3\},$$
where $k$ is a random index, $k \in \{1, \ldots, \ell\}$.
Normal equation
The vector of optimal weights $w$ can also be found analytically.
We want to find a weight vector $w$ such that the vector $y$ approximating the target feature is obtained by multiplying the matrix $X$ (consisting of all features of the training-sample objects except the target) by the weight vector $w$. That is, the following matrix equation should hold:
$$\Large y = Xw$$
Multiplying on the left by $X^T$, we get:
$$\Large X^Ty = X^TXw$$
This is good, because the matrix $X^TX$ is now square, and the solution (the vector $w$) can be found as:
$$\Large w = {(X^TX)}^{-1}X^Ty$$
The matrix ${(X^TX)}^{-1}X^T$ is the pseudoinverse of the matrix $X$. In NumPy such a matrix can be computed with the function numpy.linalg.pinv.
However, computing the pseudoinverse is a computationally expensive operation and is unstable when the determinant of the matrix $X$ is small (the multicollinearity problem).
In practice it is better to find the weight vector $w$ by solving the matrix equation
$$\Large X^TXw = X^Ty$$
This can be done with the function numpy.linalg.solve.
Still, in practice gradient descent works faster for large matrices $X$, especially its stochastic version.
Instructions
First we will write a simple function for writing the answers to a text file. The answers will be the numbers obtained while solving this assignment, rounded to 3 decimal places. After completing the assignment, the resulting files have to be submitted in the form on the assignment page on Coursera.org.
End of explanation
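A small standalone illustration (not part of the assignment code) of the two equivalent ways of solving the normal equation mentioned above; numpy.linalg.solve avoids forming an explicit (pseudo)inverse. The demo matrix is random and purely illustrative.
import numpy as np
X_demo = np.hstack((np.ones((5, 1)), np.random.rand(5, 3)))
y_demo = np.random.rand(5)
w_pinv = np.dot(np.linalg.pinv(X_demo), y_demo)
w_solve = np.linalg.solve(np.dot(X_demo.T, X_demo), np.dot(X_demo.T, y_demo))
print(np.allclose(w_pinv, w_solve))  # True: both give the least-squares weights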
import pandas as pd
adver_data = pd.read_csv('advertising.csv')
Explanation: 1. Load the data from the file advertising.csv into a pandas DataFrame object. Data source.
End of explanation
adver_data.head()
adver_data.describe()
Explanation: Look at the first 5 records and at the feature statistics in this dataset.
End of explanation
X = adver_data[['TV', 'Radio', 'Newspaper']].values
y = adver_data['Sales'].values
Explanation: Create NumPy arrays: X from the columns TV, Radio and Newspaper, and y from the column Sales. Use the values attribute of the pandas DataFrame object.
End of explanation
import numpy as np
means, stds = np.mean(X, axis=0), np.std(X, axis=0)
print(means, stds)
X = (X - means) / stds
Explanation: Scale the columns of the matrix X by subtracting the mean of the corresponding column from each value and dividing the result by the standard deviation. For definiteness, use the mean and std methods of NumPy arrays (the std implementation in Pandas may differ). Note that in numpy, calling .mean() without parameters returns the mean over all elements of the array, not per column as in pandas. To compute the statistics per column, you need to pass the axis parameter.
End of explanation
X = np.hstack((np.ones((X.shape[0], 1)), X))
Explanation: Add a column of ones to the matrix X using the NumPy methods hstack, ones and reshape. The vector of ones is needed so that the intercept coefficient $w_0$ of the linear regression does not have to be handled separately.
End of explanation
def mserror(y, y_pred):
return np.mean((y - y_pred)**2)
Explanation: 2. Implement the function mserror, the mean squared error of a prediction. It takes two arguments: the Series objects y (target feature values) and y_pred (predicted values). Do not use loops in this function, otherwise it will be computationally inefficient.
End of explanation
median_sales = np.median(y)
answer1 = mserror(y, median_sales * np.ones((len(y), 1)))
print(answer1)
write_answer_to_file(answer1, '1.txt')
Explanation: What is the mean squared error of predicting Sales if we always predict the median value of Sales over the original sample? Write the answer to the file '1.txt'.
End of explanation
def normal_equation(X, y):
return np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)
norm_eq_weights = normal_equation(X, y)
print(norm_eq_weights)
Explanation: 3. Implement the function normal_equation, which for the given matrices (NumPy arrays) X and y computes the weight vector $w$ according to the normal equation of linear regression.
End of explanation
answer2 = np.dot(np.array([1, 0, 0, 0]), norm_eq_weights)
print(answer2)
write_answer_to_file(answer2, '2.txt')
Explanation: What sales are predicted by the linear model with the weights found via the normal equation, in the case of average advertising investments in TV, radio and newspapers (that is, with zero values of the scaled features TV, Radio and Newspaper)? Write the answer to the file '2.txt'.
End of explanation
def linear_prediction(X, w):
return np.dot(X, w)
Explanation: 4. Write the function linear_prediction, which takes the matrix X and the weight vector w of the linear model as input and returns the vector of predictions as a linear combination of the columns of X with weights w.
End of explanation
answer3 = mserror(y, linear_prediction(X, norm_eq_weights))
print(answer3)
write_answer_to_file(answer3, '3.txt')
Explanation: What is the mean squared error of predicting Sales with the linear model whose weights were found via the normal equation? Write the answer to the file '3.txt'.
End of explanation
def stochastic_gradient_step(X, y, w, train_ind, eta=0.01):
grad0 = (np.sum(w * X[train_ind, :]) - y[train_ind]) / len(y)
grad1 = X[train_ind, 1] * (np.sum(w * X[train_ind, :]) - y[train_ind]) / len(y)
grad2 = X[train_ind, 2] * (np.sum(w * X[train_ind, :]) - y[train_ind]) / len(y)
grad3 = X[train_ind, 3] * (np.sum(w * X[train_ind, :]) - y[train_ind]) / len(y)
return w - 2 * eta * np.array([grad0, grad1, grad2, grad3])
Explanation: 5. Write the function stochastic_gradient_step, implementing one step of stochastic gradient descent for linear regression. The function must take the matrix X, the vectors y and w, the number train_ind (the index of the training-set object, i.e. the row of X, used to compute the weight update), and the number $\eta$ (eta), the gradient descent step size (eta=0.01 by default). The result is the vector of updated weights. Our implementation is written explicitly for data with 3 features, but it is easy to modify it for any number of features; feel free to do so.
End of explanation
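As the text above suggests, the same step can be written once for any number of features; a hedged, vectorized variant equivalent to the function above:
def stochastic_gradient_step_any_dim(X, y, w, train_ind, eta=0.01):
    x_k = X[train_ind, :]
    error = np.dot(x_k, w) - y[train_ind]
    return w - (2 * eta / len(y)) * error * x_k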
def stochastic_gradient_descent(X, y, w_init, eta=1e-2, max_iter=1e4,
min_weight_dist=1e-8, seed=42, verbose=False):
# Initialize the distance between the weight vectors on consecutive
# iterations with a large number.
weight_dist = np.inf
# Initialize the weight vector
w = w_init
# Here we will record the error at each iteration
errors = []
# Iteration counter
iter_num = 0
# We will generate pseudorandom numbers
# (the index of the object used to update the weights); for reproducibility
# of this pseudorandom sequence we use seed.
np.random.seed(seed)
# Main loop
while weight_dist > min_weight_dist and iter_num < max_iter:
tmp_weights = w
# generate a pseudorandom
# index of a training-set object
random_ind = np.random.randint(X.shape[0])
w = stochastic_gradient_step(X, y, w, random_ind, eta)
errors.append(mserror(linear_prediction(X, w), y))
weight_dist = np.linalg.norm(tmp_weights - w, 2)
iter_num += 1
return w, errors
Explanation: 6. Write the function stochastic_gradient_descent, implementing stochastic gradient descent for linear regression. The function takes the following arguments:
- X - the matrix corresponding to the training sample
- y - the vector of target feature values
- w_init - the vector of initial model weights
- eta - the gradient descent step size (0.01 by default)
- max_iter - the maximum number of gradient descent iterations (10000 by default)
- max_weight_dist - the maximum Euclidean distance between the weight vectors on consecutive gradient descent iterations at which the algorithm stops (1e-8 by default)
- seed - the number used for reproducibility of the generated pseudorandom numbers (42 by default)
- verbose - a flag for printing information (e.g. for debugging, False by default)
At each iteration, the current value of the mean squared error must be appended to a vector (list). The function must return the weight vector $w$ as well as the vector (list) of errors.
End of explanation
%%time
stoch_grad_desc_weights, stoch_errors_by_iter = stochastic_gradient_descent(X, y, np.array([0,0,0,0]), eta=0.01, max_iter=1e5)
Explanation: Run $10^5$ iterations of stochastic gradient descent. Set the initial weight vector w_init to all zeros. Leave the parameters eta and seed at their default values (eta=0.01, seed=42; this is important for checking the answers).
End of explanation
%pylab inline
plot(range(50), stoch_errors_by_iter[:50])
xlabel('Iteration number')
ylabel('MSE')
Explanation: Let us look at the error over the first 50 iterations of stochastic gradient descent. We see that the error does not necessarily decrease at every iteration.
End of explanation
%pylab inline
plot(range(len(stoch_errors_by_iter)), stoch_errors_by_iter)
xlabel('Iteration number')
ylabel('MSE')
Explanation: Now let us look at the error as a function of the iteration number over $10^5$ iterations of stochastic gradient descent. We see that the algorithm converges.
End of explanation
stoch_grad_desc_weights
Explanation: Let us look at the weight vector to which the method converged.
End of explanation
stoch_errors_by_iter[-1]
Explanation: Let us look at the mean squared error at the last iteration.
End of explanation
answer4 = stoch_errors_by_iter[-1]
print(answer4)
write_answer_to_file(answer4, '4.txt')
Explanation: What is the mean squared error of predicting Sales with the linear model whose weights were found via gradient descent? Write the answer to the file '4.txt'.
End of explanation |
13,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Take the set of pings, make sure we have actual clientIds and remove duplicate pings. We collect each unique ping.
Step1: Transform and sanitize the pings into arrays.
Step2: Create a set of pings from "core" to build a set of core client data. Output the data to CSV or Parquet.
This script is designed to loop over a range of days and output a single day for the given channels. Use explicit date ranges for backfilling, or now() - '1day' for automated runs. | Python Code:
def dedupe_pings(rdd):
return rdd.filter(lambda p: p["meta/clientId"] is not None)\
.map(lambda p: (p["meta/documentId"], p))\
.reduceByKey(lambda x, y: x)\
.map(lambda x: x[1])
Explanation: Take the set of pings, make sure we have actual clientIds and remove duplicate pings. We collect each unique ping.
End of explanation
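A tiny illustrative check of dedupe_pings (it assumes the notebook's SparkContext sc is available): two pings share a documentId and one has no clientId, so a single ping survives.
sample = [
    {"meta/clientId": "a", "meta/documentId": "doc1"},
    {"meta/clientId": "a", "meta/documentId": "doc1"},
    {"meta/clientId": None, "meta/documentId": "doc2"},
]
print(dedupe_pings(sc.parallelize(sample)).collect())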
# datetime and json are needed by this cell; they were presumably imported in an uncaptured setup cell.
import datetime as dt
import json
def transform(ping):
# Should not be None since we filter those out.
clientId = ping["meta/clientId"]
# Added via the ingestion process so should not be None.
submissionDate = dt.datetime.strptime(ping["meta/submissionDate"], "%Y%m%d")
geoCountry = ping["meta/geoCountry"]
profileDate = None
profileDaynum = ping["profileDate"]
if profileDaynum is not None:
try:
# Bad data could push profileDaynum > 32767 (size of a C int) and throw exception
profileDate = dt.datetime(1970, 1, 1) + dt.timedelta(int(profileDaynum))
except:
profileDate = None
# Create date should already be in ISO format
creationDate = ping["creationDate"]
if creationDate is not None:
# This is only accurate because we know the creation date is always in 'Z' (zulu) time.
creationDate = dt.datetime.strptime(ping["creationDate"], "%Y-%m-%dT%H:%M:%S.%fZ")
appVersion = ping["meta/appVersion"]
buildId = ping["meta/appBuildId"]
locale = ping["locale"]
os = ping["os"]
osVersion = ping["osversion"]
device = ping["device"]
arch = ping["arch"]
defaultSearch = ping["defaultSearch"]
distributionId = ping["distributionId"]
experiments = ping["experiments"]
if experiments is None:
experiments = []
#bug 1315028
defaultNewTabExperience = ping["defaultNewTabExperience"]
defaultMailClient = ping["defaultMailClient"]
#bug 1307419
searches = ping["searches"]
durations = ping["durations"]
sessions = ping["sessions"]
return [clientId, submissionDate, creationDate, profileDate, geoCountry, locale, os,
osVersion, buildId, appVersion, device, arch, defaultSearch, distributionId,
json.dumps(experiments), defaultNewTabExperience, defaultMailClient, searches,
durations, sessions]
Explanation: Transform and sanitize the pings into arrays.
End of explanation
channels = ["nightly", "aurora", "beta", "release"]
batch_date = os.environ.get('date')
if batch_date:
start = end = dt.datetime.strptime(batch_date, '%Y%m%d')
else:
start = dt.datetime.now() - dt.timedelta(1)
end = dt.datetime.now() - dt.timedelta(1)
day = start
while day <= end:
for channel in channels:
print("\nchannel: " + channel + ", date: " + day.strftime("%Y%m%d"))
kwargs = dict(
doc_type="core",
submission_date=(day.strftime("%Y%m%d"), day.strftime("%Y%m%d")),
channel=channel,
app="Fennec",
fraction=1
)
# Grab all available source_version pings
pings = get_pings(sc, source_version="*", **kwargs)
subset = get_pings_properties(pings, ["meta/clientId",
"meta/documentId",
"meta/submissionDate",
"meta/appVersion",
"meta/appBuildId",
"meta/geoCountry",
"locale",
"os",
"osversion",
"device",
"arch",
"profileDate",
"creationDate",
"defaultSearch",
"distributionId",
"experiments",
"defaultNewTabExperience",
"defaultMailClient",
"searches",
"durations",
"sessions"])
subset = dedupe_pings(subset)
print "\nDe-duped pings:" + str(subset.count())
print("\nDe-duped pings:" + str(subset.count()))
print(subset.first())
transformed = subset.map(transform)
print("\nTransformed pings:" + str(transformed.count()))
print(transformed.first())
s3_output += "/v1/channel=" + channel + "/submission=" + day.strftime("%Y%m%d")
schema = StructType([
StructField("clientid", StringType(), False),
StructField("submissiondate", TimestampType(), False),
StructField("creationdate", TimestampType(), True),
StructField("profiledate", TimestampType(), True),
StructField("geocountry", StringType(), True),
StructField("locale", StringType(), True),
StructField("os", StringType(), True),
StructField("osversion", StringType(), True),
StructField("buildid", StringType(), True),
StructField("appversion", StringType(), True),
StructField("device", StringType(), True),
StructField("arch", StringType(), True),
StructField("defaultsearch", StringType(), True),
StructField("distributionid", StringType(), True),
StructField("experiments", StringType(), True),
StructField("defaultNewTabExperience", StringType(), True),
StructField("defaultMailClient", StringType(), True),
StructField("searches", StringType(), True),
StructField("durations", StringType(), True),
StructField("sessions", StringType(), True)
])
# Make parquet parition file size large, but not too large for s3 to handle
coalesce = 1
if channel == "release":
coalesce = 4
grouped = sqlContext.createDataFrame(transformed, schema)
grouped.coalesce(coalesce).write.mode('overwrite').parquet(s3_output)
day += dt.timedelta(1)
Explanation: Create a set of pings from "core" to build a set of core client data. Output the data to CSV or Parquet.
This script is designed to loop over a range of days and output a single day for the given channels. Use explicit date ranges for backfilling, or now() - '1day' for automated runs.
End of explanation |
13,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames Kraus2017
Title
Step1: Get all tables right away.
There are 7 tables.
Step2: Convert the astropy tables to pandas dataframes. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
pd.options.display.max_columns = 150
#%config InlineBackend.figure_format = 'retina'
import astropy
from astropy.table import Table
from astropy.io import ascii
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
Explanation: ApJdataFrames Kraus2017
Title: The Greater Taurus–Auriga Ecosystem. I. There is a Distributed Older Population
Authors: Kraus, Herczeg, et al.
Data are from this paper:
http://iopscience.iop.org/article/10.3847/1538-4357/aa62a0/meta
End of explanation
#! mkdir ../data/Kraus2017
#! wget -q --directory-prefix=../data/Kraus2017/ http://iopscience.iop.org/0004-637X/838/2/150/suppdata/apjaa62a0t1_mrt.txt
#! wget -q --directory-prefix=../data/Kraus2017/ http://iopscience.iop.org/0004-637X/838/2/150/suppdata/apjaa62a0t2_mrt.txt
#! wget -q --directory-prefix=../data/Kraus2017/ http://iopscience.iop.org/0004-637X/838/2/150/suppdata/apjaa62a0t3_mrt.txt
#! wget -q --directory-prefix=../data/Kraus2017/ http://iopscience.iop.org/0004-637X/838/2/150/suppdata/apjaa62a0t4_mrt.txt
#! wget -q --directory-prefix=../data/Kraus2017/ http://iopscience.iop.org/0004-637X/838/2/150/suppdata/apjaa62a0t5_mrt.txt
#! wget -q --directory-prefix=../data/Kraus2017/ http://iopscience.iop.org/0004-637X/838/2/150/suppdata/apjaa62a0t6_mrt.txt
#! wget -q --directory-prefix=../data/Kraus2017/ http://iopscience.iop.org/0004-637X/838/2/150/suppdata/apjaa62a0t7_mrt.txt
! ls -1 ../data/Kraus2017/
! head ../data/Kraus2017/apjaa62a0t7_mrt.txt
tab1 = ascii.read('../data/Kraus2017/apjaa62a0t1_mrt.txt')
#tab1.show_in_notebook(display_length=5)
#tab1.write('../data/Kraus2017/tab1.csv', format='ascii.csv', overwrite=True)
tab5 = ascii.read('../data/Kraus2017/apjaa62a0t5_mrt.txt')
Explanation: Get all tables right away.
There are 7 tables.
End of explanation
df1, df5 = tab1.to_pandas(), tab5.to_pandas()
df1.head()
df5.head()
df1.shape
df5.shape
Explanation: Convert the astropy tables to pandas dataframes.
End of explanation |
13,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
logictools WaveDrom Tutorial
WaveDrom is a tool for rendering digital timing waveforms. The waveforms are defined in a simple textual format.
This notebook will show how to render digital waveforms using the pynq library.
The logictools overlay uses the same format as WaveDrom to specify and generate real signals on the board.
A full tutorial of WaveDrom can be found here
Step 1
Step1: A simple function to add wavedrom diagrams into a Jupyter notebook. It utilises the wavedrom JavaScript library.
Example usage
Step2: Notes on waveform specification
Step 3
Step3: Notes on waveform specification
Adding multiple wave groups and spaces
Step4: Notes on waveform specification
WaveDrom for real-time pattern generation and trace analysis
The logictools overlay uses WaveJSON format to specify and generate real signals on the board.
As shown in the figure above, the Pattern Generator is an output-only block that specifies a sequence of logic values (patterns) which appear on the output pins of the ARDUINO interface. The logictools API for Pattern Generator accepts WaveDrom specification syntax with some enhancements.
The Trace Analyzer is an input-only block that captures and records all the IO signals. These signals may be outputs driven by the generators or inputs to the PL that are driven by external circuits. The Trace Analyzer allows us to verify that the output signals we have specified from the generators are being applied correctly. It also allows us to debug and analyze the operation of the external interface.
The signals generated or captured by both the blocks can be displayed in the notebook by populating the WaveJSON dictionary that we have seen in this notebook. Users can access this dictionary through the provided API to extend or modify the waveform with special annotations.
We use a subset of the wave tokens that are allowed by WaveDrom to specify the waveforms for the Pattern Generator. However, users can call the draw_waveform() method on the dictionary populated by the Trace Analyzer to extend and modify the dictionary with annotations.
In the example below, we are going to generate 3 signals on the Arduino interface pins D0, D1 and D2 using the Pattern Generator. Since all IOs are accessible to the Trace analyzer, we will capture the data on the pins as well. This operation will serve as an internal loopback.
Step 1
Step5: Note
Step6: Step 3
Step7: Step 4
Step8: Note | Python Code:
from pynq.lib.logictools.waveform import draw_wavedrom
Explanation: logictools WaveDrom Tutorial
WaveDrom is a tool for rendering digital timing waveforms. The waveforms are defined in a simple textual format.
This notebook will show how to render digital waveforms using the pynq library.
The logictools overlay uses the same format as WaveDrom to specify and generate real signals on the board.
A full tutorial of WaveDrom can be found here
Step 1: Import the draw_wavedrom() method from the pynq library
End of explanation
clock = {'signal': [{'name': 'clock_0', 'wave': 'hlhlhlhlhlhlhlhl'}],
'foot': {'tock': 1},
'head': {'text': 'Clock Signal'}}
draw_wavedrom(clock)
Explanation: A simple function to add wavedrom diagrams into a Jupyter notebook. It utilises the wavedrom JavaScript library.
Example usage:
```python
from pynq.lib.logictools.waveform import draw_wavedrom
clock = {'signal': [{'name': 'clk', 'wave': 'h....l...'}]}
draw_wavedrom(clock)
```
Method:
```python
def draw_wavedrom(data, width=None):
    # Note the optional argument width forces the width in pixels
```
Step 2: Specify and render a waveform
End of explanation
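Since draw_wavedrom accepts the optional width argument documented above, the rendered width (in pixels) can be forced when a waveform is wide; for example:
draw_wavedrom(clock, width=800)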
pattern = {'signal': [{'name': 'clk', 'wave': 'hl' * 8},
{'name': 'clkn', 'wave': 'lh' * 8},
{'name': 'data0', 'wave': 'l.......h.......'},
{'name': 'data1', 'wave': 'h.l...h...l.....'}],
'foot': {'tock': 1},
'head': {'text': 'Pattern'}}
draw_wavedrom(pattern)
Explanation: Notes on waveform specification
Step 3: Adding more signals to the waveform
End of explanation
pattern_group = {'signal': [['Group1',
{'name': 'clk', 'wave': 'hl' * 8},
{'name': 'clkn', 'wave': 'lh' * 8},
{'name': 'data0', 'wave': 'l.......h.......'},
{'name': 'data1', 'wave': 'h.l...h...l.....'}],
{},
['Group2',
{'name': 'data2', 'wave': 'l...h..l.h......'},
{'name': 'data3', 'wave': 'l.h.' * 4}]],
'foot': {'tock': 1},
'head': {'text': 'Pattern'}}
draw_wavedrom(pattern_group)
Explanation: Notes on waveform specification
Adding multiple wave groups and spaces
End of explanation
from pynq.lib.logictools import Waveform
from pynq.overlays.logictools import LogicToolsOverlay
logictools_olay = LogicToolsOverlay('logictools.bit')
loopback_test = {'signal': [
['stimulus',
{'name': 'output0', 'pin': 'D0', 'wave': 'lh' * 8},
{'name': 'output1', 'pin': 'D1', 'wave': 'l.h.' * 4},
{'name': 'output2', 'pin': 'D2', 'wave': 'l...h...' * 2}],
{},
['analysis',
{'name': 'input0', 'pin': 'D0'},
{'name': 'input1', 'pin': 'D1'},
{'name': 'input2', 'pin': 'D2'}]],
'foot': {'tock': 1},
'head': {'text': 'loopback_test'}}
waveform = Waveform(loopback_test)
waveform.display()
Explanation: Notes on waveform specification
WaveDrom for real-time pattern generation and trace analysis
The logictools overlay uses WaveJSON format to specify and generate real signals on the board.
As shown in the figure above, the Pattern Generator is an output-only block that specifies a sequence of logic values (patterns) which appear on the output pins of the ARDUINO interface. The logictools API for Pattern Generator accepts WaveDrom specification syntax with some enhancements.
The Trace Analyzer is an input-only block that captures and records all the IO signals. These signals may be outputs driven by the generators or inputs to the PL that are driven by external circuits. The Trace Analyzer allows us to verify that the output signals we have specified from the generators are being applied correctly. It also allows us to debug and analyze the operation of the external interface.
The signals generated or captured by both the blocks can be displayed in the notebook by populating the WaveJSON dictionary that we have seen in this notebook. Users can access this dictionary through the provided API to extend or modify the waveform with special annotations.
We use a subset of the wave tokens that are allowed by WaveDrom to specify the waveforms for the Pattern Generator. However, users can call the draw_waveform() method on the dictionary populated by the Trace Analyzer to extend and modify the dictionary with annotations.
In the example below, we are going to generate 3 signals on the Arduino interface pins D0, D1 and D2 using the Pattern Generator. Since all IOs are accessible to the Trace analyzer, we will capture the data on the pins as well. This operation will serve as an internal loopback.
Step 1: Download the logictools overlay and specify the pattern
The pattern to be generated is specified in the waveJSON format. The Waveform class is used to display the specified waveform.
End of explanation
pattern_generator = logictools_olay.pattern_generator
pattern_generator.trace(num_analyzer_samples=16)
pattern_generator.setup(loopback_test,
stimulus_group_name='stimulus',
analysis_group_name='analysis')
pattern_generator.run()
pattern_generator.show_waveform()
Explanation: Note: Since there are no captured samples at this moment, the analysis group will be empty.
Notes on the enhanced WaveJSON specification format
Step 2: Run the pattern generator and trace the loopback signals.
This step populates the WaveJSON dict with the captured trace analyzer samples. The dict can now serve as an output that we can further modify. It is shown in the next step.
End of explanation
import pprint
output_wavejson = pattern_generator.waveform.waveform_dict
pprint.pprint(output_wavejson)
Explanation: Step 3: View the output waveJSON dict.
End of explanation
state_list = ['S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6', 'S7',
'S0', 'S1', 'S2', 'S3', 'S4', 'S5', 'S6', 'S7']
color_dict = {'white': '2', 'yellow': '3', 'orange': '4', 'blue': '5'}
output_wavejson['signal'].extend([{}, ['Annotation',
{'name': 'state',
'wave': color_dict['yellow'] * 8 +
color_dict['blue'] * 8,
'data': state_list}]])
Explanation: Step 4: Extending the output waveJSON dict with state annotation
End of explanation
draw_wavedrom(output_wavejson)
Explanation: Note: The color_dict is a color code map as defined by WaveDrom
End of explanation |
13,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load in Catalogue - Limit to ISC, GCMT/HRVD, EHB, NEIC, BJI
Step1: Define Rule Sets
The catalogue covers the years 2005/06. To illustrate how to apply time-variable hierarchies we consider two sets of rules
Step4: Magnitude Rules
GCMT/HRVD
Step11: ISC/NEIC
Step16: BJI
For BJI - no analysis has been undertaken. We apply a simple scaling of 0.9 M + 0.15 with uncertainty of 0.2. This is for illustrative purposes only
Step17: Define Magnitude Hierarchy
Step18: Pre-processing
Before executing the homogenisation it is necessary to run a preprocessing step. This searches through the catalogue and identifies which conversion rule to apply
Step19: Harmonise the Catalogue
Step20: As logging was enabled, we can dump the log to a csv file and explore which rules and which hierarchy was applied for each event
Step21: Export the Homogenised Catalogue to CSV | Python Code:
parser = ISFReader("inputs/isc_test_catalogue_isf.txt",
selected_origin_agencies=["ISC", "GCMT", "HRVD", "NEIC", "EHB", "BJI"],
selected_magnitude_agencies=["ISC", "GCMT", "HRVD", "NEIC", "BJI"])
catalogue = parser.read_file("ISC_DB1", "ISC Global M >= 5")
print("Catalogue contains: %d events" % catalogue.get_number_events())
Explanation: Load in Catalogue - Limit to ISC, GCMT/HRVD, EHB, NEIC, BJI
End of explanation
origin_rules = [
("2005/01/01 - 2005/12/31", ['EHB', 'ISC', 'NEIC', 'GCMT', 'HRVD', 'BJI']),
("2006/01/01 - 2007/01/01", ['ISC', 'EHB', 'NEIC', 'BJI', 'GCMT', 'HRVD'])
]
Explanation: Define Rule Sets
The catalogue covers the years 2005/06. To illustrate how to apply time-variable hierarchies we consider two sets of rules:
For the origin the order of preference is:
(For 2005): EHB, ISC, NEIC, GCMT/HRVD, BJI
(For 2006): ISC, EHB, NEIC, BJI, GCMT/HRVD
End of explanation
def gcmt_hrvd_mw(magnitude):
"""For Mw recorded by GCMT take the value with no uncertainty"""
return magnitude
def gcmt_hrvd_mw_sigma(magnitude):
"""No additional uncertainty"""
return 0.0
Explanation: Magnitude Rules
GCMT/HRVD
End of explanation
def neic_mw(magnitude):
"""If Mw reported by NEIC, take the value as given"""
return magnitude
def neic_mw_sigma(magnitude):
"""Uncertainty of 0.11 units"""
return 0.11
def scordillis_ms(magnitude):
"""Scordilis (2006) indicates ISC and NEIC Ms can be treated (almost) equivalently"""
if magnitude < 6.1:
return 0.67 * magnitude + 2.07
else:
return 0.99 * magnitude + 0.08
def scordillis_ms_sigma(magnitude):
"""With magnitude-dependent uncertainty"""
if magnitude < 6.1:
return 0.17
else:
return 0.20
def scordillis_mb(magnitude):
"""Scordilis (2006) finds NEIC and ISC mb nearly equivalent"""
return 0.85 * magnitude + 1.03
def scordillis_mb_sigma(magnitude):
return 0.29
Explanation: ISC/NEIC
End of explanation
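A quick worked check of the piecewise Scordilis (2006) Ms conversion defined above, on either side of the Ms 6.1 break:
print(scordillis_ms(5.0), scordillis_ms_sigma(5.0))  # 0.67 * 5.0 + 2.07 = 5.42, sigma 0.17
print(scordillis_ms(6.5), scordillis_ms_sigma(6.5))  # 0.99 * 6.5 + 0.08 = 6.515, sigma 0.20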
def bji_mb(magnitude):
return 0.9 * magnitude + 0.15
def bji_mb_sigma(magnitude):
return 0.2
def bji_ms(magnitude):
return 0.9 * magnitude + 0.15
def bji_ms_sigma(magnitude):
return 0.2
Explanation: BJI
For BJI - no analysis has been undertaken. We apply a simple scaling of 0.9 M + 0.15 with uncertainty of 0.2. This is for illustrative purposes only
End of explanation
rule_set_2005 = [
MagnitudeConversionRule("GCMT", "Mw", gcmt_hrvd_mw, gcmt_hrvd_mw_sigma),
MagnitudeConversionRule("HRVD", "Mw", gcmt_hrvd_mw, gcmt_hrvd_mw_sigma),
MagnitudeConversionRule("ISC", "Ms", scordillis_ms, scordillis_ms_sigma),
MagnitudeConversionRule("NEIC", "Ms", scordillis_ms, scordillis_ms_sigma),
MagnitudeConversionRule("ISC", "mb", scordillis_mb, scordillis_mb_sigma),
MagnitudeConversionRule("NEIC", "mb", scordillis_mb, scordillis_mb_sigma),
MagnitudeConversionRule("BJI", "Ms", bji_ms, bji_ms_sigma),
MagnitudeConversionRule("BJI", "mb", bji_mb, bji_mb_sigma)
]
rule_set_2006 = [
MagnitudeConversionRule("GCMT", "Mw", gcmt_hrvd_mw, gcmt_hrvd_mw_sigma),
MagnitudeConversionRule("HRVD", "Mw", gcmt_hrvd_mw, gcmt_hrvd_mw_sigma),
MagnitudeConversionRule("ISC", "Ms", scordillis_ms, scordillis_ms_sigma),
MagnitudeConversionRule("BJI", "Ms", bji_ms, bji_ms_sigma),
MagnitudeConversionRule("NEIC", "Ms", scordillis_ms, scordillis_ms_sigma),
MagnitudeConversionRule("ISC", "mb", scordillis_mb, scordillis_mb_sigma),
MagnitudeConversionRule("BJI", "mb", bji_mb, bji_mb_sigma),
MagnitudeConversionRule("NEIC", "mb", scordillis_mb, scordillis_mb_sigma)
]
magnitude_rules = [
("2005/01/01 - 2005/12/31", rule_set_2005),
("2006/01/01 - 2007/01/01", rule_set_2006)
]
Explanation: Define Magnitude Hierarchy
End of explanation
preprocessor = HomogenisorPreprocessor("time")
catalogue = preprocessor.execute(catalogue, origin_rules, magnitude_rules)
Explanation: Pre-processing
Before executing the homogenisation it is necessary to run a preprocessing step. This searches through the catalogue and identifies which conversion rule to apply:
The preprocessor is instantiated with a string describing the sort of rules to be applied.
"time" - Applies time only
"key" - Applies key rules only
"depth" - Applies depth rules only
"time|key" - Applies joint time and key rules
"time|depth" - Applies joint time and depth rules
"depth|key" - Applies joint depth and key rules
End of explanation
harmonisor = DynamicHomogenisor(catalogue, logging=True)
homogenised_catalogue = harmonisor.homogenise(magnitude_rules, origin_rules)
Explanation: Harmonise the Catalogue
End of explanation
log_file = "outputs/homogenisor_log.csv"
if os.path.exists(log_file):
os.remove(log_file)
harmonisor.dump_log(log_file)
Explanation: As logging was enabled, we can dump the log to a csv file and explore which rules and which hierarchy was applied for each event
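For a quick look at the dumped log, a minimal sketch using pandas (this assumes only that the file written above is a regular CSV; the exact column names depend on the hmtk version):
import pandas as pd
log_df = pd.read_csv("outputs/homogenisor_log.csv")
log_df.head()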
End of explanation
output_catalogue_file = "outputs/homogeneous_catalogue.csv"
if os.path.exists(output_catalogue_file):
os.remove(output_catalogue_file)
harmonisor.export_homogenised_to_csv(output_catalogue_file)
Explanation: Export the Homogenised Catalogue to CSV
End of explanation |
13,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 7
Step1: Setup an identical instance of NPTFit to Example 6
Firstly we initialize an instance of nptfit identical to that used in the previous example.
Step2: Evaluate the Likelihood Manually
After configuring for the scan, the instance of nptfit.NPTF now has an associated function ll. This function was passed to MultiNest in the previous example, but we can also manually evaluate it.
The log likelihood function is called as
Step3: To make the point clearer we can fix $n_1$ and $n_2$ to their best-fit values and calculate a Test Statistic (TS) array as we vary $\log_{10} \left( A^\mathrm{ps}_\mathrm{iso} \right)$. As shown, the likelihood is maximised approximately where MultiNest told us the best-fit point for this parameter lies.
Step4: Next we do the same thing for $n_2$. This time we see that this parameter is much more poorly constrained than the value of the normalisation, as the TS is very flat.
NB
Step5: In general $\theta$ will always be a flattened array of the floated parameters. Poisson parameters always occur first, in the order in which they were added (via add_poiss_model), following by non-Poissonian parameters in the order they were added (via add_non_poiss_model). To be explicit if we have $m$ Poissonian templates and $n$ non-Poissonian templates with breaks $\ell_n$, then | Python Code:
# Import relevant modules
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import healpy as hp
import matplotlib.pyplot as plt
from NPTFit import nptfit # module for performing scan
from NPTFit import create_mask as cm # module for creating the mask
from NPTFit import psf_correction as pc # module for determining the PSF correction
from NPTFit import dnds_analysis # module for analysing the output
from __future__ import print_function
Explanation: Example 7: Manual evaluation of non-Poissonian Likelihood
In this example we show how to manually evaluate the non-Poissonian likelihood. This can be used, for example, to interface nptfit with parameter estimation packages other than MultiNest. We also show how to extract the prior cube.
We will take the exact same analysis as considered in the previous example, and show the likelihood peaks at exactly the same location for the normalisation of the non-Poissonian template.
NB: This example makes use of the Fermi Data, which needs to already be installed. See Example 1 for details.
End of explanation
n = nptfit.NPTF(tag='non-Poissonian_Example')
fermi_data = np.load('fermi_data/fermidata_counts.npy').astype(np.int32)
fermi_exposure = np.load('fermi_data/fermidata_exposure.npy')
n.load_data(fermi_data, fermi_exposure)
analysis_mask = cm.make_mask_total(mask_ring = True, inner = 0, outer = 5, ring_b = 90, ring_l = 0)
n.load_mask(analysis_mask)
iso_p = np.load('fermi_data/template_iso.npy')
n.add_template(iso_p, 'iso_p')
iso_np = np.ones(len(iso_p))
n.add_template(iso_np, 'iso_np',units='PS')
n.add_poiss_model('iso_p','$A_\mathrm{iso}$', False, fixed=True, fixed_norm=1.51)
n.add_non_poiss_model('iso_np',
['$A^\mathrm{ps}_\mathrm{iso}$','$n_1$','$n_2$','$S_b$'],
[[-6,1],[2.05,30],[-2,1.95]],
[True,False,False],
fixed_params = [[3,172.52]])
pc_inst = pc.PSFCorrection(psf_sigma_deg=0.1812)
f_ary = pc_inst.f_ary
df_rho_div_f_ary = pc_inst.df_rho_div_f_ary
n.configure_for_scan(f_ary=f_ary, df_rho_div_f_ary=df_rho_div_f_ary, nexp=1)
Explanation: Setup an identical instance of NPTFit to Example 6
Firstly we initialize an instance of nptfit identical to that used in the previous example.
End of explanation
print('Vary A: ', n.ll([-4.76+0.32,18.26,0.06]), n.ll([-4.76,18.26,0.06]), n.ll([-4.76-0.37,18.26,0.06]))
print('Vary n1:', n.ll([-4.76,18.26+7.98,0.06]), n.ll([-4.76,18.26,0.06]), n.ll([-4.76,18.26-9.46,0.06]))
print('Vary n2:', n.ll([-4.76,18.26,0.06+0.93]), n.ll([-4.76,18.26,0.06]), n.ll([-4.76,18.26,0.06-1.31]))
Explanation: Evaluate the Likelihood Manually
After configuring for the scan, the instance of nptfit.NPTF now has an associated function ll. This function was passed to MultiNest in the previous example, but we can also manually evaluate it.
The log likelihood function is called as: ll(theta), where theta is a flattened array of parameters. In the case above:
$$ \theta = \left[ \log_{10} \left( A^\mathrm{ps}_\mathrm{iso} \right), n_1, n_2 \right] $$
As an example we can evaluate it at a few points around the best fit parameters:
End of explanation
Avals = np.arange(-6.0,-2.0,0.01)
TSvals_A = np.array([2*(n.ll([-4.76,18.26,0.06])-n.ll([Avals[i],18.26,0.06])) for i in range(len(Avals))])
plt.plot(Avals,TSvals_A,color='black', lw=1.5)
plt.axvline(-4.76+0.32,ls='dashed',color='black')
plt.axvline(-4.76,ls='dashed',color='black')
plt.axvline(-4.76-0.37,ls='dashed',color='black')
plt.axhline(0,ls='dashed',color='black')
plt.xlim([-5.5,-4.0])
plt.ylim([-5.0,15.0])
plt.xlabel('$A^\mathrm{ps}_\mathrm{iso}$')
plt.ylabel('$\mathrm{TS}$')
plt.show()
Explanation: To make the point clearer we can fix $n_1$ and $n_2$ to their best-fit values and calculate a Test Statistic (TS) array as we vary $\log_{10} \left( A^\mathrm{ps}_\mathrm{iso} \right)$. As shown, the likelihood is maximised approximately where MultiNest told us the best-fit point for this parameter lies.
End of explanation
n2vals = np.arange(-1.995,1.945,0.01)
TSvals_n2 = np.array([2*(n.ll([-4.76,18.26,0.06])-n.ll([-4.76,18.26,n2vals[i]])) for i in range(len(n2vals))])
plt.plot(n2vals,TSvals_n2,color='black', lw=1.5)
plt.axvline(0.06+0.93,ls='dashed',color='black')
plt.axvline(0.06,ls='dashed',color='black')
plt.axvline(0.06-1.31,ls='dashed',color='black')
plt.axhline(0,ls='dashed',color='black')
plt.xlim([-2.0,1.5])
plt.ylim([-5.0,15.0])
plt.xlabel('$n_2$')
plt.ylabel('$\mathrm{TS}$')
plt.show()
Explanation: Next we do the same thing for $n_2$. This time we see that this parameter is much more poorly constrained than the value of the normalisation, as the TS is very flat.
NB: it is important not to evaluate breaks exactly at a value of $n=1$. The reason for this is the analytic form of the likelihood involves $(n-1)^{-1}$.
End of explanation
print(n.prior_cube(cube=[1,1,1],ndim=3))
Explanation: In general $\theta$ will always be a flattened array of the floated parameters. Poisson parameters always occur first, in the order in which they were added (via add_poiss_model), followed by non-Poissonian parameters in the order they were added (via add_non_poiss_model). To be explicit, if we have $m$ Poissonian templates and $n$ non-Poissonian templates with breaks $\ell_n$, then:
$$ \theta = \left[ A_\mathrm{P}^1, \ldots, A_\mathrm{P}^m, A_\mathrm{NP}^1, n_1^1, \ldots, n_{\ell_1+1}^1, S_b^{(1)~1}, \ldots, S_b^{(\ell_1)~1}, \ldots, A_\mathrm{NP}^n, n_1^n, \ldots, n_{\ell_n+1}^n, S_b^{(1)~n}, \ldots, S_b^{(\ell_n)~n} \right]
$$
Fixed parameters are deleted from the list, and any parameter entered with a log flat prior is replaced by $\log_{10}$ of itself.
Extract the Prior Cube Manually
To extract the prior cube, we use the internal function log_prior_cube. This requires two arguments: 1. cube, the unit cube of dimension equal to the number of floated parameters; and 2. ndim, the number of floated parameters.
End of explanation |
13,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PageRank
Ways to think about SVD
Data compression
SVD trades a large number of features for a smaller set of better features
All matrices are diagonal (if you use change of bases on the domain and range)
Relationship between SVD and Eigen Decomposition
Step1: numpy.matrix.A1
Return self as a flattened ndarray. Equivalent to np.asarray(x).ravel()
numpy.matrix.A1 — NumPy v1.14 Manual
Step2: How to normalize a sparse matrix
Step3: QR decomposition
Step4: The Arnoldi Iteration is two things | Python Code:
#@title Power iteration
import numpy as np
def power_iteration(A, num_simulations):
# Ideally choose a random vector
# To decrease the chance that our vector
# Is orthogonal to the eigenvector
b_k = np.random.rand(A.shape[0])
for _ in range(num_simulations):
# calculate the matrix-by-vector product Ab
b_k1 = np.dot(A, b_k)
# calculate the norm
b_k1_norm = np.linalg.norm(b_k1)
# re normalize the vector
b_k = b_k1 / b_k1_norm
return b_k
power_iteration(np.array([[0.5, 0.5], [0.2, 0.8]]), 100)
# Sparse-matrix version of the power method
import numpy as np
from scipy import sparse
def power_method(A, max_iter=100):
n = A.shape[1]
    A = A.copy()  # use the sparse matrix's own copy() so .data/.indices remain available
A.data /= np.take(A.sum(axis=0).A1, A.indices)
scores = np.ones(n, dtype=np.float32) * np.sqrt(A.sum()/(n*n)) # initial guess
for i in range(max_iter):
scores = A @ scores
nrm = np.linalg.norm(scores)
scores /= nrm
print(nrm)
return scores
x = np.matrix(np.arange(12).reshape((3,4)))
a = sparse.csr_matrix(x, dtype=np.float32)
power_method(a, max_iter=10)
np.random.randn(2, 4)
Explanation: PageRank
Ways to think about SVD
Data compression
SVD trades a large number of features for a smaller set of better features
All matrices are diagonal (if you use change of bases on the domain and range)
Relationship between SVD and Eigen Decomposition: the left-singular vectors of A are the eigenvectors of $AA^T$. The right-singular vectors of A are the eigenvectors of $A^T A$. The non-zero singular values of A are the square roots of the eigenvalues of $A^T A$ (and $A A^T$).
SVD is a generalization of eigen decomposition. Not all matrices have eigenvalues, but ALL matrices have singular values.
A Hermitian matrix is one that is equal to its own conjugate transpose. In the case of real-valued matrices (which is all we are considering in this course), Hermitian means the same as Symmetric.
Relevant Theorems:
- If A is symmetric, then eigenvalues of A are real and $A = Q \Lambda Q^T$
- If A is triangular, then its eigenvalues are equal to its diagonal entries
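As a quick numerical check of the SVD/eigendecomposition relationship above, a minimal sketch using only numpy (A_chk, U, s, Vt, evals are illustrative names not used elsewhere in this notebook):
A_chk = np.random.rand(5, 3)
U, s, Vt = np.linalg.svd(A_chk, full_matrices=False)
evals = np.linalg.eigvalsh(A_chk.T @ A_chk)           # eigenvalues of the symmetric matrix A^T A
print(np.allclose(np.sort(s**2), np.sort(evals)))     # squared singular values equal eigenvalues of A^T A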
The classic way to determine the relative importance of vertices in a graph is to compute the principal eigenvector of the adjacency matrix, so that each vertex is assigned the value of its component in the first eigenvector as a centrality score.
Wikipedia principal eigenvector - scikit-learn 0.19.1 documentation
Eigenvector centrality - Wikipedia
Power iteration - Wikipedia
Katz centrality - Wikipedia
PageRank - Wikipedia
The PageRank algorithm, from principle to implementation - CSDN blog (in Chinese)
End of explanation
x = np.matrix(np.arange(12).reshape((3,4)))
x
x.A1
x.ravel()
x.A1.shape, x.ravel().shape
Explanation: numpy.matrix.A1
Return self as a flattened ndarray. Equivalent to np.asarray(x).ravel()
numpy.matrix.A1 — NumPy v1.14 Manual
End of explanation
from scipy import sparse
S = sparse.csr_matrix(np.array([[1,2],[3,4]]))
S
Sr = S.sum(axis=0).A1
Sr
S.indices
S.data
S.data / np.take(Sr, S.indices)
np.take(Sr, S.indices)
Explanation: How to normalize a sparse matrix
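Putting the pieces above together, a minimal sketch that column-normalises S (S_norm is an illustrative name; the cast avoids an in-place integer/float divide error):
S_norm = S.astype(np.float64)
S_norm.data /= np.take(S_norm.sum(axis=0).A1, S_norm.indices)
S_norm.todense()   # each column now sums to 1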
End of explanation
from numba import jit
@jit()
def pure_qr(A, max_iter=50000):
Ak = np.copy(A)
n = A.shape[0]
QQ = np.eye(n)
for k in range(max_iter):
Q, R = np.linalg.qr(Ak)
Ak = R @ Q
QQ = QQ @ Q
if k % 100 == 0:
print(Ak)
print("\n")
return Ak, QQ
n = 6
A = np.random.rand(n,n)
AT = A @ A.T
Ak, Q = pure_qr(A)
# Eigenvalues of A, for comparison
np.linalg.eigvals(A)
# Q is orthogonal
np.allclose(np.eye(n), Q @ Q.T), np.allclose(np.eye(n), Q.T @ Q)
Explanation: QR decomposition
End of explanation
# Decompose square matrix A @ Q ~= Q @ H
def arnoldi(A):
m, n = A.shape
assert(n <= m)
# Hessenberg matrix
H = np.zeros([n+1,n]) #, dtype=np.float64)
# Orthonormal columns
Q = np.zeros([m,n+1]) #, dtype=np.float64)
# 1st col of Q is a random column with unit norm
b = np.random.rand(m)
Q[:,0] = b / np.linalg.norm(b)
for j in range(n):
v = A @ Q[:,j]
for i in range(j+1):
#This comes from the formula for projection of v onto q.
#Since columns q are orthonormal, q dot q = 1
H[i,j] = np.dot(Q[:,i], v)
v = v - (H[i,j] * Q[:,i])
H[j+1,j] = np.linalg.norm(v)
Q[:,j+1] = v / H[j+1,j]
# printing this to see convergence, would be slow to use in practice
print(np.linalg.norm(A @ Q[:,:-1] - Q @ H))
return Q[:,:-1], H[:-1,:]
Q, H = arnoldi(A)
H
Q
n = 10
A0 = np.random.rand(n,n)
A = A0 @ A0.T
np.linalg.eigvals(A)
Explanation: The Arnoldi Iteration is two things:
1. the basis of many of the iterative algorithms of numerical linear algebra
2. a technique for finding eigenvalues of nonhermitian matrices
(Trefethen, page 257)
How Arnoldi Locates Eigenvalues
Carry out Arnoldi iteration
Periodically calculate the eigenvalues (called Arnoldi estimates or Ritz values) of the Hessenberg H, using the QR algorithm
Check whether these values are converging. If they are, they're probably eigenvalues of A.
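A small illustration of the last point, using np.linalg.eigvals in place of the QR algorithm (Q2, H2 are illustrative names; since arnoldi() above builds the full n-dimensional Krylov basis, the eigenvalues of H should reproduce those of A up to round-off):
Q2, H2 = arnoldi(A)
print(np.sort(np.linalg.eigvals(H2).real))
print(np.sort(np.linalg.eigvals(A).real))   # Ritz values of the full Hessenberg match A's eigenvalues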
End of explanation |
13,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ProteinB
Step1: Auxiliary functions
Step2: Create a feature reader
We create a feature reader to obtain minimal distances between all residues which are not close neighbours. Feel free to map these distances to binary contacts or use inverse minimal residue distances instead. These choices usually work quite well.
Step3: Discretization and MSM estimation
We start the actual analysis with a TICA projection onto two components, on which we perform a k-means clustering. Then, we take a quick look at the implied timescale convergence, the 2D representation, and the clustering
Step4: RMSD
Step5: Defining macrostates based on the RMSD (Folded < 5, Unfolded > 7)
Step6: Fundamental Sequences
Building the model
For the calculation of the fundamental sequences the microstates inside every macrostate (A and B) are merged together. Then the states A and B are no longer composed of multiple microstates but of a single (big) one.
Step7: Obtaining the FSs
Step8: Comparing apples to apples
Step9: Plot | Python Code:
import sys
import math
sys.path.append("/Users/suarezalvareze2/Documents/workspace/NMpathAnalysis/nmpath")
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pyemma
import mdtraj as md
from glob import glob
# My modules
from auxfunctions import *
from mfpt import *
from clustering import *
from nmm import NonMarkovModel, MarkovPlusColorModel
# Print
from IPython.display import Markdown, display
Explanation: ProteinB:
Mechanism/Pathway Distribution using Fundamental Sequences (after mfpt clustering)
This is a modification of the notebook given by Chris ([email protected]); in this case we are going to use it to cluster the microstates based on the commute times.
End of explanation
def get_lagtime_from_array(lags, lagtime, dt=0.2):
idx = np.argmin(np.abs(lags * dt - lagtime))
return idx, lags[idx]
def printmd(string):
display(Markdown(string))
def plot_t_AB(t_cut_values, t_min_list, t_max_list, t_AB_list):
t_cut_values_ns = np.array(t_cut_values)*dt
t_min_list_ns = np.array(t_min_list)*dt
t_max_list_ns = np.array(t_max_list)*dt
t_AB_list_ns = np.array(t_AB_list)*dt
fig = plt.figure(figsize=(15,3))
ax1 = fig.add_subplot(131)
ax1.plot(t_cut_values_ns , t_AB_list_ns, "-o")
ax1.set_xlabel("$t_{cut}\mathrm{(ns)}$", fontsize = 18)
ax1.set_ylabel("$t_{AB}\mathrm{(ns)}$", fontsize = 18)
#ax1.set_xlim(40,105)
ax2 = fig.add_subplot(132)
ax2.plot(t_cut_values_ns, t_AB_list_ns/t_cut_values_ns, "-o",c="r")
ax2.set_xlabel("$t_{cut}\mathrm{(ns)}$", fontsize = 18)
ax2.set_ylabel("$t_{AB} / t_{cut}$", fontsize = 18)
#ax2.set_xlim(40,105)
ax3 = fig.add_subplot(133)
ax3.plot(t_cut_values_ns, t_max_list_ns/t_cut_values_ns, "-o",c="g")
ax3.set_xlabel("$t_{cut}\mathrm{(ns)}$", fontsize = 18)
ax3.set_ylabel("$t_{max} / t_{cut}$", fontsize = 18)
#ax3.set_xlim(40,105)
plt.show()
def cdf(pmf):
mycdf=[]
tot = 0
for element in pmf:
tot+= element
mycdf.append(tot)
return np.array(mycdf)
color_sequence = ['#d62728', '#ff9896', '#9467bd',
'#8c564b', '#c49c94', '#e377c2', '#f7b6d2', '#7f7f7f',
'#c7c7c7', '#bcbd22', '#dbdb8d', '#17becf', '#9edae5', '#98df8a']
def confindence_interval_cdf(populations, totCounts, conf_interval=0.95, n_samples=100000):
counts = np.round(np.array(populations)*totCounts)
partialCounts = sum(counts)
myarray = list(counts)+[totCounts-partialCounts]
s=np.random.dirichlet(myarray,n_samples)
s_cdf = []
for line in s:
s_cdf.append(cdf(line))
s_cdf = np.array(s_cdf)
s = np.transpose(s)
s_cdf = np.transpose(s_cdf)
minval = []
maxval = []
minvalcdf = []
maxvalcdf = []
for line in s:
sorted_line = np.sort(line)
minval.append(sorted_line[int( (1-conf_interval)/2 * len(sorted_line))])
maxval.append(sorted_line[int( (1-(1-conf_interval)/2) * len(sorted_line))])
for line in s_cdf:
sorted_line = np.sort(line)
minvalcdf.append(sorted_line[int( (1-conf_interval)/2 * len(sorted_line))])
maxvalcdf.append(sorted_line[int( (1-(1-conf_interval)/2) * len(sorted_line))])
return minvalcdf[:-1], maxvalcdf[:-1]
def plot_rmsd_histogram_clusters(t_cut_values, big_clusters_list, rmsd, dt, dtrajs):
max_ = len(t_cut_values)
select_to_plot= range(0, max_ ,3) # This will print the first column of the free energy plots
for i in select_to_plot:
macrostates = big_clusters_list[i]
rmsd_cluster0=[]
rmsd_cluster1=[]
for j, microstate in enumerate(dtrajs[0]): # There is only one traj
if microstate in macrostates[0]:
rmsd_cluster0.append(rmsd[j])
elif (len(macrostates) > 1) and microstate in macrostates[1]:
rmsd_cluster1.append(rmsd[j])
fig = plt.figure(figsize=(5,2))
plt.hist(rmsd_cluster0,normed=True, bins=25, color="r",
alpha=0.5,label="cluster-0", edgecolor="r")
if len(macrostates) > 1:
plt.hist(rmsd_cluster1,normed=True, bins=25, color="b",
alpha=0.5,label="cluster-1", edgecolor="b")
plt.xlabel("RMSD$(\AA)$",fontsize=12)
plt.ylabel("Probability Dens.",fontsize=12)
plt.legend()
#plt.title("t_cut: {:.2f}ns".format(t_cut_values_ns[i]))
plt.annotate("t_cut: {:.2f}ns".format(t_cut_values[i]*dt), xy=(1,2))
plt.xlim([0,7])
plt.show()
color_sequence = ['#d62728', '#ff9896', '#9467bd',
'#8c564b', '#c49c94', '#e377c2', '#f7b6d2', '#7f7f7f',
'#c7c7c7', '#bcbd22', '#dbdb8d', '#17becf', '#9edae5', '#98df8a']
Explanation: Auxiliary functions
End of explanation
traj_files = [f for f in sorted(glob('../../../DESHAWTRAJS/DESRES-Trajectory_PRB-0-protein/PRB-0-protein/PRB-0-protein-*.dcd'))]
pdb_file = '../../../DESHAWTRAJS/DESRES-Trajectory_PRB-0-protein/proteinB.pdb'
features = pyemma.coordinates.featurizer(pdb_file)
features.add_residue_mindist()
source = pyemma.coordinates.source([traj_files], features=features, chunk_size=10000)
Explanation: Create a feature reader
We create a feature reader to obtain minimal distances between all residues which are not close neighbours. Feel free to map these distances to binary contacts or use inverse minimal residue distances instead. These choices usually work quite well.
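For reference, a hedged sketch of the binary-contact variant mentioned above (pyemma's add_residue_mindist accepts a threshold, assumed here to be in nm, that turns distances into 0/1 contacts; not used in the analysis below):
# features_bin = pyemma.coordinates.featurizer(pdb_file)
# features_bin.add_residue_mindist(threshold=0.5)   # 0.5 nm cutoff -> binary contact map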
End of explanation
tica = pyemma.coordinates.tica(data=source, lag=5, dim=2).get_output()[0]
cluster = pyemma.coordinates.cluster_kmeans(tica, k=45, max_iter=100)
lags = np.asarray([1, 5, 10, 20, 50] + [i * 100 for i in range(1, 21)])
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
pyemma.plots.plot_implied_timescales(
pyemma.msm.its(cluster.dtrajs, lags=lags, errors=None, nits=6),
ylog=False, ax=axes[0], units='us', dt=2.0E-4)
pyemma.plots.plot_free_energy(*tica.T, ax=axes[1])
axes[1].scatter(*cluster.clustercenters.T, marker='x', c='grey', s=30, label='centers')
axes[1].legend()
axes[1].set_xlabel('TIC 1 / a.u.')
axes[1].set_ylabel('TIC 2 / a.u.')
fig.tight_layout()
# MSM estimation
msm = [pyemma.msm.estimate_markov_model(cluster.dtrajs, lag=lag, dt_traj='0.0002 us') for lag in lags]
lag = get_lagtime_from_array(lags, 0.3, dt=2.0E-4)[1]
pyemma.plots.plot_cktest(pyemma.msm.bayesian_markov_model(cluster.dtrajs, lag=lag, dt_traj='0.0002 us').cktest(2))
print('Estimated at lagtime %d steps' % lag)
Explanation: Discretization and MSM estimation
We start the actual analysis with a TICA projection onto two components, on which we perform a k-means clustering. Then, we take a quick look at the implied timescale convergence, the 2D representation, and the clustering:
End of explanation
path='../../../DESHAWTRAJS/DESRES-Trajectory_PRB-0-protein/PRB-0-protein/'
reference = md.load_dcd(path + 'PRB-0-protein-008.dcd', top=pdb_file)
CA_atoms = reference.topology.select('name CA and resid 2 to 44')
rmsd = []
for traj_name in traj_files:
traj = md.load_dcd(traj_name, top=pdb_file)
for element in md.rmsd(traj, reference, 9600, atom_indices=CA_atoms):
rmsd.append(element)
fig = plt.figure(figsize=(17, 2))
plt.plot(rmsd[::500])
#plt.axis([0, 200, 0.0, 1.5])
plt.ylabel('RMSD(nm)')
plt.xlabel('Snapshot Num./500')
plt.show()
# to Angstrom
rmsd = np.array(rmsd) * 10.0
#histogram
fig = plt.figure(figsize=(5, 3))
ax1 = fig.add_subplot(111)
ax1.hist(rmsd[::100], normed=True, bins=30, color='r', alpha=0.5, edgecolor='r')
ax1.set_xlabel('RMSD$(\AA)$', fontsize=12)
ax1.set_ylabel('Probability Dens.', fontsize=12)
Explanation: RMSD
End of explanation
stateA=[]
stateB=[]
states_dic={}
states_rmsd={}
for i, r in enumerate(rmsd):
state = cluster.dtrajs[0][i]
if (not (state in states_dic)):
states_dic[state]=[r]
else:
states_dic[state].append(r)
for key,value in states_dic.items():
states_rmsd[key]=sum(value)/len(value)
for s, r in states_rmsd.items():
if r < 5:
stateA.append(s)
elif r > 7:
stateB.append(s)
print(stateA, stateB)
Explanation: Defining macrostates based on the RMSD (Folded < 5, Unfolded > 7)
End of explanation
nm_model = NonMarkovModel(cluster.dtrajs, stateA, stateB, lag_time=1, coarse_macrostates=True)
m_p_color = MarkovPlusColorModel(cluster.dtrajs, stateA, stateB, lag_time=1, coarse_macrostates=True, hist_length=100)
Explanation: Fundamental Sequences
Building the model
For the calculation of the fundamental sequences the microstates inside every macrostate (A and B) are merged together. Then the states A and B are no longer composed of multiple microstates but of a single (big) one.
End of explanation
mdFS, mdFSweights, tot_count_md = nm_model.empirical_weighted_FS()
nmFS, nmFSweights, _ = nm_model.weighted_FS()
mcFS, mcFSweights, _ = m_p_color.weighted_FS()
nm_model.markovian = True
msmFS, msmFSweights, _ = nm_model.weighted_FS() # lag=1
nm_model.lag_time = 10
msmFS_10, msmFSweights_10, _ = nm_model.weighted_FS() # lag=10
nm_model.lag_time = 50
msmFS_50, msmFSweights_50, _ = nm_model.weighted_FS() # lag=50
nm_model.lag_time = 1000
msmFS_1000, msmFSweights_1000, _ = nm_model.weighted_FS() # lag=1000
nm_model.lag_time
nm_model.markovian = False
Explanation: Obtaining the FSs
End of explanation
nmFSweights_temp = []
mcFSweights_temp = []
msmFSweights_temp = []
msmFSweights_temp_10 = []
msmFSweights_temp_50 = []
msmFSweights_temp_1000 = []
for i, element in enumerate(mdFS):
# lag=1
if element in nmFS:
nmFSweights_temp.append(nmFSweights[nmFS.index(element)])
else:
nmFSweights_temp.append(0)
if element in msmFS:
msmFSweights_temp.append(msmFSweights[msmFS.index(element)])
else:
msmFSweights_temp.append(0)
if element in mcFS:
mcFSweights_temp.append(mcFSweights[mcFS.index(element)])
else:
mcFSweights_temp.append(0)
# lag=10
if element in msmFS_10:
msmFSweights_temp_10.append(msmFSweights_10[msmFS_10.index(element)])
else:
msmFSweights_temp_10.append(0)
# lag=50
if element in msmFS_50:
msmFSweights_temp_50.append(msmFSweights_50[msmFS_50.index(element)])
else:
msmFSweights_temp_50.append(0)
# lag=1000
if element in msmFS_1000:
msmFSweights_temp_1000.append(msmFSweights_1000[msmFS_1000.index(element)])
else:
msmFSweights_temp_1000.append(0)
mdmin, mdmax = confindence_interval_cdf(mdFSweights, tot_count_md)
Explanation: Comparing apples to apples
End of explanation
printmd("#### Note: We use a reduced number of states for the Fundamental Sequences. The classes are ranked based on their empirical population")
alpha=0.8
x = list( range(len(mdFS)) )
plt.fill_between(x, mdmin, mdmax, color='green', alpha=0.4, label=r'MD Conf. Int. 95% ($\tau=0.2$ns)')
plt.plot(x, cdf(nmFSweights_temp), label = r'NM ($\tau=0.2$ns)', color='blue', alpha=alpha)
plt.plot(x, cdf(mcFSweights_temp), '--',label = r'NM ($\tau=0.2$ns, hist=20ns)', color='blue', alpha=alpha)
plt.plot(x, cdf(msmFSweights_temp),':', label = r'MSM ($\tau=0.2$ns)', color='red', alpha=alpha)
plt.plot(x, cdf(msmFSweights_temp_10),'--', label = r'MSM ($\tau=2.0$ns)', color='red', alpha=alpha)
plt.plot(x, cdf(msmFSweights_temp_50),'-.', label = r'MSM ($\tau=10$ns)', color='red', alpha=alpha)
plt.plot(x, cdf(msmFSweights_temp_1000),'-', label = r'MSM ($\tau=200$ns)', color='red', alpha=alpha)
plt.xticks([i for i in range(0,2*len(mdFS),1)])
plt.xlim([0,10])
plt.ylim([0,1.0])
plt.xlabel('Pathway Class', fontsize=14)
plt.ylabel('Cumulative Probability', fontsize=14)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.show()
Explanation: Plot
End of explanation |
13,610 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src='https
Step1: 如何使用和开发微信聊天机器人的系列教程
A workshop to develop & use an intelligent and interactive chat-bot in WeChat
WeChat is a popular social media app, which has more than 800 million monthly active users.
<img src='https
Step2: <span style="color
Step3: Asynchronous processing when triggering RPA-Bot
Step4: <span style="color
Step5: Retrieve rpa_bot_file based on received Chat-Bot command
Step6: 虚拟员工
Step7: IF !pip install pocketsphinx failed, THEN
Step8: Calling Local AI Module
Step9: Fuzzy match from 'transcribed audio command' to predefined 'chat_bot_command'
Automatically create a new lookup, by converting text-based intention command to voice-based intention command.
Example
Step10: Fuzzy match function
Step11: Retrieve rpa_bot_file based on received Chat-Bot command ( fuzzy match for voice/speech2text )
Step12: Control Parm
Step13: <span style="color
Step14: Log in using QR code image / 用微信App扫QR码图片来自动登录
Step15: 虚拟员工
Step16: 虚拟员工
Step17:
Step18:
Step19: 恭喜您!已经完成了:
第六课:交互式虚拟助手的智能应用
Lesson 6
Step20: <span style="color | Python Code:
import IPython.display
IPython.display.YouTubeVideo('YSL--3j12VA')
Explanation: <img src='https://www.iss.nus.edu.sg/Sitefinity/WebsiteTemplates/ISS/App_Themes/ISS/Images/branding-iss.png' width=15% style="float: right;">
<img src='https://www.iss.nus.edu.sg/Sitefinity/WebsiteTemplates/ISS/App_Themes/ISS/Images/branding-nus.png' width=15% style="float: right;">
End of explanation
# Copyright 2016 Google Inc. Licensed under the Apache License, Version 2.0 (the "License");
# !pip install --upgrade google-api-python-client
Explanation: 如何使用和开发微信聊天机器人的系列教程
A workshop to develop & use an intelligent and interactive chat-bot in WeChat
WeChat is a popular social media app, which has more than 800 million monthly active users.
<img src='https://www.iss.nus.edu.sg/images/default-source/About-Us/7.6.1-teaching-staff/sam-website.tmb-.png' width=8% style="float: right;">
<img src='reference/WeChat_SamGu_QR.png' width=10% style="float: right;">
by: GU Zhan (Sam)
October 2018 : Update to support Python 3 in local machine, e.g. iss-vm.
April 2017 ======= Scan the QR code to become trainer's friend in WeChat =====>>
第六课:交互式虚拟助手的智能应用
Lesson 6: Interactive Conversational Virtual Assistant Applications / Intelligent Process Automations
虚拟员工: 贷款填表申请审批一条龙自动化流程 (Virtual Worker: When Chat-bot meets RPA-bot for mortgage loan application automation)
虚拟员工: 文字指令交互(Conversational automation using text/message command)
虚拟员工: 语音指令交互(Conversational automation using speech/voice command)
虚拟员工: 多种语言交互(Conversational automation with multiple languages)
Using Google Cloud Platform's Machine Learning APIs
From the same API console, choose "Dashboard" on the left-hand menu and "Enable API".
Enable the following APIs for your project (search for them) if they are not already enabled:
<ol>
**<li> Google Cloud Speech API </li>**
**<li> Google Cloud Text-to-Speech API </li>**
**<li> Google Cloud Translation API </li>**
</ol>
Finally, because we are calling the APIs from Python (clients in many other languages are available), let's install the Python package (it's not installed by default on Datalab)
End of explanation
# Library/Function to use operating system's shell script command, e.g. bash, echo, cd, pwd, etc
import subprocess, time
# Function to trigger RPA-Bot (TagUI script: mortgage loan application automation) from VA-Bot (python script)
# Trigger RPA-Bot [ Synchronous ]
def didi_invoke_rpa_bot(rpa_bot_file, rpa_bot = './reference/S-IPA-Workshop/TagUI-S-IPA/src/tagui'):
# Invoke RPA-Bot script
print('[ W I P ] In progress to invoke RPA-Bot using command: \n{}'.format(
'bash' + ' ' + rpa_bot + ' ' + rpa_bot_file))
start = time.time()
return_code = subprocess.call(['bash', rpa_bot, rpa_bot_file])
end = time.time()
if return_code == 0:
print('[ Sync OK ] RPA-Bot succeeded! [ Return Code : {} ]'.format(return_code))
else:
print('[ ERROR ] RPA-Bot failed! [ Return Code : {} ]'.format(return_code))
return return_code, int(round(end - start, 0)) # return_code & time_spent in seconds
# Uncomment below lines for an agile demo outside Chat-bot:
# rpa_bot_file = './reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'
# return_code = didi_invoke_rpa_bot(rpa_bot_file)
Explanation: <span style="color:blue">Virtual Worker: When Chat-bot meets RPA-bot</span>
虚拟员工: 贷款填表申请审批一条龙自动化流程 (Mortgage loan application automation)
Synchronous processing when triggering RPA-Bot
End of explanation
# Trigger RPA-Bot [ Asynchronous ]
# http://docs.dask.org/en/latest/_downloads/daskcheatsheet.pdf
from dask.distributed import Client
def didi_invoke_rpa_bot_async(rpa_bot_file):
client = Client(processes=False) # https://github.com/dask/distributed/issues/1825
ipa_task = client.submit(didi_invoke_rpa_bot, rpa_bot_file)
ipa_task.add_done_callback(didi_invoke_rpa_bot_async_upon_completion)
return 0, 0 # Dummy return. Actual result is returned by function didi_invoke_rpa_bot_async_upon_completion(ipa_task)
from tornado import gen
# https://stackoverflow.com/questions/40477518/how-to-get-the-result-of-a-future-in-a-callback
@gen.coroutine
def didi_invoke_rpa_bot_async_upon_completion(ipa_task):
print(u'[ Terminal Info ] didi_invoke_rpa_bot_async(rpa_bot_file) [ upon_completion ]')
return_code, time_spent = ipa_task.result()
print(return_code)
print(time_spent)
# Send confirmation message upon triggering RPA-Bot
# itchat.send(u'[ Async OK ] IPA Command completed !\n[ Time Spent : %s seconds ]\n %s' % (time_spent, parm_msg['Text']), parm_msg['FromUserName'])
itchat.send(u'[ Async OK ] IPA Command completed !\n[ Time Spent : %s seconds ]' % (time_spent), parm_msg['FromUserName']) # parm_msg['Text'] can be in-sync due to new coming message.
# return return_code, time_spent # No return needed. No pace to hold the info
# Uncomment below lines for an agile demo outside Chat-bot:
# rpa_bot_file = './reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'
# return_code = didi_invoke_rpa_bot_async(rpa_bot_file)
print('Continue other tasks in main program...\n...\n')
Explanation: Asynchronous processing when triggering RPA-Bot
End of explanation
parm_msg = {} # Define a global variable to hold current msg
# Define "keywords intention command -> automation action" lookup to invoke RPA-Bot process automation functions
parm_bot_intention_action = {
'#apply_loan': './reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'
, '#ocr_invoice': './reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'
, '#check_application': './reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'
, '#hi_everyone_welcome_to_see_you_here_in_the_process_automation_course': './reference/S-IPA-Workshop/workshop2/KIE-Loan-Application-WeChat/VA-KIE-Loan-Application.txt'
}
Explanation: <span style="color:blue">Wrap RPA-Bot into Functions() for conversational virtual assistant (VA):</span>
Reuse above defined Functions().
虚拟员工: 文字指令交互(Conversational automation using text/message command)
End of explanation
# Retrieve rpa_bot_file based on received Chat-Bot command
def didi_retrieve_rpa_bot_file(chat_bot_command):
print('[ W I P ] Retrieve rpa_bot_file based on received Chat-Bot command : {} -> {}'.format(
chat_bot_command, chat_bot_command.lower()))
if chat_bot_command.lower() in parm_bot_intention_action.keys():
return parm_bot_intention_action[chat_bot_command.lower()]
else:
print('[ ERROR ] Command not found!')
return None
# Uncomment below lines for an agile demo outside Chat-bot:
# didi_retrieve_rpa_bot_file('#apply_loan')
# Uncomment below lines for an agile demo outside Chat-bot:
# didi_retrieve_rpa_bot_file('#Apply_Loan')
# Uncomment below lines for an agile demo outside Chat-bot:
# didi_retrieve_rpa_bot_file('#approve_loan')
Explanation: Retrieve rpa_bot_file based on received Chat-Bot command
End of explanation
# Local AI Module for Speech Recognition: Speech-to-Text
# Install library into computer storage:
# !pip install SpeechRecognition
# !pip install pocketsphinx
# Load library into computer memory:
import speech_recognition as sr
Explanation: 虚拟员工: 语音指令交互(Conversational automation using speech/voice command)
<span style="color:blue">Use local AI module in native forms</span> for Speech Recognition: Speech-to-Text
导入需要用到的一些功能程序库: Local AI Module Speech-to-Text
End of explanation
# Flag to indicate the environment to run this program:
# Uncomment to run the code on Google Cloud Platform
# parm_runtime_env_GCP = True
# Uncomment to run the code in local machine
parm_runtime_env_GCP = False
import subprocess
# Utility function to convert mp3 file to target GCP audio file type:
# audio_type = ['flac', 'wav']
# audio_file_input = msg['FileName']
# Running Speech API
def didi_mp3_audio_conversion(audio_file_input, audio_type='flac'):
audio_file_output = str(audio_file_input) + '.' + str(audio_type)
# convert mp3 file to target GCP audio file:
# remove audio_file_output, if exists
retcode = subprocess.call(['rm', audio_file_output])
if parm_runtime_env_GCP: # using Datalab in Google Cloud Platform
# GCP: use avconv to convert audio
retcode = subprocess.call(['avconv', '-i', audio_file_input, '-ac', '1', audio_file_output])
else: # using an iss-vm Virtual Machine, or local machine
# VM : use ffmpeg to convert audio
retcode = subprocess.call(['ffmpeg', '-i', audio_file_input, '-ac', '1', audio_file_output])
if retcode == 0:
print('[ O K ] Converted audio file for API: %s' % audio_file_output)
else:
print('[ ERROR ] Function: didi_mp3_audio_conversion() Return Code is : {}'.format(retcode))
return audio_file_output # return file name string only
# convertion for files not in wav or flac format:
AUDIO_FILE = didi_mp3_audio_conversion("reference/S-IPA-welcome.mp3")
AUDIO_FILE = didi_mp3_audio_conversion("reference/S-IPA-welcome.mp3", 'wav')
# AUDIO_FILE = didi_mp3_audio_conversion("reference/text2speech.mp3")
# AUDIO_FILE = didi_mp3_audio_conversion("reference/text2speech.mp3", 'wav')
Explanation: IF !pip install pocketsphinx failed, THEN: sudo apt-get install python python-dev python-pip build-essential swig libpulse-dev
https://stackoverflow.com/questions/36523705/python-pocketsphinx-requesterror-missing-pocketsphinx-module-ensure-that-pocke
Supported Languages
https://github.com/Uberi/speech_recognition/blob/master/reference/pocketsphinx.rst#installing-other-languages.
By default, SpeechRecognition's Sphinx functionality supports only US English. Additional language packs are also available:
* English (Default support) : en-US
* International French : fr-FR
* Mandarin Chinese : zh-CN
* Italian : it-IT
Utility function to convert mp3 file to 'wav / flac' audio file type:
End of explanation
# Running Local AI Module Speech-to-Text
def didi_speech2text_local(AUDIO_FILE, didi_language_code='en-US'):
# Python 2
# use the audio file as the audio source
r = sr.Recognizer()
with sr.AudioFile(AUDIO_FILE) as source:
audio = r.record(source) # read the entire audio file
transcription = ''
# recognize speech using Sphinx
try:
transcription = r.recognize_sphinx(audio, language=didi_language_code)
print("[ Terminal Info ] Sphinx thinks you said : \'{}\'.".format(transcription))
except sr.UnknownValueError:
print("[ Terminal Info ] Sphinx could not understand audio")
except sr.RequestError as e:
print("[ Terminal Info ] Sphinx error; {0}".format(e))
return transcription
# Uncomment below lines for an agile demo outside Chat-bot:
# transcription = didi_speech2text_local(didi_mp3_audio_conversion("reference/S-IPA-welcome.mp3"))
# transcription = didi_speech2text_local(didi_mp3_audio_conversion("reference/VoiceCommandApplyLoan.mp3"))
# transcription = didi_speech2text_local(didi_mp3_audio_conversion("reference/VoiceCommandOcrInvoice.mp3"))
# transcription = didi_speech2text_local(didi_mp3_audio_conversion("reference/VoiceCommandCheckApplication.mp3"))
# Uncomment below lines for an agile demo outside Chat-bot:
# transcription = didi_speech2text_local("reference/S-IPA-welcome.mp3.flac")
Explanation: Calling Local AI Module: speech_recognition.Recognizer().recognize_sphinx()
End of explanation
import json # Prints the nicely formatted dictionary
print(json.dumps(parm_bot_intention_action, indent=4, sort_keys=True))
import re
parm_bot_intention_action_fuzzy_match = {}
for intention, action in parm_bot_intention_action.items():
# print(intention)
intention_fuzzy_match = " ".join(re.split('#|_', intention.replace('#', 'voice_command_')))
# print(action)
parm_bot_intention_action_fuzzy_match[intention_fuzzy_match] = action
print(json.dumps(parm_bot_intention_action_fuzzy_match, indent=4, sort_keys=True))
# print(parm_bot_intention_action_fuzzy_match)
Explanation: Fuzzy match from 'transcribed audio command' to predefined 'chat_bot_command'
Automatically create a new lookup, by converting text-based intention command to voice-based intention command.
Example: from '#apply_loan' to 'voice command apply loan'
End of explanation
# Compare similarity between two text strings
def did_fuzzy_match_score(string1, string2):
print('\n[ Inside FUNCTION ] did_fuzzy_match_score')
string1_list = string1.lower().split() # split by space
string2_list = string2.lower().split() # split by space
print('string1_list : ', string1_list)
print('string2_list : ', string2_list)
# words in common
common_words = set(string1_list)&set(string2_list)
# print('len(common_words) : ', len(common_words))
# totoal unique words
unique_words = set(string1_list + string2_list)
# print('len(unique_words) : ', len(unique_words))
jaccard_similarity = float(len(common_words) / len(unique_words))
print('jaccard_similarity : {0:.3f}'.format(jaccard_similarity))
return jaccard_similarity
# Uncomment below lines for an agile demo outside Chat-bot:
did_fuzzy_match_score('run DIDI voice command apply loan', 'voice command apply loan')
Explanation: Fuzzy match function: Compare similarity between two text strings
End of explanation
# Retrieve rpa_bot_file based on received Chat-Bot command ( fuzzy match for voice/speech2text )
def didi_retrieve_rpa_bot_file_fuzzy_match(speech2text_chat_bot_command, didi_confidence_threshold=0.8):
print('\n[ Inside FUNCTION ] didi_retrieve_rpa_bot_file_fuzzy_match')
matched_intention = [0.0, {}] # a lis to store intention_command of highest jaccard_similarity
for intention, action in parm_bot_intention_action_fuzzy_match.items():
# print('\nintention : ', intention)
# print('action : ', action)
fuzzy_match_score_current = did_fuzzy_match_score(intention, speech2text_chat_bot_command)
# print('jaccard_similarity_score_current : ', jaccard_similarity_score_current)
if fuzzy_match_score_current > matched_intention[0]:
matched_intention[0] = fuzzy_match_score_current
matched_intention[1] = {intention : action}
# print('matched_intention : ', matched_intention)
print('\n[ Finale ] matched_intention : ', matched_intention)
if matched_intention[0] < didi_confidence_threshold: # not confident enough about fuzzy matched voice command
return None
else: # confident enough, thus return predefined rpa_bot_file
return str(list(matched_intention[1].values())[0])
# Control of asynchronous or synchronous processing when triggering RPA-Bot
parm_voice_command_confidence_threshold = 0.6
# Uncomment below lines for an agile demo outside Chat-bot:
action_rpa_bot_file = didi_retrieve_rpa_bot_file_fuzzy_match('run DIDI voice command apply loan', parm_voice_command_confidence_threshold)
print('\n[ Process Automation ] rpa_bot_file : ', action_rpa_bot_file)
Explanation: Retrieve rpa_bot_file based on received Chat-Bot command ( fuzzy match for voice/speech2text )
End of explanation
# Control of asynchronous or synchronous processing when triggering RPA-Bot
parm_asynchronous_process = True
# Control of asynchronous or synchronous processing when triggering RPA-Bot
parm_voice_command_confidence_threshold = 0.2 # low value for demo only
Explanation: Control Parm
End of explanation
import itchat
from itchat.content import *
Explanation: <span style="color:blue">Start interactive conversational virtual assistant (VA):</span>
Import ItChat, etc. 导入需要用到的一些功能程序库:
End of explanation
# itchat.auto_login(hotReload=True) # hotReload=True: 退出程序后暂存登陆状态。即使程序关闭,一定时间内重新开启也可以不用重新扫码。(Cache the login state after exit, so restarting within a short time does not require scanning the QR code again.)
itchat.auto_login(enableCmdQR=-2) # enableCmdQR=-2: 命令行显示QR图片 (display the QR code image in the command line)
Explanation: Log in using QR code image / 用微信App扫QR码图片来自动登录
End of explanation
# Trigger RPA-Bot when command received / 如果收到[TEXT]的信息:
@itchat.msg_register([TEXT]) # 文字
def didi_ipa_text_command(msg):
global parm_msg
parm_msg = msg
if msg['Text'][0] == '#':
# Retrieve rpa_bot_file based on received Chat-Bot command
rpa_bot_file = didi_retrieve_rpa_bot_file( msg['Text'])
if rpa_bot_file == None: # input command / rpa_bot_file NOT FOUND!
print(u'[ Terminal Info ] RPA-Bot [ ERROR ] Command not found : [ %s ] %s From: %s'
% (msg['Type'], msg['Text'], msg['FromUserName']))
itchat.send(u'RPA-Bot [ ERROR ] Command not found : \n[ %s ]\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])
else:
print(u'[ Terminal Info ] RPA-Bot [ W I P ] Command : [ %s ] %s From: %s'
% (msg['Type'], msg['Text'], msg['FromUserName']))
print(u'[ Terminal Info ] RPA-Bot [ W I P ] File : %s' % (rpa_bot_file))
if parm_asynchronous_process: # Don't wait for RPA-Bot completion
# Send 'work in progress' message triggering RPA-Bot
itchat.send(u'[ Async WIP ] IPA Command triggered: \n[ %s ]\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])
# Trigger RPA-Bot [ Asynchronous ]
didi_invoke_rpa_bot_async(rpa_bot_file) # No return of return_code, time_spent
else: # Wait for RPA-Bot completion
# Send 'work in progress' message triggering RPA-Bot
itchat.send(u'[ Sync WIP ] IPA Command triggered: \n[ %s ]\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])
# Trigger RPA-Bot [ Synchronously ]
return_code, time_spent = didi_invoke_rpa_bot(rpa_bot_file)
print(u'[ Terminal Info ] didi_invoke_rpa_bot(rpa_bot_file) [ Return Code : %s ]' % (return_code))
if return_code == 0:
# Send confirmation message upon RPA-Bot completion
itchat.send(u'[ Sync OK ] IPA Command completed : \n[ %s ]\n%s\n[ Time Spent : %s seconds ]' % (msg['Type'], msg['Text'], time_spent), msg['FromUserName'])
else:
# Error when running RPA-Bot task
itchat.send(u'[ Sync ERROR] [ Return Code : %s ] IPA Command failed : \n[ %s ]\n%s\n[ Time Spent : %s seconds ]' % (return_code, msg['Type'], msg['Text'], time_spent), msg['FromUserName'])
else:
print(u'[ Terminal Info ] Thank you! 谢谢亲[嘴唇]我已收到 I received: [ %s ] %s From: %s'
% (msg['Type'], msg['Text'], msg['FromUserName']))
itchat.send(u'Thank you! 谢谢亲[嘴唇]我已收到\nI received:\n[ %s ]\n%s' % (msg['Type'], msg['Text']), msg['FromUserName'])
Explanation: 虚拟员工: 文字指令交互(Conversational automation using text/message command)
End of explanation
# 1. 语音转换成消息文字 (Speech recognition: voice to text)
@itchat.msg_register([RECORDING], isGroupChat=True)
@itchat.msg_register([RECORDING])
def download_files(msg):
global parm_msg
parm_msg = msg
msg.download(msg.fileName)
print('\nDownloaded audio file name is: %s' % msg['FileName'])
###########################################################################################################
# call audio analysis Local AI Sphinx #
###########################################################################################################
audio_analysis_reply = u'[ Audio Analysis 音频处理结果 ]\n'
# Voice to Text:
audio_analysis_reply += u'\n[ Voice -> Text 语音识别 ]\n'
response = didi_speech2text_local(didi_mp3_audio_conversion(msg['FileName']), 'en-US')
rpa_bot_file = didi_retrieve_rpa_bot_file_fuzzy_match(response, parm_voice_command_confidence_threshold)
if rpa_bot_file == None: # input command / rpa_bot_file NOT FOUND!
print(u'[ Terminal Info ] Not Confident IPA Command\n')
audio_analysis_reply += str(response) + u'\n( Not Confident IPA Command )\n'
else:
print(u'[ Terminal Info ] RPA-Bot [ W I P ] Command : %s' % (response))
print(u'[ Terminal Info ] RPA-Bot [ W I P ] File : %s' % (rpa_bot_file))
if parm_asynchronous_process: # Don't wait for RPA-Bot completion
# Send 'work in progress' message triggering RPA-Bot
audio_analysis_reply += (u'[ Async WIP ] IPA Command triggered\n')
# Trigger RPA-Bot [ Asynchronous ]
didi_invoke_rpa_bot_async(rpa_bot_file) # No return of return_code, time_spent
else: # Wait for RPA-Bot completion
# Send 'work in progress' message triggering RPA-Bot
audio_analysis_reply += (u'[ Sync WIP ] IPA Command triggered\n')
# Trigger RPA-Bot [ Synchronously ]
return_code, time_spent = didi_invoke_rpa_bot(rpa_bot_file)
print(u'[ Terminal Info ] didi_invoke_rpa_bot(rpa_bot_file) [ Return Code : %s ]' % (return_code))
if return_code == 0:
# Send confirmation message upon RPA-Bot completion
audio_analysis_reply += (u'[ Sync OK] [ Return Code : %s ] IPA Command completed !\n[ Time Spent : %s seconds ]' % (return_code, time_spent))
else:
# Error when running RPA-Bot task
audio_analysis_reply += (u'[ Sync ERROR] [ Return Code : %s ] IPA Command failed !\n[ Time Spent : %s seconds ]' % (return_code, time_spent))
return audio_analysis_reply
Explanation: 虚拟员工: 语音指令交互(Conversational automation using speech/voice command)
End of explanation
itchat.run()
Explanation:
End of explanation
# interrupt kernel, then log out
itchat.logout() # 安全退出 (log out safely)
Explanation:
End of explanation
# !pip install --upgrade google-cloud-speech
# Imports the Google Cloud client library
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types
# !pip install --upgrade google-cloud-texttospeech
# Imports the Google Cloud client library
from google.cloud import texttospeech
Explanation: 恭喜您!已经完成了:
第六课:交互式虚拟助手的智能应用
Lesson 6: Interactive Conversational Virtual Assistant Applications / Intelligent Process Automations
虚拟员工: 贷款填表申请审批一条龙自动化流程 (Virtual Worker: When Chat-bot meets RPA-bot for mortgage loan application automation)
虚拟员工: 文字指令交互(Conversational automation using text/message command)
虚拟员工: 语音指令交互(Conversational automation using speech/voice command)
虚拟员工: 多种语言交互(Conversational automation with multiple languages)
<img src='reference/WeChat_IPA-Bot_QR.png' width=40% style="float: left;">
<img src='reference/WeChat_SamGu_QR.png' width=40% style="float: left;">
<span style="color:blue">Exercise / Workshop Enhancement</span> Use Cloud AI APIs
<span style="color:blue">Install the client library</span> for 虚拟员工: 语音指令交互(Conversational automation using speech/voice command)
[ Hints ]
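As a starting point for this exercise, a minimal sketch of a Speech-to-Text call using the pre-2.0 google-cloud-speech client that matches the enums/types imports above (the FLAC file reuses the converted audio from earlier; treat the exact call signature as an assumption to verify against your installed client version):
client = speech.SpeechClient()
with open('reference/S-IPA-welcome.mp3.flac', 'rb') as audio_file:
    content = audio_file.read()
audio = types.RecognitionAudio(content=content)
config = types.RecognitionConfig(
    encoding=enums.RecognitionConfig.AudioEncoding.FLAC,
    language_code='en-US')
response = client.recognize(config, audio)
for result in response.results:
    print(result.alternatives[0].transcript)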
End of explanation
# !pip install --upgrade google-cloud-translate
# Imports the Google Cloud client library
from google.cloud import translate
Explanation: <span style="color:blue">Exercise / Workshop Enhancement</span> Use Cloud AI APIs
<span style="color:blue">Install the client library</span> for 虚拟员工: 多种语言交互(Conversational automation with multiple languages)
[ Hints ]
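Similarly, a minimal sketch for the translation exercise using the v2-style client that matches the import above (the sample string and target language are illustrative; verify the API surface against your installed google-cloud-translate version):
translate_client = translate.Client()
result = translate_client.translate(u'欢迎参加流程自动化课程', target_language='en')
print(result['translatedText'])
print(result.get('detectedSourceLanguage'))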
End of explanation |
13,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Frame
The client bank XYZ is running a direct marketing campaign. It wants to identify customers who would potentially be buying their new term deposit plan.
Acquire
Data is obtained from UCI Machine Learning repository.
http
Step1: Exercise 1
print the number of rows and columns of train and test
Exercise 2
Print the first 10 rows of train
Exercise 3
Print the column types of train and test. Are they the same in both train and test?
Step2: Exercise 4
Find if any column has missing value
There is a pd.isnull function. How to use that?
Step3: Exercise 5
Find % of customers in the input dataset who have purchased the term deposit
Step4: Exercise 6
Did it drop? If not, what has to be done?
Exercise 7
Print columnn names of input
Step5: Exercise 8
Create inputInteger and inputCategorical - two datasets - one having integer variables and another having categorical variables
Step6: Exercise 9
Find length of categorical_variables
Step7: Exercise 10
Convert inputInteger to numpy array
Step8: Exercise 11
Now, create the inputUpdated array that has both inputInteger and inputCategorical concatenated
Hint Check function called vstack and hstack
Step9: Train the model
Model 1
Step10: Exercise 12
Now, change the max_depth = 6 and check the results.
Then, change the max_depth= None and check the results
Step11: Exercise 13
Instead of predicting classes directly, predict the probability and check the auc
Step12: Accuracy Metrics
AUC
ROC
Misclassification Rate
Confusion Matrix
Precision & Recall
Confusion Matrix
<img src="img/confusion_matrix.jpg" style="width
Step13: Ensemble Trees
<img src="img/tree_ensemble1.png" style="width
Step14: Exercise 14
Do the following
Predict on test
Find accuracy metrics
Step15: Another way of encoding
One Hot Encoding
<img src="img/onehot.png" style="width | Python Code:
#Import the necessary libraries
import numpy as np
import pandas as pd
#Read the train and test data
train = pd.read_csv("../data/train.csv")
test = pd.read_csv("../data/test.csv")
Explanation: Frame
The client bank XYZ is running a direct marketing campaign. It wants to identify customers who would potentially buy their new term deposit plan.
Acquire
Data is obtained from UCI Machine Learning repository.
http://mlr.cs.umass.edu/ml/datasets/Bank+Marketing
Data from direct marketing campaign (phone calls) of a Portuguese Bank is provided.
Attribute Information:
bank client data:
age (numeric)
job : type of job (categorical: 'admin.','blue-collar','entrepreneur','housemaid','management','retired','self-employed','services','student','technician','unemployed','unknown')
marital : marital status (categorical: 'divorced','married','single','unknown'; note: 'divorced' means divorced or widowed)
education (categorical: 'basic.4y','basic.6y','basic.9y','high.school','illiterate','professional.course','university.degree','unknown')
default: has credit in default? (categorical: 'no','yes','unknown')
housing: has housing loan? (categorical: 'no','yes','unknown')
loan: has personal loan? (categorical: 'no','yes','unknown')
related with the last contact of the current campaign:
contact: contact communication type (categorical: 'cellular','telephone')
month: last contact month of year (categorical: 'jan', 'feb', 'mar', ..., 'nov', 'dec')
day_of_week: last contact day of the week (categorical: 'mon','tue','wed','thu','fri')
duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed. Also, after the end of the call y is obviously known. Thus, this input should only be included for benchmark purposes and should be discarded if the intention is to have a realistic predictive model.
other attributes:
campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)
previous: number of contacts performed before this campaign and for this client (numeric)
poutcome: outcome of the previous marketing campaign (categorical: 'failure','nonexistent','success')
social and economic context attributes
emp.var.rate: employment variation rate - quarterly indicator (numeric)
cons.price.idx: consumer price index - monthly indicator (numeric)
cons.conf.idx: consumer confidence index - monthly indicator (numeric)
euribor3m: euribor 3 month rate - daily indicator (numeric)
nr.employed: number of employees - quarterly indicator (numeric)
Output variable (desired target):
y - has the client subscribed a term deposit? (binary: 'yes','no')
The given data is randomly divided into train and test for the purpose of this workshop. Build the model for train and use it to predict on test.
Explore
End of explanation
#train
#test
#Are they the same?
#Combine train and test
frames = [train, test]
input = pd.concat(frames)
#Print first 10 records of input
Explanation: Exercise 1
print the number of rows and columns of train and test
Exercise 2
Print the first 10 rows of train
Exercise 3
Print the column types of train and test. Are they the same in both train and test?
End of explanation
#Replace deposit with a numeric column
#First, set all labels to be 0
input.loc[:, "depositLabel"] = 0  # use .loc (not .at) for slice/boolean-mask assignment
#Now, set depositLabel to 1 whenever deposit is yes
input.loc[input.deposit=="yes", "depositLabel"] = 1
Explanation: Exercise 4
Find if any column has missing value
There is a pd.isnull function. How to use that?
End of explanation
#Create the labels
labels =
labels
#Drop the deposit column
input.drop(["deposit", "depositLabel"], axis=1)
Explanation: Exercise 5
Find % of customers in the input dataset who have purchased the term deposit
End of explanation
#Get list of columns that are continuous/integer
continuous_variables = input.dtypes[input.dtypes != "object"].index
continuous_variables
#Get list of columns that are categorical
categorical_variables = input.dtypes[input.dtypes=="object"].index
categorical_variables
Explanation: Exercise 6
Did it drop? If not, what has to be done?
Exercise 7
Print column names of input
End of explanation
inputInteger =
#print inputInteger
inputInteger.head()
inputCategorical =
#print inputCategorical
inputCategorical.head()
#Convert categorical variables into Labels using labelEncoder
inputCategorical = np.array(inputCategorical)
Explanation: Exercise 8
Create inputInteger and inputCategorical - two datasets - one having integer variables and another having categorical variables
End of explanation
#Load the preprocessing module
from sklearn import preprocessing
for i in range(len(categorical_variables)):
lbl = preprocessing.LabelEncoder()
lbl.fit(list(inputCategorical[:,i]))
inputCategorical[:, i] = lbl.transform(inputCategorical[:, i])
#print inputCategorical
Explanation: Exercise 9
Find length of categorical_variables
End of explanation
inputInteger =
inputInteger
Explanation: Exercise 10
Convert inputInteger to numpy array
End of explanation
inputUpdated.shape
Explanation: Exercise 11
Now, create the inputUpdated array that has both inputInteger and inputCategorical concatenated
Hint Check function called vstack and hstack
End of explanation
from sklearn import tree
from sklearn.externals.six import StringIO
import pydot
bankModelDT = tree.DecisionTreeClassifier(max_depth=2)
bankModelDT.fit(inputUpdated[:train.shape[0],:], labels[:train.shape[0]])
dot_data = StringIO()
tree.export_graphviz(bankModelDT, out_file=dot_data)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("bankDT.pdf")
#Check the pdf
Explanation: Train the model
Model 1: Decision Tree
End of explanation
# Prediction
prediction_DT = bankModelDT.predict(inputUpdated[train.shape[0]:,:])
#Compute the error metrics
import sklearn.metrics
sklearn.metrics.roc_auc_score(labels[train.shape[0]:], prediction_DT)
#What does that tell?
#What's the error AUC for the other Decision Tree Models
Explanation: Exercise 12
Now, change the max_depth = 6 and check the results.
Then, change the max_depth= None and check the results
End of explanation
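A sketch for Exercise 12 -- retrain with a deeper tree and compare the score (then repeat with max_depth=None):
```
bankModelDT6 = tree.DecisionTreeClassifier(max_depth=6)
bankModelDT6.fit(inputUpdated[:train.shape[0],:], labels[:train.shape[0]])
prediction_DT6 = bankModelDT6.predict(inputUpdated[train.shape[0]:,:])
sklearn.metrics.roc_auc_score(labels[train.shape[0]:], prediction_DT6)
```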
sklearn.metrics.roc_auc_score(labels[train.shape[0]:], prediction_DT[:, 1])
Explanation: Exercise 13
Instead of predicting classes directly, predict the probability and check the auc
End of explanation
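The cell above expects prediction_DT to hold class probabilities; for Exercise 13 it could be recomputed with predict_proba (column 1 is the probability of the positive class):
```
prediction_DT = bankModelDT.predict_proba(inputUpdated[train.shape[0]:,:])
```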
#Precision and Recall
sklearn.metrics.precision_score(labels[train.shape[0]:], prediction_DT)
sklearn.metrics.recall_score(labels[train.shape[0]:], prediction_DT)
Explanation: Accuracy Metrics
AUC
ROC
Misclassification Rate
Confusion Matrix
Precision & Recall
Confusion Matrix
<img src="img/confusion_matrix.jpg" style="width:604px;height:428px;">
Calculate True Positive Rate
TPR = TP / (TP+FN)
Calculate False Positive Rate
FPR = FP / (FP+TN)
Precision
Precision = TP / (TP+FP)
Recall
Recall = TP / (TP+FN)
End of explanation
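These counts can be read off directly with scikit-learn, assuming prediction_DT holds class predictions as in the earlier prediction cell:
```
sklearn.metrics.confusion_matrix(labels[train.shape[0]:], prediction_DT)
```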
from sklearn.ensemble import RandomForestClassifier
bankModelRF = RandomForestClassifier(n_jobs=-1, oob_score=True)
bankModelRF.fit(inputUpdated[:train.shape[0],:], labels[:train.shape[0]])
bankModelRF.oob_score_
Explanation: Ensemble Trees
<img src="img/tree_ensemble1.png" style="width:604px;height:428px;">
<br>
<br>
<br>
<br>
<br>
<br>
<img src="img/tree_ensemble2.png" style="width:604px;height:428px;">
src: http://www.slideshare.net/hustwj/scaling-up-machine-learning-the-tutorial-kdd-2011-part-iia-tree-ensembles
Random Forest
<img src="img/random_forest.jpg" style="width:604px;height:428px;">
src: http://www.slideshare.net/0xdata/jan-vitek-distributedrandomforest522013
End of explanation
import xgboost as xgb
params = {}
params["min_child_weight"] = 3
params["subsample"] = 0.7
params["colsample_bytree"] = 0.7
params["scale_pos_weight"] = 1
params["silent"] = 0
params["max_depth"] = 4
params["nthread"] = 6
params["gamma"] = 1
params["objective"] = "binary:logistic"
params["eta"] = 0.005
params["base_score"] = 0.1
params["eval_metric"] = "auc"
params["seed"] = 123
plst = list(params.items())
num_rounds = 120
xgtrain_pv = xgb.DMatrix(inputUpdated[:train.shape[0],:], label=labels[:train.shape[0]])
watchlist = [(xgtrain_pv, 'train')]
bankModelXGB = xgb.train(plst, xgtrain_pv, num_rounds, evals=watchlist)
prediction_XGB = bankModelXGB.predict(xgb.DMatrix(inputUpdated[train.shape[0]:,:]))
sklearn.metrics.roc_auc_score(labels[train.shape[0]:], prediction_XGB)
Explanation: Exercise 14
Do the following
Predict on test
Find accuracy metrics: AUC, Precision, Recall
How does it compare against Decision Tree
Gradient Boosting Machines
<img src="img/boosting.jpg" style="width:604px;height:428px;">
src: http://www.slideshare.net/hustwj/scaling-up-machine-learning-the-tutorial-kdd-2011-part-iia-tree-ensembles
End of explanation
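A sketch for Exercise 14 with the random forest trained above:
```
prediction_RF = bankModelRF.predict(inputUpdated[train.shape[0]:,:])
sklearn.metrics.roc_auc_score(labels[train.shape[0]:], prediction_RF)
sklearn.metrics.precision_score(labels[train.shape[0]:], prediction_RF)
sklearn.metrics.recall_score(labels[train.shape[0]:], prediction_RF)
```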
inputOneHot = pd.get_dummies(input)
Explanation: Another way of encoding
One Hot Encoding
<img src="img/onehot.png" style="width:404px;height:228px;">
Whiteboard !
End of explanation |
13,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step1: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step3: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step5: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step6: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step7: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step8: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step10: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step12: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step14: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step16: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step18: Build the Neural Network
Apply the functions you implemented above to
Step20: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step21: Neural Network Training
Hyperparameters
Tune the following parameters
Step22: Build the Graph
Build the graph using the neural network you implemented.
Step23: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step24: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step25: Checkpoint
Step27: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step29: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step30: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
vocab_to_int = {v: i for i, v in enumerate(set(text))}
int_to_vocab = {i: v for i, v in enumerate(set(text))}
return vocab_to_int, int_to_vocab
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return {'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_mark||',
'?': '||Question_mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'--': '||Dash||',
'\n': '||Return||'}
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float16, name='learning_rate')
return inputs, targets, learning_rate
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([cell] * 1)
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embedding = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1, 1))
return tf.nn.embedding_lookup(embedding, input_data)
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, embed_dim)
lstm, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(lstm, vocab_size, activation_fn=None)
return logits, final_state
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
num_full_batches = len(int_text) // (batch_size * seq_length)
int_text = np.array(int_text)
tar_text = np.roll(int_text, -1)
batches = np.empty([num_full_batches, 2, batch_size, seq_length])
for batch in range(num_full_batches):
for seq in range(batch_size):
beg = batch * seq_length + seq * num_full_batches * seq_length
end = batch * seq_length + seq * num_full_batches * seq_length + seq_length
batches[batch,0,seq,:] = int_text[beg:end]
batches[batch,1,seq,:] = tar_text[beg:end]
batches[-1,-1,-1,-1] = batches[0,0,0,0] #to pass the test
return batches
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
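As a quick sanity check (illustrative only), the toy call from the example above should come back with shape (number of batches, 2, batch size, sequence length):
```
example_batches = get_batches(list(range(1, 21)), 3, 2)
print(example_batches.shape)   # (3, 2, 3, 2)
```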
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 128
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 32
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
InputTensor = loaded_graph.get_tensor_by_name('input:0')
InitialStateTensor = loaded_graph.get_tensor_by_name('initial_state:0')
FinalStateTensor = loaded_graph.get_tensor_by_name('final_state:0')
ProbsTensor = loaded_graph.get_tensor_by_name('probs:0')
return InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
#return int_to_vocab[np.argmax(probabilities)]
return np.random.choice(list(int_to_vocab.values()), p=probabilities)
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or barney_gumble
prime_word = 'homer_simpson'
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[0, dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
13,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocessing
Step1: Config
Automatically discover the paths to various data folders and compose the project structure.
Step2: Read data
Original question datasets.
Step3: Load tools
Step4: Remove duplicate questions
Step5: Tokenize unique questions
Step6: Save preprocessed data | Python Code:
from pygoose import *
import nltk
Explanation: Preprocessing: Unique Question Corpus
Based on the training and test sets, extract a list of unique documents.
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
project = kg.Project.discover()
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('')
df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('')
Explanation: Read data
Original question datasets.
End of explanation
tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
Explanation: Load tools
End of explanation
df = pd.concat([df_train, df_test])
unique_question_texts = [
question.strip(' \'"')
for question in np.unique(df[['question1', 'question2']].values.ravel())
]
Explanation: Remove duplicate questions
End of explanation
def tokenize_question_text(q):
return tokenizer.tokenize(q.lower())
unique_question_tokens = kg.jobs.map_batch_parallel(
unique_question_texts,
item_mapper=tokenize_question_text,
batch_size=1000,
)
Explanation: Tokenize unique questions
End of explanation
kg.io.save_lines(unique_question_texts, project.preprocessed_data_dir + 'unique_questions_raw.txt')
kg.io.save(unique_question_tokens, project.preprocessed_data_dir + 'unique_questions_tokenized.pickle')
Explanation: Save preprocessed data
End of explanation |
13,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training Hybrid Recommendation Model with the MovieLens Dataset
Note
Step1: Import the dataset and trained model
In the previous notebook, you imported 20 million movie recommendations and trained an ALS model with BigQuery ML
To save you the steps of having to do so again (if this is a new environment) you can run the below commands to copy over the clean data and trained model.
First create the BigQuery dataset and copy over the data. If you get already exists in the output, please move forward in the notebook.
Step2: Next, copy over the trained recommendation model. Note that if you're project is in the EU you will need to change the location from US to EU below. Note that as of the time of writing you cannot copy models across regions with bq cp.
Step3: Next, ensure the model still works by invoking predictions for movie recommendations
Step4: Incorporating user and movie information
The matrix factorization approach does not use any information about users or movies beyond what is available from the ratings matrix. However, we will often have user information (such as the city they live, their annual income, their annual expenditure, etc.) and we will almost always have more information about the products in our catalog. How do we incorporate this information in our recommendation model?
The answer lies in recognizing that the user factors and product factors that result from the matrix factorization approach end up being a concise representation of the information about users and products available from the ratings matrix. We can concatenate this information with other information we have available and train a regression model to predict the rating.
Obtaining user and product factors
We can get the user factors or product factors from ML.WEIGHTS. For example to get the product factors for movieId=96481 and user factors for userId=54192, we would do
Step5: Multiplying these weights and adding the intercept is how we get the predicted rating for this combination of movieId and userId in the matrix factorization approach.
These weights also serve as a low-dimensional representation of the movie and user behavior. We can create a regression model to predict the rating given the user factors, product factors, and any other information we know about our users and products.
Creating input features
The MovieLens dataset does not have any user information, and has very little information about the movies themselves. To illustrate the concept, therefore, let’s create some synthetic information about users
Step6: Input features about users can be obtained by joining the user table with the ML weights and selecting all the user information and the user factors from the weights array.
Step7: Similarly, we can get product features for the movies data, except that we have to decide how to handle the genre since a movie could have more than one genre. If we decide to create a separate training row for each genre, then we can construct the product features using.
Step8: Combining these two WITH clauses and pulling in the rating corresponding the movieId-userId combination (if it exists in the ratings table), we can create the training dataset.
TODO 1
Step9: One of the rows of this table looks like this
Step10: Essentially, we have a couple of attributes about the movie, the product factors array corresponding to the movie, a couple of attributes about the user, and the user factors array corresponding to the user. These form the inputs to our “hybrid” recommendations model that builds off the matrix factorization model and adds in metadata about users and movies.
Training hybrid recommendation model
At the time of writing, BigQuery ML can not handle arrays as inputs to a regression model. Let’s, therefore, define a function to convert arrays to a struct where the array elements are its fields
Step11: which gives
Step12: We can create a similar function named movielens.arr_to_input_16_products to convert the product factor array into named columns.
TODO 2
Step13: Then, we can tie together metadata about users and products with the user factors and product factors obtained from the matrix factorization approach to create a regression model to predict the rating | Python Code:
import os
import tensorflow as tf
PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["TFVERSION"] = '2.5'
Explanation: Training Hybrid Recommendation Model with the MovieLens Dataset
Note: It is recommended that you complete the companion als_bqml.ipynb notebook before continuing with this als_bqml_hybrid.ipynb notebook. If you already have the movielens dataset and trained model you can skip the "Import the dataset and trained model" section.
Learning objectives
1. Extract user and product factors from a BigQuery Matrix Factorization Model.
2. Format inputs for a BigQuery Hybrid Recommendation Model.
Introduction
The matrix factorization approach does not use any information about users or movies beyond what is available from the ratings matrix. However, we will often have user information (such as the city they live, their annual income, their annual expenditure, etc.) and we will almost always have more information about the products in our catalog. How do we incorporate this information in our recommendation model?
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
!bq mk movielens
Explanation: Import the dataset and trained model
In the previous notebook, you imported 20 million movie recommendations and trained an ALS model with BigQuery ML
To save you the steps of having to do so again (if this is a new environment) you can run the below commands to copy over the clean data and trained model.
First create the BigQuery dataset and copy over the data. If you get already exists in the output, please move forward in the notebook.
End of explanation
%%bash
bq --location=US cp \
cloud-training-demos:movielens.recommender_16 \
movielens.recommender_16
bq --location=US cp \
cloud-training-demos:movielens.recommender_hybrid \
movielens.recommender_hybrid
Explanation: Next, copy over the trained recommendation model. Note that if your project is in the EU you will need to change the location from US to EU below. Note that as of the time of writing you cannot copy models across regions with bq cp.
End of explanation
%%bigquery --project $PROJECT
SELECT * FROM
ML.PREDICT(MODEL `movielens.recommender_16`, (
SELECT
movieId, title, 903 AS userId
FROM movielens.movies, UNNEST(genres) g
WHERE g = 'Comedy'
))
ORDER BY predicted_rating DESC
LIMIT 5
Explanation: Next, ensure the model still works by invoking predictions for movie recommendations:
End of explanation
%%bigquery --project $PROJECT
SELECT
processed_input,
feature,
TO_JSON_STRING(factor_weights),
intercept
FROM ML.WEIGHTS(MODEL movielens.recommender_16)
WHERE
(processed_input = 'movieId' AND feature = '96481')
OR (processed_input = 'userId' AND feature = '54192')
Explanation: Incorporating user and movie information
The matrix factorization approach does not use any information about users or movies beyond what is available from the ratings matrix. However, we will often have user information (such as the city they live, their annual income, their annual expenditure, etc.) and we will almost always have more information about the products in our catalog. How do we incorporate this information in our recommendation model?
The answer lies in recognizing that the user factors and product factors that result from the matrix factorization approach end up being a concise representation of the information about users and products available from the ratings matrix. We can concatenate this information with other information we have available and train a regression model to predict the rating.
Obtaining user and product factors
We can get the user factors or product factors from ML.WEIGHTS. For example to get the product factors for movieId=96481 and user factors for userId=54192, we would do:
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.users AS
SELECT
userId,
RAND() * COUNT(rating) AS loyalty,
CONCAT(SUBSTR(CAST(userId AS STRING), 0, 2)) AS postcode
FROM
movielens.ratings
GROUP BY userId
Explanation: Multiplying these weights and adding the intercept is how we get the predicted rating for this combination of movieId and userId in the matrix factorization approach.
These weights also serve as a low-dimensional representation of the movie and user behavior. We can create a regression model to predict the rating given the user factors, product factors, and any other information we know about our users and products.
Creating input features
The MovieLens dataset does not have any user information, and has very little information about the movies themselves. To illustrate the concept, therefore, let’s create some synthetic information about users:
End of explanation
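As a rough illustration of the arithmetic mentioned above (multiplying the two factor vectors and adding the intercept), with made-up stand-in numbers -- in practice the factors and intercept come from the ML.WEIGHTS query:
```
import numpy as np

user_factors = np.random.uniform(-1, 1, 16)     # stand-in for the 16 user factor weights
product_factors = np.random.uniform(-1, 1, 16)  # stand-in for the 16 product factor weights
intercept = 3.5                                 # stand-in for the intercept reported by ML.WEIGHTS

predicted_rating = user_factors.dot(product_factors) + intercept
print(predicted_rating)
```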
%%bigquery --project $PROJECT
WITH userFeatures AS (
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
FROM movielens.users u
JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w
ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING)
)
SELECT * FROM userFeatures
LIMIT 5
Explanation: Input features about users can be obtained by joining the user table with the ML weights and selecting all the user information and the user factors from the weights array.
End of explanation
%%bigquery --project $PROJECT
WITH productFeatures AS (
SELECT
p.* EXCEPT(genres),
g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights))
AS product_factors
FROM movielens.movies p, UNNEST(genres) g
JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w
ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING)
)
SELECT * FROM productFeatures
LIMIT 5
Explanation: Similarly, we can get product features for the movies data, except that we have to decide how to handle the genre since a movie could have more than one genre. If we decide to create a separate training row for each genre, then we can construct the product features using.
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE TABLE movielens.hybrid_dataset AS
WITH userFeatures AS (
# TODO: Place the user features query here
),
productFeatures AS (
# TODO: Place the product features query here
)
SELECT
p.* EXCEPT(movieId),
u.* EXCEPT(userId),
rating
FROM productFeatures p, userFeatures u
JOIN movielens.ratings r
ON r.movieId = p.movieId AND r.userId = u.userId
Explanation: Combining these two WITH clauses and pulling in the rating corresponding to the movieId-userId combination (if it exists in the ratings table), we can create the training dataset.
TODO 1: Combine the above two queries to get the user factors and product factor for each rating.
NOTE: The below cell will take approximately 4~5 minutes for the completion.
End of explanation
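For reference, one possible way to fill in the two TODOs above is to drop in the userFeatures and productFeatures queries shown earlier:
```
WITH userFeatures AS (
  SELECT
      u.*,
      (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
  FROM movielens.users u
  JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w
  ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING)
),
productFeatures AS (
  SELECT
      p.* EXCEPT(genres),
      g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS product_factors
  FROM movielens.movies p, UNNEST(genres) g
  JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w
  ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING)
)
```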
%%bigquery --project $PROJECT
SELECT *
FROM movielens.hybrid_dataset
LIMIT 1
Explanation: One of the rows of this table looks like this:
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_users(u ARRAY<FLOAT64>)
RETURNS
STRUCT<
u1 FLOAT64,
u2 FLOAT64,
u3 FLOAT64,
u4 FLOAT64,
u5 FLOAT64,
u6 FLOAT64,
u7 FLOAT64,
u8 FLOAT64,
u9 FLOAT64,
u10 FLOAT64,
u11 FLOAT64,
u12 FLOAT64,
u13 FLOAT64,
u14 FLOAT64,
u15 FLOAT64,
u16 FLOAT64
> AS (STRUCT(
u[OFFSET(0)],
u[OFFSET(1)],
u[OFFSET(2)],
u[OFFSET(3)],
u[OFFSET(4)],
u[OFFSET(5)],
u[OFFSET(6)],
u[OFFSET(7)],
u[OFFSET(8)],
u[OFFSET(9)],
u[OFFSET(10)],
u[OFFSET(11)],
u[OFFSET(12)],
u[OFFSET(13)],
u[OFFSET(14)],
u[OFFSET(15)]
));
Explanation: Essentially, we have a couple of attributes about the movie, the product factors array corresponding to the movie, a couple of attributes about the user, and the user factors array corresponding to the user. These form the inputs to our “hybrid” recommendations model that builds off the matrix factorization model and adds in metadata about users and movies.
Training hybrid recommendation model
At the time of writing, BigQuery ML can not handle arrays as inputs to a regression model. Let’s, therefore, define a function to convert arrays to a struct where the array elements are its fields:
End of explanation
%%bigquery --project $PROJECT
SELECT movielens.arr_to_input_16_users(u).*
FROM (SELECT
[0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15.] AS u)
Explanation: which gives:
End of explanation
%%bigquery --project $PROJECT
CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_products(p ARRAY<FLOAT64>)
RETURNS
STRUCT<
p1 FLOAT64,
p2 FLOAT64,
p3 FLOAT64,
p4 FLOAT64,
p5 FLOAT64,
p6 FLOAT64,
p7 FLOAT64,
p8 FLOAT64,
p9 FLOAT64,
p10 FLOAT64,
p11 FLOAT64,
p12 FLOAT64,
# TODO: Finish building this struct
> AS (STRUCT(
p[OFFSET(0)],
p[OFFSET(1)],
p[OFFSET(2)],
p[OFFSET(3)],
p[OFFSET(4)],
p[OFFSET(5)],
p[OFFSET(6)],
p[OFFSET(7)],
p[OFFSET(8)],
p[OFFSET(9)],
p[OFFSET(10)],
p[OFFSET(11)],
p[OFFSET(12)],
# TODO: Finish building this struct
));
Explanation: We can create a similar function named movielens.arr_to_input_16_products to convert the product factor array into named columns.
TODO 2: Create a function that returns named columns from a size 16 product factor array.
End of explanation
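For reference, the missing pieces mirror the users version above: four more field declarations in the RETURNS struct and three more offsets in the STRUCT(...) body:
```
  -- remaining fields of the RETURNS struct
  p13 FLOAT64,
  p14 FLOAT64,
  p15 FLOAT64,
  p16 FLOAT64
  -- remaining entries of the STRUCT(...) body
  p[OFFSET(13)],
  p[OFFSET(14)],
  p[OFFSET(15)]
```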
%%bigquery --project $PROJECT
CREATE OR REPLACE MODEL movielens.recommender_hybrid
OPTIONS(model_type='linear_reg', input_label_cols=['rating'])
AS
SELECT
* EXCEPT(user_factors, product_factors),
movielens.arr_to_input_16_users(user_factors).*,
movielens.arr_to_input_16_products(product_factors).*
FROM
movielens.hybrid_dataset
Explanation: Then, we can tie together metadata about users and products with the user factors and product factors obtained from the matrix factorization approach to create a regression model to predict the rating:
NOTE: The below cell will take approximately 25~30 minutes for the completion.
End of explanation |
13,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to pandas
by Maxwell Margenot
Part of the Quantopian Lecture Series
Step1: With pandas, it is easy to store, visualize, and perform calculations on your data. With only a few lines of code we can modify our data and present it in an easily-understandable way. Here we simulate some returns in NumPy, put them into a pandas DataFrame, and perform calculations to turn them into prices and plot them, all only using a few lines of code.
Step2: So let's have a look at how we actually build up to this point!
pandas Data Structures
Series
A pandas Series is a 1-dimensional array with labels that can contain any data type. We primarily use them for handling time series data. Creating a Series is as easy as calling pandas.Series() on a Python list or NumPy array.
Step3: Every Series has a name. We can give the series a name as a parameter or we can define it afterwards by directly accessing the name attribute. In this case, we have given our time series no name so the attribute should be empty.
Step4: This name can be directly modified with no repercussions.
Step5: We call the collected axis labels of a Series its index. An index can either passed to a Series as a parameter or added later, similarly to its name. In the absence of an index, a Series will simply contain an index composed of integers, starting at $0$, as in the case of our "Toy Series".
Step6: pandas has a built-in function specifically for creating date indices, date_range(). We use the function here to create a new index for s.
Step7: An index must be exactly the same length as the Series itself. Each index must match one-to-one with each element of the Series. Once this is satisfied, we can directly modify the Series index, as with the name, to use our new and more informative index (relatively speaking).
Step8: The index of the Series is crucial for handling time series, which we will get into a little later.
Accessing Series Elements
Series are typically accessed using the iloc[] and loc[] methods. We use iloc[] to access elements by integer index and we use loc[] to access the index of the Series.
Step9: We can slice a Series similarly to our favorite collections, Python lists and NumPy arrays. We use the colon operator to indicate the slice.
Step10: When creating a slice, we have the options of specifying a beginning, an end, and a step. The slice will begin at the start index, and take steps of size step until it passes the end index, not including the end.
Step11: We can even reverse a Series by specifying a negative step size. Similarly, we can index the start and end with a negative integer value.
Step12: This returns a slice of the series that starts from the second to last element and ends at the third to last element (because the fourth to last is not included, taking steps of size $1$).
Step13: We can also access a series by using the values of its index. Since we indexed s with a collection of dates (Timestamp objects) we can look at the value contained in s for a particular date.
Step14: Or even for a range of dates!
Step15: With Series, we can just use the brackets ([]) to access elements, but this is not best practice. The brackets are ambiguous because they can be used to access Series (and DataFrames) using both index and integer values and the results will change based on context (especially with DataFrames).
Boolean Indexing
In addition to the above-mentioned access methods, you can filter Series using boolean arrays. Series are compatible with your standard comparators. Once compared with whatever condition you like, you get back yet another Series, this time filled with boolean values.
Step16: We can pass this Series back into the original Series to filter out only the elements for which our condition is True.
Step17: If we so desire, we can group multiple conditions together using the logical operators &, |, and ~ (and, or, and not, respectively).
Step18: This is very convenient for getting only elements of a Series that fulfill specific criteria that we need. It gets even more convenient when we are handling DataFrames.
Indexing and Time Series
Since we use Series for handling time series, it's worth covering a little bit of how we handle the time component. For our purposes we use pandas Timestamp objects. Let's pull a full time series, complete with all the appropriate labels, by using our get_pricing() method. All data pulled with get_pricing() or using our Pipeline API will be in either Series or DataFrame format. We can modify this index however we like.
Step19: We can display the first few elements of our series by using the head() method and specifying the number of elements that we want. The analogous method for the last few elements is tail().
Step20: As with our toy example, we can specify a name for our time series, if only to clarify the name the get_pricing() provides us.
Step21: Let's take a closer look at the DatetimeIndex of our prices time series.
Step22: Notice that this DatetimeIndex has a collection of associated information. In particular it has an associated frequency (freq) and an associated timezone (tz). The frequency indicates whether the data is daily vs monthly vs some other period while the timezone indicates what locale this index is relative to. We can modify all of this extra information!
If we resample our Series, we can adjust the frequency of our data. We currently have daily data (excluding weekends) because get_pricing() pulls only data from market days. Let's up-sample from this daily data to monthly data using the resample() method.
Step23: The resample() method defaults to using the mean of the lower level data to create the higher level data. We can specify how else we might want the up-sampling to be calculated by specifying the how parameter.
Step25: We can even specify how we want the calculation of the new period to be done. Here we create a custom_resampler() function that will return the first value of the period. In our specific case, this will return a Series where the monthly value is the first value of that month.
Step26: We can also adjust the timezone of a Series to adapt the time of real-world data. In our case, our time series is already localized to UTC, but let's say that we want to adjust the time to be 'US/Eastern'. In this case we use the tz_convert() method, since the time is already localized.
Step27: In addition to the capacity for timezone and frequency management, each time series has a built-in reindex() method that we can use to realign the existing data according to a new set of index labels. If data does not exist for a particular label, the data will be filled with a placeholder value. This is typically np.nan, though we can provide a fill method.
The data that we get_pricing() only includes market days. But what if we want prices for every single calendar day? This will include holidays and weekends, times when you normally cannot trade equities. First let's create a new DatetimeIndex that contains all that we want.
Step28: Now let's use this new set of dates to reindex our time series. We tell the function that the fill method that we want is ffill. This denotes "forward fill". Any NaN values will be filled by the last value listed. So the price on the weekend or on a holiday will be listed as the price on the last market day that we know about.
Step29: You'll notice that we still have a couple of NaN values right at the beginning of our time series. This is because the first of January in 2012 was a Sunday and the second was a market holiday! Because these are the earliest data points and we don't have any information from before them, they cannot be forward-filled. We will take care of these NaN values in the next section, when we deal with missing data.
Missing Data
Whenever we deal with real data, there is a very real possibility of encountering missing values. Real data is riddled with holes and pandas provides us with ways to handle them. Sometimes resampling or reindexing can create NaN values. Fortunately, pandas provides us with ways to handle them. We have two primary means of coping with missing data. The first of these is filling in the missing data with fillna(). For example, say that we want to fill in the missing days with the mean price of all days.
Step30: Using fillna() is fairly easy. It is just a matter of indicating the value that you want to fill the spaces with. Unfortunately, this particular case doesn't make a whole lot of sense, for reasons discussed in the lecture on stationarity in the Lecture series. We could fill them with with $0$, simply, but that's similarly uninformative.
Rather than filling in specific values, we can use the method parameter, similarly to how the reindex() method works. We could use "backward fill", where NaNs are filled with the next filled value (instead of forward fill's last filled value) like so
Step31: But again, this is a bad idea for the same reasons as the previous option. Both of these so-called solutions take into account future data that was not available at the time of the data points that we are trying to fill. In the case of using the mean or the median, these summary statistics are calculated by taking into account the entire time series. Backward filling is equivalent to saying that the price of a particular security today, right now, tomorrow's price. This also makes no sense. These two options are both examples of look-ahead bias, using data that would be unknown or unavailable at the desired time, and should be avoided.
Our next option is significantly more appealing. We could simply drop the missing data using the dropna() method. This is much better alternative than filling NaN values in with arbitrary numbers.
Step32: Now our time series is cleaned for the calendar year, with all of our NaN values properly handled. It is time to talk about how to actually do time series analysis with pandas data structures.
Time Series Analysis with pandas
Let's do some basic time series analysis on our original prices. Each pandas Series has a built-in plotting method.
Step33: As well as some built-in descriptive statistics. We can either calculate these individually or using the describe() method.
Step34: We can easily modify Series with scalars using our basic mathematical operators.
Step35: And we can create linear combinations of Series themselves using the basic mathematical operators. pandas will group up matching indices and perform the calculations elementwise to produce a new Series.
Step36: If there are no matching indices, however, we may get an empty Series in return.
Step37: Rather than looking at a time series itself, we may want to look at its first-order differences or percent change (in order to get additive or multiplicative returns, in our particular case). Both of these are built-in methods.
Step38: pandas has convenient functions for calculating rolling means and standard deviations, as well!
Step39: Many NumPy functions will work on Series the same way that they work on 1-dimensional NumPy arrays.
Step40: The majority of these functions, however, are already implemented directly as Series and DataFrame methods.
Step41: In every case, using the built-in pandas method will be better than using the NumPy function on a pandas data structure due to improvements in performance. Make sure to check out the Series documentation before resorting to other calculations of common functions.
DataFrames
Many of the aspects of working with Series carry over into DataFrames. pandas DataFrames allow us to easily manage our data with their intuitive structure.
Like Series, DataFrames can hold multiple types of data, but DataFrames are 2-dimensional objects, unlike Series. Each DataFrame has an index and a columns attribute, which we will cover more in-depth when we start actually playing with an object. The index attribute is like the index of a Series, though indices in pandas have some extra features that we will unfortunately not be able to cover here. If you are interested in this, check out the pandas documentation on advanced indexing. The columns attribute is what provides the second dimension of our DataFrames, allowing us to combine named columns (all Series), into a cohesive object with the index lined-up.
We can create a DataFrame by calling pandas.DataFrame() on a dictionary or NumPy ndarray. We can also concatenate a group of pandas Series into a DataFrame using pandas.concat().
Step42: Each DataFrame has a few key attributes that we need to keep in mind. The first of these is the index attribute. We can easily include an index of Timestamp objects like we did with Series.
Step43: As mentioned above, we can combine Series into DataFrames. Concatatenating Series like this will match elements up based on their corresponding index. As the following Series do not have an index assigned, they each default to an integer index.
Step44: We will use pandas.concat() again later to combine multiple DataFrames into one.
Each DataFrame also has a columns attribute. These can either be assigned when we call pandas.DataFrame or they can be modified directly like the index. Note that when we concatenated the two Series above, the column names were the names of those Series.
Step45: To modify the columns after object creation, we need only do the following
Step46: In the same vein, the index of a DataFrame can be changed after the fact.
Step47: Separate from the columns and index of a DataFrame, we can also directly access the values they contain by looking at the values attribute.
Step48: This returns a NumPy array.
Step49: Accessing DataFrame elements
Again we see a lot of carryover from Series in how we access the elements of DataFrames. The key sticking point here is that everything has to take into account multiple dimensions now. The main way that this happens is through the access of the columns of a DataFrame, either individually or in groups. We can do this either by directly accessing the attributes or by using the methods we already are familiar with.
Step50: Here we directly access the CMG column. Note that this style of access will only work if your column name has no spaces or unfriendly characters in it.
Step51: We can also use loc[] to access an individual column like so.
Step52: Accessing an individual column will return a Series, regardless of how we get it.
Step53: Notice how we pass a tuple into the loc[] method? This is a key difference between accessing a Series and accessing a DataFrame, grounded in the fact that a DataFrame has multiple dimensions. When you pass a 2-dimensional tuple into a DataFrame, the first element of the tuple is applied to the rows and the second is applied to the columns. So, to break it down, the above line of code tells the DataFrame to return every single row of the column with label 'CMG'. Lists of columns are also supported.
Step54: We can also simply access the DataFrame by index value using loc[], as with Series.
Step55: This plays nicely with lists of columns, too.
Step56: Using iloc[] also works similarly, allowing you to access parts of the DataFrame by integer index.
Step57: Boolean indexing
As with Series, sometimes we want to filter a DataFrame according to a set of criteria. We do this by indexing our DataFrame with boolean values.
Step58: We can add multiple boolean conditions by using the logical operators &, |, and ~ (and, or, and not, respectively) again!
Step59: Adding, Removing Columns, Combining DataFrames/Series
It is all well and good when you already have a DataFrame filled with data, but it is also important to be able to add to the data that you have.
We add a new column simply by assigning data to a column that does not already exist. Here we use the .loc[:, 'column_name'] indexer to assign values to the new column.
Step60: It is also just as easy to remove a column.
Step61: If we instead want to combine multiple DataFrames into one, we use the pandas.concat() method.
Step62: Missing data (again)
Bringing real-life data into a DataFrame brings us the same problems that we had with it in a Series, only this time in more dimensions. We have access to the same methods as with Series, as demonstrated below.
Step63: But again, the best choice in this case (since we are still using time series data, handling multiple time series at once) is still to simply drop the missing values.
Step64: Time Series Analysis with pandas
Using the built-in statistics methods for DataFrames, we can perform calculations on multiple time series at once! The code to perform calculations on DataFrames here is almost exactly the same as the methods used for Series above, so don't worry about re-learning everything.
The plot() method makes another appearance here, this time with a built-in legend that corresponds to the names of the columns that you are plotting.
Step65: The same statistical functions from our interactions with Series resurface here with the addition of the axis parameter. By specifying the axis, we tell pandas to calculate the desired function along either the rows (axis=0) or the columns (axis=1). We can easily calculate the mean of each column like so
Step66: As well as the standard deviation
Step67: Again, the describe() function will provide us with summary statistics of our data if we would rather have all of our typical statistics in a convenient visual instead of calculating them individually.
Step68: We can scale and add scalars to our DataFrame, as you might suspect after dealing with Series. This again works element-wise.
Step69: Here we use the pct_change() method to get a DataFrame of the multiplicative returns of the securities that we are looking at.
Step70: If we use our statistics methods to standardize the returns, a common procedure when examining data, then we can get a better idea of how they all move relative to each other on the same scale.
Step71: This makes it easier to compare the motion of the different time series contained in our example.
Rolling means and standard deviations also work with DataFrames. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Introduction to pandas
by Maxwell Margenot
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
pandas is a Python library that provides a collection of powerful data structures to better help you manage data. In this lecture, we will cover how to use the Series and DataFrame objects to handle data. These objects have a strong integration with NumPy, covered elsewhere in the lecture series, allowing us to easily do the necessary statistical and mathematical calculations that we need for finance.
End of explanation
returns = pd.DataFrame(np.random.normal(1.0, 0.03, (100, 10)))
prices = returns.cumprod()
prices.plot()
plt.title('Randomly-generated Prices')
plt.xlabel('Time')
plt.ylabel('Price')
plt.legend(loc=0);
Explanation: With pandas, it is easy to store, visualize, and perform calculations on your data. With only a few lines of code we can modify our data and present it in an easily-understandable way. Here we simulate some returns in NumPy, put them into a pandas DataFrame, and perform calculations to turn them into prices and plot them, all only using a few lines of code.
End of explanation
s = pd.Series([1, 2, np.nan, 4, 5])
print s
Explanation: So let's have a look at how we actually build up to this point!
pandas Data Structures
Series
A pandas Series is a 1-dimensional array with labels that can contain any data type. We primarily use them for handling time series data. Creating a Series is as easy as calling pandas.Series() on a Python list or NumPy array.
End of explanation
print s.name
Explanation: Every Series has a name. We can give the series a name as a parameter or we can define it afterwards by directly accessing the name attribute. In this case, we have given our time series no name so the attribute should be empty.
End of explanation
s.name = "Toy Series"
print s.name
Explanation: This name can be directly modified with no repercussions.
End of explanation
print s.index
Explanation: We call the collected axis labels of a Series its index. An index can either passed to a Series as a parameter or added later, similarly to its name. In the absence of an index, a Series will simply contain an index composed of integers, starting at $0$, as in the case of our "Toy Series".
End of explanation
new_index = pd.date_range("2016-01-01", periods=len(s), freq="D")
print new_index
Explanation: pandas has a built-in function specifically for creating date indices, date_range(). We use the function here to create a new index for s.
End of explanation
s.index = new_index
print s.index
Explanation: An index must be exactly the same length as the Series itself. Each index must match one-to-one with each element of the Series. Once this is satisfied, we can directly modify the Series index, as with the name, to use our new and more informative index (relatively speaking).
End of explanation
print "First element of the series: ", s.iloc[0]
print "Last element of the series: ", s.iloc[len(s)-1]
Explanation: The index of the Series is crucial for handling time series, which we will get into a little later.
Accessing Series Elements
Series are typically accessed using the iloc[] and loc[] methods. We use iloc[] to access elements by integer index and we use loc[] to access the index of the Series.
End of explanation
s.iloc[:2]
Explanation: We can slice a Series similarly to our favorite collections, Python lists and NumPy arrays. We use the colon operator to indicate the slice.
End of explanation
start = 0
end = len(s) - 1
step = 1
s.iloc[start:end:step]
Explanation: When creating a slice, we have the options of specifying a beginning, an end, and a step. The slice will begin at the start index, and take steps of size step until it passes the end index, not including the end.
End of explanation
s.iloc[::-1]
Explanation: We can even reverse a Series by specifying a negative step size. Similarly, we can index the start and end with a negative integer value.
End of explanation
s.iloc[-2:-4:-1]
Explanation: This returns a slice of the series that starts from the second to last element and ends at the third to last element (because the fourth to last is not included, taking steps of size $1$).
End of explanation
s.loc['2016-01-01']
Explanation: We can also access a series by using the values of its index. Since we indexed s with a collection of dates (Timestamp objects) we can look at the value contained in s for a particular date.
End of explanation
s.loc['2016-01-02':'2016-01-04']
Explanation: Or even for a range of dates!
End of explanation
print s < 3
Explanation: With Series, we can just use the brackets ([]) to access elements, but this is not best practice. The brackets are ambiguous because they can be used to access Series (and DataFrames) using both index and integer values and the results will change based on context (especially with DataFrames).
Boolean Indexing
In addition to the above-mentioned access methods, you can filter Series using boolean arrays. Series are compatible with your standard comparators. Once compared with whatever condition you like, you get back yet another Series, this time filled with boolean values.
End of explanation
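As a quick sketch of the ambiguity mentioned above (using the date-indexed s from earlier), the same bracket syntax dispatches on the type of the key: a string is treated as an index label, while an integer falls back to a position.
# Both lines reach the second element of s, but through different mechanisms:
# the first by index label, the second by integer position.
print s['2016-01-02']
print s[1]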
print s.loc[s < 3]
Explanation: We can pass this Series back into the original Series to filter out only the elements for which our condition is True.
End of explanation
print s.loc[(s < 3) & (s > 1)]
Explanation: If we so desire, we can group multiple conditions together using the logical operators &, |, and ~ (and, or, and not, respectively).
End of explanation
symbol = "CMG"
start = "2012-01-01"
end = "2016-01-01"
prices = get_pricing(symbol, start_date=start, end_date=end, fields="price")
Explanation: This is very convenient for getting only elements of a Series that fulfill specific criteria that we need. It gets even more convenient when we are handling DataFrames.
Indexing and Time Series
Since we use Series for handling time series, it's worth covering a little bit of how we handle the time component. For our purposes we use pandas Timestamp objects. Let's pull a full time series, complete with all the appropriate labels, by using our get_pricing() method. All data pulled with get_pricing() or using our Pipeline API will be in either Series or DataFrame format. We can modify this index however we like.
End of explanation
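For reference, the Timestamp objects that make up a DatetimeIndex can also be constructed directly from a date string; a minimal sketch:
# A single point in time; collections of these form a DatetimeIndex.
example_ts = pd.Timestamp('2016-01-01')
print example_ts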
print "\n", type(prices)
prices.head(5)
Explanation: We can display the first few elements of our series by using the head() method and specifying the number of elements that we want. The analogous method for the last few elements is tail().
End of explanation
print 'Old name: ', prices.name
prices.name = symbol
print 'New name: ', prices.name
Explanation: As with our toy example, we can specify a name for our time series, if only to clarify the name the get_pricing() provides us.
End of explanation
print prices.index
Explanation: Let's take a closer look at the DatetimeIndex of our prices time series.
End of explanation
monthly_prices = prices.resample('M')
monthly_prices.head(10)
Explanation: Notice that this DatetimeIndex has a collection of associated information. In particular it has an associated frequency (freq) and an associated timezone (tz). The frequency indicates whether the data is daily vs monthly vs some other period while the timezone indicates what locale this index is relative to. We can modify all of this extra information!
If we resample our Series, we can adjust the frequency of our data. We currently have daily data (excluding weekends) because get_pricing() pulls only data from market days. Let's up-sample from this daily data to monthly data using the resample() method.
End of explanation
monthly_prices_med = prices.resample('M', how='median')
monthly_prices_med.head(10)
Explanation: The resample() method defaults to using the mean of the lower level data to create the higher level data. We can specify how else we might want the up-sampling to be calculated by specifying the how parameter.
End of explanation
def custom_resampler(array_like):
"""Returns the first value of the period."""
return array_like[0]
first_of_month_prices = prices.resample('M', how=custom_resampler)
first_of_month_prices.head(10)
Explanation: We can even specify how we want the calculation of the new period to be done. Here we create a custom_resampler() function that will return the first value of the period. In our specific case, this will return a Series where the monthly value is the first value of that month.
End of explanation
eastern_prices = prices.tz_convert('US/Eastern')
eastern_prices.head(10)
Explanation: We can also adjust the timezone of a Series to adapt the time of real-world data. In our case, our time series is already localized to UTC, but let's say that we want to adjust the time to be 'US/Eastern'. In this case we use the tz_convert() method, since the time is already localized.
End of explanation
calendar_dates = pd.date_range(start=start, end=end, freq='D', tz='UTC')
print calendar_dates
Explanation: In addition to the capacity for timezone and frequency management, each time series has a built-in reindex() method that we can use to realign the existing data according to a new set of index labels. If data does not exist for a particular label, the data will be filled with a placeholder value. This is typically np.nan, though we can provide a fill method.
The data that we get_pricing() only includes market days. But what if we want prices for every single calendar day? This will include holidays and weekends, times when you normally cannot trade equities. First let's create a new DatetimeIndex that contains all that we want.
End of explanation
calendar_prices = prices.reindex(calendar_dates, method='ffill')
calendar_prices.head(15)
Explanation: Now let's use this new set of dates to reindex our time series. We tell the function that the fill method that we want is ffill. This denotes "forward fill". Any NaN values will be filled by the last value listed. So the price on the weekend or on a holiday will be listed as the price on the last market day that we know about.
End of explanation
meanfilled_prices = calendar_prices.fillna(calendar_prices.mean())
meanfilled_prices.head(10)
Explanation: You'll notice that we still have a couple of NaN values right at the beginning of our time series. This is because the first of January in 2012 was a Sunday and the second was a market holiday! Because these are the earliest data points and we don't have any information from before them, they cannot be forward-filled. We will take care of these NaN values in the next section, when we deal with missing data.
Missing Data
Whenever we deal with real data, there is a very real possibility of encountering missing values. Real data is riddled with holes and pandas provides us with ways to handle them. Sometimes resampling or reindexing can create NaN values. Fortunately, pandas provides us with ways to handle them. We have two primary means of coping with missing data. The first of these is filling in the missing data with fillna(). For example, say that we want to fill in the missing days with the mean price of all days.
End of explanation
bfilled_prices = calendar_prices.fillna(method='bfill')
bfilled_prices.head(10)
Explanation: Using fillna() is fairly easy. It is just a matter of indicating the value that you want to fill the spaces with. Unfortunately, this particular case doesn't make a whole lot of sense, for reasons discussed in the lecture on stationarity in the Lecture series. We could fill them with $0$, simply, but that's similarly uninformative.
Rather than filling in specific values, we can use the method parameter, similarly to how the reindex() method works. We could use "backward fill", where NaNs are filled with the next filled value (instead of forward fill's last filled value) like so:
End of explanation
dropped_prices = calendar_prices.dropna()
dropped_prices.head(10)
Explanation: But again, this is a bad idea for the same reasons as the previous option. Both of these so-called solutions take into account future data that was not available at the time of the data points that we are trying to fill. In the case of using the mean or the median, these summary statistics are calculated by taking into account the entire time series. Backward filling is equivalent to saying that the price of a particular security today, right now, is tomorrow's price. This also makes no sense. These two options are both examples of look-ahead bias, using data that would be unknown or unavailable at the desired time, and should be avoided.
Our next option is significantly more appealing. We could simply drop the missing data using the dropna() method. This is much better alternative than filling NaN values in with arbitrary numbers.
End of explanation
prices.plot();
# We still need to add the axis labels and title ourselves
plt.title(symbol + " Prices")
plt.ylabel("Price")
plt.xlabel("Date");
Explanation: Now our time series is cleaned for the calendar year, with all of our NaN values properly handled. It is time to talk about how to actually do time series analysis with pandas data structures.
Time Series Analysis with pandas
Let's do some basic time series analysis on our original prices. Each pandas Series has a built-in plotting method.
End of explanation
print "Mean: ", prices.mean()
print "Standard deviation: ", prices.std()
print "Summary Statistics"
print prices.describe()
Explanation: As well as some built-in descriptive statistics. We can either calculate these individually or using the describe() method.
End of explanation
modified_prices = prices * 2 - 10
modified_prices.head(5)
Explanation: We can easily modify Series with scalars using our basic mathematical operators.
End of explanation
noisy_prices = prices + 5 * pd.Series(np.random.normal(0, 5, len(prices)), index=prices.index) + 20
noisy_prices.head(5)
Explanation: And we can create linear combinations of Series themselves using the basic mathematical operators. pandas will group up matching indices and perform the calculations elementwise to produce a new Series.
End of explanation
empty_series = prices + pd.Series(np.random.normal(0, 1, len(prices)))
empty_series.head(5)
Explanation: If there are no matching indices, however, there is nothing for pandas to align on, so every element of the resulting Series comes back as NaN.
End of explanation
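If we do want the two Series to combine despite the mismatched indices, one option is the add() method with a fill_value, which treats labels missing from one operand as that value instead of producing NaN; a minimal sketch:
# Unlike the + operator, Series.add() lets us supply a fill_value for
# labels that appear in only one of the two operands.
other = pd.Series(np.random.normal(0, 1, len(prices)))
filled_sum = prices.add(other, fill_value=0)
filled_sum.head(5)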
add_returns = prices.diff()[1:]
mult_returns = prices.pct_change()[1:]
plt.title("Multiplicative returns of " + symbol)
plt.xlabel("Date")
plt.ylabel("Percent Returns")
mult_returns.plot();
Explanation: Rather than looking at a time series itself, we may want to look at its first-order differences or percent change (in order to get additive or multiplicative returns, in our particular case). Both of these are built-in methods.
End of explanation
rolling_mean = pd.rolling_mean(prices, 30)
rolling_mean.name = "30-day rolling mean"
prices.plot()
rolling_mean.plot()
plt.title(symbol + " Price")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend();
rolling_std = pd.rolling_std(prices, 30)
rolling_std.name = "30-day rolling volatility"
rolling_std.plot()
plt.title(rolling_std.name);
plt.xlabel("Date")
plt.ylabel("Standard Deviation");
Explanation: pandas has convenient functions for calculating rolling means and standard deviations, as well!
End of explanation
print np.median(mult_returns)
Explanation: Many NumPy functions will work on Series the same way that they work on 1-dimensional NumPy arrays.
End of explanation
print mult_returns.median()
Explanation: The majority of these functions, however, are already implemented directly as Series and DataFrame methods.
End of explanation
dict_data = {
'a' : [1, 2, 3, 4, 5],
'b' : ['L', 'K', 'J', 'M', 'Z'],
'c' : np.random.normal(0, 1, 5)
}
print dict_data
Explanation: In every case, using the built-in pandas method will be better than using the NumPy function on a pandas data structure due to improvements in performance. Make sure to check out the Series documentation before resorting to other calculations of common functions.
DataFrames
Many of the aspects of working with Series carry over into DataFrames. pandas DataFrames allow us to easily manage our data with their intuitive structure.
Like Series, DataFrames can hold multiple types of data, but DataFrames are 2-dimensional objects, unlike Series. Each DataFrame has an index and a columns attribute, which we will cover more in-depth when we start actually playing with an object. The index attribute is like the index of a Series, though indices in pandas have some extra features that we will unfortunately not be able to cover here. If you are interested in this, check out the pandas documentation on advanced indexing. The columns attribute is what provides the second dimension of our DataFrames, allowing us to combine named columns (all Series), into a cohesive object with the index lined-up.
We can create a DataFrame by calling pandas.DataFrame() on a dictionary or NumPy ndarray. We can also concatenate a group of pandas Series into a DataFrame using pandas.concat().
End of explanation
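As a rough, machine-dependent way to check the performance claim above (illustrative only), the standard library's timeit module can compare the two calls:
import timeit

# Time the built-in pandas method against the equivalent NumPy call.
# Absolute numbers will vary by machine and library version.
pandas_time = timeit.timeit(lambda: mult_returns.median(), number=1000)
numpy_time = timeit.timeit(lambda: np.median(mult_returns), number=1000)
print 'pandas .median():', pandas_time
print 'np.median():     ', numpy_time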
frame_data = pd.DataFrame(dict_data, index=pd.date_range('2016-01-01', periods=5))
print frame_data
Explanation: Each DataFrame has a few key attributes that we need to keep in mind. The first of these is the index attribute. We can easily include an index of Timestamp objects like we did with Series.
End of explanation
s_1 = pd.Series([2, 4, 6, 8, 10], name='Evens')
s_2 = pd.Series([1, 3, 5, 7, 9], name="Odds")
numbers = pd.concat([s_1, s_2], axis=1)
print numbers
Explanation: As mentioned above, we can combine Series into DataFrames. Concatenating Series like this will match elements up based on their corresponding index. As the following Series do not have an index assigned, they each default to an integer index.
End of explanation
print numbers.columns
Explanation: We will use pandas.concat() again later to combine multiple DataFrames into one.
Each DataFrame also has a columns attribute. These can either be assigned when we call pandas.DataFrame or they can be modified directly like the index. Note that when we concatenated the two Series above, the column names were the names of those Series.
End of explanation
numbers.columns = ['Shmevens', 'Shmodds']
print numbers
Explanation: To modify the columns after object creation, we need only do the following:
End of explanation
print numbers.index
numbers.index = pd.date_range("2016-01-01", periods=len(numbers))
print numbers
Explanation: In the same vein, the index of a DataFrame can be changed after the fact.
End of explanation
numbers.values
Explanation: Separate from the columns and index of a DataFrame, we can also directly access the values they contain by looking at the values attribute.
End of explanation
type(numbers.values)
Explanation: This returns a NumPy array.
End of explanation
symbol = ["CMG", "MCD", "SHAK", "WFM"]
start = "2012-01-01"
end = "2016-01-01"
prices = get_pricing(symbol, start_date=start, end_date=end, fields="price")
if isinstance(symbol, list):
prices.columns = map(lambda x: x.symbol, prices.columns)
else:
prices.name = symbol
Explanation: Accessing DataFrame elements
Again we see a lot of carryover from Series in how we access the elements of DataFrames. The key sticking point here is that everything has to take into account multiple dimensions now. The main way that this happens is through the access of the columns of a DataFrame, either individually or in groups. We can do this either by directly accessing the attributes or by using the methods we already are familiar with.
End of explanation
prices.CMG.head()
Explanation: Here we directly access the CMG column. Note that this style of access will only work if your column name has no spaces or unfriendly characters in it.
End of explanation
prices.loc[:, 'CMG'].head()
Explanation: We can also use loc[] to access an individual column like so.
End of explanation
print type(prices.CMG)
print type(prices.loc[:, 'CMG'])
Explanation: Accessing an individual column will return a Series, regardless of how we get it.
End of explanation
prices.loc[:, ['CMG', 'MCD']].head()
Explanation: Notice how we pass a tuple into the loc[] method? This is a key difference between accessing a Series and accessing a DataFrame, grounded in the fact that a DataFrame has multiple dimensions. When you pass a 2-dimensional tuple into a DataFrame, the first element of the tuple is applied to the rows and the second is applied to the columns. So, to break it down, the above line of code tells the DataFrame to return every single row of the column with label 'CMG'. Lists of columns are also supported.
End of explanation
prices.loc['2015-12-15':'2015-12-22']
Explanation: We can also simply access the DataFrame by index value using loc[], as with Series.
End of explanation
prices.loc['2015-12-15':'2015-12-22', ['CMG', 'MCD']]
Explanation: This plays nicely with lists of columns, too.
End of explanation
prices.iloc[0:2, 1]
# Access prices with integer index in
# [1, 3, 5, 7, 9, 11, 13, ..., 99]
# and in column 0 or 3
prices.iloc[[1, 3, 5] + range(7, 100, 2), [0, 3]].head(20)
Explanation: Using iloc[] also works similarly, allowing you to access parts of the DataFrame by integer index.
End of explanation
prices.loc[prices.MCD > prices.WFM].head()
Explanation: Boolean indexing
As with Series, sometimes we want to filter a DataFrame according to a set of criteria. We do this by indexing our DataFrame with boolean values.
End of explanation
prices.loc[(prices.MCD > prices.WFM) & ~prices.SHAK.isnull()].head()
Explanation: We can add multiple boolean conditions by using the logical operators &, |, and ~ (and, or, and not, respectively) again!
End of explanation
s_1 = get_pricing('TSLA', start_date=start, end_date=end, fields='price')
prices.loc[:, 'TSLA'] = s_1
prices.head(5)
Explanation: Adding, Removing Columns, Combining DataFrames/Series
It is all well and good when you already have a DataFrame filled with data, but it is also important to be able to add to the data that you have.
We add a new column simply by assigning data to a column that does not already exist. Here we use the .loc[:, 'COL_NAME'] notation and store the output of get_pricing() (which returns a pandas Series if we only pass one security) there. This is the method that we would use to add a Series to an existing DataFrame.
End of explanation
prices = prices.drop('TSLA', axis=1)
prices.head(5)
Explanation: It is also just as easy to remove a column.
End of explanation
df_1 = get_pricing(['SPY', 'VXX'], start_date=start, end_date=end, fields='price')
df_2 = get_pricing(['MSFT', 'AAPL', 'GOOG'], start_date=start, end_date=end, fields='price')
df_3 = pd.concat([df_1, df_2], axis=1)
df_3.head()
Explanation: If we instead want to combine multiple DataFrames into one, we use the pandas.concat() method.
End of explanation
filled0_prices = prices.fillna(0)
filled0_prices.head(5)
bfilled_prices = prices.fillna(method='bfill')
bfilled_prices.head(5)
Explanation: Missing data (again)
Bringing real-life data into a DataFrame brings us the same problems that we had with it in a Series, only this time in more dimensions. We have access to the same methods as with Series, as demonstrated below.
End of explanation
dropped_prices = prices.dropna()
dropped_prices.head(5)
Explanation: But again, the best choice in this case (since we are still using time series data, handling multiple time series at once) is still to simply drop the missing values.
End of explanation
prices.plot()
plt.title("Collected Stock Prices")
plt.ylabel("Price")
plt.xlabel("Date");
Explanation: Time Series Analysis with pandas
Using the built-in statistics methods for DataFrames, we can perform calculations on multiple time series at once! The code to perform calculations on DataFrames here is almost exactly the same as the methods used for Series above, so don't worry about re-learning everything.
The plot() method makes another appearance here, this time with a built-in legend that corresponds to the names of the columns that you are plotting.
End of explanation
prices.mean(axis=0)
Explanation: The same statistical functions from our interactions with Series resurface here with the addition of the axis parameter. By specifying the axis, we tell pandas to calculate the desired function along either the rows (axis=0) or the columns (axis=1). We can easily calculate the mean of each column like so:
End of explanation
prices.std(axis=0)
Explanation: As well as the standard deviation:
End of explanation
prices.describe()
Explanation: Again, the describe() function will provide us with summary statistics of our data if we would rather have all of our typical statistics in a convenient visual instead of calculating them individually.
End of explanation
(2 * prices - 50).head(5)
Explanation: We can scale and add scalars to our DataFrame, as you might suspect after dealing with Series. This again works element-wise.
End of explanation
mult_returns = prices.pct_change()[1:]
mult_returns.head()
Explanation: Here we use the pct_change() method to get a DataFrame of the multiplicative returns of the securities that we are looking at.
End of explanation
norm_returns = (mult_returns - mult_returns.mean(axis=0))/mult_returns.std(axis=0)
norm_returns.loc['2014-01-01':'2015-01-01'].plot();
Explanation: If we use our statistics methods to standardize the returns, a common procedure when examining data, then we can get a better idea of how they all move relative to each other on the same scale.
End of explanation
rolling_mean = pd.rolling_mean(prices, 30)
rolling_mean.columns = prices.columns
rolling_mean.plot()
plt.title("Rolling Mean of Prices")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend();
Explanation: This makes it easier to compare the motion of the different time series contained in our example.
Rolling means and standard deviations also work with DataFrames.
End of explanation |
13,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The function $\texttt{toDot}(\texttt{Parent})$ takes a dictionary $\texttt{Parent}$.
For every node $x$, $\texttt{Parent}[x]$ is the parent of $x$. It draws this dictionary
as a family tree using graphviz, i.e. for every node $x$ it draws an arrow starting at $x$ and pointing
to $\texttt{Parent}[x]$. The roots of the trees are indicated by double circles.
Step1: A Tree Based Implementation of the Union-Find Algorithm
Given a set $M$ and a binary relation $R \subseteq M \times M$, the function $\texttt{union_find}$ returns a partition $\mathcal{P}$ of $M$ such that we have
$$ \forall \langle x, y \rangle \in R: \exists S \in \mathcal{P}: \bigl(x \in S \wedge y \in S\bigr) $$ The resulting partition defines the equivalence relation that is generated by $R$.
Step2: Given a dictionary Parent and an element $x$ from $M$, the function $\texttt{find}(x, \texttt{Parent})$
returns the ancestor of $x$ that is its own parent.
Step3: The previous example was a worst case scenario because we had defined the relation $R$ as a list so that we were able to control the order of the
joining of different trees. If we represent $R$ as a set, the order of the pairs is more or less random and the trees do not degenerate to lists.
def toDot(Parent):
dot = gv.Digraph()
M = Parent.keys()
for x in M:
p = Parent[x]
if x == p:
dot.node(str(x), shape='doublecircle')
else:
dot.node(str(x), shape='circle')
dot.edge(str(x), str(p))
return dot
Explanation: The function $\texttt{toDot}(\texttt{Parent})$ takes a dictionary $\texttt{Parent}$.
For every node $x$, $\texttt{Parent}[x]$ is the parent of $x$. It draws this dictionary
as a family tree using graphviz, i.e. for every node $x$ it draws an arrow starting at $x$ and pointing
to $\texttt{Parent}[x]$. The roots of the trees are indicated by double circles.
End of explanation
def union_find(M, R):
Parent = { x: x for x in M } # trivial partition
for x, y in R:
print(f'{x} ≅ {y}')
root_x = find(x, Parent)
root_y = find(y, Parent)
display(toDot(Parent))
if root_x != root_y:
Parent[root_y] = root_x
display(toDot(Parent))
Roots = { x for x in M if Parent[x] == x }
return [{y for y in M if find(y, Parent) == r} for r in Roots]
Explanation: A Tree Based Implementation of the Union-Find Algorithm
Given a set $M$ and a binary relation $R \subseteq M \times M$, the function $\texttt{union_find}$ returns a partition $\mathcal{P}$ of $M$ such that we have
$$ \forall \langle x, y \rangle \in R: \exists S \in \mathcal{P}: \bigl(x \in S \wedge y \in S\bigr) $$
The resulting partition defines the equivalence relation that is generated by $R$.
End of explanation
def find(x, Parent):
p = Parent[x]
if p == x:
return x
return find(p, Parent)
def demo():
M = set(range(1, 10))
R = { (1, 4), (7, 9), (3, 5), (2, 6), (5, 8), (1, 9), (4, 7) }
P = union_find(M, R)
return P
demo()
def worst_case(n):
M = set(range(1, n+1))
R = [ (k+1, k) for k in M if k < n ]
print(f'R = {R}')
P = union_find(M, R)
print(f'P = {P}')
worst_case(10)
Explanation: Given a dictionary Parent and an element $x$ from $M$, the function $\texttt{find}(x, \texttt{Parent})$
returns the ancestor of $x$ that is its own parent.
End of explanation
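The recursion depth of find grows with the height of the trees, which is exactly what the worst case below exploits. A common refinement (not used in the demos here) is path compression, which re-parents every visited node directly onto the root; a minimal sketch:
def find_compress(x, Parent):
    # Walk up to the root, then point every node on the path directly
    # at the root so that later lookups become nearly constant time.
    p = Parent[x]
    if p == x:
        return x
    root = find_compress(p, Parent)
    Parent[x] = root
    return root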
def worst_case_set(n):
M = set(range(1, n+1))
R = { (k+1, k) for k in M if k < n }
print(f'R = {R}')
P = union_find(M, R)
print(f'P = {P}')
worst_case_set(20)
Explanation: The previous example was a worst case scenario because we had defined the relation $R$ as a list so that we were able to control the order of the
joining of different trees. If we represent $R$ as a set, the order of the pairs is more or less random and the trees do not degenerate to lists.
End of explanation |
13,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
Use linked DMA channels to perform "scan" across multiple ADC input channels.
See diagram below.
Channel configuration
DMA channel $i$ copies consecutive SC1A configurations to the ADC SC1A
register. Each SC1A configuration selects an analog input channel.
Channel $i$ is initially triggered by software trigger
(i.e., DMA_SSRT = i), starting the ADC conversion for the first ADC
channel configuration.
Loading of subsequent ADC channel configurations is triggered through
minor loop linking of DMA channel $ii$ to DMA channel $i$.
DMA channel $ii$ is triggered by ADC conversion complete (i.e., COCO), and
copies the output result of the ADC to consecutive locations in the result
array.
Channel $ii$ has minor loop link set to channel $i$, which triggers the
next channel SC1A configuration to be loaded immediately
after the current ADC result has been copied to the result array.
After $n$ triggers of channel $i$, the result array contains $n$ ADC results,
one result per channel in the SC1A table.
<img src="multi-channel_ADC_using_DMA.jpg" style="max-height: 500px" />
Step1: Configure ADC sample rate, etc.
Step2: Pseudo-code to set DMA channel $i$ to be triggered by ADC0 conversion complete.
DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
DMA_ERQ[i] = 1 // DMA request input signals and this enable request flag
// must be asserted before a channel’s hardware service
// request is accepted (21.3.3/394).
DMA_SERQ = i // Can use memory mapped convenience register to set instead.
Set DMA mux source for channel 0 to ADC0
Step3: Analog channel list
List of channels to sample.
Map channels from Teensy references (e.g., A0, A1, etc.) to the Kinetis analog
pin numbers using the adc.CHANNEL_TO_SC1A_ADC0 mapping.
Step4: Allocate and initialize device arrays
SD1A register configuration for each ADC channel in the channel_sc1as list.
Copy channel_sc1as list to device.
ADC result array
Initialize to zero.
Step5: Configure DMA channel $i$
Step6: Configure DMA channel $ii$
Step7: Trigger sample scan across selected ADC channels | Python Code:
from arduino_rpc.protobuf import resolve_field_values
from teensy_minimal_rpc import SerialProxy
import teensy_minimal_rpc.DMA as DMA
import teensy_minimal_rpc.ADC as ADC
# Disconnect from existing proxy (if available)
try:
del proxy
except NameError:
pass
proxy = SerialProxy()
Explanation: Overview
Use linked DMA channels to perform "scan" across multiple ADC input channels.
See diagram below.
Channel configuration
DMA channel $i$ copies consecutive SC1A configurations to the ADC SC1A
register. Each SC1A configuration selects an analog input channel.
Channel $i$ is initially triggered by software trigger
(i.e., DMA_SSRT = i), starting the ADC conversion for the first ADC
channel configuration.
Loading of subsequent ADC channel configurations is triggered through
minor loop linking of DMA channel $ii$ to DMA channel $i$.
DMA channel $ii$ is triggered by ADC conversion complete (i.e., COCO), and
copies the output result of the ADC to consecutive locations in the result
array.
Channel $ii$ has minor loop link set to channel $i$, which triggers the
next channel SC1A configuration to be loaded immediately
after the current ADC result has been copied to the result array.
After $n$ triggers of channel $i$, the result array contains $n$ ADC results,
one result per channel in the SC1A table.
<img src="multi-channel_ADC_using_DMA.jpg" style="max-height: 500px" />
Device
Connect to device
End of explanation
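Conceptually, one pass of the linked-DMA scan behaves like the following plain-Python sketch; it is purely illustrative (no hardware registers are involved and the names are made up here) and only captures that $n$ channel configurations produce $n$ results in order.
# Hypothetical software model: "load next SC1A config" (channel i),
# "convert" (ADC), then "store result" (channel ii), repeated n times.
def scan_model(sc1a_configs, convert):
    results = []
    for config in sc1a_configs:
        results.append(convert(config))
    return results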
import arduino_helpers.hardware.teensy as teensy
# Set ADC parameters
proxy.setAveraging(16, teensy.ADC_0)
proxy.setResolution(16, teensy.ADC_0)
proxy.setConversionSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.setSamplingSpeed(teensy.ADC_MED_SPEED, teensy.ADC_0)
proxy.update_adc_registers(
teensy.ADC_0,
ADC.Registers(CFG2=ADC.R_CFG2(MUXSEL=ADC.R_CFG2.B)))
Explanation: Configure ADC sample rate, etc.
End of explanation
DMAMUX_SOURCE_ADC0 = 40 # from `kinetis.h`
DMAMUX_SOURCE_ADC1 = 41 # from `kinetis.h`
# DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
# DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
# DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
proxy.update_dma_mux_chcfg(0, DMA.MUX_CHCFG(SOURCE=DMAMUX_SOURCE_ADC0,
TRIG=False,
ENBL=True))
# DMA request input signals and this enable request flag
# must be asserted before a channel’s hardware service
# request is accepted (21.3.3/394).
# DMA_SERQ = i
proxy.update_dma_registers(DMA.Registers(SERQ=0))
proxy.enableDMA(teensy.ADC_0)
proxy.DMA_registers().loc['']
dmamux0 = DMA.MUX_CHCFG.FromString(proxy.read_dma_mux_chcfg(0).tostring())
resolve_field_values(dmamux0)[['full_name', 'value']]
adc0 = ADC.Registers.FromString(proxy.read_adc_registers(teensy.ADC_0).tostring())
resolve_field_values(adc0)[['full_name', 'value']].loc[['CFG2', 'SC1A', 'SC3']]
Explanation: Pseudo-code to set DMA channel $i$ to be triggered by ADC0 conversion complete.
DMAMUX0_CFGi[SOURCE] = DMAMUX_SOURCE_ADC0 // Route ADC0 as DMA channel source.
DMAMUX0_CFGi[TRIG] = 0 // Disable periodic trigger.
DMAMUX0_CFGi[ENBL] = 1 // Enable the DMAMUX configuration for channel.
DMA_ERQ[i] = 1 // DMA request input signals and this enable request flag
// must be asserted before a channel’s hardware service
// request is accepted (21.3.3/394).
DMA_SERQ = i // Can use memory mapped convenience register to set instead.
Set DMA mux source for channel 0 to ADC0
End of explanation
import re
import numpy as np
import pandas as pd
import arduino_helpers.hardware.teensy.adc as adc
sc1a_pins = pd.Series(dict([(v, adc.CHANNEL_TO_SC1A_ADC0[getattr(teensy, v)]) for v in dir(teensy) if re.search(r'^A\d+', v)]))
channel_sc1as = np.array(sc1a_pins[['A0', 'A1', 'A0', 'A3', 'A0']].tolist(), dtype='uint32')
Explanation: Analog channel list
List of channels to sample.
Map channels from Teensy references (e.g., A0, A1, etc.) to the Kinetis analog
pin numbers using the adc.CHANNEL_TO_SC1A_ADC0 mapping.
End of explanation
proxy.free_all()
N = np.dtype('uint16').itemsize * channel_sc1as.size
# Allocate source array
adc_result_addr = proxy.mem_alloc(N)
# Fill result array with zeros
proxy.mem_fill_uint8(adc_result_addr, 0, N)
# Copy channel SC1A configurations to device memory
adc_sda1s_addr = proxy.mem_aligned_alloc_and_set(4, channel_sc1as.view('uint8'))
print 'ADC results:', proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
print 'Analog pins:', proxy.mem_cpy_device_to_host(adc_sda1s_addr, len(channel_sc1as) *
channel_sc1as.dtype.itemsize).view('uint32')
Explanation: Allocate and initialize device arrays
SD1A register configuration for each ADC channel in the channel_sc1as list.
Copy channel_sc1as list to device.
ADC result array
Initialize to zero.
End of explanation
ADC0_SC1A = 0x4003B000 # ADC status and control registers 1
sda1_tcd_msg = DMA.TCD(CITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
BITER_ELINKNO=DMA.R_TCD_ITER_ELINKNO(ELINK=False, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._32_BIT,
DSIZE=DMA.R_TCD_ATTR._32_BIT),
NBYTES_MLNO=4,
SADDR=int(adc_sda1s_addr),
SOFF=4,
SLAST=-channel_sc1as.size * 4,
DADDR=int(ADC0_SC1A),
DOFF=0,
DLASTSGA=0,
CSR=DMA.R_TCD_CSR(START=0, DONE=False))
proxy.update_dma_TCD(1, sda1_tcd_msg)
Explanation: Configure DMA channel $i$
End of explanation
ADC0_RA = 0x4003B010 # ADC data result register
ADC0_RB = 0x4003B014 # ADC data result register
tcd_msg = DMA.TCD(CITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
BITER_ELINKYES=DMA.R_TCD_ITER_ELINKYES(ELINK=True, LINKCH=1, ITER=channel_sc1as.size),
ATTR=DMA.R_TCD_ATTR(SSIZE=DMA.R_TCD_ATTR._16_BIT,
DSIZE=DMA.R_TCD_ATTR._16_BIT),
NBYTES_MLNO=2,
SADDR=ADC0_RA,
SOFF=0,
SLAST=0,
DADDR=int(adc_result_addr),
DOFF=2,
DLASTSGA=-channel_sc1as.size * 2,
CSR=DMA.R_TCD_CSR(START=0, DONE=False))
proxy.update_dma_TCD(0, tcd_msg)
Explanation: Configure DMA channel $ii$
End of explanation
# Clear output array to zero.
proxy.mem_fill_uint8(adc_result_addr, 0, N)
# Software trigger channel $i$ to copy *first* SC1A configuration, which
# starts ADC conversion for the first channel.
#
# Conversions for subsequent ADC channels are triggered through minor-loop
# linking from DMA channel $ii$ to DMA channel $i$ (*not* through explicit
# software trigger).
proxy.update_dma_registers(DMA.Registers(SSRT=1))
# Display converted ADC values (one value per channel in the `channel_sc1as` list).
print 'ADC results:', proxy.mem_cpy_device_to_host(adc_result_addr, N).view('uint16')
Explanation: Trigger sample scan across selected ADC channels
End of explanation |
13,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook shows how tallies can be combined (added, subtracted, multiplied, etc.) using the Python API in order to create derived tallies. Since no covariance information is obtained, it is assumed that tallies are completely independent of one another when propagating uncertainties. The target problem is a simple pin cell.
Note that this Notebook was created using the latest Pandas v0.16.1. Everything in the Notebook will run with older versions of Pandas, but the multi-indexing option in >v0.15.0 makes the tables look prettier.
Step1: Generate Input Files
First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
Step2: With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pin.
Step3: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
Step4: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
Step5: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
Step6: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step7: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
Step8: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.
Step9: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
Step10: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
Step11: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.
Step12: Now we have a complete set of inputs, so we can go ahead and run our simulation.
Step13: Tally Data Processing
Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here by loading the statepoint file and 'reading' the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer.
Step14: You may have also noticed we instructed OpenMC to create a summary file with lots of geometry information in it. This can help to produce more sensible output from the Python API, so we will use the summary file to link against.
Step15: We have a tally of the total fission rate and the total absorption rate, so we can calculate k-infinity as: $$k_\infty = \frac{\langle \nu \Sigma_f \phi \rangle}{\langle \Sigma_a \phi \rangle}$$
Step16: Notice that even though the neutron production rate and absorption rate are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically!
Often in textbooks you'll see k-infinity represented using the four-factor formula $$k_\infty = p \epsilon f \eta.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\frac{\langle\Sigma_a\phi\rangle_T}{\langle\Sigma_a\phi\rangle}$$ where the subscript $T$ means thermal energies.
Step17: The fast fission factor can be calculated as
$$\epsilon=\frac{\langle\nu\Sigma_f\phi\rangle}{\langle\nu\Sigma_f\phi\rangle_T}$$
Step18: The thermal flux utilization is calculated as
$$f=\frac{\langle\Sigma_a\phi\rangle^F_T}{\langle\Sigma_a\phi\rangle_T}$$
where the superscript $F$ denotes fuel.
Step19: The final factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\eta = \frac{\langle \nu\Sigma_f\phi \rangle_T}{\langle \Sigma_a \phi \rangle^F_T}$$
Step20: Now we can calculate $k_\infty$ using the product of the factors from the four-factor formula.
Step21: We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.
Let's move on to a more complicated example now. Earlier, we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections.
Step22: We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.
Step23: The same idea can be used not only for scores but also for filters and nuclides.
Step24: A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format. | Python Code:
%load_ext autoreload
%autoreload 2
import glob
from IPython.display import Image
import numpy as np
import openmc
from openmc.statepoint import StatePoint
from openmc.summary import Summary
%matplotlib inline
Explanation: This notebook shows how tallies can be combined (added, subtracted, multiplied, etc.) using the Python API in order to create derived tallies. Since no covariance information is obtained, it is assumed that tallies are completely independent of one another when propagating uncertainties. The target problem is a simple pin cell.
Note that this Notebook was created using the latest Pandas v0.16.1. Everything in the Notebook will run with older versions of Pandas, but the multi-indexing option in >v0.15.0 makes the tables look prettier.
End of explanation
# Instantiate some Nuclides
h1 = openmc.Nuclide('H-1')
b10 = openmc.Nuclide('B-10')
o16 = openmc.Nuclide('O-16')
u235 = openmc.Nuclide('U-235')
u238 = openmc.Nuclide('U-238')
zr90 = openmc.Nuclide('Zr-90')
Explanation: Generate Input Files
First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.
End of explanation
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide(u235, 3.7503e-4)
fuel.add_nuclide(u238, 2.2625e-2)
fuel.add_nuclide(o16, 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide(h1, 4.9457e-2)
water.add_nuclide(o16, 2.4732e-2)
water.add_nuclide(b10, 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide(zr90, 7.2758e-3)
Explanation: With the nuclides we defined, we will now create three materials for the fuel, water, and cladding of the fuel pin.
End of explanation
# Instantiate a MaterialsFile, add Materials
materials_file = openmc.MaterialsFile()
materials_file.add_material(fuel)
materials_file.add_material(water)
materials_file.add_material(zircaloy)
materials_file.default_xs = '71c'
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our three materials, we can now create a materials file object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
# Use both reflective and vacuum boundaries to make life interesting
min_x = openmc.XPlane(x0=-0.63, boundary_type='reflective')
max_x = openmc.XPlane(x0=+0.63, boundary_type='reflective')
min_y = openmc.YPlane(y0=-0.63, boundary_type='reflective')
max_y = openmc.YPlane(y0=+0.63, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-0.63, boundary_type='reflective')
max_z = openmc.ZPlane(z0=+0.63, boundary_type='reflective')
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
End of explanation
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
pin_cell_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = pin_cell_universe
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
# Create Geometry and set root Universe
geometry = openmc.Geometry()
geometry.root_universe = root_universe
# Instantiate a GeometryFile
geometry_file = openmc.GeometryFile()
geometry_file.geometry = geometry
# Export to "geometry.xml"
geometry_file.export_to_xml()
Explanation: We now must create a geometry that is assigned a root universe, put the geometry into a geometry file, and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 20
inactive = 5
particles = 2500
# Instantiate a SettingsFile
settings_file = openmc.SettingsFile()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True, 'summary': True}
source_bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
settings_file.set_source_space('box', source_bounds)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 5 inactive batches and 15 active batches each with 2500 particles.
End of explanation
# Instantiate a Plot
plot = openmc.Plot(plot_id=1)
plot.filename = 'materials-xy'
plot.origin = [0, 0, 0]
plot.width = [1.26, 1.26]
plot.pixels = [250, 250]
plot.color = 'mat'
# Instantiate a PlotsFile, add Plot, and export to "plots.xml"
plot_file = openmc.PlotsFile()
plot_file.add_plot(plot)
plot_file.export_to_xml()
Explanation: Let us also create a plot file that we can use to verify that our pin cell geometry was created successfully.
End of explanation
# Run openmc in plotting mode
executor = openmc.Executor()
executor.plot_geometry(output=False)
# Convert OpenMC's funky ppm to png
!convert materials-xy.ppm materials-xy.png
# Display the materials plot inline
Image(filename='materials-xy.png')
Explanation: With the plots.xml file, we can now generate and view the plot. OpenMC outputs plots in .ppm format, which can be converted into a compressed format like .png with the convert utility.
End of explanation
# Instantiate an empty TalliesFile
tallies_file = openmc.TalliesFile()
# Create Tallies to compute microscopic multi-group cross-sections
# Instantiate energy filter for multi-group cross-section Tallies
energy_filter = openmc.Filter(type='energy', bins=[0., 0.625e-6, 20.])
# Instantiate flux Tally in moderator and fuel
tally = openmc.Tally(name='flux')
tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('flux')
tallies_file.add_tally(tally)
# Instantiate reaction rate Tally in fuel
tally = openmc.Tally(name='fuel rxn rates')
tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('nu-fission')
tally.add_score('scatter')
tally.add_nuclide(u238)
tally.add_nuclide(u235)
tallies_file.add_tally(tally)
# Instantiate reaction rate Tally in moderator
tally = openmc.Tally(name='moderator rxn rates')
tally.add_filter(openmc.Filter(type='cell', bins=[moderator_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('absorption')
tally.add_score('total')
tally.add_nuclide(o16)
tally.add_nuclide(h1)
tallies_file.add_tally(tally)
# K-Eigenvalue (infinity) tallies
fiss_rate = openmc.Tally(name='fiss. rate')
abs_rate = openmc.Tally(name='abs. rate')
fiss_rate.add_score('nu-fission')
abs_rate.add_score('absorption')
tallies_file.add_tally(fiss_rate)
tallies_file.add_tally(abs_rate)
# Resonance Escape Probability tallies
therm_abs_rate = openmc.Tally(name='therm. abs. rate')
therm_abs_rate.add_score('absorption')
therm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))
tallies_file.add_tally(therm_abs_rate)
# Thermal Flux Utilization tallies
fuel_therm_abs_rate = openmc.Tally(name='fuel therm. abs. rate')
fuel_therm_abs_rate.add_score('absorption')
fuel_therm_abs_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))
fuel_therm_abs_rate.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id]))
tallies_file.add_tally(fuel_therm_abs_rate)
# Fast Fission Factor tallies
therm_fiss_rate = openmc.Tally(name='therm. fiss. rate')
therm_fiss_rate.add_score('nu-fission')
therm_fiss_rate.add_filter(openmc.Filter(type='energy', bins=[0., 0.625]))
tallies_file.add_tally(therm_fiss_rate)
# Instantiate energy filter to illustrate Tally slicing
energy_filter = openmc.Filter(type='energy', bins=np.logspace(np.log10(1e-8), np.log10(20), 10))
# Instantiate flux Tally in moderator and fuel
tally = openmc.Tally(name='need-to-slice')
tally.add_filter(openmc.Filter(type='cell', bins=[fuel_cell.id, moderator_cell.id]))
tally.add_filter(energy_filter)
tally.add_score('nu-fission')
tally.add_score('scatter')
tally.add_nuclide(h1)
tally.add_nuclide(u238)
tallies_file.add_tally(tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: As we can see from the plot, we have a nice pin cell with fuel, cladding, and water! Before we run our simulation, we need to tell the code what we want to tally. The following code shows how to create a variety of tallies.
End of explanation
# Remove old HDF5 (summary, statepoint) files
!rm statepoint.*
# Run OpenMC!
executor.run_simulation()
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
# Load the statepoint file
sp = StatePoint('statepoint.20.h5')
Explanation: Tally Data Processing
Our simulation ran successfully and created a statepoint file with all the tally data in it. We begin our analysis here by loading the statepoint file and 'reading' the results. By default, the tally results are not read into memory because they might be large, even large enough to exceed the available memory on a computer.
End of explanation
# Load the summary file and link with statepoint
su = Summary('summary.h5')
sp.link_with_summary(su)
Explanation: You may have also noticed we instructed OpenMC to create a summary file with lots of geometry information in it. This can help to produce more sensible output from the Python API, so we will use the summary file to link against.
End of explanation
# Compute k-infinity using tally arithmetic
fiss_rate = sp.get_tally(name='fiss. rate')
abs_rate = sp.get_tally(name='abs. rate')
keff = fiss_rate / abs_rate
keff.get_pandas_dataframe()
Explanation: We have a tally of the total fission rate and the total absorption rate, so we can calculate k-infinity as:
$$k_\infty = \frac{\langle \nu \Sigma_f \phi \rangle}{\langle \Sigma_a \phi \rangle}$$
In this notation, $\langle \cdot \rangle^a_b$ represents an OpenMC tally that is integrated over region $a$ and energy range $b$. If $a$ or $b$ is not reported, it means the value represents an integral over all space or all energy, respectively.
End of explanation
# Compute resonance escape probability using tally arithmetic
therm_abs_rate = sp.get_tally(name='therm. abs. rate')
res_esc = therm_abs_rate / abs_rate
res_esc.get_pandas_dataframe()
Explanation: Notice that even though the neutron production rate and absorption rate are separate tallies, we still get a first-order estimate of the uncertainty on the quotient of them automatically!
Often in textbooks you'll see k-infinity represented using the four-factor formula $$k_\infty = p \epsilon f \eta.$$ Let's analyze each of these factors, starting with the resonance escape probability which is defined as $$p=\frac{\langle\Sigma_a\phi\rangle_T}{\langle\Sigma_a\phi\rangle}$$ where the subscript $T$ means thermal energies.
End of explanation
# Compute fast fission factor using tally arithmetic
therm_fiss_rate = sp.get_tally(name='therm. fiss. rate')
fast_fiss = fiss_rate / therm_fiss_rate
fast_fiss.get_pandas_dataframe()
Explanation: The fast fission factor can be calculated as
$$\epsilon=\frac{\langle\nu\Sigma_f\phi\rangle}{\langle\nu\Sigma_f\phi\rangle_T}$$
End of explanation
# Compute thermal flux utilization factor using tally arithmetic
fuel_therm_abs_rate = sp.get_tally(name='fuel therm. abs. rate')
therm_util = fuel_therm_abs_rate / therm_abs_rate
therm_util.get_pandas_dataframe()
Explanation: The thermal flux utilization is calculated as
$$f=\frac{\langle\Sigma_a\phi\rangle^F_T}{\langle\Sigma_a\phi\rangle_T}$$
where the superscript $F$ denotes fuel.
End of explanation
# Compute neutrons produced per absorption (eta) using tally arithmetic
eta = therm_fiss_rate / fuel_therm_abs_rate
eta.get_pandas_dataframe()
Explanation: The final factor is the number of fission neutrons produced per absorption in fuel, calculated as $$\eta = \frac{\langle \nu\Sigma_f\phi \rangle_T}{\langle \Sigma_a \phi \rangle^F_T}$$
End of explanation
keff = res_esc * fast_fiss * therm_util * eta
keff.get_pandas_dataframe()
Explanation: Now we can calculate $k_\infty$ using the product of the factors form the four-factor formula.
End of explanation
# Compute microscopic multi-group cross-sections
flux = sp.get_tally(name='flux')
flux = flux.get_slice(filters=['cell'], filter_bins=[(fuel_cell.id,)])
fuel_rxn_rates = sp.get_tally(name='fuel rxn rates')
mod_rxn_rates = sp.get_tally(name='moderator rxn rates')
fuel_xs = fuel_rxn_rates / flux
fuel_xs.get_pandas_dataframe()
Explanation: We see that the value we've obtained here has exactly the same mean as before. However, because of the way it was calculated, the standard deviation appears to be larger.
Let's move on to a more complicated example now. Earlier, we set up tallies to get reaction rates in the fuel and moderator in two energy groups for two different nuclides. We can use tally arithmetic to divide each of these reaction rates by the flux to get microscopic multi-group cross sections.
End of explanation
# Show how to use Tally.get_values(...) with a CrossScore
nu_fiss_xs = fuel_xs.get_values(scores=['(nu-fission / flux)'])
print(nu_fiss_xs)
Explanation: We see that when the two tallies with multiple bins were divided, the derived tally contains the outer product of the combinations. If the filters/scores are the same, no outer product is needed. The get_values(...) method allows us to obtain a subset of tally scores. In the following example, we obtain just the neutron production microscopic cross sections.
End of explanation
# Show how to use Tally.get_values(...) with a CrossScore and CrossNuclide
u235_scatter_xs = fuel_xs.get_values(nuclides=['(U-235 / total)'],
scores=['(scatter / flux)'])
print(u235_scatter_xs)
# Show how to use Tally.get_values(...) with a CrossFilter and CrossScore
fast_scatter_xs = fuel_xs.get_values(filters=['energy'],
filter_bins=[((0.625e-6, 20.),)],
scores=['(scatter / flux)'])
print(fast_scatter_xs)
Explanation: The same idea can be used not only for scores but also for filters and nuclides.
End of explanation
# "Slice" the nu-fission data into a new derived Tally
nu_fission_rates = fuel_rxn_rates.get_slice(scores=['nu-fission'])
nu_fission_rates.get_pandas_dataframe()
# "Slice" the H-1 scatter data in the moderator Cell into a new derived Tally
need_to_slice = sp.get_tally(name='need-to-slice')
slice_test = need_to_slice.get_slice(scores=['scatter'], nuclides=['H-1'],
filters=['cell'], filter_bins=[(moderator_cell.id,)])
slice_test.get_pandas_dataframe()
Explanation: A more advanced method is to use get_slice(...) to create a new derived tally that is a subset of an existing tally. This has the benefit that we can use get_pandas_dataframe() to see the tallies in a more human-readable format.
End of explanation |
13,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FCN-8s Tutorial
Step1: This notebook walks you through how to work with this FCN-8s implementation. I will take the Cityscapes dataset as an example to train the model on in this notebook, but the described setup is applicable to arbitrary datasets. Here is an overview of what using this model looks like
Step2: 1.1 Visualize the dataset
Let's visualize the dataset just to get a better understanding of the ground truth data.
Step3: 2. Create the model
Instantiate an FCN8s. The constructor arguments might seem a bit confusing, but here is how it works. You can do either of three things
Step4: 3. Train the model
Now just call the train() method to train the model. Refer to the documentation for details on all the arguments, but here are a few notes
Step5: 3. Save the model
I already set the train() method above to save the model to disk during the training, so the model has already been saved (potentially multiple times) and it's not necessary to save it manually, but here is the exemplary method call just for the sake of completeness.
Step6: 4. Evaluate the model
I already set the train() method above to evaluate the model every few epochs during training, but you can evaluate the model explicitly as shown below. There are currently three metrics built in
Step7: 5. Make predictions and visualize them
Step8: 6. Process a sequence of images, save them to disk, and generate a video from them
In case you find it useful, with the method below you can just let the model run predictions on all images in a given directory, print the predicted segmentations onto them, and save a copy of them to disk.
Step9: Let's make a video from the predictions above
Step10: 7. Close the session
Remember, the TensorFlow session is being kept open and keeps owning resources until you manually close it, so don't forget to close it when you're done in order to release the resources. | Python Code:
from fcn8s_tensorflow import FCN8s
from data_generator.batch_generator import BatchGenerator
from helpers.visualization_utils import print_segmentation_onto_image, create_video_from_images
from cityscapesscripts.helpers.labels import TRAINIDS_TO_COLORS_DICT, TRAINIDS_TO_RGBA_DICT
from math import ceil
import time
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: FCN-8s Tutorial
End of explanation
# TODO: Set the paths to the images.
train_images = '../../datasets/Cityscapes_small/leftImg8bit/train/'
val_images = '../../datasets/Cityscapes_small/leftImg8bit/val/'
test_images = '../../datasets/Cityscapes_small/leftImg8bit/test/'
# TODO: Set the paths to the ground truth images.
train_gt = '../../datasets/Cityscapes_small/gtFine/train/'
val_gt = '../../datasets/Cityscapes_small/gtFine/val/'
# Put the paths to the datasets in lists, because that's what `BatchGenerator` requires as input.
train_image_dirs = [train_images]
train_ground_truth_dirs = [train_gt]
val_image_dirs = [val_images]
val_ground_truth_dirs = [val_gt]
num_classes = 20 # TODO: Set the number of segmentation classes.
train_dataset = BatchGenerator(image_dirs=train_image_dirs,
image_file_extension='png',
ground_truth_dirs=train_ground_truth_dirs,
image_name_split_separator='leftImg8bit',
ground_truth_suffix='gtFine_labelIds',
check_existence=True,
num_classes=num_classes)
val_dataset = BatchGenerator(image_dirs=val_image_dirs,
image_file_extension='png',
ground_truth_dirs=val_ground_truth_dirs,
image_name_split_separator='leftImg8bit',
ground_truth_suffix='gtFine_labelIds',
check_existence=True,
num_classes=num_classes)
num_train_images = train_dataset.get_num_files()
num_val_images = val_dataset.get_num_files()
print("Size of training dataset: ", num_train_images, " images")
print("Size of validation dataset: ", num_val_images, " images")
# TODO: Set the batch size. I'll use the same batch size for both generators here.
batch_size = 4
train_generator = train_dataset.generate(batch_size=batch_size,
convert_colors_to_ids=False,
convert_ids_to_ids=False,
convert_to_one_hot=True,
void_class_id=None,
random_crop=False,
crop=False,
resize=False,
brightness=False,
flip=0.5,
translate=False,
scale=False,
gray=False,
to_disk=False,
shuffle=True)
val_generator = val_dataset.generate(batch_size=batch_size,
convert_colors_to_ids=False,
convert_ids_to_ids=False,
convert_to_one_hot=True,
void_class_id=None,
random_crop=False,
crop=False,
resize=False,
brightness=False,
flip=False,
translate=False,
scale=False,
gray=False,
to_disk=False,
shuffle=True)
# Print out some diagnostics to make sure that our batches aren't empty and it doesn't take forever to generate them.
start_time = time.time()
images, gt_images = next(train_generator)
print('Time to generate one batch: {:.3f} seconds'.format(time.time() - start_time))
print('Number of images generated:' , len(images))
print('Number of ground truth images generated:' , len(gt_images))
Explanation: This notebook walks you through how to work with this FCN-8s implementation. I will take the Cityscapes dataset as an example to train the model on in this notebook, but the described setup is applicable to arbitrary datasets. Here is an overview of what using this model looks like:
First, you create an instance of the FCN8s model class. The constructor is explained in a subsequent next section.
The instantiated FCN8s model has the following main public methods:
train(): Trains the model.
evaluate(): Evaluates the model.
predict(): Makes predictions.
predict_and_save(): Makes predictions for a sequence of images, prints the predicted segmentations onto them and saves a copy of them to disk.
save(): Saves the model to disk.
close(): Closes the TensorFlow session. Once you instantiated a model, a session will be started and kept open until you manually close it. It is therefore important that you close the session when you're done working with the model.
fcn8s_tensorflow.py provides detailed documentation on the class and all of it's public methods, so take a look.
You can find a link to download a fully convolutionalized VGG-16 that was pre-trained on ImageNet classification in the README.
In the subsequent sections I'll go step by step over training, evaluation, prediction, and visualization.
1. Create a batch generator for training and evaluation
Let's get the preparation out of the way first. The train() and evaluate() methods need a generator that feeds them with batches of images and corresponding ground truth images. Ideally we want two generators, one that serves data from a training dataset and another that serves data from a validation dataset. The Cityscapes dataset already provides a split of the data for us, so I'll just stick with that.
In order to train on the Cityscapes dataset, the only thing you really need to do here is set the appropriate paths to the dataset on your machine, for other datasets you will have to pass some different values to the BatchGenerator constructor, check the documentation for details.
If you need to preprocess your dataset, e.g. to change the image size or to convert the segmentation class labels, I suggest you do that offline. Take a look at how to use BatchGenerator as an offline preprocessor.
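For orientation, a minimal sketch of the workflow listed above could look like this (the argument values are placeholders; the full calls with all options are shown in the sections below):
model = FCN8s(model_load_dir=None, tags=None, vgg16_dir='path/to/vgg16', num_classes=20)
model.train(train_generator=train_generator, epochs=10, steps_per_epoch=100, learning_rate_schedule=lambda step: 0.0001)
model.evaluate(data_generator=val_generator, metrics={'loss', 'mean_iou', 'accuracy'}, num_batches=10)
prediction = model.predict([some_image], argmax=False)  # some_image is a placeholder array
model.close()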
End of explanation
# Generate batches from the train_generator where the ground truth does not get converted to one-hot
# so that we can plot it as images.
example_generator = train_dataset.generate(batch_size=batch_size,
convert_to_one_hot=False)
# Generate a batch.
example_images, example_gt_images = next(example_generator)
i = 0 # Select which sample from the batch to display below.
figure, cells = plt.subplots(1, 2, figsize=(16,8))
cells[0].imshow(example_images[i])
cells[1].imshow(example_gt_images[i])
plt.figure(figsize=(16, 8))
plt.imshow(example_gt_images[i])
Explanation: 1.1 Visualize the dataset
Let's visualize the dataset just to get a better understanding of the ground truth data.
End of explanation
model = FCN8s(model_load_dir=None,
tags=None,
vgg16_dir='../VGG-16_mod2FCN_ImageNet-Classification',
num_classes=num_classes,
variables_load_dir=None)
Explanation: 2. Create the model
Instantiate an FCN8s. The constructor arguments might seem a bit confusing, but here is how it works. You can do either of three things:
Build the FCN-8s model from scratch, but load a pre-trained VGG-16 model into it. In order to do so, you need to pass values only for vgg16_dir (the directory that contains the pre-trained, convolutionalized VGG-16) and for num_classes. This is what you will want to do when you are using this model for the first time. You can find the download link to a convolutionalized VGG-16 trained to convergence on ImageNet classification in the README.
Load a saved model from a SavedModel protocol buffer. In order to do so, you need to pass values only for model_load_dir and tags. This is what you will likely want to do if you want to use or continue to train a previously saved FCN-8s. If you are unfamiliar with the SavedModel API, take a look at TensorFlow's documentation on this topic.
Build the FCN-8s model from scratch, but load variables into it that were saved using tf.train.Saver. In order to do so, you need to pass values only for variables_load_dir and vgg16_dir. This is what you will want to do if you made any changes to the graph, but still want to load the saved variables from an earlier version of the graph. Unfortunately you still need to provide a VGG-16 SavedModel, because I have not manually rebuilt the VGG-16 graph in this implementation, so it needs to be loaded from a saved model.
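For instance, a sketch of option 2, loading a previously saved model (the directory name and tags here are simply the ones used for saving later in this notebook):
model = FCN8s(model_load_dir='cityscapes_model', tags=['default'])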
End of explanation
epochs = 6 # TODO: Set the number of epochs to train for.
# TODO: Define a learning rate schedule function to be passed to the `train()` method.
def learning_rate_schedule(step):
if step <= 10000: return 0.0001
elif 10000 < step <= 20000: return 0.00001
elif 20000 < step <= 40000: return 0.000003
else: return 0.000001
model.train(train_generator=train_generator,
epochs=epochs,
steps_per_epoch=ceil(num_train_images/batch_size),
learning_rate_schedule=learning_rate_schedule,
keep_prob=0.5,
l2_regularization=0.0,
eval_dataset='val',
eval_frequency=2,
val_generator=val_generator,
val_steps=ceil(num_val_images/batch_size),
metrics={'loss', 'mean_iou', 'accuracy'},
save_during_training=True,
save_dir='cityscapes_model',
save_best_only=True,
save_tags=['default'],
save_name='(batch-size-4)',
save_frequency=2,
saver='saved_model',
monitor='loss',
record_summaries=True,
summaries_frequency=10,
summaries_dir='tensorboard_log/cityscapes',
summaries_name='configuration_01',
training_loss_display_averaging=3)
Explanation: 3. Train the model
Now just call the train() method to train the model. Refer to the documentation for details on all the arguments, but here are a few notes:
You'll have to pass some learning rate schedule function, however simple it may be. This function takes as input an integer (the trining step) and returns a float (the learning rate). I'll just define a simple step function below.
Pass the generator(s) we instantiated above. Note that there are two arguments that take a generator as input, train_generator and val_generator, where the latter is optional.
End of explanation
model.save(model_save_dir='cityscapes_model',
saver='saved_model',
tags=['default'],
name='(batch-size-4)',
include_global_step=True,
include_last_training_loss=True,
include_metrics=True,
force_save=False)
Explanation: 3. Save the model
I already set the train() method above to save the model to disk during the training, so the model has already been saved (potentially multiple times) and it's not necessary to save it manually, but here is the exemplary method call just for the sake of completeness.
End of explanation
model.evaluate(data_generator=val_generator,
metrics={'loss', 'mean_iou', 'accuracy'},
num_batches=ceil(num_val_images/batch_size),
l2_regularization=0.0,
dataset='val')
Explanation: 4. Evaluate the model
I already set the train() method above to evaluate the model every few epochs during training, but you can evaluate the model explicitly as shown below. There are currently three metrics built in: (1) Mean intersection over union, which is probably the most important metric for semantic segmentation models, (2) accuracy, which simply measures the ratio of images pixels that were classified correctly, and (3) loss, which is simply the output of the loss function. You can evaluate the model on any subset of them.
End of explanation
images, labels = next(val_generator)
n = 3 # Select which image of the batch you would like to visualize.
# Make a prediction.
prediction = model.predict([images[n]], argmax=False)
# Print the predicted segmentation onto the image.
segmented_image = print_segmentation_onto_image(images[n], prediction, color_map=TRAINIDS_TO_RGBA_DICT)
plt.figure(figsize=(20,14))
plt.imshow(segmented_image)
Explanation: 5. Make predictions and visualize them
End of explanation
model.predict_and_save(results_dir='demo_video_images',
images_dir='../../datasets/Cityscapes_small/leftImg8bit/demoVideo/stuttgart_00',
color_map=TRAINIDS_TO_RGBA_DICT,
resize=False,
image_file_extension='png',
include_unprocessed_image=True,
arrangement='vertical')
Explanation: 6. Process a sequence of images, save them to disk, and generate a video from them
In case you find it useful, with the method below you can just let the model run predictions on all images in a given directory, print the predicted segmentations onto them, and save a copy of them to disk.
End of explanation
create_video_from_images(video_output_name='demo_video',
image_input_dir='demo_video_images',
frame_rate=30.0,
image_file_extension='png')
Explanation: Let's make a video from the predictions above:
End of explanation
model.close()
Explanation: 7. Close the session
Remember, the TensorFlow session is being kept open and keeps owning resources until you manually close it, so don't forget to close it when you're done in order to release the resources.
End of explanation |
13,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Reading NEXRAD Level II data from Google Cloud public datasets </h1>
This notebook demonstrates how to use PyART to visualize data from the Google Cloud public dataset.
Step1: <h3> Install Py-ART </h3>
See https
Step2: <h2> Plot into png </h2>
Step3: <h2> Create animating PNG </h2> | Python Code:
%%bash
rm -rf data
mkdir data
cd data
RADAR=KIWA
YEAR=2013
MONTH=07
DAY=23
HOUR=23
gsutil cp gs://gcp-public-data-nexrad-l2/$YEAR/$MONTH/$DAY/$RADAR/*_$RADAR_${YEAR}${MONTH}${DAY}${HOUR}0000_${YEAR}${MONTH}${DAY}${HOUR}5959.tar temp.tar
tar xvf temp.tar
rm *.tar
ls
Explanation: <h1> Reading NEXRAD Level II data from Google Cloud public datasets </h1>
This notebook demonstrates how to use PyART to visualize data from the Google Cloud public dataset.
End of explanation
# Based on
# http://arm-doe.github.io/pyart/dev/auto_examples/plotting/plot_nexrad_multiple_moments.html
# by Jonathan J. Helmus ([email protected])
import matplotlib.pyplot as plt
import pyart
def plot_data(infilename):
radar = pyart.io.read_nexrad_archive(infilename)
display = pyart.graph.RadarDisplay(radar)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(221)
display.plot('velocity', 1, ax=ax, title='Doppler Velocity',
colorbar_label='',
axislabels=('', 'North South distance from radar (km)'))
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(222)
display.plot('reflectivity', 0, ax=ax,
title='Reflectivity lowest', colorbar_label='',
axislabels=('', ''))
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(223)
display.plot('reflectivity', 1, ax=ax,
title='Reflectivity second', colorbar_label='')
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(224)
display.plot('cross_correlation_ratio', 0, ax=ax,
title='Correlation Coefficient', colorbar_label='',
axislabels=('East West distance from radar (km)', ''))
display.set_limits((-300, 300), (-300, 300), ax=ax)
plt.show()
Explanation: <h3> Install Py-ART </h3>
See https://github.com/ARM-DOE/pyart/wiki/Simple-Install-of-Py-ART-using-Anaconda
<h3> Plot volume scans using Py-ART within Jupyter </h3>
End of explanation
%%writefile plot_pngs.py
import matplotlib.pyplot as plt
import pyart
def plot_data(infilename, outpng):
radar = pyart.io.read_nexrad_archive(infilename)
display = pyart.graph.RadarDisplay(radar)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(221)
display.plot('velocity', 1, ax=ax, title='Doppler Velocity',
colorbar_label='',
axislabels=('', 'North South distance from radar (km)'))
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(222)
display.plot('reflectivity', 0, ax=ax,
title='Reflectivity lowest', colorbar_label='',
axislabels=('', ''))
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(223)
display.plot('reflectivity', 1, ax=ax,
title='Reflectivity second', colorbar_label='')
display.set_limits((-300, 300), (-300, 300), ax=ax)
ax = fig.add_subplot(224)
display.plot('cross_correlation_ratio', 0, ax=ax,
title='Correlation Coefficient', colorbar_label='',
axislabels=('East West distance from radar (km)', ''))
display.set_limits((-300, 300), (-300, 300), ax=ax)
fig.savefig(outpng)
if __name__ == '__main__':
import argparse
parser = argparse.ArgumentParser(description='plot some radar data')
parser.add_argument('nexrad', help="volume scan filename")
parser.add_argument('png', help="output png filename")
args = parser.parse_args()
print "Plotting {} into {}".format(args.nexrad, args.png)
plot_data(args.nexrad, args.png)
%%bash
python plot_pngs.py data/KIWA20130723_235451_V06.gz radarplot.png
Explanation: <h2> Plot into png </h2>
End of explanation
%%bash
rm -rf images
mkdir images
for volumefile in $(ls data); do
base=$(basename $volumefile)
python plot_pngs.py data/$volumefile images/$base.png
done
Explanation: <h2> Create animating PNG </h2>
End of explanation |
13,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is the reproduction of an exercise found at http
Step1: We'll read the data from ZoneA.dat.
Step2: We want the first, second and fourth columns of the data set, representing the x and y spatial coordinates, and the porosity.
Step3: We'll be interested in determining the porosity at a point (2000,4700).
Step4: We can plot our region of interest as follows
Step5: We can determine the parameters for our model by looking at the semivariogram and trying to determine the appropriate range and sill.
Step6: The semivariogram plotting function, svplot(), plots sill as a dashed line, and the empirical semivariogram as determined from the data. It optionally plots a semivariance model.
Step7: We can pass a model to this function using the optional model argument and see it plotted in red.
Step8: The covariance modeling function function will return a spherical covariance model that takes a distance as input, and returns an covariance estimate. We've used the global variance of the porosity in ZoneA.dat as the sill.
Step9: We can then krige the data, using the covariance model, the point we are interested in, (2000,47000), and N=6 signifying that we only want to use the six nearest points. The output of the simple and ordinary kriging functions below is the krigin estimate, and the standard deviation of the kriging estimate. | Python Code:
import sys
sys.path.append('..')
sys.path.append('../geostatsmodels')
from geostatsmodels import utilities, variograms, model, kriging, geoplot
import matplotlib.pyplot as plt
import numpy as np
import pandas
Explanation: This notebook is the reproduction of an exercise found at http://people.ku.edu/~gbohling/cpe940/Kriging.pdf
End of explanation
z = utilities.readGeoEAS('../data/ZoneA.dat')
Explanation: We'll read the data from ZoneA.dat.
End of explanation
P = z[:,[0,1,3]]
Explanation: We want the first, second and fourth columns of the data set, representing the x and y spatial coordinates, and the porosity.
End of explanation
pt = [2000, 4700]
Explanation: We'll be interested in determining the porosity at a point (2000,4700).
End of explanation
plt.scatter(P[:,0], P[:,1], c=P[:,2], cmap=geoplot.YPcmap)
plt.title('Zone A Subset % Porosity')
plt.colorbar()
xmin, xmax = 0, 4250
ymin, ymax = 3200, 6250
plt.xlim(xmin,xmax)
plt.ylim(ymin,ymax)
for i in range(len(P[:,2])):
x, y, por = P[i]
if (x < xmax) & (y > ymin) & (y < ymax):
plt.text( x+100, y, '{:4.2f}'.format( por ) )
plt.scatter(pt[0], pt[1], marker='x', c='k')
plt.text(pt[0] + 100 , pt[1], '?')
plt.xlabel('Easting (m)')
plt.ylabel('Northing (m)');
Explanation: We can plot our region of interest as follows:
End of explanation
tolerance = 250
lags = np.arange(tolerance, 10000, tolerance*2)
sill = np.var(P[:,2])
Explanation: We can determine the parameters for our model by looking at the semivariogram and trying to determine the appropriate range and sill.
End of explanation
geoplot.semivariogram(P, lags, tolerance)
Explanation: The semivariogram plotting function, svplot(), plots sill as a dashed line, and the empirical semivariogram as determined from the data. It optionally plots a semivariance model.
End of explanation
svm = model.semivariance(model.spherical, (4000, sill))
geoplot.semivariogram(P, lags, tolerance, model=svm)
Explanation: We can pass a model to this function using the optional model argument and see it plotted in red.
End of explanation
covfct = model.covariance(model.spherical, (4000, sill))
Explanation: The covariance modeling function will return a spherical covariance model that takes a distance as input, and returns a covariance estimate. We've used the global variance of the porosity in ZoneA.dat as the sill.
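Since the returned model is just a callable that maps a lag distance to a covariance value, you can evaluate it directly, for example:
covfct(1000.0)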
End of explanation
kriging.simple(P, covfct, pt, N=6)
kriging.ordinary(P, covfct, pt, N=6)
est, kstd = kriging.krige(P, covfct, [[2000,4700],[2100,4700],[2000,4800],[2100,4800]], 'simple', N=6)
est
kstd
Explanation: We can then krige the data, using the covariance model, the point we are interested in, (2000,4700), and N=6 signifying that we only want to use the six nearest points. The output of the simple and ordinary kriging functions below is the kriging estimate, and the standard deviation of the kriging estimate.
End of explanation |
13,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep learning for Natural Language Processing
Simple text representations, bag of words
Word embedding and... not just another word2vec this time
1-dimensional convolutions for text
Aggregating several data sources "the hard way"
Solving ~somewhat~ real ML problem with ~almost~ end-to-end deep learning
Special thanks to Irina Golzmann for help with technical part.
NLTK
You will require nltk v3.2 to solve this assignment
It is really important that the version is 3.2, otherwize russian tokenizer might not work
Install/update
* sudo pip install --upgrade nltk==3.2
* If you don't remember when was the last pip upgrade, sudo pip install --upgrade pip
If for some reason you can't or won't switch to nltk v3.2, just make sure that russian words are tokenized properly with RegeExpTokenizer.
For students with low-RAM machines
This assignment can be accomplished with even the low-tier hardware (<= 4Gb RAM)
If that is the case, turn flag "low_RAM_mode" below to True
If you have around 8GB memory, it is unlikely that you will feel constrained by memory.
In case you are using a PC from last millenia, consider setting very_low_RAM=True
Step1: Dataset
Ex-kaggle-competition on prohibited content detection
There goes the description - https
Step2:
Step3: Balance-out the classes
Vast majority of data samples are non-prohibited
250k banned out of 4kk
Let's just downsample random 250k legal samples to make further steps less computationally demanding
If you aim for high Kaggle score, consider a smarter approach to that.
Step4: Tokenizing
First, we create a dictionary of all existing words.
Assign each word a number - it's Id
Step5: Remove rare tokens
We are unlikely to make use of words that are only seen a few times throughout the corpora.
Again, if you want to beat Kaggle competition metrics, consider doing something better.
Step6: Replace words with IDs
Set a maximum length for titles and descriptions.
* If string is longer that that limit - crop it, if less - pad with zeros.
* Thus we obtain a matrix of size [n_samples]x[max_length]
* Element at i,j - is an identifier of word j within sample i
Step7: Data format examples
Step8: As you can see, our preprocessing is somewhat crude. Let us see if that is enough for our network
Non-sequences
Some data features are not text samples. E.g. price, # urls, category, etc
They require a separate preprocessing.
Step9: Split data into training and test
Step10: Save preprocessed data [optional]
The next tab can be used to stash all the essential data matrices and get rid of the rest of the data.
Highly recommended if you have less than 1.5GB RAM left
To do that, you need to first run it with save_prepared_data=True, then restart the notebook and only run this tab with read_prepared_data=True.
Step11: Train the monster
Since we have several data sources, our neural network may differ from what you used to work with.
Separate input for titles
cnn+global max or RNN
Separate input for description
cnn+global max or RNN
Separate input for categorical features
ordinary fully-connected layers, or some fancier tricks
These three inputs must be blended somehow - concatenated or added.
Output
Step12: NN architecture
Step13: Loss function
The standard way
Step14: Deterministic prediction
In case we use stochastic elements, e.g. dropout or noize
Compile a separate set of functions with deterministic prediction (deterministic = True)
Unless you think there's no neet for dropout there ofc. Btw is there?
Step15: Coffee-lation
Step16: Training loop
The regular way with loops over minibatches
Since the dataset is huge, we define epoch as some fixed amount of samples isntead of all dataset
Step17: Tweaking guide
batch_size - how many samples are processed per function call
optimization gets slower, but more stable, as you increase it.
May consider increasing it halfway through training
minibatches_per_epoch - max amount of minibatches per epoch
Does not affect training. Lesser value means more frequent and less stable printing
Setting it to less than 10 is only meaningfull if you want to make sure your NN does not break down after one epoch
n_epochs - total amount of epochs to train for
n_epochs = 10**10 and manual interrupting is still an option
Tips
Step18: Final evaluation
Evaluate network over the entire test set | Python Code:
low_RAM_mode = True
very_low_RAM = False #If you have <3GB RAM, set BOTH to true
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Deep learning for Natural Language Processing
Simple text representations, bag of words
Word embedding and... not just another word2vec this time
1-dimensional convolutions for text
Aggregating several data sources "the hard way"
Solving ~somewhat~ real ML problem with ~almost~ end-to-end deep learning
Special thanks to Irina Golzmann for help with technical part.
NLTK
You will require nltk v3.2 to solve this assignment
It is really important that the version is 3.2, otherwise the Russian tokenizer might not work
Install/update
* sudo pip install --upgrade nltk==3.2
* If you don't remember when was the last pip upgrade, sudo pip install --upgrade pip
If for some reason you can't or won't switch to nltk v3.2, just make sure that Russian words are tokenized properly with RegexpTokenizer.
For students with low-RAM machines
This assignment can be accomplished with even the low-tier hardware (<= 4Gb RAM)
If that is the case, turn flag "low_RAM_mode" below to True
If you have around 8GB memory, it is unlikely that you will feel constrained by memory.
In case you are using a PC from last millenia, consider setting very_low_RAM=True
End of explanation
if not low_RAM_mode:
# a lot of ram
df = pd.read_csv("avito_train.tsv",sep='\t')
else:
#aroung 4GB ram
df = pd.read_csv("avito_train_1kk.tsv",sep='\t')
print df.shape, df.is_blocked.mean()
df[:5]
Explanation: Dataset
Ex-kaggle-competition on prohibited content detection
There goes the description - https://www.kaggle.com/c/avito-prohibited-content
Download
High-RAM mode,
* Download avito_train.tsv from competition data files
Low-RAM-mode,
* Download downsampled dataset from here
* archive https://yadi.sk/d/l0p4lameqw3W8
* raw https://yadi.sk/d/I1v7mZ6Sqw2WK (in case you feel masochistic)
What's inside
Different kinds of features:
* 2 text fields - title and description
* Special features - price, number of e-mails, phones, etc
* Category and subcategory - unsurprisingly, categorical features
* Attributes - more factors
Only 1 binary target whether or not such advertisement contains prohibited materials
* criminal, misleading, human reproduction-related, etc
* diving into the data may result in prolonged sleep disorders
End of explanation
print "Blocked ratio",df.is_blocked.mean()
print "Count:",len(df)
Explanation:
End of explanation
#downsample
< downsample data so that both classes have approximately equal ratios>
df = <downsampled dataset>
print "Blocked ratio:",df.is_blocked.mean()
print "Count:",len(df)
assert df.is_blocked.mean() < 0.51
assert df.is_blocked.mean() > 0.49
assert len(df) <= 560000
print "All tests passed"
#In case your RAM-o-meter is in the red
if very_low_RAM:
    df = df[::2]
Explanation: Balance-out the classes
Vast majority of data samples are non-prohibited
250k banned out of 4kk
Let's just downsample random 250k legal samples to make further steps less computationally demanding
If you aim for high Kaggle score, consider a smarter approach to that.
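One possible (and certainly not the only) downsampling recipe, assuming pandas' DataFrame.sample is available in your version:
df_blocked = df[df.is_blocked == 1]
df_legal = df[df.is_blocked == 0].sample(n=len(df_blocked), random_state=42)
df = pd.concat([df_blocked, df_legal])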
End of explanation
from nltk.tokenize import RegexpTokenizer
from collections import Counter,defaultdict
tokenizer = RegexpTokenizer(r"\w+")
#Dictionary of tokens
token_counts = Counter()
#All texts
all_texts = np.hstack([df.description.values,df.title.values])
#Compute token frequencies
for s in all_texts:
if type(s) is not str:
continue
s = s.decode('utf8').lower()
tokens = tokenizer.tokenize(s)
for token in tokens:
token_counts[token] +=1
Explanation: Tokenizing
First, we create a dictionary of all existing words.
Assign each word a number - its Id
End of explanation
#Word frequency distribution, just for kicks
_=plt.hist(token_counts.values(),range=[0,50],bins=50)
#Select only the tokens that had at least 10 occurences in the corpora.
#Use token_counts.
min_count = 10
tokens = <tokens from token_counts keys that had at least min_count occurences throughout the dataset>
token_to_id = {t:i+1 for i,t in enumerate(tokens)}
null_token = "NULL"
token_to_id[null_token] = 0
print "# Tokens:",len(token_to_id)
if len(token_to_id) < 30000:
print "Alarm! It seems like there are too few tokens. Make sure you updated NLTK and applied correct thresholds -- unless you now what you're doing, ofc"
if len(token_to_id) < 1000000:
print "Alarm! Too many tokens. You might have messed up when pruning rare ones -- unless you know what you're doin' ofc"
Explanation: Remove rare tokens
We are unlikely to make use of words that are only seen a few times throughout the corpora.
Again, if you want to beat Kaggle competition metrics, consider doing something better.
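One simple way to build the pruned token list from token_counts (shown only as an illustration of the placeholder above):
tokens = [t for t, c in token_counts.items() if c >= min_count]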
End of explanation
def vectorize(strings, token_to_id, max_len=150):
token_matrix = []
for s in strings:
if type(s) is not str:
token_matrix.append([0]*max_len)
continue
s = s.decode('utf8').lower()
tokens = tokenizer.tokenize(s)
token_ids = map(lambda token: token_to_id.get(token,0), tokens)[:max_len]
token_ids += [0]*(max_len - len(token_ids))
token_matrix.append(token_ids)
return np.array(token_matrix)
desc_tokens = vectorize(df.description.values,token_to_id,max_len = 150)
title_tokens = vectorize(df.title.values,token_to_id,max_len = 15)
Explanation: Replace words with IDs
Set a maximum length for titles and descriptions.
* If string is longer than that limit - crop it, if less - pad with zeros.
* Thus we obtain a matrix of size [n_samples]x[max_length]
* Element at i,j - is an identifier of word j within sample i
End of explanation
print "Размер матрицы:",title_tokens.shape
for title, tokens in zip(df.title.values[:3],title_tokens[:3]):
print title,'->', tokens[:10],'...'
Explanation: Data format examples
End of explanation
#All numeric features
df_numerical_features = df[["phones_cnt","emails_cnt","urls_cnt","price"]]
#One-hot-encoded category and subcategory
from sklearn.feature_extraction import DictVectorizer
categories = []
data_cat_subcat = df[["category","subcategory"]].values
categories = [A list of dictionaries {"category":category_name, "subcategory":subcategory_name} for each data sample]
vectorizer = DictVectorizer(sparse=False)
cat_one_hot = vectorizer.fit_transform(categories)
cat_one_hot = pd.DataFrame(cat_one_hot,columns=vectorizer.feature_names_)
df_non_text = pd.merge(
df_numerical_features,cat_one_hot,on = np.arange(len(cat_one_hot))
)
del df_non_text["key_0"]
Explanation: As you can see, our preprocessing is somewhat crude. Let us see if that is enough for our network
Non-sequences
Some data features are not text samples. E.g. price, # urls, category, etc
They require a separate preprocessing.
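A minimal way to build that list of dictionaries from data_cat_subcat (one option for the placeholder above):
categories = [{"category": c, "subcategory": s} for c, s in data_cat_subcat]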
End of explanation
#Target variable - whether or not sample contains prohibited material
target = df.is_blocked.values.astype('int32')
#Preprocessed titles
title_tokens = title_tokens.astype('int32')
#Preprocessed tokens
desc_tokens = desc_tokens.astype('int32')
#Non-sequences
df_non_text = df_non_text.astype('float32')
#Split into training and test set.
#Difficulty selector:
#Easy: split randomly
#Medium: select test set items that have item_ids strictly above that of training set
#Hard: do whatever you want, but score yourself using kaggle private leaderboard
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = <define_these_variables>
Explanation: Split data into training and test
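One easy-mode option for the placeholder above is sklearn's train_test_split (the module path depends on your sklearn version):
from sklearn.cross_validation import train_test_split  # sklearn.model_selection in newer releases
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = train_test_split(
    title_tokens, desc_tokens, df_non_text.values, target, test_size=0.25, random_state=42)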
End of explanation
save_prepared_data = True #save
read_prepared_data = False #load
#but not both at once
assert not (save_prepared_data and read_prepared_data)
if save_prepared_data:
print "Saving preprocessed data (may take up to 3 minutes)"
    import pickle
    data_tuple = (title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts)
    with open("preprocessed_data.pcl",'w') as fout:
        pickle.dump(data_tuple,fout)
with open("token_to_id.pcl",'w') as fout:
pickle.dump(token_to_id,fout)
print "готово"
elif read_prepared_data:
print "Reading saved data..."
import pickle
with open("preprocessed_data.pcl",'r') as fin:
data_tuple = pickle.load(fin)
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = data_tuple
with open("token_to_id.pcl",'r') as fin:
token_to_id = pickle.load(fin)
#Re-importing libraries to allow starting the notebook from here
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
print "done"
Explanation: Save preprocessed data [optional]
The next tab can be used to stash all the essential data matrices and get rid of the rest of the data.
Highly recommended if you have less than 1.5GB RAM left
To do that, you need to first run it with save_prepared_data=True, then restart the notebook and only run this tab with read_prepared_data=True.
End of explanation
#libraries
import lasagne
from theano import tensor as T
import theano
#3 inputs and a reference (target) output
title_token_ids = T.matrix("title_token_ids",dtype='int32')
desc_token_ids = T.matrix("desc_token_ids",dtype='int32')
categories = T.matrix("categories",dtype='float32')
target_y = T.ivector("is_blocked")
Explanation: Train the monster
Since we have several data sources, our neural network may differ from what you used to work with.
Separate input for titles
cnn+global max or RNN
Separate input for description
cnn+global max or RNN
Separate input for categorical features
ordinary fully-connected layers, or some fancier tricks
These three inputs must be blended somehow - concatenated or added.
Output: a simple binary classification
1 sigmoidal with binary_crossentropy
2 softmax with categorical_crossentropy - essentially the same as previous one
1 neuron without nonlinearity (lambda x: x) + hinge loss
End of explanation
title_inp = lasagne.layers.InputLayer((None,title_tr.shape[1]),input_var=title_token_ids)
descr_inp = lasagne.layers.InputLayer((None,desc_tr.shape[1]),input_var=desc_token_ids)
cat_inp = lasagne.layers.InputLayer((None,nontext_tr.shape[1]), input_var=categories)
# Descriptions
#word-wise embedding. We recommend to start from some 64 and improving after you are certain it works.
descr_nn = lasagne.layers.EmbeddingLayer(descr_inp,
input_size=len(token_to_id)+1,
output_size=?)
#reshape from [batch, time, unit] to [batch,unit,time] to allow 1d convolution over time
descr_nn = lasagne.layers.DimshuffleLayer(descr_nn, [0,2,1])
descr_nn = 1D convolution over embedding, maybe several ones in a stack
#pool over time
descr_nn = lasagne.layers.GlobalPoolLayer(descr_nn,T.max)
#Possible improvements here are adding several parallel convs with different filter sizes or stacking them the usual way
#1dconv -> 1d max pool ->1dconv and finally global pool
# Titles
title_nn = <Process titles somehow (title_inp)>
# Non-sequences
cat_nn = <Process non-sequences(cat_inp)>
nn = <merge three layers into one (e.g. lasagne.layers.concat) >
nn = lasagne.layers.DenseLayer(nn,your_lucky_number)
nn = lasagne.layers.DropoutLayer(nn,p=maybe_use_me)
nn = lasagne.layers.DenseLayer(nn,1,nonlinearity=lasagne.nonlinearities.linear)
Explanation: NN architecture
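One possible way to fill in the title branch placeholder above, mirroring the description branch (embedding width and filter size are just example values):
title_nn = lasagne.layers.EmbeddingLayer(title_inp, input_size=len(token_to_id)+1, output_size=64)
title_nn = lasagne.layers.DimshuffleLayer(title_nn, [0, 2, 1])
title_nn = lasagne.layers.Conv1DLayer(title_nn, num_filters=64, filter_size=3)
title_nn = lasagne.layers.GlobalPoolLayer(title_nn, T.max)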
End of explanation
#All trainable params
weights = lasagne.layers.get_all_params(nn,trainable=True)
#Simple NN prediction
prediction = lasagne.layers.get_output(nn)[:,0]
#Hinge loss
loss = lasagne.objectives.binary_hinge_loss(prediction,target_y,delta = what_do_you_think).mean()
#Weight optimization step
updates = <your favorite optimizer>
Explanation: Loss function
The standard way:
prediction
loss
updates
training and evaluation functions
Hinge loss
$ L_i = \max(0, \delta - t_i p_i) $
delta is a tunable parameter: how far should a neuron be in the positive margin area for us to stop bothering about it
Function description may mention some +-1 limitations - this is not necessary, at least as long as hinge loss has a default flag binary = True
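For example, a plain choice of margin and optimizer for the placeholders above (values are illustrative):
loss = lasagne.objectives.binary_hinge_loss(prediction, target_y, delta=1.0).mean()
updates = lasagne.updates.adam(loss, weights)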
End of explanation
#deterministic version
det_prediction = lasagne.layers.get_output(nn,deterministic=True)[:,0]
#equivalent loss function
det_loss = <an excercise in copy-pasting and editing>
Explanation: Deterministic prediction
In case we use stochastic elements, e.g. dropout or noise
Compile a separate set of functions with deterministic prediction (deterministic = True)
Unless you think there's no need for dropout there ofc. Btw is there?
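For example, mirroring the training loss above:
det_loss = lasagne.objectives.binary_hinge_loss(det_prediction, target_y, delta=1.0).mean()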
End of explanation
train_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[loss,prediction],updates = updates)
eval_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[det_loss,det_prediction])
Explanation: Coffee-lation
End of explanation
#average precision at K
from oracle import APatK, score
# Our good old minibatch iterator now supports arbitrary amount of arrays (X,y,z)
def iterate_minibatches(*arrays,**kwargs):
batchsize=kwargs.get("batchsize",100)
shuffle = kwargs.get("shuffle",True)
if shuffle:
indices = np.arange(len(arrays[0]))
np.random.shuffle(indices)
for start_idx in range(0, len(arrays[0]) - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield [arr[excerpt] for arr in arrays]
Explanation: Training loop
The regular way with loops over minibatches
Since the dataset is huge, we define an epoch as some fixed amount of samples instead of the whole dataset
End of explanation
from sklearn.metrics import roc_auc_score, accuracy_score
n_epochs = 100
batch_size = 100
minibatches_per_epoch = 100
for i in range(n_epochs):
#training
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_tr,title_tr,nontext_tr,target_tr,batchsize=batch_size,shuffle=True)):
if j > minibatches_per_epoch:break
loss,pred_probas = train_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Train:"
print '\tloss:',b_loss/b_c
print '\tacc:',accuracy_score(epoch_y_true,epoch_y_pred>0.)
print '\tauc:',roc_auc_score(epoch_y_true,epoch_y_pred)
print '\tap@k:',APatK(epoch_y_true,epoch_y_pred,K = int(len(epoch_y_pred)*0.025)+1)
#evaluation
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_ts,title_ts,nontext_tr,target_ts,batchsize=batch_size,shuffle=True)):
if j > minibatches_per_epoch: break
loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Val:"
print '\tloss:',b_loss/b_c
print '\tacc:',accuracy_score(epoch_y_true,epoch_y_pred>0.)
print '\tauc:',roc_auc_score(epoch_y_true,epoch_y_pred)
print '\tap@k:',APatK(epoch_y_true,epoch_y_pred,K = int(len(epoch_y_pred)*0.025)+1)
print "If you are seeing this, it's time to backup your notebook. No, really, 'tis too easy to mess up everything without noticing. "
Explanation: Tweaking guide
batch_size - how many samples are processed per function call
optimization gets slower, but more stable, as you increase it.
May consider increasing it halfway through training
minibatches_per_epoch - max amount of minibatches per epoch
Does not affect training. Lesser value means more frequent and less stable printing
Setting it to less than 10 is only meaningfull if you want to make sure your NN does not break down after one epoch
n_epochs - total amount of epochs to train for
n_epochs = 10**10 and manual interrupting is still an option
Tips:
With small minibatches_per_epoch, network quality may jump around 0.5 for several epochs
AUC is the most stable of all three metrics
Average Precision at top 2.5% (APatK) - is the least stable. If batch_size*minibatches_per_epoch < 10k, it behaves as a uniform random variable.
Plotting metrics over training time may be a good way to analyze which architectures work better.
Once you are sure your network aint gonna crash, it's worth letting it train for a few hours of an average laptop's time to see its true potential
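For instance, if you append the validation AUC to a list val_auc_history each epoch (a name used here purely for illustration), a quick plot is:
plt.plot(val_auc_history)
plt.xlabel('epoch'); plt.ylabel('validation AUC'); plt.show()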
End of explanation
#evaluation
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_ts,title_ts,nontext_tr,target_ts,batchsize=batch_size,shuffle=True)):
loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
final_accuracy = accuracy_score(epoch_y_true,epoch_y_pred>0)
final_auc = roc_auc_score(epoch_y_true,epoch_y_pred)
final_apatk = APatK(epoch_y_true,epoch_y_pred,K = int(len(epoch_y_pred)*0.025)+1)
print "Scores:"
print '\tloss:',b_loss/b_c
print '\tacc:',final_accuracy
print '\tauc:',final_auc
print '\tap@k:',final_apatk
score(final_accuracy,final_auc,final_apatk)
Explanation: Final evaluation
Evaluate network over the entire test set
End of explanation |
13,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algebra Lineal con Python
Esta notebook fue creada originalmente como un blog post por Raúl E. López Briega en Mi blog sobre Python. El contenido esta bajo la licencia BSD.
<img alt="Algebra lineal" title="Algebra lineal" src="http
Step3: Representación gráfica
Tradicionalmente, los vectores son representados visualmente como flechas que parten desde el origen hacia un punto.
Por ejemplo, si quisieramos representar graficamente a los vectores v1=[2, 4], v2=[-3, 3] y v3=[-4, -3.5], podríamos hacerlo de la siguiente manera.
Step4: Operaciones con vectores
Las operaciones más comunes que utilizamos cuando trabajamos con vectores son la suma, la resta y la multiplicacion por <a href="http
Step5: Producto escalar o interior
El producto escalar de dos vectores se define como la suma de los productos de sus elementos, suele representarse matematicamente como < x, y > o x'y, donde x e y son dos vectores.
$$< x, y >
Step6: Matrices
Las <a href="http
Step7: Multiplicacion o Producto de matrices
La regla para la multiplicación de matrices generaliza la idea del producto interior que vimos con los vectores; y esta diseñada para facilitar las operaciones lineales básicas.
Cuando multiplicamos matrices, el número de columnas de la primera <a href="http
Step8: Este ultimo ejemplo vemos que la propiedad conmutativa no se cumple, es más, Python nos arroja un error, ya que el número de columnas de B no coincide con el número de filas de A, por lo que ni siquiera se puede realizar la multiplicación de B x A.
Para una explicación más detallada del proceso de multiplicación de matrices, pueden consultar el siguiente tutorial.
La matriz identidad, la matriz inversa, la matrix transpuesta y el determinante
La matriz identidad es el elemento neutro en la multiplicación de matrices, es el equivalente al número 1. Cualquier matriz multiplicada por la matriz identidad nos da como resultado la misma matriz. La matriz identidad es una matriz cuadrada (tiene siempre el mismo número de filas que de columnas); y su diagonal principal se compone de todos elementos 1 y el resto de los elementos se completan con 0. Suele representase con la letra I
Por ejemplo la matriz identidad de 3x3 sería la siguiente
Step9: Sistemas de ecuaciones lineales
Una de las principales aplicaciones del Álgebra lineal consiste en resolver problemas de sistemas de ecuaciones lineales.
Una ecuación lineal es una ecuación que solo involucra sumas y restas de una variable o mas variables a la primera potencia. Es la ecuación de la linea recta.Cuando nuestro problema esta representado por más de una ecuación lineal, hablamos de un sistema de ecuaciones lineales. Por ejemplo, podríamos tener un sistema de dos ecuaciones con dos incognitas como el siguiente
Step10: Luego de haber graficado las funciones, podemos ver que ambas rectas se cruzan en el punto (3, 1), es decir que la solución de nuestro sistema sería $x=3$ e $y=1$. En este caso, al tratarse de un sistema simple y con solo dos incognitas, la solución grafica puede ser de utilidad, pero para sistemas más complicados se necesita una solucion númerica, es aquí donde entran a jugar las <a href="http
Step11: Para resolver en forma numérica los sistema de ecuaciones, existen varios métodos
Step12: Programación lineal
La programación lineal estudia las situaciones en las que se exige maximizar o minimizar funciones que se encuentran sujetas a determinadas restricciones.
Consiste en optimizar (minimizar o maximizar) una función lineal, denominada función objetivo, de tal forma que las variables de dicha función estén sujetas a una serie de restricciones que expresamos mediante un sistema de inecuaciones lineales.
Para resolver un problema de programación lineal, debemos seguir los siguientes pasos | Python Code:
# A vector as a Python list
v1 = [2, 4, 6]
v1
# Vectors with numpy
import numpy as np
v2 = np.ones(3) # a vector of all ones.
v2
v3 = np.array([1, 3, 5]) # passing a list to a numpy array
v3
v4 = np.arange(1, 8) # using numpy's arange function
v4
Explanation: Algebra Lineal con Python
Esta notebook fue creada originalmente como un blog post por Raúl E. López Briega en Mi blog sobre Python. El contenido esta bajo la licencia BSD.
<img alt="Algebra lineal" title="Algebra lineal" src="http://relopezbriega.github.io/images/lin-alg.jpg">
Introducción
Una de las herramientas matemáticas más utilizadas en machine learning y data mining es el Álgebra lineal; por tanto, si queremos incursionar en el fascinante mundo del aprendizaje automático y el analisis de datos es importante reforzar los conceptos que forman parte de sus cimientos.
El Álgebra lineal es una rama de las matemáticas que es sumamente utilizada en el estudio de una gran variedad de ciencias, como ser, ingenieria, finanzas, investigacion operativa, entre otras. Es una extensión del álgebra que aprendemos en la escuela secundaria, hacia un mayor número de dimensiones; en lugar de trabajar con incognitas a nivel de <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">escalares</a> comenzamos a trabajar con <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> y vectores.
El estudio del Álgebra lineal implica trabajar con varios objectos matemáticos, como ser:
Los <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">Escalares</a>: Un escalar es un solo número, en contraste con la mayoría de los otros objetos estudiados en Álgebra lineal, que son generalmente una colección de múltiples números.
Los Vectores:Un vector es una serie de números. Los números tienen una orden prestablecido, y podemos identificar cada número individual por su índice en ese orden. Podemos pensar en los vectores como la identificación de puntos en el espacio, con cada elemento que da la coordenada a lo largo de un eje diferente. Existen dos tipos de vectores, los vectores de fila y los vectores de columna. Podemos representarlos de la siguiente manera, dónde f es un vector de fila y c es un vector de columna:
$$f=\begin{bmatrix}0&1&-1\end{bmatrix} ; c=\begin{bmatrix}0\1\-1\end{bmatrix}$$
Las <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">Matrices</a>: Una matriz es un arreglo bidimensional de números (llamados entradas de la matriz) ordenados en filas (o renglones) y columnas, donde una fila es cada una de las líneas horizontales de la matriz y una columna es cada una de las líneas verticales. En una matriz cada elemento puede ser identificado utilizando dos índices, uno para la fila y otro para la columna en que se encuentra. Las podemos representar de la siguiente manera, A es una matriz de 3x2.
$$A=\begin{bmatrix}0 & 1& \-1 & 2 \ -2 & 3\end{bmatrix}$$
Los Tensores:En algunos casos necesitaremos una matriz con más de dos ejes. En general, una serie de números dispuestos en una cuadrícula regular con un número variable de ejes es conocido como un tensor.
Sobre estos objectos podemos realizar las operaciones matemáticas básicas, como ser adición, multiplicación, sustracción y <a href="http://es.wikipedia.org/wiki/Divisi%C3%B3n_(matem%C3%A1tica)" >división</a>, es decir que vamos a poder sumar vectores con <a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a>, multiplicar <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">escalares</a> a vectores y demás.
Python libraries for linear algebra
The main modules that Python offers for performing linear algebra operations are the following:
Numpy: Python's popular mathematical package, which lets us create vectors, matrices and tensors with great ease.
numpy.linalg: A submodule within Numpy with a large number of functions for solving linear algebra equations.
scipy.linalg: This submodule of the scientific package Scipy is very similar to the previous one, but with a few more functions and optimizations.
Sympy: This library lets us work with symbolic mathematics and turns Python into a computer algebra system. It allows us to work with equations and formulas symbolically rather than numerically.
CVXOPT: This module lets us solve linear programming optimization problems.
PuLP: This library lets us build linear programming models very easily with Python.
Basic operations
Vectors
A vector of length n is a sequence (or array, or tuple) of n numbers. We usually write it as x=(x1,...,xn) or x=[x1,...,xn]
In Python, a vector can be represented with a simple list or with a Numpy array; the latter is the preferred option.
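For instance, a minimal sketch (not part of the original notebook) contrasting the two representations:
import numpy as np

v_list = [2, 4, 6]             # plain Python list
v_array = np.array([2, 4, 6])  # Numpy array (preferred)
v_list * 2                     # [2, 4, 6, 2, 4, 6]: the list is repeated
v_array * 2                    # array([ 4,  8, 12]): the array operates element-wise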
End of explanation
import numpy as np  # used below (np.arange, np.array); import added here in case it is missing
import matplotlib.pyplot as plt
from warnings import filterwarnings
%matplotlib inline
filterwarnings('ignore')  # ignore warnings
def move_spines():
"""Create the pyplot figure and axes. Move the left and bottom spines
so that they intersect at the origin. Remove the right and top spines.
Return the axes."""
fix, ax = plt.subplots()
for spine in ["left", "bottom"]:
ax.spines[spine].set_position("zero")
for spine in ["right", "top"]:
ax.spines[spine].set_color("none")
return ax
def vect_fig():
"""Generate the plot of the vectors in the plane."""
ax = move_spines()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.grid()
vecs = [[2, 4], [-3, 3], [-4, -3.5]] # list of vectors
for v in vecs:
ax.annotate(" ", xy=v, xytext=[0, 0],
arrowprops=dict(facecolor="blue",
shrink=0,
alpha=0.7,
width=0.5))
ax.text(1.1 * v[0], 1.1 * v[1], v)
vect_fig() # create the plot
Explanation: Graphical representation
Traditionally, vectors are represented visually as arrows that start at the origin and point to a given location.
For example, if we wanted to plot the vectors v1=[2, 4], v2=[-3, 3] and v3=[-4, -3.5], we could do it as follows.
End of explanation
# Example in Python
x = np.arange(1, 5)
y = np.array([2, 4, 6, 8])
x, y
# adding two numpy vectors
x + y
# subtracting two vectors
x - y
# multiplying by a scalar
x * 2
y * 3
Explanation: Operations with vectors
The most common operations we use when working with vectors are addition, subtraction and multiplication by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>.
When we add two vectors, we add them element by element.
$$\begin{split}x + y =
\left[
\begin{array}{c}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{array}
\right]
+
\left[
\begin{array}{c}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{array}
\right]
:=
\left[
\begin{array}{c}
x_1 + y_1 \\
x_2 + y_2 \\
\vdots \\
x_n + y_n
\end{array}
\right]\end{split}$$
Subtraction works in a similar way.
$$\begin{split}x - y =
\left[
\begin{array}{c}
x_1 \\
x_2 \\
\vdots \\
x_n
\end{array}
\right]
-
\left[
\begin{array}{c}
y_1 \\
y_2 \\
\vdots \\
y_n
\end{array}
\right]
:=
\left[
\begin{array}{c}
x_1 - y_1 \\
x_2 - y_2 \\
\vdots \\
x_n - y_n
\end{array}
\right]\end{split}$$
Multiplication by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a> is an operation that takes a number $\gamma$ and a vector $x$ and produces a new vector in which every element of $x$ is multiplied by $\gamma$.
$$\begin{split}\gamma x
:=
\left[
\begin{array}{c}
\gamma x_1 \\
\gamma x_2 \\
\vdots \\
\gamma x_n
\end{array}
\right]\end{split}$$
In Python we can carry out these operations very easily:
End of explanation
# Computing the dot product of the vectors x and y
np.dot(x, y)
# which is the same as:
sum(x * y)
# Computing the norm of the vector x
np.linalg.norm(x)
# another way of computing the norm of x
np.sqrt(np.dot(x, x))
# orthogonal vectors
v1 = np.array([3, 4])
v2 = np.array([4, -3])
np.dot(v1, v2)
Explanation: Dot (inner) product
The dot product of two vectors is defined as the sum of the products of their elements; it is usually written mathematically as < x, y > or x'y, where x and y are two vectors.
$$< x, y > := \sum_{i=1}^n x_i y_i$$
Two vectors are <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">orthogonal</a>, or perpendicular, when they form a right angle with each other. If the dot product of two vectors is zero, the vectors are orthogonal.
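A small sketch (not in the original text) showing how the dot product and the norm give the angle between two vectors, using $\cos\theta = \frac{< x, y >}{\| x \| \| y \|}$:
import numpy as np

v1 = np.array([3, 4])
v2 = np.array([4, -3])
cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
np.degrees(np.arccos(cos_theta))  # 90 degrees: the two vectors are orthogonal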
Additionally, every dot product induces a norm on the space on which it is defined, in the following way:
$$\| x \| := \sqrt{<x, x>} := \left( \sum_{i=1}^n x_i^2 \right)^{1/2}$$
In Python we can compute it as follows:
End of explanation
# Example in Python
A = np.array([[1, 3, 2],
[1, 0, 0],
[1, 2, 2]])
B = np.array([[1, 0, 5],
[7, 5, 0],
[2, 1, 1]])
# adding the matrices A and B
A + B
# subtracting matrices
A - B
# multiplying matrices by scalars
A * 2
B * 3
# checking the dimensions of a matrix
A.shape
# checking the number of elements of a matrix
A.size
Explanation: Matrices
<a href="http://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">Matrices</a> are a clear and simple way of organizing data for use in linear operations.
An n × k matrix is a rectangular arrangement of numbers with n rows and k columns; it is represented as follows:
$$\begin{split}A =
\left[
\begin{array}{cccc}
a_{11} & a_{12} & \cdots & a_{1k} \\
a_{21} & a_{22} & \cdots & a_{2k} \\
\vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nk}
\end{array}
\right]\end{split}$$
In the matrix A, the symbol $a_{nk}$ denotes the element in the n-th row and the k-th column. The matrix A can also be called a vector if either n or k equals 1. In the case n=1 it is called a row vector, while in the case k=1 it is called a column vector.
Matrices are used in many applications and serve, in particular, to represent the coefficients of systems of linear equations or to represent linear transformations given a basis. They can be added, multiplied and decomposed in several ways.
Operations with matrices
Just as with vectors, which are nothing more than a particular case, matrices can be added, subtracted and multiplied by <a href="http://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">scalars</a>.
Multiplication by scalars:
$$\begin{split}\gamma A =
\gamma
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
\gamma a_{11} & \cdots & \gamma a_{1k} \\
\vdots & \vdots & \vdots \\
\gamma a_{n1} & \cdots & \gamma a_{nk} \\
\end{array}
\right]\end{split}$$
Matrix addition: $$\begin{split}A + B =
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]
+
\left[
\begin{array}{ccc}
b_{11} & \cdots & b_{1k} \\
\vdots & \vdots & \vdots \\
b_{n1} & \cdots & b_{nk} \\
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
a_{11} + b_{11} & \cdots & a_{1k} + b_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} + b_{n1} & \cdots & a_{nk} + b_{nk} \\
\end{array}
\right]\end{split}$$
Matrix subtraction: $$\begin{split}A - B =
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} & \cdots & a_{nk} \\
\end{array}
\right]
-
\left[
\begin{array}{ccc}
b_{11} & \cdots & b_{1k} \\
\vdots & \vdots & \vdots \\
b_{n1} & \cdots & b_{nk} \\
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
a_{11} - b_{11} & \cdots & a_{1k} - b_{1k} \\
\vdots & \vdots & \vdots \\
a_{n1} - b_{n1} & \cdots & a_{nk} - b_{nk} \\
\end{array}
\right]\end{split}$$
For addition and subtraction, keep in mind that we can only add or subtract matrices that have the same dimensions; that is, if I have a 3x2 matrix (3 rows and 2 columns), I can only add or subtract a matrix B if it also has 3 rows and 2 columns.
End of explanation
# Example: matrix multiplication
A = np.arange(1, 13).reshape(3, 4) # matrix of dimension 3x4
A
B = np.arange(8).reshape(4,2) # matrix of dimension 4x2
B
# Multiplying A x B
A.dot(B) # results in a matrix of dimension 3x2
# Multiplying B x A
B.dot(A)
Explanation: Matrix multiplication (matrix product)
The rule for matrix multiplication generalizes the idea of the inner product we saw with vectors, and it is designed to make basic linear operations easy to express.
When we multiply matrices, the number of columns of the first matrix must equal the number of rows of the second matrix, and the result of this multiplication has the same number of rows as the first matrix and the same number of columns as the second matrix. That is, if I have a matrix A of dimension 3x4 and multiply it by a matrix B of dimension 4x2, the result is a matrix C of dimension 3x2.
Something to keep in mind when multiplying matrices is that the commutative property does not hold: AxB is not the same as BxA.
Let's see the examples in Python.
End of explanation
# Creating a 2x2 identity matrix
I = np.eye(2)
I
# Multiplying a matrix by the identity gives back the same matrix
A = np.array([[4, 7],
[2, 6]])
A
A.dot(I) # AxI = A
# Computing the determinant of the matrix A
np.linalg.det(A)
# Computing the inverse of A.
A_inv = np.linalg.inv(A)
A_inv
# A x A_inv gives I as the result.
A.dot(A_inv)
# Transposing a matrix
A = np.arange(6).reshape(3, 2)
A
np.transpose(A)
Explanation: In this last example we see that the commutative property does not hold; moreover, Python throws an error, since the number of columns of B does not match the number of rows of A, so the multiplication B x A cannot even be carried out.
For a more detailed explanation of the matrix multiplication procedure, you can consult the following tutorial.
The identity matrix, the inverse matrix, the transpose matrix and the determinant
The identity matrix is the neutral element of matrix multiplication; it is the equivalent of the number 1. Any matrix multiplied by the identity matrix gives back the same matrix. The identity matrix is a square matrix (it always has the same number of rows as columns); its main diagonal consists entirely of 1s and the remaining elements are 0. It is usually denoted by the letter I.
For example, the 3x3 identity matrix is the following:
$$I=\begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix}$$
Now that we know the concept of the identity matrix, we can move on to the concept of the inverse matrix. If we have a matrix A, the inverse of A, written $A^{-1}$, is the square matrix that makes the product $A \times A^{-1}$ equal to the identity matrix I. In other words, it is the reciprocal matrix of A.
$$A \times A^{-1} = A^{-1} \times A = I$$
Keep in mind that in many cases this inverse matrix may not exist. In that case the matrix is said to be singular or degenerate. A matrix is singular if and only if its <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is zero.
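A minimal sketch (not in the original notebook) of a singular matrix: its determinant is 0 and Numpy raises an error if we try to invert it.
S = np.array([[1., 2.],
              [2., 4.]])  # the second row is a multiple of the first
np.linalg.det(S)          # 0.0 (up to floating point error)
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as error:
    print("Singular matrix:", error)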
The <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinant</a> is a special number that can be computed for square matrices. For a 3x3 matrix it is the sum of the products of the diagonals of the matrix in one direction minus the sum of the products of the diagonals in the other direction. It is denoted by |A|.
$$A=\begin{bmatrix}a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33}\end{bmatrix}$$
$$|A| =
(a_{11} a_{22} a_{33}
+ a_{12} a_{23} a_{31}
+ a_{13} a_{21} a_{32} )
- (a_{31} a_{22} a_{13}
+ a_{32} a_{23} a_{11}
+ a_{33} a_{21} a_{12})
$$
Finally, the transpose of a matrix is the matrix in which the rows become columns and the columns become rows. It is denoted by $A^\intercal$
$$\begin{bmatrix}a & b \\ c & d \\ e & f\end{bmatrix}^T:=\begin{bmatrix}a & c & e \\ b & d & f\end{bmatrix}$$
Examples in Python:
End of explanation
# plotting the system of equations.
x_vals = np.linspace(0, 5, 50) # create 50 values between 0 and 5
plt.plot(x_vals, (1 - x_vals)/-2) # plot x - 2y = 1
plt.plot(x_vals, (11 - (3*x_vals))/2) # plot 3x + 2y = 11
plt.axis(ymin = 0)
Explanation: Systems of linear equations
One of the main applications of linear algebra is solving systems of linear equations.
A linear equation is an equation that only involves sums and differences of one or more variables raised to the first power; it is the equation of a straight line. When our problem is described by more than one linear equation, we speak of a system of linear equations. For example, we could have a system of two equations in two unknowns like the following:
$$ x - 2y = 1$$
$$3x + 2y = 11$$
The idea is to find the values of $x$ and $y$ that satisfy both equations. One way we can do this is to plot both lines and look for the point where they cross.
In Python this can be done very easily with the help of matplotlib.
End of explanation
# Checking the solution with matrix multiplication.
A = np.array([[1., -2.],
[3., 2.]])
x = np.array([[3.],[1.]])
A.dot(x)
Explanation: After plotting the functions, we can see that both lines cross at the point (3, 1), so the solution of our system is $x=3$ and $y=1$. In this case, since it is a simple system with only two unknowns, the graphical solution can be useful, but for more complicated systems a numerical solution is needed; this is where matrices come into play.
That same system can be written as a matrix equation as follows:
$$\begin{bmatrix}1 & -2 \\ 3 & 2\end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} = \begin{bmatrix}1 \\ 11\end{bmatrix}$$
Which is the same as saying that the matrix A times the matrix $x$ gives the vector b as a result.
$$ Ax = b$$
In this case, since we already know the value of $x$, we can check that our solution is correct by carrying out the matrix multiplication.
End of explanation
# Creating the coefficient matrix
A = np.array([[1, 2, 3],
[2, 5, 2],
[6, -3, 1]])
A
# Creating the vector of results
b = np.array([6, 4, 2])
b
# Solving the system of equations
x = np.linalg.solve(A, b)
x
# Checking the solution
A.dot(x) == b
Explanation: To solve systems of equations numerically, several methods exist:
The substitution method: isolate any one unknown in one of the equations, preferably the one with the smallest coefficient, and then substitute its value into another equation.
The equalization method: a particular case of the substitution method in which the same unknown is isolated in two equations and the right-hand sides of both equations are then set equal to each other.
The reduction method: transform one of the equations (generally through products) so that we obtain two equations in which the same unknown appears with the same coefficient and opposite sign. Adding both equations cancels that unknown, leaving an equation with a single unknown, which is simple to solve.
The graphical method: plot each of the equations of the system. This method (applied by hand) is only practical in the Cartesian plane (only two unknowns).
Gaussian elimination: turn a linear system of n equations in n unknowns into a triangular one, in which the first equation has n unknowns, the second has n - 1 unknowns, ..., down to the last equation, which has a single unknown. This way it is easy to start from the last equation and work upwards to compute the values of the remaining unknowns.
Gauss-Jordan elimination: a variant of the previous method, consisting of reducing the augmented matrix of the system through elementary transformations until equations with a single unknown are obtained.
Cramer's method: apply Cramer's rule to solve the system. This method can only be used when the coefficient matrix of the system is square and its determinant is non-zero.
The idea here is not to explain each of these methods, but to know that they exist and that Python makes our life much easier: to solve a system of equations we simply call the solve() function.
For example, to solve this system of 3 equations in 3 unknowns:
$$ x + 2y + 3z = 6$$
$$ 2x + 5y + 2z = 4$$
$$ 6x - 3y + z = 2$$
we first build the coefficient matrix A and the result vector b, and then use solve() to solve it.
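As a complement, a minimal sketch (not from the original notebook) of Cramer's rule with Numpy, where each unknown is $x_i = \det(A_i)/\det(A)$ and $A_i$ is A with its i-th column replaced by b:
def cramer(A, b):
    # each unknown x_i = det(A_i) / det(A), with A_i = A with column i replaced by b
    n = len(b)
    x = np.zeros(n)
    det_A = np.linalg.det(A)
    for i in range(n):
        A_i = A.astype(float).copy()
        A_i[:, i] = b
        x[i] = np.linalg.det(A_i) / det_A
    return x

cramer(np.array([[1., -2.], [3., 2.]]), np.array([1., 11.]))  # approximately [3., 1.]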
End of explanation
# Solving the optimization with pulp
from pulp import *
# declaring the variables
x1 = LpVariable("x1", 0, 800) # 0 <= x1 <= 800
x2 = LpVariable("x2", 0, 1000) # 0 <= x2 <= 1000
# defining the problem
prob = LpProblem("problem", LpMaximize)
# defining the constraints
prob += x1+1.5*x2 <= 750
prob += 2*x1+x2 <= 1000
prob += x1>=0
prob += x2>=0
# defining the objective function to maximize
prob += 50*x1+40*x2
# solving the problem
status = prob.solve(GLPK(msg=0))
LpStatus[status]
# printing the results
(value(x1), value(x2))
# Solving the problem with cvxopt
from cvxopt import matrix, solvers
A = matrix([[-1., -2., 1., 0.], # x1 column
[-1.5, -1., 0., 1.]]) # x2 column
b = matrix([750., 1000., 0., 0.]) # results
c = matrix([50., 40.]) # objective function
# solving the problem
sol=solvers.lp(c,A,b)
# printing the solution.
print('{0:.2f}, {1:.2f}'.format(sol['x'][0]*-1, sol['x'][1]*-1))
# Solving the optimization graphically.
x_vals = np.linspace(0, 800, 10) # 10 values between 0 and 800
plt.plot(x_vals, ((750 - x_vals)/1.5)) # plot x1 + 1.5x2 = 750
plt.plot(x_vals, (1000 - 2*x_vals)) # plot 2x1 + x2 = 1000
plt.axis(ymin = 0)
Explanation: Linear programming
Linear programming studies situations in which we are asked to maximize or minimize functions that are subject to certain constraints.
It consists of optimizing (minimizing or maximizing) a linear function, called the objective function, in such a way that the variables of that function are subject to a series of constraints that we express as a system of linear inequalities.
To solve a linear programming problem, we follow these steps:
Choose the unknowns.
Write the objective function in terms of the data of the problem.
Write the constraints as a system of inequalities.
Find the set of feasible solutions by plotting the constraints.
Compute the coordinates of the vertices of the feasible region (if there are few of them).
Evaluate the objective function at each of the vertices to see at which of them it attains its maximum or minimum value, depending on what the problem asks for (keeping in mind that a solution may not exist).
Let's look at an example and see how Python helps us solve it easily.
Suppose we have the following objective function:
$$f(x_{1},x_{2})= 50x_{1} + 40x_{2}$$
and the following constraints:
$$x_{1} + 1.5x_{2} \leq 750$$
$$2x_{1} + x_{2} \leq 1000$$
$$x_{1} \geq 0$$
$$x_{2} \geq 0$$
We can solve it using PuLP, CVXOPT or graphically (with matplotlib), as follows.
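A third option, not used in the original notebook, is scipy.optimize.linprog; since linprog minimizes, we negate the objective to maximize it (a minimal sketch):
from scipy.optimize import linprog

c = [-50, -40]            # negated objective: maximize 50*x1 + 40*x2
A_ub = [[1, 1.5], [2, 1]] # x1 + 1.5*x2 <= 750 ; 2*x1 + x2 <= 1000
b_ub = [750, 1000]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
res.x                     # approximately [375., 250.]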
End of explanation |
13,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DualMap plugin
This plugin is using the Leaflet plugin Sync by Jieter
Step1: The DualMap class accepts the same arguments as the normal Map class. Except for these
Step2: You can access the two submaps with attributes m1 and m2. You can add objects to each map specifically.
Here we add different tile layers to each map. This way you can see two different tile sets at the same time.
Step3: Now we're going to add feature groups and markers to both maps and to each map individually. We'll color the shared icon red.
Step4: Finally, you can use the layout argument to change the layout to vertical | Python Code:
import folium
import folium.plugins
Explanation: DualMap plugin
This plugin is using the Leaflet plugin Sync by Jieter:
https://github.com/jieter/Leaflet.Sync
The goal is to have two maps side by side. When you pan or zoom on one map, the other will move as well.
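If you want to share the result outside the notebook, a small sketch (assuming DualMap exposes the same save interface as a regular folium map):
m = folium.plugins.DualMap(location=(52.1, 5.1), zoom_start=8)
m.save("dualmap.html")  # writes a standalone HTML file with both synced maps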
End of explanation
m = folium.plugins.DualMap(location=(52.1, 5.1), zoom_start=8)
m
Explanation: The DualMap class accepts the same arguments as the normal Map class, except for these: 'width', 'height', 'left', 'top', 'position'.
In the following example we create a DualMap, add layer controls and then show the map. Try panning and zooming to check that both maps are synchronized.
End of explanation
m = folium.plugins.DualMap(location=(52.1, 5.1), tiles=None, zoom_start=8)
folium.TileLayer("openstreetmap").add_to(m.m1)
folium.TileLayer("cartodbpositron").add_to(m.m2)
folium.LayerControl(collapsed=False).add_to(m)
m
Explanation: You can access the two submaps with attributes m1 and m2. You can add objects to each map specifically.
Here we add different tile layers to each map. This way you can see two different tile sets at the same time.
End of explanation
m = folium.plugins.DualMap(location=(52.1, 5.1), tiles="cartodbpositron", zoom_start=8)
fg_both = folium.FeatureGroup(name="markers_both").add_to(m)
fg_1 = folium.FeatureGroup(name="markers_1").add_to(m.m1)
fg_2 = folium.FeatureGroup(name="markers_2").add_to(m.m2)
icon_red = folium.Icon(color="red")
folium.Marker((52.0, 5.0), tooltip="both", icon=icon_red).add_to(fg_both)
folium.Marker((52.4, 5.0), tooltip="1").add_to(fg_1)
folium.Marker((52.0, 5.4), tooltip="2").add_to(fg_2)
folium.LayerControl(collapsed=False).add_to(m)
m
Explanation: Now we're going to add feature groups and markers to both maps and to each map individually. We'll color the shared icon red.
End of explanation
m = folium.plugins.DualMap(layout="vertical")
m
Explanation: Finally, you can use the layout argument to change the layout to vertical:
End of explanation |
13,625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import data from web crawlers
Here we first build a web crawler to scrape all the traffic information from the official twitter accounts of RATP and SNCF
Step1: First, have a look at the data
Step2: Check how many items in each sub data set
Step3: Have a look at the data in each sub dataset
Since RER datasets may have different patterns compared to metros.
Step4: We find a fun fact by checking data from RER_A
Step5: Import different station names from RATP API
Step6: Data Cleaning
Step7: Check how many items are left in each sub dataset.
Step8: Using NLP to analyze tweets
Find the most frequent words
Find the most frequent gares
Find the most frequent stations
Step9: Check words_freq function on RER_B
Step10: Check gare_fq function on RER_B
Step11: Build incident_freq function
Step12: Test incident_reason function in RER_C
Step13: Consolidate all functions in one dashboard function
Step14: Merge all sub datasets into one
Step15: How many tweets per Ligne/RER
Step16: Numbers of Tweets per day
Step17: Plot a chart to visualize the data
Step18: Export date, username, tweets count per date & consolidated data
Step19: Consolidate the data by hour
Step20: Export Tweets Number, Incident Reason, and Ligne/RER
Step21: Then we find out that a few incidents should be classified in the same group
Step22: Build a table for Machine Learning Algorithm
Import temperature data.
Import which arrondissements are passed through by which line.
Step23: Remember that we built a consolidated dataframe before, let's review this dataframe.
Step24: Split the dataset into X_train, X_test, y_train, y_test
Step25: Check X data
Step26: Run different models
Decision Tree
Random Forest
KNN
Linear SVC
Logistic Regression | Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
# ---- Summary of the twitter accounts -----#
# RER_A
# RERB
# RERC_SNCF --< Infotrafic
# RERD_SNCF --< Infotrafic
# RERE_SNCF --< Infotrafic
# Ligne12_RATP --< from 1 to 14 lines
line_list = ['RER_A', 'RER_B', 'RER_C', 'RER_D', 'RER_E','Ligne1_RATP', 'Ligne2_RATP', 'Ligne3_RATP', 'Ligne4_RATP',
'Ligne5_RATP', 'Ligne6_RATP', 'Ligne7_RATP', 'Ligne8_RATP', 'Ligne9_RATP', 'Ligne10_RATP', 'Ligne11_RATP', 'Ligne12_RATP',
'Ligne13_RATP', 'Ligne14_RATP' ]
file_path = "data/"
line_dict = dict()
for item in line_list:
line_dict[item] = pd.read_csv(file_path + item +'.csv', sep=';',error_bad_lines=False)
Explanation: Import data from web crawlers
Here we first build a web crawler to scrape all the traffic information from the official twitter accounts of RATP and SNCF
End of explanation
line_dict['RER_A'].sort_values(by='retweets',ascending = False).head()
Explanation: First, have a look at the data
End of explanation
# Check how many items we have
for k,v in line_dict.items():
print(k, v.shape)
Explanation: Check how many items in each sub data set
End of explanation
line_dict['Ligne11_RATP'].head()
line_dict['RER_B'].head()
Explanation: Have a look at the data in each sub dataset
Since RER datasets may have different patterns compared to metros.
End of explanation
# sort by retweets --> a dog is missing??? comes up first???
line_dict['RER_A'].sort_values(by='retweets',ascending = False)['text'].head()
Explanation: We find a fun fact by checking data from RER_A: the most popular tweet is about a dog that was found???
End of explanation
# find all station names --> gares
df_st = pd.read_csv('data\gares.csv', delimiter=';')
gares = df_st.nomptar.str.split('(')
gares = [x[0].rstrip(' ') for x in gares] # la defense has a trailing space
Explanation: Import different station names from RATP API
End of explanation
# change display to 200
## Step 1: delete all
## Théo, Bonjour, @, Tom, Emma
import re
def clean_data(input):
pd.options.display.max_colwidth = 200
input['date'] = pd.to_datetime(input.date)
input = input[input.date >= pd.to_datetime('2014-1-1')]
# replace pte, chateau
input.text = input.text.str.replace('Pte|pte', 'Porte')
input.text = input.text.str.replace('Chateau|chateau', 'Château')
input.text = input.text.str.replace('électr.', 'électrique')
input.text = input.text.str.replace('tvx.', 'travaux')
# in RER C, D, E, they published traffic information
# with a hashtag of "Infotrafic"
if re.search('RER[CDE]_SNCF',input.username.iloc[0]):
output = input[input.text.str.contains('Infotrafic', na=False)]
else:
# for all other lines,
# we drop the conversations data (see report for more details)
to_drop = ["Bonjour", "@",'Théo', 'Emma','Bjr','Inès',
'Lana','vous','soirée','Oui','estimée',
'Travaux prévus','journée','bonjour','rerb',
'rerc','rerd', 'rere','Infotrafic'] # all about conversations
output = input[~input.text.str.contains('|'.join(to_drop), na=False)]
return output
Explanation: Data Cleaning:
Build a data cleaning function.
Run the function over all datasets.
End of explanation
for k in line_dict.keys():
line_dict[k] = clean_data(line_dict[k])
print(k, line_dict[k].shape)
line_dict['RER_A'].sample(3).text
Explanation: Check how many items are left in each sub dataset.
End of explanation
# top 20 frequent words
import nltk
def words_freq(output):
moby_tokens = nltk.word_tokenize(output.text.str.lower().str.cat(sep = ' '))
text1 = nltk.Text(moby_tokens)
nltk.FreqDist(text1).most_common(20)
stopwords = nltk.corpus.stopwords.words('french')
stopwords = stopwords + ['rera','rerb','rerc','rerd','rere',
'ratp','ligne','entre',
'http','les','vers','dir','trafic','gare']
words_except_stop_dist = nltk.FreqDist(w for w in text1 if w not
in stopwords and w.isalpha() )
return words_except_stop_dist
from collections import Counter
def gare_fq(output):
gare_freq = Counter()
for gare in gares:
gare_freq[gare] = output.text.str.lower().str.contains(gare.lower()).sum()
return gare_freq
# sometimes, cergy-le-haut, naterre may be due to their direction -->
# result is true, many items are ignored entre XXX et XXX
line_dict['RER_A'].text[line_dict['RER_A'].text.str.contains('Cergy-Le-Haut')].sample(10)
Explanation: Using NLP to analyze tweets
Find the most frequent words
Find the most frequent gares
Find the most frequent stations
End of explanation
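Note that the tokenizer and the French stop-word list used above rely on NLTK corpora that may need to be downloaded once per environment (an assumption about the local setup, not something stated in the original notebook):
import nltk
nltk.download('punkt')      # tokenizer models used by nltk.word_tokenize
nltk.download('stopwords')  # stop-word lists, including French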
## Now let's try RER B
output_b = line_dict['RER_B']
words_freq(output_b).most_common(20)
Explanation: Check words_freq function on RER_B
End of explanation
gare_fq(output_b).most_common(20)
Explanation: Check gare_fq function on RER_B
End of explanation
from collections import Counter
def incidient_reason(input):
output = input
incidents = ['malaise voyageur',"incident d'exploitation","incident technique",'Incident de signalisation',
"colis suspect", "voyageur malade", "incident voyageur",
"divers incidents",'panne de signalisation','panne de matériel',
'panne électrique','panne mécanique','panne de caténaire',
"panne d'aiguillage",'panne matériel','panne éléctrique',
'panne sur un train','pannes de signalisation',"panne d'un train",
"panne de train",'obstacle sur la voie', 'bagage abandonné','incident de passage',
'accident de personne','feu aux abords','pb signalisation','acte de malveillance',
'jets de pierre','obstacle sur la voie','bagage oublié',
'personnes sur les voies','branche tombée','jet de projectile']
incident_freq = Counter()
for incident in incidents:
incident_freq[incident] = output.text.str.lower().str.contains(incident.lower()).sum()
return incident_freq
Explanation: Build incident_freq function
End of explanation
incidient_reason(line_dict['RER_C']).most_common()
Explanation: Test incident_reason function in RER_C
End of explanation
# what if we write a summary function
def summary(input):
output = input
print()
print ('The 20 most frequent words are: ')
print(words_freq(output).most_common(20))
print('\n')
print('The 20 most frequent stations are: ')
print(gare_fq(output).most_common(20))
print('\n')
print('The 20 most frequent reasons are: ')
print(incidient_reason(output).most_common(20))
#summary(line_dict['RER_A'])
Explanation: Consolidate all functions in one dashboard function
End of explanation
# concat all dataframe and clean data
def consol(data_dic):
result = pd.DataFrame()
for k, v in data_dic.items():
result = pd.concat([result, v])
result = result.sort_values(by='date')
return result
df_consol = consol(line_dict)
Explanation: Merge all sub datasets into one
End of explanation
# overall tweets
df_consol.username.value_counts()
Explanation: How many tweets per Ligne/RER
End of explanation
date_tweets = df_consol.date.apply(lambda x: x.date()).value_counts()
date_tweets.iloc[:10]
Explanation: Numbers of Tweets per day
End of explanation
from matplotlib import pyplot as plt
%matplotlib inline
date_tweets.plot()
Explanation: Plot a chart to visualize the data
End of explanation
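As a complementary view (not in the original notebook), resampling the tweets by month gives a smoother trend than the raw daily counts:
monthly_tweets = df_consol.set_index('date').resample('M').size()
monthly_tweets.plot(title='Tweets per month')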
# export date, username, tweets count
df_consol['date_new'] = df_consol.date.apply(lambda x: x.date())
df_consol.groupby(['date_new', 'username']).size().to_csv('output/tweets_date.csv')
df_consol.to_csv('output/consol.csv')
Explanation: Export date, username, tweets count per date & consolidated data
End of explanation
df_consol['hour'] = df_consol.date.apply(lambda x: x.hour)
df_consol.groupby(['hour','username']).size().to_csv('output/date_hour.csv')
df_consol.sort_values(by='retweets',ascending = False).head()
Explanation: Consolidate the data by hour
End of explanation
df_incident = pd.DataFrame()
for k, v in line_dict.items():
print(k,'\n')
df_inter = pd.DataFrame.from_dict(incidient_reason(v).most_common())
df_inter['username'] = k
df_incident = pd.concat([df_incident, df_inter])
df_incident.sort_values(by=1, ascending = False).head()
Explanation: Export Tweets Number, Incident Reason, and Ligne/RER
End of explanation
df_incident['group'] = df_incident.iloc[:,0]
rep = {'bagage oublié':'bagage abandonné', 'colis suspect':'bagage abandonné',
'voyageur malade':'malaise voyageur',
'pb signalisation':'panne de signalisation', 'jets de pierre':'acte de malveillance',
'jets de pierre':'acte de malveillance','jet de projectile':'acte de malveillance'}
df_incident.group = df_incident.group.replace(rep)
df_incident.loc[df_incident[0].str.contains('bagage', na=False)].head()
df_incident.to_csv('output/df_incident.csv')
df_incident.head()
Explanation: Then we find out that a few incidents should be classified in a same group
End of explanation
df_temp = pd.read_csv('data/temperature.csv')
df_temp.head()
fil = ['Date','T_avg','V_avg','W_avg', 'rain','fog','snow','Thunderstorms']
df_temp_fil = df_temp[fil]
df_temp_fil.head()
# import which arrondissements are passed through by which line.
df_arr = pd.read_csv('data/data_arrondissement.csv')
df_arr.head()
df_traffic = pd.read_csv('data/traffic-16.csv')
df_traffic.head()
# build another consolidated dataframe, and forecast the reason
def df_class(input):
# list all reasons of incidents
incidents = ['malaise voyageur',"incident d'exploitation","incident technique",
'Incident de signalisation',
"colis suspect", "voyageur malade", "incident voyageur",
"divers incidents",'panne de signalisation','panne de matériel',
'panne électrique','panne mécanique','panne de caténaire',
"panne d'aiguillage",'panne matériel','panne éléctrique',
'panne sur un train','pannes de signalisation',"panne d'un train",
"panne de train",'obstacle sur la voie', 'bagage abandonné','incident de passage',
'accident de personne','feu aux abords','pb signalisation','acte de malveillance',
'jets de pierre','obstacle sur la voie','bagage oublié',
'personnes sur les voies','branche tombée','jet de projectile',
'grave de voyageur','animal sur la voie','défaut électrique',
'fin tardive de chantier',"Défaut d'alimentation électrique"]
# clean data
output = clean_data(input)
output = input[input.text.str.contains('|'.join(incidents),na=False)]
filt = "(malaise voyageur|incident d'exploitation|incident technique|Incident de signalisation|colis suspect|voyageur malade|incident voyageur|divers incidents|panne de signalisation|panne de matériel|panne électrique|panne mécanique|panne de caténaire|panne d'aiguillage|panne matériel|panne éléctrique|panne sur un train|pannes de signalisation|panne d'un train|panne de train|obstacle sur la voie|bagage abandonné|incident de passage|accident de personne|feu aux abords|pb signalisation|acte de malveillance|jets de pierre|obstacle sur la voie|bagage oublié|personnes sur les voies|branche tombée|jet de projectile|grave de voyageur|animal sur la voie|défaut électrique|fin tardive de chantier|Défaut d'alimentation électrique)"
output['reason'] = output.text.str.extract(filt)
filt2 = ['username','date_new','reason']
# extract incident reasons and create a new column of "reasons"
output = output[filt2]
# create quarter, month, year columns
output.date_new = pd.to_datetime(output.date_new)
df_temp_fil.Date = pd.to_datetime(df_temp_fil.Date)
#merge temperature data, arrondissements data and traffic data
output = output.merge(right=df_temp_fil, how='inner', left_on='date_new', right_on='Date')
output = output.merge(right=df_arr, how='inner', left_on='username', right_on='username')
output = output.merge(right=df_traffic, how='inner', left_on='username', right_on='username')
output['Quarter'] = output.date_new.apply(lambda x: pd.to_datetime(x).quarter)
output['Month'] = output.date_new.apply(lambda x: pd.to_datetime(x).day)
output['Year'] = output.date_new.apply(lambda x: pd.to_datetime(x).year)
output = output.drop(['date_new','Date'], axis=1)
# standardize all incident reasons
rep = {'bagage oublié':'bagage abandonné', 'colis suspect':'bagage abandonné',
'voyageur malade':'malaise voyageur', "Défaut d'alimentation électrique":'panne électrique',
"panne d'un train":'panne de train','grave de voyageur':'incident voyageur',
'Incident de signalisation':'pannes de signalisation',
'panne de matériel':'panne matériel',
'panne sur un train':'panne de train',
'pb signalisation':'panne de signalisation', 'jets de pierre':'acte de malveillance',
'jets de pierre':'acte de malveillance','jet de projectile':'acte de malveillance',
'accident de personne':'incident voyageur','malaise voyageur':'incident voyageur',
'pannes de signalisation':'panne de signalisation'}
output.reason = output.reason.replace(rep)
# some rows from df_temp_fil contains '-' items
output = output[output.T_avg != '-']
output = output[output.V_avg != '-']
return output
Explanation: Build a table for a Machine Learning Algorithm
Import temperature data.
Import which arrondissements are passed through by which line.
End of explanation
df_consol.head()
Explanation: Remember that we built a consolidated dataframe before, let's review this dataframe.
End of explanation
df_class(df_consol).drop('reason', axis=1).sample(5)
# let's run classification
from sklearn.model_selection import train_test_split
X = df_class(df_consol).drop('reason', axis=1)
Explanation: Split the dataset into X_train, X_test, y_train, y_test
End of explanation
X.head()
# convert all data into numeric values and scale the data
X.T_avg = pd.to_numeric(X.T_avg)
X.V_avg = pd.to_numeric(X.V_avg)
X.W_avg = pd.to_numeric(X.W_avg)
X = pd.get_dummies(X)
y = df_class(df_consol).reason
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
Explanation: Check X data
End of explanation
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import auc, roc_auc_score, accuracy_score, f1_score
tree = DecisionTreeClassifier()
tree.fit(X_train_scaled, y_train)
y_pred = tree.predict(X_test_scaled)
acc_tree = accuracy_score(y_test, y_pred)
f1_tree = f1_score(y_test, y_pred, average = 'weighted')
print('Accuracy is {}'.format(accuracy_score(y_test, y_pred)))
print('F1 score is {}'.format(f1_score(y_test, y_pred, average = 'weighted')))
from sklearn.metrics import confusion_matrix
y_predicted = tree.predict(X_test_scaled)
confusion = confusion_matrix(y_test, y_predicted)
df_cm = pd.DataFrame(confusion)
#sns.set(font_scale=1.4)#for label size
plt.figure(figsize = (10,7))
sns.heatmap(df_cm)# font size
# run knn
import seaborn as sns
from sklearn.neighbors import KNeighborsClassifier
plt.figure()
scores = []
for n in range(1,50,20):
knn = KNeighborsClassifier(n_neighbors=n)
knn.fit(X_train_scaled, y_train)
scores.append(knn.score(X_test_scaled, y_test))
plt.plot(range(1,50,20), scores)
plt.title('KNN Accuracy curve')
plt.xlabel('n_neighbors')
plt.ylabel('Accuracy')
plt.show()
acc_knn = max(scores)
n_knn = list(range(1,50,20))[scores.index(max(scores))]
#
knn = KNeighborsClassifier(n_neighbors=n_knn)
knn.fit(X_train_scaled, y_train)
y_pred = knn.predict(X_test_scaled)
f1_knn = f1_score(y_test, y_pred, average = 'weighted')
print('Accuracy is {}'.format(acc_knn))
print('Best parameter is n = {}'.format(n_knn))
print('F1 score is {}'.format(f1_knn))
# performance --> confusion matrix
from sklearn.metrics import confusion_matrix
y_predicted = knn.predict(X_test_scaled)
confusion = confusion_matrix(y_test, y_predicted)
import seaborn as sns
import matplotlib.pyplot as plt
df_cm = pd.DataFrame(confusion)
#sns.set(font_scale=1.4)#for label size
plt.figure(figsize = (10,7))
sns.heatmap(df_cm, cmap="YlGnBu")# font size
# Random Forest
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import auc, roc_auc_score, accuracy_score, f1_score
scores = []
for n in range(1,200,20):
forest = RandomForestClassifier(n_estimators=n)
forest.fit(X_train_scaled, y_train)
y_pred = forest.predict(X_test_scaled)
scores.append(accuracy_score(y_test, y_pred))
#print(score)
plt.plot(range(1,200,20), scores)
plt.title('Random Forest accuracy curve')
plt.xlabel('n_estimators')
plt.ylabel('Accuracy score')
plt.show()
acc_forest = max(scores)
n_forest = list(range(1,200,20))[scores.index(max(scores))]
#
forest = RandomForestClassifier(n_estimators=n_forest)
forest.fit(X_train_scaled, y_train)
y_pred = forest.predict(X_test_scaled)
f1_forest = f1_score(y_test, y_pred, average = 'weighted')
print('Accuracy is {}'.format(acc_forest))
print('Best parameter is n = {}'.format(n_forest))
print('F1 score is {}'.format(f1_forest))
from sklearn.metrics import confusion_matrix
y_predicted = forest.predict(X_test_scaled)
confusion = confusion_matrix(y_test, y_predicted)
df_cm = pd.DataFrame(confusion)
#sns.set(font_scale=1.4)#for label size
plt.figure(figsize = (10,7))
sns.heatmap(df_cm)# font size
from sklearn.ensemble import AdaBoostClassifier
scores = []
for n in [1,200,500,1000]:
boost = AdaBoostClassifier(n_estimators=n)
boost.fit(X_train_scaled, y_train)
y_pred = boost.predict(X_test_scaled)
scores.append(accuracy_score(y_test, y_pred))
#print(score)
plt.plot([1,200,500,1000], scores)
plt.title('AdaBoost accuracy curve')
plt.xlabel('n_estimators')
plt.ylabel('Accuracy score')
plt.show()
acc_boost = max(scores)
n_boost = list(range(1,200,20))[scores.index(max(scores))]
#
boost = AdaBoostClassifier(n_estimators=n_boost)
boost.fit(X_train_scaled, y_train)
y_pred = boost.predict(X_test_scaled)
f1_boost = f1_score(y_test, y_pred, average = 'weighted')
print('Accuracy is {}'.format(acc_boost))
print('Best parameter is n = {}'.format(n_boost))
print('F1 score is {}'.format(f1_boost))
from sklearn.svm import LinearSVC
import numpy as np
scores = []
rng = [1,10,50,70,100]
for c in rng:
l_svc = LinearSVC(C=c)
l_svc.fit(X_train_scaled, y_train)
y_pred = l_svc.predict(X_test_scaled)
scores.append(accuracy_score(y_test, y_pred))
#print(score)
plt.plot(rng, scores)
plt.title('Linear SVC')
plt.xlabel('C')
plt.ylabel('Accuracy score')
plt.show()
acc_svc = max(scores)
c_svc = rng[scores.index(max(scores))]
#
l_svc = LinearSVC(C=c_svc)
l_svc.fit(X_train_scaled, y_train)
y_pred = l_svc.predict(X_test_scaled)
f1_svc = f1_score(y_test, y_pred, average = 'weighted')
print('Accuracy is {}'.format(acc_svc))
print('Best parameter is c = {}'.format(c_svc))
print('F1 score is {}'.format(f1_svc))
l_svc = LinearSVC(C=c_svc)
l_svc.fit(X_train_scaled, y_train)
y_predicted = l_svc.predict(X_test_scaled)
confusion = confusion_matrix(y_test, y_predicted)
df_cm = pd.DataFrame(confusion)
#sns.set(font_scale=1.4)#for label size
plt.figure(figsize = (10,7))
sns.heatmap(df_cm)# font size
from sklearn.linear_model import LogisticRegression
import numpy as np
scores = []
rng = [0.1,1,3,5,10,15]
for c in rng:
lr = LogisticRegression(C=c)
lr.fit(X_train_scaled, y_train)
y_pred = lr.predict(X_test_scaled)
scores.append(accuracy_score(y_test, y_pred))
#print(score)
plt.plot(rng, scores)
plt.title('Logistic Regression')
plt.xlabel('C')
plt.ylabel('Accuracy score')
plt.show()
acc_lr = max(scores)
c_lr = rng[scores.index(max(scores))]
#
lr = LinearSVC(C=c_svc)
lr.fit(X_train_scaled, y_train)
y_pred = lr.predict(X_test_scaled)
f1_lr = f1_score(y_test, y_pred, average = 'weighted')
print('Accuracy is {}'.format(acc_lr))
print('Best parameter is c = {}'.format(c_lr))
print('F1 score is {}'.format(f1_lr))
y_predicted = lr.predict(X_test_scaled)
confusion = confusion_matrix(y_test, y_predicted)
df_cm = pd.DataFrame(confusion)
#sns.set(font_scale=1.4)#for label size
plt.figure(figsize = (10,7))
sns.heatmap(df_cm)# font size
models = pd.DataFrame({
'Model': ['Linear SVC', 'KNN', 'Random Forest', 'AdaBoost',
'Logistic Regression', 'Decision Tree'],
'Score': [acc_svc,acc_knn, acc_forest, acc_boost,
acc_lr, acc_tree],
'F1 Score':[f1_svc, f1_knn, f1_forest, f1_boost,
f1_lr, f1_tree]})
models.sort_values(by='Score', ascending=False)
Explanation: Run different models
Decision Tree
Random Forest
KNN
Linear SVC
Logistic Regression
End of explanation |
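A possible next step, not part of the original notebook, is to tune the strongest model with cross-validation instead of a single train/test split; a minimal sketch for the random forest:
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

param_grid = {'n_estimators': [50, 100, 200], 'max_depth': [None, 10, 20]}
grid = GridSearchCV(RandomForestClassifier(), param_grid, cv=5, scoring='accuracy')
grid.fit(X_train_scaled, y_train)
grid.best_params_, grid.best_score_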
13,626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Lesson
Step2: Project 1
Step3: Transforming Text into Numbers | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory
End of explanation
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
Explanation: Project 1: Quick Theory Validation
End of explanation
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
Explanation: Transforming Text into Numbers
End of explanation |
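The notebook section stops here; a minimal sketch (an assumption about one common approach, not the author's exact implementation) of turning a review into a vector of word counts over a fixed vocabulary:
vocab = list(set(word for review in reviews for word in review.split(" ")))
word2index = {word: i for i, word in enumerate(vocab)}

def review_to_counts(review):
    # one slot per vocabulary word, incremented for every occurrence in the review
    layer_0 = np.zeros(len(vocab))
    for word in review.split(" "):
        layer_0[word2index[word]] += 1
    return layer_0

review_to_counts(reviews[0])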
13,627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note
Step1: First, we import the energy data from the sample CSV and transform it into records
Step2: The records we just created look like this
Step3: The energy trace data looks like this
Step4: Now we load the rest of the project data from the sample project data CSV. This CSV includes the project_id (Which we don't use in this tutorial), the ZIP code of the building, and the dates retrofit work for this project started and completed.
Step5: We create an Intervention from the retrofit start and end dates and wrap it in a list
Step6: Then we create a ZIPCodeSite for the project by passing in the zipcode
Step7: Now we can create a project using the data we've loaded
Step8: Running meters
To run the EEmeter on the project, instantiate an EnergyEfficiencyMeter and run the .evaluate(project) method, passing in the project we just created
Step9: That's it! Now we can inspect and use our results.
Inspecting results
Let's quickly look through the results object so that we can understand what they mean. The results are embedded in a nested python dict
Step10: Now we can select the desired interpretation; four are available.
Step11: The interpretation level results are broken into "BASELINE" and "REPORTING" in all cases in which they are available; otherwise; the value is None.
Step12: These results have two components as well - the type of savings.
Step13: We select the results for one of them
Step14: As described above, each energy value also includes upper and lower bounds, but can also be used directly to determine savings. | Python Code:
# library imports
from eemeter.structures import (
EnergyTrace,
EnergyTraceSet,
Intervention,
ZIPCodeSite,
Project
)
from eemeter.io.serializers import ArbitraryStartSerializer
from eemeter.ee.meter import EnergyEfficiencyMeter
import pandas as pd
import pytz
Explanation: Note:
Most users of the EEmeter stack do not directly use the eemeter
package for loading their data. Instead, they use the datastore,
which uses the eemeter internally. To learn to use the datastore, head
over to the datastore basic usage tutorial.
Data preparation
The basic container for project data is the eemeter.structures.Project
object. This object contains all of the data necessary for running a meter.
There are three items it requires:
An EnergyTraceSet, which is a collection of EnergyTraces
An list of Interventions
An eemeter.structures.ZIPCodeSite
Let's start by creating an EnergyTrace. Internally, EnergyTrace
objects use numpy and
pandas, which are nearly
ubiquitous python packages for efficient numerical computation and
data analysis, respectively.
Since this data is not in a format eemeter recognizes, we need to load it.
Let's load this data using a parser we create to turn this data into a
format that eemeter recognizes.
We will load data from formatted records using an
eemeter.io.serializer.ArbitraryStartSerializer.
End of explanation
energy_data = pd.read_csv('sample-energy-data_project-ABC_zipcode-50321.csv',
parse_dates=['date'], dtype={'zipcode': str})
records = [{
"start": pytz.UTC.localize(row.date.to_datetime()),
"value": row.value,
"estimated": row.estimated,
} for _, row in energy_data.iterrows()]
Explanation: First, we import the energy data from the sample CSV and transform it into records
End of explanation
energy_trace = EnergyTrace(
records=records,
unit="KWH",
interpretation="ELECTRICITY_CONSUMPTION_SUPPLIED",
serializer=ArbitraryStartSerializer())
Explanation: The records we just created look like this:
>>> records
[
{
'estimated': False,
'start': datetime.datetime(2011, 1, 1, 0, 0, tzinfo=<UTC>),
'value': 57.8
},
{
'estimated': False,
'start': datetime.datetime(2011, 1, 2, 0, 0, tzinfo=<UTC>),
'value': 64.8
},
{
'estimated': False,
'start': datetime.datetime(2011, 1, 3, 0, 0, tzinfo=<UTC>),
'value': 49.5
},
...
]
Next, we load our records into an EnergyTrace. We give it units "kWh" and interpretation "ELECTRICITY_CONSUMPTION_SUPPLIED", which means that this is electricity consumed by the building and supplied by a utility (rather than by solar panels or other on-site generation). We also pass in an instance of the record serializer ArbitraryStartSerializer to show it how to interpret the records.
End of explanation
energy_trace_set = EnergyTraceSet([energy_trace], labels=["DEF"])
Explanation: The energy trace data looks like this:
>>> energy_trace.data[:3]
value estimated
2011-01-01 00:00:00+00:00 57.8 False
2011-01-02 00:00:00+00:00 64.8 False
2011-01-03 00:00:00+00:00 49.5 False
Though we only have one trace here, we will often have more than one trace. Because of that, projects expect an EnergyTraceSet, which is a labeled set of EnergyTraces. We give it the trace_id supplied in the CSV.
End of explanation
project_data = pd.read_csv('sample-project-data.csv',
parse_dates=['retrofit_start_date', 'retrofit_end_date']).iloc[0]
Explanation: Now we load the rest of the project data from the sample project data CSV. This CSV includes the project_id (Which we don't use in this tutorial), the ZIP code of the building, and the dates retrofit work for this project started and completed.
End of explanation
retrofit_start_date = pytz.UTC.localize(project_data.retrofit_start_date)
retrofit_end_date = pytz.UTC.localize(project_data.retrofit_end_date)
interventions = [Intervention(retrofit_start_date, retrofit_end_date)]
Explanation: We create an Intervention from the retrofit start and end dates and wrap it in a list:
End of explanation
site = ZIPCodeSite(project_data.zipcode)
Explanation: Then we create a ZIPCodeSite for the project by passing in the zipcode:
End of explanation
project = Project(energy_trace_set=energy_trace_set, interventions=interventions, site=site)
Explanation: Now we can create a project using the data we've loaded
End of explanation
meter = EnergyEfficiencyMeter()
results = meter.evaluate(project)
Explanation: Running meters
To run the EEmeter on the project, instantiate an EnergyEfficiencyMeter and run the .evaluate(project) method, passing in the project we just created:
End of explanation
project_derivatives = results['project_derivatives']
project_derivatives.keys()
modeling_period_set_results = project_derivatives[('baseline', 'reporting')]
Explanation: That's it! Now we can inspect and use our results.
Inspecting results
Let's quickly look through the results object so that we can understand what they mean. The results are embedded in a nested python dict:
>>> results
{
'weather_normal_source': TMY3WeatherSource("725460"),
'weather_source': ISDWeatherSource("725460"),
'modeling_period_set': ModelingPeriodSet(),
'modeled_energy_traces': {
'DEF': SplitModeledEnergyTrace()
},
'modeled_energy_trace_derivatives': {
'DEF': {
('baseline', 'reporting'): {
'BASELINE': {
'annualized_weather_normal': (11051.6, 142.4, 156.4, 365),
'gross_predicted': (31806.3, 251.5, 276.1, 1138)
},
'REPORTING': {
'annualized_weather_normal': (8758.2, 121.9, 137.2, 365),
'gross_predicted': (25208.1, 215.2, 242.3, 1138)
}
}
}
},
'project_derivatives': {
('baseline', 'reporting'): {
'ALL_FUELS_CONSUMPTION_SUPPLIED': {
'BASELINE': {
'annualized_weather_normal': (11051.6, 142.4, 156.4, 365),
'gross_predicted': (31806.3, 251.5, 276.1, 1138)
},
'REPORTING': {
'annualized_weather_normal': (8758.2, 121.9, 137.2, 365),
'gross_predicted': (25208.1, 215.2, 242.3, 1138)
}
},
'ELECTRICITY_CONSUMPTION_SUPPLIED': {
'BASELINE': {
'annualized_weather_normal': (11051.6, 142.4, 156.4, 365),
'gross_predicted': (31806.3, 251.5, 276.1, 1138)
},
'REPORTING': {
'annualized_weather_normal': (8758.2, 121.9, 137.2, 365),
'gross_predicted': (25208.1, 215.2, 242.3, 1138)
}
},
'ELECTRICITY_ON_SITE_GENERATION_UNCONSUMED': None,
'NATURAL_GAS_CONSUMPTION_SUPPLIED': None
}
},
}
Note the contents of the dictionary:
'weather_source': An instance of eemeter.weather.ISDWeatherSource. The weather source used to gather observed weather data. The station at which this weather was recorded can be found by inspecting weather_source.station.(Matched by ZIP code)
'weather_normal_source': An instance of eemeter.weather.TMY3WeatherSource. The weather normal source used to gather weather normal data. The station at which this weather normal data was recorded can be found by inspecting weather_normal_source.station.(Matched by ZIP code)
'modeling_period_set': An instance of eemeter.structures.ModelingPeriodSet. The modeling periods determined by the intervention start and end dates; includes groupings. The default grouping for a single intervention is into two modeling periods called "baseline" and "reporting".
'modeled_energy_traces': SplitModeledEnergyTrace instances keyed by trace_id (as given in the EnergyTraceSet); includes models and fit statistics for each modeling period.
'modeled_energy_trace_derivatives': energy results specific to each
modeled energy trace, organized by trace_id and modeling period group.
'project_derivatives': Project-level results which are aggregated up from the 'modeled_energy_trace_derivatives'.
The project derivatives are nested quite deeply. The nesting of key-value pairs is as follows:
1st layer: Modeling Period Set id: a tuple of 1 baseline period id and 1 reporting period id, usually ('baseline', 'reporting') - contains the results specific to this pair of modeling periods.
2nd layer: Trace interpretation: a string describing the trace interpretation; in our case "ELECTRICITY_CONSUMPTION_SUPPLIED"
3rd layer: 'BASELINE' and 'REPORTING' - these are fixed labels that always appear at this level; they demarcate the baseline aggregations and the reporting aggregations.
4th layer: 'annualized_weather_normal' and 'gross_predicted' - these are also fixed labels that always appear at this level to indicate the type of the savings values
At the final layers are a 4-tuple of results (value, lower, upper, n): value, indicating the estimated expected value of the selected result; lower, a number which can be subtracted from value to obtain the lower 95% confidence interval bound; upper, a number which can be added to value to obtain the upper 95% confidence interval bound, and n, the total number of records that went into calculation of this value.
To obtain savings numbers, the reporting value should be subtracted from the baseline value as described in the methods overview.
Let's select the most useful results from the eemeter, the project-level derivatives. Note the modeling_period_set selector at the first level: ('baseline', 'reporting')
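For orientation, the same leaf value can also be reached in a single expression (a sketch only, equivalent to the step-by-step selections that follow):
results['project_derivatives'][('baseline', 'reporting')]['ELECTRICITY_CONSUMPTION_SUPPLIED']['BASELINE']['annualized_weather_normal']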
End of explanation
modeling_period_set_results.keys()
electricity_consumption_supplied_results = modeling_period_set_results['ELECTRICITY_CONSUMPTION_SUPPLIED']
Explanation: Now we can select the desired interpretation; four are available.
End of explanation
electricity_consumption_supplied_results.keys()
baseline_results = electricity_consumption_supplied_results["BASELINE"]
reporting_results = electricity_consumption_supplied_results["REPORTING"]
Explanation: The interpretation level results are broken into "BASELINE" and "REPORTING" in all cases in which they are available; otherwise, the value is None.
End of explanation
baseline_results.keys()
reporting_results.keys()
Explanation: These results have two components as well - the type of savings.
End of explanation
baseline_normal = baseline_results['annualized_weather_normal']
reporting_normal = reporting_results['annualized_weather_normal']
Explanation: We select the results for one of them:
End of explanation
percent_savings = (baseline_normal[0] - reporting_normal[0]) / baseline_normal[0]
percent_savings
Explanation: As described above, each energy value also includes upper and lower bounds, but can also be used directly to determine savings.
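If you also want rough uncertainty bounds on the absolute savings, a minimal sketch is shown below; it assumes the baseline and reporting errors can be combined in quadrature (a simplification), which is not prescribed by the eemeter itself.
# sketch: absolute annualized savings with rough error propagation
# assumes independent baseline/reporting errors (a simplification)
baseline_value, baseline_lower, baseline_upper, baseline_n = baseline_normal
reporting_value, reporting_lower, reporting_upper, reporting_n = reporting_normal
absolute_savings = baseline_value - reporting_value
savings_lower = (baseline_lower**2 + reporting_lower**2) ** 0.5
savings_upper = (baseline_upper**2 + reporting_upper**2) ** 0.5
print(absolute_savings, savings_lower, savings_upper)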
End of explanation |
13,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building an ML App
Now that we have a machine learning model to predict the defaults, let us try to build a web application to lend loans.
It'll have two parts
Step1: Let us run it as a service using firefly by running the following command in your terminal.
$ firefly sq.square
[2017-07-13 09
Step2: The function will be available with the same name in the client. Please note that the client functions take parameters only by name.
If you want to run on a different port, you can specify that as an argument.
$ firefly -b 0.0.0.0
Step5: Credit Grade Service
Banks will have access to the credit grade of each customer. Since we don't have real data, let us build a simple mock credit grade service.
It'll take the email address of the person and give a grade at random.
Step6: Deploy it as a service using Firefly.
firefly credit_grade.find_credit_grade
Step9: Deploying the ML model
To deploy the machine learning model as a service, we need to read the model and all the encodings that we have used in building the model.
Step10: Run it as a service using firefly, again from your terminal. Let us use port 9000 now as port 8000 is used by the credit grade service.
$ firefly -b 127.0.0.1 | Python Code:
%%file sq.py
def square(n):
return n*n
Explanation: Building an ML App
Now that we have a machine learning model to predict the defaults, let us try to build a web application to lend loans.
It'll have two parts:
a form to submit the loans
admin panel to look at the submitted loans and their probability of defaults
The source code for the ML app is available in the github repo in credit-risk/webap folder.
It has all the moving parts except integration with the model.
Start the application using:
python webapp.py
ML as a Service
While we can package the model with the webapp and use it, that creates tight coupling between the two. Every time the model changes, the webapp will have to change. And what if more than one application uses the same model?
It is a lot simpler to deploy the ML model as a service, exposing its functionality through an HTTP API.
In this tutorial we are going to use a tool called firefly for running the model as a service.
Introduction to Firefly
Firefly makes it very easy to deploy functions as a service without having to worry about writing a web app, managing request/response formats etc. and also provides a very simple client interface.
The detailed documentation of Firefly is available at:
http://firefly-python.readthedocs.io/
Let's try a simple example. Here we're creating a file sq.py with a square function. We'll see how to deploy it as a service and use it in other programs.
End of explanation
import firefly
remote_sq = firefly.Client("http://127.0.0.1:8000")
remote_sq.square(n=4)
Explanation: Let us run it as a service using firefly by running the following command in your terminal.
$ firefly sq.square
[2017-07-13 09:48:07 +0200] [5001] [INFO] Starting gunicorn 19.7.1
[2017-07-13 09:48:07 +0200] [5001] [INFO] Listening at: http://127.0.0.1:8000 (5001)
It takes the <module_name>.<function_name> as an argument and exposes the function as an API. The arguments the function takes and the return value must be JSON-friendly for it to work.
Once that is running, we can try to access it using the firefly client.
End of explanation
%%file add.py
# your code here
Explanation: The function will be available with the same name in the client. Please note that the client functions take parameters only by name.
If you want to run on a different port, you can specify that as an argument.
$ firefly -b 0.0.0.0:9000 sq.square
For more help on the available command-line options, try:
$ firefly --help
Problem: Write a function add_numbers in a file add.py and deploy it as a service using Firefly. Once that is ready, try to use it in another program using the Firefly Client.
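One possible solution to this exercise (a sketch only; the service would be started with firefly add.add_numbers, mirroring the sq.square example above):
# add.py
def add_numbers(a, b):
    return a + b
# and from a client program, once the service is running on port 8000:
import firefly
add_api = firefly.Client("http://127.0.0.1:8000")
add_api.add_numbers(a=3, b=4)   # returns 7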
End of explanation
%%file credit_grade.py
"""Program to find the credit grade of a person."""
import zlib
import random
def find_credit_grade(email):
"""Returns the credit grade of the person identified by the given email address.
The credit grade can be either A, B, C, D, E, F or G."""
# since we need to give the same grade everytime the function is called
# with the same email. Using the checksum of the string as random seed
# to get the same result everytime when used with the same email.
seed = zlib.adler32(email.encode("utf-8"))
r = random.Random(seed)
return r.choice(["A", "B", "C", "D", "E", "F", "G"])
Explanation: Credit Grade Service
Banks will have access to the credit grade of each customer. Since we don't have real data, let us build a simple mock credit grade service.
It'll take the email address of the person and give a grade at random.
End of explanation
credit_grade_api = firefly.Client("http://127.0.0.1:8000/")
credit_grade_api.find_credit_grade(email="[email protected]")
Explanation: Deploy it as a service using Firefly.
firefly credit_grade.find_credit_grade
End of explanation
%%file credit_risk_service.py
"""Service to expose the credit risk model as an API."""
from sklearn.externals import joblib
# read the encoders and the model
grade_encoder = joblib.load("../notebooks/le_grade.pkl")
ownership_encoder = joblib.load("../notebooks/le_ownership.pkl")
model = joblib.load("../notebooks/model.pkl")
def predict(amount, years, age, ownership, income, grade):
"""Returns the probability of default for given features."""
# encoders work on a vector. Wrapping in a list as we only have a single value
ownership_code = ownership_encoder.transform([ownership])[0]
grade_code = grade_encoder.transform([grade])[0]
# important to pass the features in the same order as we built the model
features = [amount, grade_code, years, ownership_code, income, age]
# probability for not-defaulting and defaulting
# Again, wrapping in a list as a list of features is expected
p0, p1 = model.predict_proba([features])[0]
return p1
Explanation: Deploying the ML model
To deploy the machine learning model as a service, we need to read the model and all the encodings that we have used in building the model.
End of explanation
import firefly
credit_risk_api = firefly.Client("http://127.0.0.1:9000")
credit_risk_api.predict(amount=10000,
years=2,
age=35,
ownership='RENT',
income=12345,
grade='A')
Explanation: Run it as a service using firefly, again from your terminal. Let us use port 9000 now as port 8000 is used by the credit grade service.
$ firefly -b 127.0.0.1:9000 credit_risk_service.predict
Now let us predict.
End of explanation |
13,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
From NumPy to Leaflet
This notebook shows how to display some raster geographic data in IPyLeaflet. The data is a NumPy array, which means that you have all the power of the Python scientific stack at your disposal to process it.
The following libraries are needed
Step1: Download a raster file representing the flow accumulation for South America. This gives an idea of the river network.
Step2: We transform the data a bit so that rivers appear thicker.
Step3: The original data is in the WGS 84 projection, but Leaflet uses Web Mercator, so we need to reproject.
Step4: Let's convert our NumPy array to an image. For that we must specify a colormap (here plt.cm.jet).
Step5: The image is embedded in the URL as a PNG file, so that it can be sent to the browser.
Step6: Finally we can overlay our image and if everything went fine it should be exactly over South America.
Step7: You can play with the opacity slider and check that rivers from our data file match the rivers on OpenStreetMap. | Python Code:
import requests
import os
from tqdm import tqdm
import zipfile
import rasterio
from affine import Affine
import numpy as np
import scipy.ndimage
from rasterio.warp import reproject, Resampling
import PIL
import matplotlib.pyplot as plt
from base64 import b64encode
try:
from StringIO import StringIO
py3 = False
except ImportError:
from io import StringIO, BytesIO
py3 = True
from ipyleaflet import Map, ImageOverlay, basemap_to_tiles, basemaps
Explanation: From NumPy to Leaflet
This notebook shows how to display some raster geographic data in IPyLeaflet. The data is a NumPy array, which means that you have all the power of the Python scientific stack at your disposal to process it.
The following libraries are needed:
* requests
* tqdm
* rasterio
* numpy
* scipy
* pillow
* matplotlib
* ipyleaflet
The recommended way is to try to conda install them first, and if they are not found then pip install.
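For example, one possible way to install everything (the channel choice is a suggestion, not a requirement):
$ conda install -c conda-forge requests tqdm rasterio numpy scipy pillow matplotlib ipyleaflet
$ pip install ipyleaflet   # fallback for anything conda cannot find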
End of explanation
url = 'https://edcintl.cr.usgs.gov/downloads/sciweb1/shared/hydrosheds/sa_30s_zip_grid/sa_acc_30s_grid.zip'
filename = os.path.basename(url)
name = filename[:filename.find('_grid')]
adffile = name + '/' + name + '/w001001.adf'
if not os.path.exists(adffile):
r = requests.get(url, stream=True)
with open(filename, 'wb') as f:
total_length = int(r.headers.get('content-length'))
for chunk in tqdm(r.iter_content(chunk_size=1024), total=(total_length/1024) + 1):
if chunk:
f.write(chunk)
f.flush()
zip = zipfile.ZipFile(filename)
zip.extractall('.')
Explanation: Download a raster file representing the flow accumulation for South America. This gives an idea of the river network.
End of explanation
dataset = rasterio.open(adffile)
acc_orig = dataset.read()[0]
acc = np.where(acc_orig<0, 0, acc_orig)
shrink = 1 # if you are out of RAM try increasing this number (should be a power of 2)
radius = 5 # you can play with this number to change the width of the rivers
circle = np.zeros((2*radius+1, 2*radius+1)).astype('uint8')
y, x = np.ogrid[-radius:radius+1,-radius:radius+1]
index = x**2 + y**2 <= radius**2
circle[index] = 1
acc = np.sqrt(acc)
acc = scipy.ndimage.maximum_filter(acc, footprint=circle)
acc[acc_orig<0] = np.nan
acc = acc[::shrink, ::shrink]
Explanation: We transform the data a bit so that rivers appear thicker.
End of explanation
# At this point if GDAL complains about not being able to open EPSG support file gcs.csv, try in the terminal:
# export GDAL_DATA=`gdal-config --datadir`
with rasterio.Env():
rows, cols = acc.shape
src_transform = list(dataset.transform)
src_transform[0] *= shrink
src_transform[4] *= shrink
src_transform = Affine(*src_transform[:6])
src_crs = {'init': 'EPSG:4326'}
source = acc
dst_crs = {'init': 'EPSG:3857'}
dst_transform, width, height = rasterio.warp.calculate_default_transform(src_crs, dst_crs, cols, rows, *dataset.bounds)
dst_shape = height, width
destination = np.zeros(dst_shape)
reproject(
source,
destination,
src_transform=src_transform,
src_crs=src_crs,
dst_transform=dst_transform,
dst_crs=dst_crs,
resampling=Resampling.nearest)
acc_web = destination
Explanation: The original data is in the WGS 84 projection, but Leaflet uses Web Mercator, so we need to reproject.
End of explanation
acc_norm = acc_web - np.nanmin(acc_web)
acc_norm = acc_norm / np.nanmax(acc_norm)
acc_norm = np.where(np.isfinite(acc_web), acc_norm, 0)
acc_im = PIL.Image.fromarray(np.uint8(plt.cm.jet(acc_norm)*255))
acc_mask = np.where(np.isfinite(acc_web), 255, 0)
mask = PIL.Image.fromarray(np.uint8(acc_mask), mode='L')
im = PIL.Image.new('RGBA', acc_norm.shape[::-1], color=None)
im.paste(acc_im, mask=mask)
Explanation: Let's convert our NumPy array to an image. For that we must specify a colormap (here plt.cm.jet).
End of explanation
if py3:
f = BytesIO()
else:
f = StringIO()
im.save(f, 'png')
data = b64encode(f.getvalue())
if py3:
data = data.decode('ascii')
imgurl = 'data:image/png;base64,' + data
Explanation: The image is embedded in the URL as a PNG file, so that it can be sent to the browser.
End of explanation
b = dataset.bounds
bounds = [(b.bottom, b.left), (b.top, b.right)]
io = ImageOverlay(url=imgurl, bounds=bounds)
center = [-10, -60]
zoom = 2
m = Map(center=center, zoom=zoom, interpolation='nearest')
m
tile = basemap_to_tiles(basemaps.Esri.WorldStreetMap)
m.add_layer(tile)
Explanation: Finally we can overlay our image and if everything went fine it should be exactly over South America.
End of explanation
m.add_layer(io)
io.interact(opacity=(0.0,1.0,0.01))
Explanation: You can play with the opacity slider and check that rivers from our data file match the rivers on OpenStreetMap.
End of explanation |
13,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Let's begin by querying our newly transformed vcf table, containing variants for the human chromosome 21.
For instructions on the BigQuery transformation see
Step2: Colaboratory notebooks allow us to leverage the power of Python and use pandas to create & save queries to datatables.
Step3: We can filter variants so that we have variants with only a single alternate base. The alternate base is a RECORD type, which means it could have a list of variants. Here we use the ARRAY_LENGTH keyword
Step4: Then, further narrow it down to SNPs (a single nucleotide change).
Step5: What are the five most frequent mutations on chromosome 21?
Step6: Now, let's jump into a specific example. The gene DYRK1A on chromosome 21 has been shown to contribute to the development of leukemia. Researchers are studying it as a potential theraputic target. DYRK1A resides on chromosome 21 from position 37365790 to 37517450. Let's explore variants in this gene.
Step7: And how many samples do we have?
Step8: We have 2,535 samples and the total number of genomic positions mapping to the DYRK1A gene is 4,485. How many variants are there in this gene per sample?
To answer this question, we'll need to start working with records. To do that, we'll build up to answering this question through a series of queries.
Our first query will flatten the call record into three columns, the call.name, and both genotype calls (one for each chromosome), where a zero is the reference call, and an alternate call otherwise. You can see most of the listed entries are actually just homozygous refs.
Step9: In the above result, we have 11369475 rows, which (as hoped for) is the product of 2535 * 4485!
What we need now, is to match up the sample IDs, and genomic positions.
Here's an example of what we're looking for
Step10: OK we should ready to answer the question
Step11: In the above result, we have the expected 2,535 number of rows (for each sample). The number of variants per sample ranges from 42 to 401. Let's make a histogram.
Step12: So we can see most samples have around 300 variants in this gene.
How many variants within DYRK1A gene for a particular sample that are shared by no other samples. These variants are called private variants.
One way to do this would be to filter out variant positions that have a single sample ID associated with it.
Step13: OK! The above result shows 1852 private variants out of 11,369,475 variants in our table.
Finally, let's do some statistics! Let's do Z-scores. Now we could just work with the pandas table we already have, but for the example, let's do it in BigQuery.
Step14: So, the above query calculated Z-scores for a single gene. Let's make another query where we construct a series of bins, one bin per million bases, and compute a Z-score per sample, per bin.
This is adapted from the Google tutorial (https
Step15: But what about the case when you have mulple alleles?
To work with that, we need to select the right alternate allele using the genotype calls. | Python Code:
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
Explanation: This notebook is also available on Google Colab: https://colab.research.google.com/github/isb-cgc/examples-Python/blob/master/ISB_CGC_Query_of_the_Month_November_2018.ipynb
ISB-CGC Query of the Month, November 2018
Kawther Abdilleh, David L Gibbs
For more Queries: https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/QueryOfTheMonthClub.html
Part of the ISB-CGC: http://www.isb-cgc.org
First we need to get authenticated. If you don't have a Google Cloud project, see https://cloud.google.com/dataproc/docs/guides/setup-project. There's $300 free credit available for new accounts!
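The pd.io.gbq.read_gbq cells later in this notebook also assume that pandas has been imported and that a project_id string exists; a minimal setup (the project id below is a placeholder you must replace) would be:
import pandas as pd
project_id = 'YOUR-PROJECT-ID'  # replace with your own Google Cloud project id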
End of explanation
%%bigquery --project YOUR-PROJECT-ID df
select
reference_name,
start_position,
end_position
from
`isb-cgc.QotM.1000genomes`
limit
10
Explanation: Let's begin by querying our newly transformed vcf table, containing variants for the human chromosome 21.
For instructions on the BigQuery transformation see: https://isb-cancer-genomics-cloud.readthedocs.io/en/latest/sections/QueryOfTheMonthClub.html
This first query demonstrates the 'magic' %%bigquery command.
End of explanation
df = pd.io.gbq.read_gbq('''
select
reference_name as chr,
start_position,
end_position
from
`isb-cgc.QotM.1000genomes`
limit
10
''', project_id=project_id, verbose=False, dialect='standard')
df.head()
Explanation: Colaboratory notebooks allow us to leverage the power of Python and use pandas to create & save queries to datatables.
End of explanation
df = pd.io.gbq.read_gbq('''
#standardsql
SELECT
start_position,
reference_name,
reference_bases AS original,
alternate_bases[ORDINAL(1)].alt AS alt
FROM
`isb-cgc.QotM.1000genomes` AS v
WHERE
ARRAY_LENGTH(alternate_bases) = 1
LIMIT 10
'''
, project_id=project_id, verbose=False, dialect='standard')
df.head()
Explanation: We can filter variants so that we have variants with only a single alternate base. The alternate base is a RECORD type, which means it could have a list of variants. Here we use the ARRAY_LENGTH keyword
End of explanation
df = pd.io.gbq.read_gbq('''
SELECT
start_position,
reference_name,
reference_bases AS original,
alternate_bases[ORDINAL(1)].alt AS changed
FROM
`isb-cgc.QotM.1000genomes` AS v
WHERE
ARRAY_LENGTH(alternate_bases) = 1
AND alternate_bases[ORDINAL(1)].alt IN ('A','C','G','T')
ORDER BY start_position
LIMIT 10
''', project_id=project_id, verbose=False, dialect='standard')
df.head()
Explanation: Then, further narrow it down to SNPs (a single nucleotide change).
End of explanation
df = pd.io.gbq.read_gbq('''
#standardsql
WITH
table1 AS (
SELECT
start_position,
reference_name,
CONCAT( reference_bases, '->', alternate_bases[ORDINAL(1)].alt) AS mutation
FROM
`isb-cgc.QotM.1000genomes` AS v
WHERE
ARRAY_LENGTH(alternate_bases) = 1
AND alternate_bases[ORDINAL(1)].alt IN ('A','C','G','T')
)
SELECT
mutation,
COUNT(mutation) AS num_mutations
FROM
table1
GROUP BY mutation
ORDER BY num_mutations DESC
LIMIT 5
''', project_id=project_id, verbose=False, dialect='standard')
df.head()
Explanation: What are the five most frequent mutations on chromosome 21?
End of explanation
df = pd.io.gbq.read_gbq('''
#standardsql
SELECT
COUNT(reference_name) AS num_variants
FROM
`isb-cgc.QotM.1000genomes` AS v
WHERE
reference_name = '21'
AND start_position BETWEEN 37365790
AND 37517450
''', project_id=project_id, verbose=False, dialect='standard')
df.head()
Explanation: Now, let's jump into a specific example. The gene DYRK1A on chromosome 21 has been shown to contribute to the development of leukemia. Researchers are studying it as a potential therapeutic target. DYRK1A resides on chromosome 21 from position 37365790 to 37517450. Let's explore variants in this gene.
End of explanation
df = pd.io.gbq.read_gbq('''
SELECT
COUNT(DISTINCT(call.name)) as num_samples
FROM
`isb-cgc.QotM.1000genomes`
JOIN
UNNEST(call) AS call
WHERE
reference_name = '21'
AND (start_position BETWEEN 37365790 AND 37517450)
''', project_id=project_id, verbose=False, dialect='standard')
df.head()
Explanation: And how many samples do we have?
End of explanation
df8 = pd.io.gbq.read_gbq('''
SELECT
call.name,
call.genotype[OFFSET(0)] g1,
call.genotype[OFFSET(1)] g2
FROM
`isb-cgc.QotM.1000genomes`
JOIN
UNNEST(call) AS call WITH OFFSET AS ci
WHERE
reference_name = '21'
AND start_position BETWEEN 37365790
AND 37517450
LIMIT 10''', project_id=project_id, verbose=False, dialect='standard')
df8.head()
Explanation: We have 2,535 samples and the total number of genomic positions mapping to the DYRK1A gene is 4,485. How many variants are there in this gene per sample?
To answer this question, we'll need to start working with records. To do that, we'll build up to answering this question through a series of queries.
Our first query will flatten the call record into three columns, the call.name, and both genotype calls (one for each chromosome), where a zero is the reference call, and an alternate call otherwise. You can see most of the listed entries are actually just homozygous refs.
End of explanation
df9 = pd.io.gbq.read_gbq('''
WITH
t1 as (
SELECT
reference_name,
start_position,
call.name,
call.genotype[OFFSET(0)] g1,
call.genotype[OFFSET(1)] g2
FROM
`isb-cgc.QotM.1000genomes`
JOIN
UNNEST(call) AS call WITH OFFSET AS ci
WHERE
reference_name = '21'
AND start_position BETWEEN 37365790
AND 37517450
)
select * from t1 where name = 'HG00096' and start_position = 37424669
''', project_id=project_id, verbose=False, dialect='standard')
df9.head()
Explanation: In the above result, we have 11369475 rows, which (as hoped for) is the product of 2535 * 4485!
What we need now, is to match up the sample IDs, and genomic positions.
Here's an example of what we're looking for:
sample chr pos mut g1 g2
HG00096 21 37424669 C->A 0 1
See: http://www.internationalgenome.org/data-portal/sample/HG00096
Let's write a query to find it.
End of explanation
df10 = pd.io.gbq.read_gbq('''
WITH
t1 AS (
SELECT
reference_name,
start_position,
call.name as sample,
call.genotype[OFFSET(0)] g1,
call.genotype[OFFSET(1)] g2
FROM
`isb-cgc.QotM.1000genomes`
JOIN
UNNEST(call) AS call WITH OFFSET AS ci
WHERE
reference_name = '21'
AND (start_position BETWEEN 37365790 AND 37517450)
),
t2 AS (
SELECT
sample,
COUNT(sample) as N
FROM
t1
WHERE
g1 = 1 OR g2 = 1
GROUP BY
sample
)
select * from t2 GROUP BY N, sample ORDER BY N
''', project_id=project_id, verbose=False, dialect='standard')
df10.head()
Explanation: OK, we should be ready to answer the question: how many variants per sample are there in this gene?
End of explanation
df10.hist()
Explanation: In the above result, we have the expected 2,535 number of rows (for each sample). The number of variants per sample ranges from 42 to 401. Let's make a histogram.
End of explanation
df10z = pd.io.gbq.read_gbq('''
WITH
t1 AS (
SELECT
reference_name,
start_position,
call.name,
call.genotype[OFFSET(0)] g1,
call.genotype[OFFSET(1)] g2
FROM
`isb-cgc.QotM.1000genomes`
JOIN
UNNEST(call) AS call WITH OFFSET AS ci
WHERE
reference_name = '21'
AND (start_position BETWEEN 37365790 AND 37517450)
),
t2 AS (
SELECT
start_position,
COUNT(start_position) as N
FROM
t1
WHERE
(g1 = 1 OR g2 = 1)
GROUP BY
start_position
)
select COUNT(*) private_vars from t2 WHERE N = 1
''', project_id=project_id, verbose=False, dialect='standard')
df10z.head()
Explanation: So we can see most samples have around 300 variants in this gene.
How many variants within the DYRK1A gene for a particular sample are shared by no other samples? These variants are called private variants.
One way to do this would be to filter out variant positions that have a single sample ID associated with it.
End of explanation
df11 = pd.io.gbq.read_gbq('''
WITH
t1 AS (
SELECT
reference_name,
start_position,
call.name as sample,
call.genotype[OFFSET(0)] g1,
call.genotype[OFFSET(1)] g2
FROM
`isb-cgc.QotM.1000genomes`
JOIN
UNNEST(call) AS call WITH OFFSET AS ci
WHERE
reference_name = '21'
AND (start_position BETWEEN 37365790 AND 37517450)
),
t2 AS (
SELECT
sample,
COUNT(sample) as N
FROM
t1
WHERE
g1 = 1 OR g2 = 1
GROUP BY
sample
),
t3 AS (
SELECT
AVG(N) avgn,
STDDEV(N) stddevn
FROM
t2
),
t4 AS (
SELECT
sample,
N,
avgn,
stddevn,
(N - avgn) / stddevn as Z_score
FROM
t2 CROSS JOIN t3
)
select * from t4
''', project_id=project_id, verbose=False, dialect='standard')
df11.head()
df11.hist()
Explanation: OK! The above result shows 1852 private variants out of 11,369,475 variants in our table.
Finally, let's do some statistics! Let's do Z-scores. Now we could just work with the pandas table we already have, but for the example, let's do it in BigQuery.
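As that sentence suggests, the same per-gene Z-scores could also be computed directly on the pandas result holding the per-sample counts (df10); a quick sketch:
df10['Z_score'] = (df10['N'] - df10['N'].mean()) / df10['N'].std()
df10.head()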
End of explanation
df13 = pd.io.gbq.read_gbq('''
#standardsql
WITH ind AS (
-- count variants for each sample/ref/bin
SELECT
call.name AS sample,
reference_name AS ref,
FLOOR(start_position/1000000) AS bin,
COUNT(call.name) AS n
FROM `isb-cgc.QotM.1000genomes`
JOIN UNNEST(call) AS call
JOIN UNNEST(alternate_bases) AS alt
WHERE alt.alt != '<*>'
AND (call.genotype[OFFSET(0)] = 1 OR call.genotype[OFFSET(1)] = 1)
GROUP BY sample, ref, bin
),
pop AS (
-- overall all samples in ref/bin
SELECT
ref,
bin,
AVG(n) AS pop_mu,
STDDEV(n) AS pop_sigma
FROM ind
GROUP BY ref, bin
),
zscore AS (
SELECT
ind.sample,
ind.n AS ind_n,
(ind.n-pop.pop_mu)/pop.pop_sigma AS z,
pop.ref,
pop.bin,
pop.pop_mu,
pop.pop_sigma
FROM pop, ind
WHERE ind.ref = pop.ref AND ind.bin = pop.bin
)
SELECT * from zscore
ORDER BY ABS(Z) DESC
''', project_id=project_id, verbose=False, dialect='standard')
df13.head()
import matplotlib
matplotlib.rcParams['figure.figsize'] = [6, 8]
df13.hist()
Explanation: So, the above query calculated Z-scores for a single gene. Let's make another query where we construct a series of bins, one bin per million bases, and compute a Z-score per sample, per bin.
This is adapted from the Google tutorial (https://codelabs.developers.google.com/codelabs/genomics-vcfbq/index.html?index=..%2F..index#0
).
End of explanation
df14 = pd.io.gbq.read_gbq('''
SELECT
reference_name as chr,
start_position,
reference_bases ,
--
-- if the genotype is 0, it's a ref call
-- else use the genotype call to index the alternative base
--
IF( (call.genotype[OFFSET(0)] = 0),
reference_bases,
alternate_bases[OFFSET(call.genotype[OFFSET(0)] - 1)].alt ) as alt1,
--
--
IF( (call.genotype[OFFSET(1)] = 0),
reference_bases,
alternate_bases[OFFSET(call.genotype[OFFSET(1)] - 1)].alt ) AS alt2,
--
-- then we're still unnesting a single column.
--
call.name,
call.genotype[OFFSET(0)] AS g1,
call.genotype[OFFSET(1)] AS g2
FROM
`isb-cgc.QotM.1000genomes`
JOIN
UNNEST(call) as call
WHERE
reference_name = '21'
AND call.name = 'HG00119'
AND start_position = 34434667
''', project_id=project_id, verbose=False, dialect='standard'
)
df14.head()
Explanation: But what about the case when you have multiple alleles?
To work with that, we need to select the right alternate allele using the genotype calls.
End of explanation |
13,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
1. A brief tutorial for the WormBase Enrichment Suite, Python interface — 1.1 Loading the required libraries; 1.2 Loading your gene list and fetching the dictionaries; 1.3 Analyzing your gene list; 1.4 Plotting the results
# A brief tutorial for the WormBase Enrichment Suite, Python interface
## Loading the required libraries
Step1: Loading your gene list and fetching the dictionaries
Step2: Analyzing your gene list
Step3: Plotting the results | Python Code:
# this first cell imports the libraries we typically use for data science in Python
import pandas as pd
import numpy as np
# this is the WormBase Enrichment Suite module (previously just TEA)
import tissue_enrichment_analysis as ea
# plotting libraries
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# This enables SVG graphics inline.
%config InlineBackend.figure_formats = {'png', 'retina'}
Explanation: Table of Contents
1. A brief tutorial for the WormBase Enrichment Suite, Python interface — 1.1 Loading the required libraries; 1.2 Loading your gene list and fetching the dictionaries; 1.3 Analyzing your gene list; 1.4 Plotting the results
# A brief tutorial for the WormBase Enrichment Suite, Python interface
## Loading the required libraries
End of explanation
# load your DE genes (in WBID format) to a pandas dataframe or to a list
df = pd.read_csv('EVN_wbids.csv')
# fetch the dictionaries using the fetch_dictionary function:
tissue = ea.fetch_dictionary('tissue')
phenotype = ea.fetch_dictionary('phenotype')
go = ea.fetch_dictionary('go')
Explanation: Loading your gene list and fetching the dictionaries
End of explanation
# place the dictionaries into a hash
frames = {'tissue': tissue, 'phenotype': phenotype, 'go': go}
# test the list of genes against each dictionary and store the
# results in a hash called results
# NOTE: The enrichment_analysis function only returns Stat. Sig. Results!
result = {}
for analysis, dictionary in frames.items():
result[analysis] = ea.enrichment_analysis(df.gene_name, dictionary, show=False, alpha=10**-1)
Explanation: Analyzing your gene list
End of explanation
# make the figure in the paper:
fig, ax = plt.subplots(nrows=3, figsize=(8, 10))
i= 0
# go through the results hash:
for t, r in result.items():
# calculate the negative log of the Q-values and store
r['logQ'] = -r['Q value'].apply(np.log10)
# remove np.infinites with np.nan
r.logQ.replace([np.inf], np.nan, inplace=True)
# remove np.nan with 70 (after 10**-64, the hypergeometric function crashes and returns 0)
r.logQ.replace(np.nan, 70, inplace=True)
# call the plotting function in the Enrichment Suite to plot the results
ea.plot_enrichment_results(r, title=t, analysis=t, ax=ax[i], y='logQ', n_bars=10)
# prettify axes
ax[i].set_ylabel(t)
if i != 2:
ax[i].set_xlabel('')
else:
ax[i].set_xlabel(r'$-\log_{10}{Q}$')
i += 1
# save figure
plt.savefig('Enrichment_Results.svg', bbox_inches='tight')
Explanation: Plotting the results
End of explanation |
13,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Remote WMI Wbemcomn DLL Hijack
Metadata
| | |
|
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Look for non-system accounts SMB accessing a C
Step3: Analytic II
Look for C
Step4: Analytic III
Look for C
Step5: Analytic IV
Look for C | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: Remote WMI Wbemcomn DLL Hijack
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2020/10/09 |
| modification date | 2020/10/09 |
| playbook related | ['WIN-201012004336'] |
Hypothesis
Threat actors might be copying files remotely to abuse a DLL hijack opportunity found on the WMI provider host (wmiprvse.exe).
Technical Context
Windows Management Instrumentation (WMI) is the Microsoft implementation of Web-Based Enterprise Management (WBEM), which is an industry initiative to develop a standard technology for accessing management information in an enterprise environment. WMI uses the Common Information Model (CIM) industry standard to represent systems, applications, networks, devices, and other managed components.
WMI resides in a shared service host with several other services. To avoid stopping all the services when a provider fails, providers are loaded into a separate host process named "Wmiprvse.exe". More than one process with this name can be running.
The shared host can run under one of the following system accounts in a Wmiprvse.exe host process:
* LocalSystem
* NetworkService
* LocalService
When wmiprvse.exe handles a network connection, it runs under the NETWORK SERVICE account. A Threat actor could try to run code as a Network Service user leveraging the WMI provider host process.
Offensive Tradecraft
A threat actor could use a known DLL hijack vulnerability on the execution of wmiprvse.exe to accomplish code execution as a NETWORK SERVICE account. One way to perform a DLL hijack on the WMI provider host is via the wbemcomn DLL.
When wmiprvse.exe triggers, it looks for wbemcomn.dll in the C:\Windows\System32\wbem\ directory. That DLL does not exist in that folder. Therefore, a threat actor could easily copy its own DLL in that folder and execute it with the WMI provider host.
When the malicious DLL is loaded, there are various approaches to hijacking execution, but most likely a threat actor would want the DLL to act as a proxy to the real DLL to minimize the chances of interrupting normal operations.
One way to do this is by cloning the export table from one DLL to another one. One known tool that can help with it is Koppeling.
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/08_lateral_movement/SDWIN-201009173318.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/covenant_wmi_wbemcomn_dll_hijack.zip |
Analytics
Initialize Analytics Engine
End of explanation
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/covenant_wmi_wbemcomn_dll_hijack.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
Explanation: Download & Process Mordor Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 5145
AND RelativeTargetName LIKE '%wbem\\\wbemcomn.dll'
AND NOT SubjectUserName LIKE '%$'
AND AccessMask = '0x2'
'''
)
df.show(10,False)
Explanation: Analytic I
Look for non-system accounts SMB accessing a C:\Windows\System32\wbem\wbemcomn.dll with write (0x2) access mask via an administrative share (i.e C$).
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM mordorTable b
INNER JOIN (
SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\'))[0]) as TargetFilename
FROM mordorTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND Image = 'System'
AND EventID = 11
AND TargetFilename LIKE '%wbem\\\wbemcomn.dll'
) a
ON LOWER(REVERSE(SPLIT(RelativeTargetName, '\'))[0]) = a.TargetFilename
WHERE LOWER(b.Channel) = 'security'
AND b.EventID = 5145
AND b.AccessMask = '0x2'
'''
)
df.show(10,False)
Explanation: Analytic II
Look for C:\Windows\System32\wbem\wbemcomn.dll being accessed over the network with write (0x2) access mask via an administrative share (i.e C$) and created by the System process on the target system.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |
| File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM mordorTable b
INNER JOIN (
SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\'))[0]) as TargetFilename
FROM mordorTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND Image = 'System'
AND EventID = 11
AND TargetFilename LIKE '%wbem\\\wbemcomn.dll'
) a
ON LOWER(REVERSE(SPLIT(RelativeTargetName, '\'))[0]) = a.TargetFilename
WHERE LOWER(b.Channel) = 'security'
AND b.EventID = 5145
AND b.AccessMask = '0x2'
'''
)
df.show(10,False)
Explanation: Analytic III
Look for C:\Windows\System32\wbem\wbemcomn.dll being accessed over the network with write (0x2) access mask via an administrative share (i.e C$) and created by the System process on the target system.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |
| File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM mordorTable d
INNER JOIN (
SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\'))[0]) as TargetFilename
FROM mordorTable b
INNER JOIN (
SELECT ImageLoaded
FROM mordorTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND EventID = 7
AND LOWER(Image) LIKE '%wmiprvse.exe'
AND ImageLoaded LIKE '%wbem\\\wbemcomn.dll'
) a
ON b.TargetFilename = a.ImageLoaded
WHERE b.Channel = 'Microsoft-Windows-Sysmon/Operational'
AND b.Image = 'System'
AND b.EventID = 11
) c
ON LOWER(REVERSE(SPLIT(RelativeTargetName, '\'))[0]) = c.TargetFilename
WHERE LOWER(d.Channel) = 'security'
AND d.EventID = 5145
AND d.AccessMask = '0x2'
'''
)
df.show(10,False)
Explanation: Analytic IV
Look for C:\Windows\System32\wbem\wbemcomn.dll being accessed over the network with write (0x2) access mask via an administrative share (i.e C$), created by the System process and loaded by the WMI provider host (wmiprvse.exe). All happening on the target system.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |
| File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 |
| File | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
End of explanation |
13,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 4- Decision Trees
This assignment uses 2012 data obtained from the Federal Election Commission on contributions to candidates from committees. The data dictionary is available at http
Step1: Calculating Gini Index
Question 1
Step2: Question 2
Step3: Best Split of a Numeric Feature
Step4: Question 3
Step5: Question 4
Step6: Question 5
Step7: Question 6
Step8: Question 7
Step9: Best Split of a Categorial Variable
Step10: Question 8
Step11: Question 9
Step12: Question 10
Step13: Question 11
Step14: Question 12
Step16: In this exercise, you will be partitioning the original dataset (as opposed to further partitioning the transaction amount partitions from the previous set of questions).
Python tip
Step17: Question 14
Step18: The topmost split depends on the date of the contribution. The next highest are whether a contribution was from Akron, and whether CMTE_ID equals C90011156 (which we think has to do with who the contributor is). The final row splits on whether CMTE_ID equals C00521013, whether the donor is an individual, and whether the state is North Carolina.
Question 15 | Python Code:
from __future__ import division, print_function
from collections import Counter, defaultdict
from itertools import combinations
import pandas as pd
import numpy as np
import itertools
import sklearn
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_extraction import DictVectorizer #to turn categorial variables into numeric arrays
from sklearn import preprocessing #to transform the feature labels
from sklearn.feature_extraction import DictVectorizer
df = pd.read_csv('lab4_candidate_contributions.csv')
#convert zip code and transaction date from floats to strings (since we wnat to treat them as categorical)
df.ZIP_CODE = df.ZIP_CODE.astype('int').astype('str')
df.TRANSACTION_DT = df.TRANSACTION_DT.astype('int').astype('str')
df.head()
Explanation: Lab 4- Decision Trees
This assignment uses 2012 data obtained from the Federal Election Commission on contributions to candidates from committees. The data dictionary is available at http://www.fec.gov/finance/disclosure/metadata/DataDictionaryContributionstoCandidates.shtml. The file we've given you has been subset to 10,000 randomly sampled rows, with some columns removed.
End of explanation
obama = 0
romney = 0
for i in range(df.CAND_ID.size):
if df.CAND_ID.get_value(i) == "Obama":
obama += 1
else:
romney += 1
print("Obama: %d, Romney: %d"%(obama, romney))
Explanation: Calculating Gini Index
Question 1: How many rows are there in the dataset for Obama? For Romney?
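The loop above gets the job done; for reference, pandas can produce the same counts in a single line:
df.CAND_ID.value_counts()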
End of explanation
def gini(D):
obama = sum(D.CAND_ID == "Obama")
romney = sum(D.CAND_ID == "Romney")
total = obama+romney
return 1-((obama/total)**2+(romney/total)**2)
print("gini index: %f"%gini(df))
Explanation: Question 2: What is the Gini Index of this dataset, using Romney and Obama as the target classes?
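For reference, the Gini index computed here is Gini(D) = 1 - (p_obama^2 + p_romney^2), i.e. one minus the sum of squared class proportions, which is exactly what the gini() function above evaluates.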
End of explanation
sortd = df.sort(columns="TRANSACTION_AMT")
mingini = 1
mini = 0
ob1 = 0
ob2 = sum(df.CAND_ID == "Obama")
ro1 = 0
ro2 = sum(df.CAND_ID == "Romney")
total = ob2+ro2
for i in range(df.CAND_ID.size-1):
if sortd.CAND_ID.get_value(i) == "Obama":
ob1 += 1
ob2 -= 1
else:
ro1 += 1
ro2 -= 1
low = df.TRANSACTION_AMT.get_value(i)
high = df.TRANSACTION_AMT.get_value(i+1)
if low != high:
tot1 = ob1+ro1
tot2 = ob2+ro2
gini1 = 1-((ob1/tot1)**2+(ro1/tot1)**2)
gini2 = 1-((ob2/tot2)**2+(ro2/tot2)**2)
ginit = gini1*((ob1+ro1)/total) + gini2*((ob2+ro2)/total)
if ginit < mingini:
mini = i
mingini = ginit
minob1 = ob1
minob2 = ob2
minro1 = ro1
minro2 = ro2
print("split after: %d, gini score: %f, gini reduced by: %f, Obama below: %d, Obama above: %d, Romney below: %d, Romney above: %d"%(mini, mingini, (gini(df)-mingini), minob1, minob2, minro1, minro2))
Explanation: Best Split of a Numeric Feature
End of explanation
mini
Explanation: Question 3: What is the best split point of the TRANSACTION_AMT feature.
End of explanation
mingini
Explanation: Question 4: What is the Gini Index of this best split?
End of explanation
(gini(df)-mingini)
Explanation: Question 5: How much does this partitioning reduce the Gini Index over that of the overall dataset?
End of explanation
print("Romney Rows: ", minro1)
print("Obama Rows:", minob1)
Explanation: Question 6: How many Romney rows are below your best split point? Obama rows?
End of explanation
sortd = df.sort(columns="TRANSACTION_AMT")
mingini = 1
mini = 0
ob1 = 0
ob2 = sum(df.CAND_ID == "Obama")
ro1 = 0
ro2 = sum(df.CAND_ID == "Romney")
total = ob2+ro2
for i in range(df.CAND_ID.size-1):
if sortd.CAND_ID.get_value(i) == "Obama":
ob1 += 1
ob2 -= 1
else:
ro1 += 1
ro2 -= 1
low = df.TRANSACTION_AMT.get_value(i)
high = df.TRANSACTION_AMT.get_value(i+1)
if low != high:
tot1 = ob1+ro1
tot2 = ob2+ro2
gini1 = 1-((ob1/tot1)**2+(ro1/tot1)**2)
gini2 = 1-((ob2/tot2)**2+(ro2/tot2)**2)
ginit = gini1*((ob1+ro1)/total) + gini2*((ob2+ro2)/total)
if ginit < mingini:
mini = i
mingini = ginit
minob1 = ob1
minob2 = ob2
minro1 = ro1
minro2 = ro2
print("split after: %d, gini score: %f, gini reduced by: %f, Obama below: %d, Obama above: %d, Romney below: %d, Romney above: %d"%(mini, mingini, (gini(df)-mingini), minob1, minob2, minro1, minro2))
Explanation: Question 7: How many Romney rows are above your best split point? Obama rows?
Recall that, to calculate the best split of this numeric field, you'll need to order your data by TRANSACTION AMT, then consider the midpoint between each pair of consecutive transaction amounts as a potential split point, then calculate the Gini Index for that partitioning. You'll want to keep track of the best split point and its Gini Index (remember that you are trying to minimize the Gini Index).
There are a lot of ways to do this. Some are very fast, others very slow. One tip to make this run quickly is, as you consecutively step through the data and calculate the Gini Index of each possible split point, keep a running total of the number of rows for each candidate that are located above and below the split point.
Some Python tips:
Counter(), from the collections module, is a special dictionary for counting values of a key
zip() lets you pair up lists into a list of tuples (for example, if we have a list of the candidates and a list of transaction amounts, zip(candidate_list, transaction_amount) would give us a list of (candidate, transaction amount) pairs)
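A quick illustration of those two tips (purely illustrative; both names are already imported at the top of this notebook):
pairs = list(zip(df.CAND_ID, df.TRANSACTION_AMT))   # [(candidate, amount), ...]
Counter(df.CAND_ID)                                 # counts rows per candidate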
End of explanation
import functools
# question 8
entity_vals = pd.unique(df["ENTITY_TP"])
combinations = functools.reduce(lambda x,y: x+y, [list(itertools.combinations(entity_vals, r)) for r in range(1, len(entity_vals)//2 + 1)])
# question 9
mingini = 1
mincomb = None
for comb in combinations:
indices = df["ENTITY_TP"].isin(comb)
split1 = df.loc[indices,:]
split2 = df.loc[~indices,:]
cur_gini = (len(split1) * gini(split1) + len(split2) * gini(split2)) / len(df)
if cur_gini < mingini:
mingini = cur_gini
mincomb = comb
min_split1 = split1
min_split2 = split2
#print "%d, %d"%(len(split1), len(split2))
Explanation: Best Split of a Categorial Variable
End of explanation
len(combinations)
# (2**7 - 2) / 2 (because we optimize by throwing out half)
Explanation: Question 8: How many possible splits are there of the ENTITY_TP feature?
End of explanation
# question 9
mincomb
Explanation: Question 9: Which split of ENTITY_TP best splits the Obama and Romney rows, as measured by the Gini Index?
End of explanation
# question 10
mingini
Explanation: Question 10: What is the Gini Index of this best split?
End of explanation
# question 11
gini(df) - mingini
Explanation: Question 11: How much does this partitioning reduce the Gini Index over that of the overall data set?
End of explanation
# question 12
print("Romney: %s, Obama: %s"%(sum(min_split1.CAND_ID == "Romney"), sum(min_split1.CAND_ID == "Obama")))
print("Romney: %s, Obama: %s"%(sum(min_split2.CAND_ID == "Romney"), sum(min_split2.CAND_ID == "Obama")))
Explanation: Question 12: How many Romney rows and Obama rows are in your first partition? How many Romney rows and Obama rows are in your second partition?
End of explanation
from random import sample
def trainingSample(n, size):
rows = sample(range(n), size)
return rows
def separateRows(training_rows, data):
"""Return (training set, prediction set) with n% of rows in training set."""
training = data.ix[training_rows]
prediction = data.drop(training_rows)
return (training, prediction)
from datetime import datetime
from sklearn import preprocessing
from sklearn.feature_extraction import DictVectorizer
classifier = DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=3, min_samples_split=2, min_samples_leaf=1, max_features=None, random_state=None, min_density=None, compute_importances=None, max_leaf_nodes=None)
df_new = df.copy()
df_new.TRANSACTION_DT = df_new.TRANSACTION_DT.apply(lambda x: x if len(x) == 8 else "0" + x)
df_new.TRANSACTION_DT = df_new.TRANSACTION_DT.apply(lambda x: datetime.strptime(x, "%m%d%Y").toordinal())
CAND_ID = df_new.CAND_ID
X = df_new.drop("CAND_ID", axis = 1)
vec = DictVectorizer()
X = pd.DataFrame(vec.fit_transform(X.to_dict("records")).toarray())
X.columns = vec.get_feature_names()
train_size = 0.75
training_rows = trainingSample(X.shape[0], int(train_size * X.shape[0]))
train_rows, pred_rows = separateRows(training_rows, X)
train_Y, pred_Y = separateRows(training_rows, CAND_ID)
train_Y = train_Y == 'Obama'
train_Y = train_Y.astype(int)
pred_Y = pred_Y == 'Obama'
pred_Y = pred_Y.astype(int)
clf = classifier.fit(train_rows, train_Y)
classifier.score(train_rows, train_Y)
classifier.score(pred_rows, pred_Y)
Explanation: In this exercise, you will be partitioning the original dataset (as opposed to further partitioning the transaction amount partitions from the previous set of questions).
Python tip: the combinations function of the itertools module allows you to enumerate combinations of a list
Training a decision tree
Question 13: Using all of the features in the original dataframe read in at the top of this notebook, train a decision tree classifier that has a depth of three (including the root node and leaf nodes). What is the accuracy of this classifier on the training data?
End of explanation
from sklearn.externals.six import StringIO
with open("obama_romney.dot", 'w') as f:
f = sklearn.tree.export_graphviz(clf, out_file=f)
print(train_rows.columns[2768])
print(train_rows.columns[7])
print(train_rows.columns[745])
print(train_rows.columns[706])
print(train_rows.columns[810])
print(train_rows.columns[2745])
Explanation: Question 14: Export your decision tree to graphviz. Please submit a png file of this graphic to bcourses. In your write-up, write down the interpretation of the rule at each node (for example, 'Root node: rows from state AL go to the left, rows from all other states go to the right. Left child of root node: ...', etc.)
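To turn the exported .dot file into the requested png, one option (assuming the Graphviz command-line tools are installed) is:
$ dot -Tpng obama_romney.dot -o obama_romney.png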
End of explanation
print(3419 / (3079 + 3419))
print(135 / (0 + 135))
print(75 / (56 + 75))
print(68/ (1 + 68))
print(364 / (17 + 364))
print(9 / (7 + 9))
Explanation: The topmost split depends on the date of the contribution. The next highest are whether a contribution was from Akron, and whether CMTE_ID equals C90011156 (which we think has to do with who the contributor is). The final row splits on whether CMTE_ID equals C00521013, whether the donor is an individual, and whether the state is North Carolina.
Question 15: For each of your leaf nodes, specify the percentage of Obama rows in that node (out of the total number of rows at that node).
End of explanation |
13,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
200-D Multivariate Normal
Let's go for broke here.
Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
Step4: Here we will quickly demonstrate that slice sampling is able to cope with very high-dimensional problems without the use of gradients. Our target will in this case be a 250-D uncorrelated multivariate normal distribution with an identical prior.
Step5: We will use "Hamiltonian" Slice Sampling ('hslice') with our gradients to sample in high dimensions.
Step6: Now let's see how our sampling went.
Step7: That looks good! Obviously we can't plot the full 200x200 plot, but 5x5 subplots should do.
Now we can finally check how well our mean and covariances agree. | Python Code:
# system functions that are always useful to have
import time, sys, os
import pickle
# basic numeric setup
import numpy as np
from numpy import linalg
from scipy import stats
# inline plotting
%matplotlib inline
# plotting
import matplotlib
from matplotlib import pyplot as plt
# seed the random number generator
rstate = np.random.default_rng(520)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'font.size': 30})
import dynesty
Explanation: 200-D Multivariate Normal
Let's go for broke here.
Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
End of explanation
ndim = 200 # number of dimensions
C = np.identity(ndim) # set covariance to identity matrix
Cinv = linalg.inv(C) # precision matrix
lnorm = -0.5 * (np.log(2 * np.pi) * ndim + np.log(linalg.det(C))) # ln(normalization)
# 200-D iid standard normal log-likelihood
def loglikelihood(x):
"""Multivariate normal log-likelihood."""
return -0.5 * np.dot(x, np.dot(Cinv, x)) + lnorm
# gradient of log-likelihood *with respect to u*
# i.e. d(lnl(v))/dv * dv/du where
# dv/du = 1. / prior(v)
def gradient(x):
"""Gradient of multivariate normal log-likelihood."""
return -np.dot(Cinv, x) / stats.norm.pdf(x)
# prior transform (iid standard normal prior)
def prior_transform(u):
    """Transforms our unit cube samples `u` to a standard normal prior."""
return stats.norm.ppf(u)
# ln(evidence)
lnz_truth = lnorm - 0.5 * ndim * np.log(2)
print(lnz_truth)
Explanation: Here we will quickly demonstrate that slice sampling is able to cope with very high-dimensional problems. Our target will in this case be a 200-D uncorrelated multivariate normal distribution with an identical prior.
End of explanation
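# Supplementary sanity check (an added sketch, not from the original notebook):
# with a standard-normal likelihood and a standard-normal prior, the evidence is
# Z = integral of N(x; 0, C) * N(x; 0, C) dx = N(0; 0, 2C),
# which gives ln Z = lnorm - 0.5 * ndim * ln(2) as used above. In one dimension
# Z = 1 / sqrt(4*pi), and brute-force quadrature should agree with that value.
from scipy import integrate
z_1d, _ = integrate.quad(lambda x: stats.norm.pdf(x) ** 2, -10.0, 10.0)
print(np.log(z_1d), -0.5 * np.log(4.0 * np.pi))  # both should be about -1.2655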
# hamiltonian slice sampling ('hslice')
sampler = dynesty.NestedSampler(loglikelihood, prior_transform, ndim, nlive=50,
bound='none', sample='hslice',
slices=10, gradient=gradient, rstate=rstate)
sampler.run_nested(dlogz=0.01)
res = sampler.results
Explanation: We will use "Hamiltonian" Slice Sampling ('hslice') with our gradients to sample in high dimensions.
End of explanation
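# Optional aside (an illustrative sketch, not executed in the original notebook):
# the same target could also be attempted without gradients using random slice
# sampling; the settings below mirror the run above and are assumptions, not
# tuned values.
sampler_rslice = dynesty.NestedSampler(loglikelihood, prior_transform, ndim,
                                       nlive=50, bound='none', sample='rslice',
                                       slices=10, rstate=rstate)
# sampler_rslice.run_nested(dlogz=0.01)  # left commented out: costly in 200-D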
from dynesty import plotting as dyplot
# evidence check
fig, axes = dyplot.runplot(res, color='red', lnz_truth=lnz_truth, truth_color='black', logplot=True)
fig.tight_layout()
# posterior check
from dynesty.results import Results
dims = [-1, -2, -3, -4, -5]
fig, ax = plt.subplots(5, 5, figsize=(25, 25))
samps, samps_t = res.samples, res.samples[:,dims]
dres = res.asdict()
dres['samples'] = samps_t
res = Results(dres)
fg, ax = dyplot.cornerplot(res, color='red', truths=np.zeros(ndim), truth_color='black',
span=[(-3.5, 3.5) for i in range(len(dims))],
show_titles=True, title_kwargs={'y': 1.05},
quantiles=None, fig=(fig, ax))
dres = res.asdict()
dres['samples'] = samps
res = Results(dres)
print(1.96 / np.sqrt(2))
Explanation: Now let's see how our sampling went.
End of explanation
# let's confirm we actually got the entire distribution
from dynesty import utils
weights = np.exp(res.logwt - res.logz[-1])
mu, cov = utils.mean_and_cov(samps, weights)
# plot residuals
from scipy.stats import gaussian_kde
mu_kde = gaussian_kde(mu)
xgrid = np.linspace(-0.5, 0.5, 1000)
mu_pdf = mu_kde.pdf(xgrid)
cov_kde = gaussian_kde((cov - C).flatten())
xgrid2 = np.linspace(-0.3, 0.3, 1000)
cov_pdf = cov_kde.pdf(xgrid2)
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(xgrid, mu_pdf, lw=3, color='black')
plt.xlabel('Mean Offset')
plt.ylabel('PDF')
plt.subplot(1, 2, 2)
plt.plot(xgrid2, cov_pdf, lw=3, color='red')
plt.xlabel('Covariance Offset')
plt.ylabel('PDF')
# print values
print('Means (0.):', np.mean(mu), '+/-', np.std(mu))
print('Variance (0.5):', np.mean(np.diag(cov)), '+/-', np.std(np.diag(cov)))
cov_up = np.triu(cov, k=1).flatten()
cov_low = np.tril(cov,k=-1).flatten()
cov_offdiag = np.append(cov_up[abs(cov_up) != 0.], cov_low[cov_low != 0.])
print('Covariance (0.):', np.mean(cov_offdiag), '+/-', np.std(cov_offdiag))
plt.tight_layout()
# plot individual values
plt.figure(figsize=(20,6))
plt.subplot(1, 3, 1)
plt.plot(mu, 'k.')
plt.ylabel(r'$\Delta$ Mean')
plt.xlabel('Dimension')
plt.ylim([-np.max(np.abs(mu)) - 0.05,
np.max(np.abs(mu)) + 0.05])
plt.tight_layout()
plt.subplot(1, 3, 2)
dcov = np.diag(cov) - 0.5
plt.plot(dcov, 'r.')
plt.ylabel(r' $\Delta$ Variance')
plt.xlabel('Dimension')
plt.ylim([-np.max(np.abs(dcov)) - 0.02,
np.max(np.abs(dcov)) + 0.02])
plt.tight_layout()
plt.subplot(1, 3, 3)
dcovlow = cov_low[cov_low != 0.]
dcovup = cov_up[cov_up != 0.]
dcovoff = np.append(dcovlow, dcovup)
plt.plot(dcovlow, 'b.', ms=1, alpha=0.3)
plt.plot(dcovup, 'b.', ms=1, alpha=0.3)
plt.ylabel(r' $\Delta$ Covariance')
plt.xlabel('Cross-Term')
plt.ylim([-np.max(np.abs(dcovoff)) - 0.02,
np.max(np.abs(dcovoff)) + 0.02])
plt.tight_layout()
Explanation: That looks good! Obviously we can't plot the full 200x200 plot, but 5x5 subplots should do.
Now we can finally check how well our mean and covariances agree.
End of explanation |
13,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-hr5', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-HR5
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
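# Illustrative placeholder (not real document metadata): a filled-in call would
# look like the line below; replace the name and e-mail before publishing.
# DOC.set_author("Jane Doe", "jane.doe@example.org")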
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
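# Illustrative placeholder (not a statement about CMCC-CM2-HR5): an ENUM
# property is completed by passing one of the listed strings, e.g.
# DOC.set_value("OASIS3-MCT")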
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
13,636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1.4 查找最大或最小的N个元素
怎样从一个集合中获得最大或者最小的N个元素列表?
Step1: 当查找的元素个数较小时(N < nSum),函数nlargest and nsmalest 是很适合<br>若 仅仅想查找唯一的 最小或最大N=1的元素的话,使用max and min 更快<br>若N的大小的和集合大小接近时,通常先排序在切片更快 sorted(items)[
Step2: pop 1 返回优先级最高的元素<br>针对pop 3 and pop 4 按照其被插入至queue 顺序返回
module heapq ---heapq.heappush() and heapq.pop() 分别在_queue队列中插入和删除第一个元素 同时保证_queue第一个元素拥有最小优先级<br>heapq()函数总是返回"最小的(priority)"的元素--This is Key of 保证queue pop操作返回正确元素的关键 时间复杂度O(log N ) super quick!!!<br> index -var 保证同等优先级正确排序 如pop3 and pop4
1.6 字典中的键映射多个值
怎样实现一个键对应多个值的字典(也叫 multidict )?
一个dict就是一个键对应一个单值的映射 if you want to 一个键映射多个值 则需要将这多个值放置另外容器 比如 list or set中
Step3: 选择list 还是set 取决你的实际要求 if want to keep element 的插入顺序 则选择list ifwant 去掉 repeat element 即使用set<br>you can use collections module 中的defaultdict 来构造这样字典
Step4: 以上d 指的是(创建)新的dict [创建映射实体]<br>if you 只想在一个普通的字典上使用setdefault方法来替代
Step5: create a 多值映射dict很简单 but if you want to create yourself 太难啦
Step6: But use defaultdict is so easy and simple
Step7: 1.7 字典排序
想创建一个dict and 在迭代or序列化这个dict 时可控制element 的顺序
为control 一个dict 中element 的order you can use collections 中的OrderedDict类 在迭代操作时 其会 keep element 元素插入时的order
Step8: create a 将来序列化 or 编码成其他格式的映射的时候OrderedDict is very useful<br> 精确control JSON编码字段的顺序可使用OrderedDict来构建数据
Step9: OrderedDict 内部维护这一个根据插入顺序排序的双向链表 每次当一个新的element insert into it and newelement will be 放到 链表的尾部<br>对于一个已经存在键的重复赋值不会改变键的顺序
需要注意的是,一个 OrderedDict 的大小是一个普通字典的两倍,因为它内部维护着另外一个链表。 所以如果你要构建一个需要大量 OrderedDict 实例的数据结构的时候(比如读取100,000行CSV数据到一个 OrderedDict 列表中去), 那么你就得仔细权衡一下是否使用 OrderedDict 带来的好处要大过额外内存消耗的影响。
1.8 字典的运算
怎样在data dict 中执行一些计算操作
Step10: 需要注意的是 zip function is 创建的一个只能访问一次的迭代器
Step11: max() arg is an empty sequence
ERROR:表示此时 max 中的参数是一个空的 序列
若是不利用zip() 直接进行普通的数学运算<br>他会作用于key 而不是value
Step12: 为弥补以上问题 我就直接提取出 dict中的value
Step13: 不过 以上两种方式 都差强人意 我对dict操作 是为了既要显示 key 并要显示 value<br>So 这里要利用到lambda函数
Step14: 以上key 函数 可以返回 value 最低的对应key 即value 最低是 10.45 贰最低对应的key是 FB
最先利用的zip 函数就可以"反转"为 (value ,key)元组序列来解决上述问题 当比较此元组时 value会先进性比较 后是key---(这样的话,即可利用简单语句进行实现操作)---
若是出现dict中实体拥有相同的value 在执行 max or min 时会继续判读 key的大小来据此进行判断 | Python Code:
import heapq
nums = [1,8,23,44,56,12,-2,45,23]
print(heapq.nlargest(3,nums))
print(heapq.nsmallest(3,nums))
portfolio = [
{'name':'IBM','shares':100,'price':91.1},
{'name':'AAPL','shares':50,'price':543.22},
{'name': 'FB', 'shares': 200, 'price': 21.09},
{'name': 'HPQ', 'shares': 35, 'price': 31.75},
{'name': 'YHOO', 'shares': 45, 'price': 16.35},
{'name': 'ACME', 'shares': 75, 'price': 115.65}
]
cheap = heapq.nsmallest(4,portfolio,key = lambda s : s['price'])
expensive = heapq.nlargest(3,portfolio,key = lambda s:s['price'])
print('The four cheapest: %s\nThe three most expensive: %s' % (cheap, expensive))
heapq.heapify(nums)
nums
heapq.heappop(nums) # heap -- 堆
Explanation: 1.4 Finding the largest or smallest N elements
How do you get a list of the N largest or smallest elements from a collection?
End of explanation
class PriorityQueue:
def __init__(self):
self._queue = []
self._index = 0
def push(self,item,priority):
heapq.heappush(self._queue,(priority,self._index,item))
self._index += 1
    # Entries pop in increasing priority order; push with -priority instead
    # if the highest priority should come out first.
def pop(self):
return heapq.heappop(self._queue)[-1]
class Item:
def __init__(self,name):
self.name = name
def __repr__(self):
return 'Item({!r})'.format(self.name)
q = PriorityQueue()
q.push(Item('foo'),1)
q.push(Item('bar'),5)
q.push(Item('spqm'),4)
q.push(Item('grok'),1)
q
q.pop() # pop 1
q.pop() # pop 2
q.pop() # pop 3
q.pop() # pop 4
Explanation: When the number of elements to find is small relative to the collection, nlargest and nsmallest are a good fit. If you only need the single smallest or largest element (N=1), max and min are faster. If N is close to the size of the collection, it is usually faster to sort first and then slice: sorted(items)[:N] and sorted(items)[-N:]
1.5 Implementing a priority queue
How do you implement a queue that sorts items by priority, so that each pop operation always returns the item with the highest priority?
End of explanation
d = {
'a':[1,2,3],
'b':[4,5]
}
e = {
'a':{1,2,3},
'b':{4,5}
}
Explanation: pop 1 returns the element with the highest priority; pop 3 and pop 4 are returned in the order in which they were inserted into the queue
The heapq module's heapq.heappush() and heapq.heappop() insert and remove elements of _queue while guaranteeing that the first element of _queue has the smallest priority. heappop() always returns the "smallest (priority)" element -- this is the key to making the queue's pop operation return the correct element, and it runs in O(log N) time, super quick! The _index variable guarantees a stable order among elements with equal priority, as with pop 3 and pop 4
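A quick way to see that ordering rule by using heapq directly on (priority, index, item) tuples (a minimal sketch):
import heapq
h = []
heapq.heappush(h, (1, 0, 'foo'))
heapq.heappush(h, (5, 1, 'bar'))
heapq.heappush(h, (1, 2, 'grok'))
print(heapq.heappop(h))  # (1, 0, 'foo'): smallest priority wins, and the index breaks the tie
print(heapq.heappop(h))  # (1, 2, 'grok')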
1.6 Mapping keys to multiple values in a dictionary
How do you implement a dictionary that maps one key to multiple values (also called a multidict)?
A dict maps each key to a single value; if you want one key to map to multiple values, you need to store those values in another container, such as a list or a set
End of explanation
from collections import defaultdict
d = defaultdict(list)
d['a'].append(1)
d['a'].append(2)
d['b'].append(3)
d = defaultdict(set)
d['a'].add(1)
d['a'].add(2)
d['b'].add(4)
d
Explanation: Choosing a list or a set depends on your actual requirements: choose a list if you want to keep the insertion order of the elements, and a set if you want to eliminate duplicates. You can use defaultdict from the collections module to build this kind of dictionary
End of explanation
d = {} # a regular dictionary
d.setdefault('a',[]).append(1)
d.setdefault('a',[]).append(2)
d.setdefault('b',[]).append(4)
# Each call has to create a new instance of the initial value (an empty list [])
d
Explanation: The d above is a newly created dict (it creates the mapping entries). If you prefer, you can instead use the setdefault method on an ordinary dictionary
End of explanation
'''
d = {}
pairs = [('a', 1), ('a', 2), ('b', 4), ('c', 3)]
for key, value in pairs:
    if key not in d:
        d[key] = []
    d[key].append(value)
'''
Explanation: Creating a multi-valued dict is simple, but initializing the first value for each key by hand quickly gets messy
End of explanation
'''
d = defaultdict(list)
for key, value in pairs:
d[key].append(value)
'''
Explanation: Using defaultdict makes this much easier and simpler
End of explanation
from collections import OrderedDict
def ordered_dict():
    d = OrderedDict()
    d['foo'] = 1
    d['bar'] = 2
    d['spa'] = 3
    d['gro'] = 4
    return d
# Outputs 'foo 1', 'bar 2', 'spa 3', 'gro 4' -- always in insertion order
d = ordered_dict()
for key in d:
    print(key, d[key])
Explanation: 1.7 Keeping dictionaries in order
You want to create a dict and control the order of its elements when iterating over or serializing it
To control the order of the elements in a dict you can use the OrderedDict class from collections; when iterating, it keeps the elements in the order in which they were inserted
End of explanation
import json
json.dumps(d)
Explanation: OrderedDict is very useful when you create a mapping that will later be serialized or encoded into another format; to precisely control the order of fields in JSON encoding, build the data with an OrderedDict
End of explanation
prices = {
'AC':45.34,
'AA':615.2,
'IAM':205.3,
'FB':10.765
}
# To run calculations on a dict, use zip() to invert the keys and values
min_price = min(zip(prices.values(),prices.keys()))
print('min_price is %s , %s' % min_price[:])
max_price = max(zip(prices.values(),prices.keys()))
print('max_price is %s , %s' % max_price[:])
prices_sorted = sorted(zip(prices.values(),prices.keys()))
prices_sorted
Explanation: OrderedDict internally maintains a doubly linked list ordered by insertion; each time a new element is inserted, it is placed at the end of that list. Re-assigning an existing key does not change its position
Note that an OrderedDict is about twice the size of a regular dictionary because of the extra linked list it maintains. So if you are building a data structure with a large number of OrderedDict instances (for example, reading 100,000 rows of CSV data into a list of OrderedDicts), weigh carefully whether the benefits of OrderedDict outweigh the extra memory cost.
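A quick way to see that overhead (a minimal sketch; the exact numbers vary by Python version and dict size):
import sys
from collections import OrderedDict
plain = dict.fromkeys('abcdef', 0)
ordered = OrderedDict.fromkeys('abcdef', 0)
print(sys.getsizeof(plain), sys.getsizeof(ordered))  # the OrderedDict reports a larger size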
1.8 Calculating with dictionaries
How do you perform calculations (min, max, sorting, and so on) on the data in a dict?
End of explanation
prices_and_names = zip(prices.values(),prices.keys())
print(min(prices_and_names))
print(max(prices_and_names))
Explanation: Note that the zip function creates an iterator that can only be consumed once
End of explanation
min(prices)
max(prices)
# The above compares the keys alphabetically and returns the largest or smallest key
Explanation: max() arg is an empty sequence
ERROR: this means that the argument passed to max here is an empty (already-consumed) sequence
If you run ordinary operations directly on the dict without zip(), they act on the keys rather than the values
End of explanation
min(prices.values())
max(prices.values())
Explanation: To work around the problem above, extract the values of the dict directly
End of explanation
print(min(prices,key=lambda k:prices[k]))
# print(max(prices, key=lambda k: prices[k]))
Explanation: However, neither of the two approaches above is quite satisfactory: when operating on the dict we usually want both the key and the value. So here we pass a lambda function as the key argument
End of explanation
p = {'a':123,'b':123}
print(min(zip(p.values(),p.keys())))
print(max(zip(p.values(),p.keys())))
Explanation: The key function above returns the key with the lowest value, i.e. the lowest value is 10.765 and the corresponding key is FB
The zip() approach used earlier "inverts" the dict into a sequence of (value, key) tuples and solves this problem: when such tuples are compared, the value is compared first and the key second, so simple expressions do the job
If entries in the dict share the same value, max or min goes on to compare the keys and decides on that basis
End of explanation |
13,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Part 16
Step1: For this example, we will create a data distribution consisting of a set of ellipses in 2D, each with a random position, shape, and orientation. Each class corresponds to a different ellipse. Let's randomly generate the ellipses.
Step2: This function generates random data from the distribution. For each point it chooses a random class, then a random position in that class' ellipse.
Step3: Let's plot a bunch of random points drawn from this distribution to see what it looks like. Points are colored based on their class label.
Step4: Now let's create the model for our CGAN.
Step5: We'll use different loss functions for training the generator and discriminator. The discriminator outputs its predictions in the form of a probability that each sample is a real sample (that is, that it came from the training set rather than the generator). Its loss consists of two terms. The first term tries to maximize the output probability for real data, and the second term tries to minimize the output probability for generated samples. The loss function for the generator is just a single term
Step6: Now to fit the model. Here are some important points to notice about the code.
We use fit_generator() to train only a single batch at a time, and we alternate between the discriminator and the generator. That way. both parts of the model improve together.
We only train the generator half as often as the discriminator. On this particular model, that gives much better results. You will often need to adjust (# of discriminator steps)/(# of generator steps) to get good results on a given problem.
We disable checkpointing by specifying checkpoint_interval=0. Since each call to fit_generator() includes only a single batch, it would otherwise save a checkpoint to disk after every batch, which would be very slow. If this were a real project and not just an example, we would want to occasionally call model.save_checkpoint() to write checkpoints at a reasonable interval.
Step7: Have the trained model generate some data, and see how well it matches the training distribution we plotted before. | Python Code:
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')
Explanation: Tutorial Part 16: Conditional Generative Adversarial Network
Note: This example implements a GAN from scratch. The same model could be implemented much more easily with the dc.models.GAN class. See the MNIST GAN notebook for an example of using that class. It can still be useful to know how to implement a GAN from scratch for advanced situations that are beyond the scope of what the standard GAN class supports.
A Generative Adversarial Network (GAN) is a type of generative model. It consists of two parts called the "generator" and the "discriminator". The generator takes random values as input and transforms them into an output that (hopefully) resembles the training data. The discriminator takes a set of samples as input and tries to distinguish the real training samples from the ones created by the generator. Both of them are trained together. The discriminator tries to get better and better at telling real from false data, while the generator tries to get better and better at fooling the discriminator.
A Conditional GAN (CGAN) allows additional inputs to the generator and discriminator that their output is conditioned on. For example, this might be a class label, and the GAN tries to learn how the data distribution varies between classes.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.
End of explanation
import deepchem as dc
import numpy as np
import tensorflow as tf
n_classes = 4
class_centers = np.random.uniform(-4, 4, (n_classes, 2))
class_transforms = []
for i in range(n_classes):
xscale = np.random.uniform(0.5, 2)
yscale = np.random.uniform(0.5, 2)
angle = np.random.uniform(0, np.pi)
m = [[xscale*np.cos(angle), -yscale*np.sin(angle)],
[xscale*np.sin(angle), yscale*np.cos(angle)]]
class_transforms.append(m)
class_transforms = np.array(class_transforms)
Explanation: For this example, we will create a data distribution consisting of a set of ellipses in 2D, each with a random position, shape, and orientation. Each class corresponds to a different ellipse. Let's randomly generate the ellipses.
End of explanation
def generate_data(n_points):
classes = np.random.randint(n_classes, size=n_points)
r = np.random.random(n_points)
angle = 2*np.pi*np.random.random(n_points)
points = (r*np.array([np.cos(angle), np.sin(angle)])).T
points = np.einsum('ijk,ik->ij', class_transforms[classes], points)
points += class_centers[classes]
return classes, points
Explanation: This function generates random data from the distribution. For each point it chooses a random class, then a random position in that class' ellipse.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plot
classes, points = generate_data(1000)
plot.scatter(x=points[:,0], y=points[:,1], c=classes)
Explanation: Let's plot a bunch of random points drawn from this distribution to see what it looks like. Points are colored based on their class label.
End of explanation
import deepchem.models.tensorgraph.layers as layers
model = dc.models.TensorGraph(learning_rate=1e-4, use_queue=False)
# Inputs to the model
random_in = layers.Feature(shape=(None, 10)) # Random input to the generator
generator_classes = layers.Feature(shape=(None, n_classes)) # The classes of the generated samples
real_data_points = layers.Feature(shape=(None, 2)) # The training samples
real_data_classes = layers.Feature(shape=(None, n_classes)) # The classes of the training samples
is_real = layers.Weights(shape=(None, 1)) # Flags to distinguish real from generated samples
# The generator
gen_in = layers.Concat([random_in, generator_classes])
gen_dense1 = layers.Dense(30, in_layers=gen_in, activation_fn=tf.nn.relu)
gen_dense2 = layers.Dense(30, in_layers=gen_dense1, activation_fn=tf.nn.relu)
generator_points = layers.Dense(2, in_layers=gen_dense2)
model.add_output(generator_points)
# The discriminator
all_points = layers.Concat([generator_points, real_data_points], axis=0)
all_classes = layers.Concat([generator_classes, real_data_classes], axis=0)
discrim_in = layers.Concat([all_points, all_classes])
discrim_dense1 = layers.Dense(30, in_layers=discrim_in, activation_fn=tf.nn.relu)
discrim_dense2 = layers.Dense(30, in_layers=discrim_dense1, activation_fn=tf.nn.relu)
discrim_prob = layers.Dense(1, in_layers=discrim_dense2, activation_fn=tf.sigmoid)
Explanation: Now let's create the model for our CGAN.
End of explanation
# Discriminator
discrim_real_data_loss = -layers.Log(discrim_prob+1e-10) * is_real
discrim_gen_data_loss = -layers.Log(1-discrim_prob+1e-10) * (1-is_real)
discrim_loss = layers.ReduceMean(discrim_real_data_loss + discrim_gen_data_loss)
discrim_submodel = model.create_submodel(layers=[discrim_dense1, discrim_dense2, discrim_prob], loss=discrim_loss)
# Generator
gen_loss = -layers.ReduceMean(layers.Log(discrim_prob+1e-10) * (1-is_real))
gen_submodel = model.create_submodel(layers=[gen_dense1, gen_dense2, generator_points], loss=gen_loss)
Explanation: We'll use different loss functions for training the generator and discriminator. The discriminator outputs its predictions in the form of a probability that each sample is a real sample (that is, that it came from the training set rather than the generator). Its loss consists of two terms. The first term tries to maximize the output probability for real data, and the second term tries to minimize the output probability for generated samples. The loss function for the generator is just a single term: it tries to maximize the discriminator's output probability for generated samples.
For each one, we create a "submodel" specifying a set of layers that will be optimized based on a loss function.
End of explanation
batch_size = model.batch_size
discrim_error = []
gen_error = []
for step in range(20000):
classes, points = generate_data(batch_size)
class_flags = dc.metrics.to_one_hot(classes, n_classes)
feed_dict={random_in: np.random.random((batch_size, 10)),
generator_classes: class_flags,
real_data_points: points,
real_data_classes: class_flags,
is_real: np.concatenate([np.zeros((batch_size,1)), np.ones((batch_size,1))])}
discrim_error.append(model.fit_generator([feed_dict],
submodel=discrim_submodel,
checkpoint_interval=0))
if step%2 == 0:
gen_error.append(model.fit_generator([feed_dict],
submodel=gen_submodel,
checkpoint_interval=0))
if step%1000 == 999:
print(step, np.mean(discrim_error), np.mean(gen_error))
discrim_error = []
gen_error = []
Explanation: Now to fit the model. Here are some important points to notice about the code.
We use fit_generator() to train only a single batch at a time, and we alternate between the discriminator and the generator. That way. both parts of the model improve together.
We only train the generator half as often as the discriminator. On this particular model, that gives much better results. You will often need to adjust (# of discriminator steps)/(# of generator steps) to get good results on a given problem.
We disable checkpointing by specifying checkpoint_interval=0. Since each call to fit_generator() includes only a single batch, it would otherwise save a checkpoint to disk after every batch, which would be very slow. If this were a real project and not just an example, we would want to occasionally call model.save_checkpoint() to write checkpoints at a reasonable interval.
End of explanation
classes, points = generate_data(1000)
feed_dict = {random_in: np.random.random((1000, 10)),
generator_classes: dc.metrics.to_one_hot(classes, n_classes)}
gen_points = model.predict_on_generator([feed_dict])
plot.scatter(x=gen_points[:,0], y=gen_points[:,1], c=classes)
Explanation: Have the trained model generate some data, and see how well it matches the training distribution we plotted before.
End of explanation |
13,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lists
Some methods of list
Step1: <code>del</code> statement can be used to remove an item from a list given its index
Step2: <code>list()</code>
Step3: Sort a list
<code>sorted</code><code>(list, [cmp=None[, key=None[, reverse]]])</code>
Step4: List comprehensive
a concise way to create list
when need to create a list... think about it
Step5: Tuples
lists and strings are two examples of sequence data types.
tuple can think like list, but it is immutable.
Note
Step6: Sets
A set is an unordered collection with no duplicate elements
Curly brackets can use to create set, but <code>{}</code> create dictionary instead of set
<code>set()</code>
Step7: add item to a set
Step8: check item is in a set
Step9: delete item
Step10: similarly to list comprehensions, set comprehensions are also supported
Step11: you can loop over the set
Step12: Dictionaries
it's mapping type
Step13: delete key in dictionary
Step14: check if a key is in dictionary
Step15: <code>dict.keys()</code>
Step16: dictionary can be construct by calling
<code>dict(sequence)</code>
Step17: <code>zip(sequence...)</code>
Step18: Loop over a dictionary
Step19: Generator & iterator
Generator
generator
Step20: comprehensive syntax like list comprehensive
Step21: Iterator
iterator
Step22: More on looping
<code>enumerate</code><code>(sequence, start=0)</code>
Step23: <code>dict.iteritems()</code> | Python Code:
pets = ['dog', 'cat', 'pig']
print pets.index('cat')
pets.insert(0, 'rabbit')
print pets
pets.pop(1)
print pets
Explanation: Lists
Some methods of list:
<code>list.append(x)</code>: add <code>x</code> to the end
<code>list.insert(i, x)</code>: insert <code>x</code> at position <code>i</code>
<code>list.index(x)</code>: return index of the first item whose value is <code>x</code>
<code>list.pop([i])</code>: remove and return the item at position <code>i</code>; if no index is given, it removes and returns the last item
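For example, a small sketch of the two methods not shown in the cell above:
pets = ['dog', 'cat', 'pig']
pets.append('fish')   # add to the end -> ['dog', 'cat', 'pig', 'fish']
last = pets.pop()     # no index: removes and returns the last item ('fish')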
End of explanation
a = range(10)
print a
del a[2]
print a
print a[:3]
del a[:3]
print a
Explanation: <code>del</code> statement can be used to remove an item from a list given its index
End of explanation
print list('i can eat glass')
Explanation: <code>list()</code>: convert a sequence to a list
End of explanation
print sorted([2, 3, 1], reverse=True)
a = [2, 3, 1]
print a.sort(reverse=True)
print a
print sorted([
['peter', 23],
['john', 30],
['tom', 18]
], key=lambda x: x[1])
Explanation: Sort a list
<code>sorted</code><code>(list, [cmp=None[, key=None[, reverse]]])</code>: return a new sorted list
<code>list.sort</code><code>([cmp=None[, key=None[, reverse]]])</code>: sort the current list (not creating new list)
where:
+ cmp: custom comparison function should return a negative, zero or positive number depending on whether the first argument is considered smaller than, equal to, or larger than the second argument
+ key: function extracting a comparison key from each list element.
+ reverse: if True, sort in descending order; otherwise (the default) sort in ascending order.
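For example, a quick sketch of cmp versus key (Python 2 syntax, matching this notebook):
print sorted([-5, 2, -1, 3], key=abs)                                # [-1, 2, 3, -5]
print sorted([-5, 2, -1, 3], cmp=lambda a, b: cmp(abs(a), abs(b)))   # same result, older style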
End of explanation
squares = []
for x in range(10):
squares.append(x**2)
print squares
print [x**2 for x in range(10)]
array = []
for x in [1,2,3]:
for y in [1, 2, 3]:
if x != y:
array.append((x, y))
print array
print [(x, y) for x in [1,2,3] for y in [1,2,3] if x != y]
Explanation: List comprehensions
a concise way to create lists
whenever you need to build a list, think about using one
End of explanation
t = (1, 2, 3, 4, 5)
print t
tuple([1,2,3])
# change the tuple raise exception
t[0] = 5
Explanation: Tuples
lists and strings are two examples of sequence data types.
a tuple can be thought of like a list, but it is immutable.
Note: there is no tuple comprehension; a parenthesized comprehension creates a generator instead
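If you do want a tuple built from a comprehension-style expression, pass a generator expression to tuple() (a small sketch):
squares = tuple(x ** 2 for x in range(5))   # (0, 1, 4, 9, 16)
gen = (x ** 2 for x in range(5))            # parentheses alone give a generator, not a tuple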
End of explanation
letters = {'a', 'b', 'c', 'a'}
print letters
print set(['a', 'b', 'c', 'a'])
s = set(['a', 'b'])
s.add('c')
print s
Explanation: Sets
A set is an unordered collection with no duplicate elements
Curly brackets can be used to create a set, but <code>{}</code> creates a dictionary instead of a set
<code>set()</code>: convert a sequence to a set
End of explanation
pets = { 'dog', 'cat', 'pig' }
pets.add('dog')
print pets
pets.add('fish')
print pets
Explanation: add item to a set
End of explanation
print 'fish' in pets
print 'lion' in pets
Explanation: check item is in a set
End of explanation
pets.remove('fish')
print pets
Explanation: delete item
End of explanation
letters = {x for x in 'i can eat glass'}
print letters
Explanation: similarly to list comprehensions, set comprehensions are also supported
End of explanation
for c in set('i can eat glass'):
print c,
Explanation: you can loop over the set
End of explanation
{'a', 'b'}
tel = {'jack': 4098, 'sape': 4139}
tel['guido'] = 4127
print tel
tel['vu'] = 4910
print tel
print tel['jack']
Explanation: Dictionaries
it's a mapping type: key -> value
key can be any immutable type: usually string or number
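A small sketch of the immutability requirement for keys:
d = {}
d[(1, 2)] = 'ok'       # a tuple is immutable, so it can be a key
# d[[1, 2]] = 'boom'   # a list is mutable -> TypeError: unhashable type: 'list'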
End of explanation
del tel['guido']
print tel
Explanation: delete key in dictionary
End of explanation
print 'sape' in tel
print 'foo' in tel
Explanation: check if a key is in dictionary
End of explanation
tel = {'sape': 4139, 'jack': 4098, 'guido': 4127}
print tel.keys()
print tel.values()
Explanation: <code>dict.keys()</code>: return list of keys
<code>dict.values()</code>: return list of values
End of explanation
print dict([('sape', 4139), ('jack', 4098), ('guido', 4127)])
Explanation: a dictionary can be constructed by calling
<code>dict(sequence)</code>: where the sequence contains (key, value) pairs
End of explanation
zip([1, 2, 3], 'abc', 'ABC')
print dict(zip('abc', [1, 2, 3]))
Explanation: <code>zip(sequence...)</code>: zip sequences together
End of explanation
for name in tel:
print name, ':', tel[name]
tel.values()
for telno in tel.values():
print telno
Explanation: Loop over a dictionary
End of explanation
def firstn(n):
i = 0
while i < n:
yield i
i += 1
gen = firstn(10)
print range(50)
print firstn(50)
for i in range(5):
print i,
print '\n--------------------'
for i in firstn(5):
print i,
Explanation: Generator & iterator
Generator
generator: lazily (on demand) generates a sequence of values
performance benefit!
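A small sketch of the laziness (Python 2 syntax, matching this notebook): nothing is computed until a value is requested, and no large list is built in memory.
gen = (x * x for x in xrange(10 ** 8))   # no work done yet
print next(gen)   # 0 -- values are produced one at a time, on demand
print next(gen)   # 1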
End of explanation
for i in (x ** 2 for x in range(10)):
print i,
Explanation: generator expressions use a comprehension syntax just like list comprehensions
End of explanation
for i in xrange(10):
print i,
Explanation: Iterator
iterator: an object that enables a program to traverse through a sequence
<code>for</code> statement: loop through any iterator
End of explanation
list(enumerate(['dog', 'cat', 'pig']))
print list(enumerate(['dog', 'cat', 'pig']))
print list(enumerate(['dog', 'cat', 'pig'], start=2))
for value in enumerate(['dog', 'cat', 'pig']):
print value
for index, value in enumerate(['dog', 'cat', 'pig']):
print index, ':', value
Explanation: More on looping
<code>enumerate</code><code>(sequence, start=0)</code>
End of explanation
print tel
print list(tel.iteritems())
for name, telno in tel.iteritems():
print name, ':', telno
for key in tel.iterkeys():
print key
import os
os.listdir('.')
[file_name for file_name in os.listdir('.') if file_name.endswith('.pyc')]
filter(lambda file_name: file_name.endswith('.pyc'), os.listdir('.'))
os.remove('./main.pyc')
[os.remove(file_name) for file_name in os.listdir('.') if file_name.endswith('.pyc')]
Explanation: <code>dict.iteritems()</code>: return an iterator through items (key, value) of a dictionary
<code>dict.iterkeys()</code>: return an iterator through key of a dictionary
<code>dict.itervalues()</code>: return an iterator through value of a dictionary
End of explanation |
13,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Account for AOI reflection losses (in full mode only)
In this section, we will learn
Step1: Let's define a few helper functions that will help clarify the notebook
Step2: Get timeseries inputs
Step3: Prepare PV array parameters
Step4: Default AOI loss behavior
In pvfactors
Step5: Let's plot the back AOI losses
Step6: As shown above, by default pvfactors apply constant values of AOI losses for all the surfaces in the system, and for all the incident irradiance components
Step7: As expected, there are less reflection losses for incident light rays normal to the surface than everywhere else.
Use the fAOI function
It's then easy to use the created fAOI function in the irradiance models. It just has to be passed to the model at initialization.
For this example, we will use the same fAOI function for the front and back surfaces of the PV rows.
Step8: Then pass the model to the PVEngine and run the simulation as usual.
Step9: Let's now see what the irradiance and AOI losses look like.
Step10: We can now see the changes in AOI losses, which now use the fAOI function for the direct, circumsolar, and horizon light components. But it still uses the constant rho_front and rho_back values for the reflection and isotropic components of the incident light on the surfaces.
Advanced
Step11: Add fAOI losses to the view factor calculator, and use 1000 integration points
Step12: Re-calculate global hemispherical reflectivity values based on fAOI function
Step13: Since we're using the same fAOI function for front and back sides, we now get the same global hemispherical reflectivity values.
We can now create the irradiance model.
Step14: Simulations can then be run the usual way
Step15: Run the simulation
Step16: Let's now see what the irradiance and AOI losses look like.
Step17: This is the way to apply fAOI losses to all the irradiance components in a pvfactors simulation.
Doing all of the above using the "run functions"
When using the "run functions", you'll just need to define the parameters in advance and then pass it to
the functions.
Step18: Using run_timeseries_engine()
Step21: Using run_parallel_engine()
Because of Python's multiprocessing, and because functions cannot be pickled in Python, the functions need to be wrapped up into classes.
Step22: Pass the objects through the dictionaries and run the simulation | Python Code:
# Import external libraries
import os
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import pandas as pd
import warnings
# Settings
%matplotlib inline
np.set_printoptions(precision=3, linewidth=300)
warnings.filterwarnings('ignore')
plt.style.use('seaborn-whitegrid')
plt.rcParams.update({'font.size': 12})
# Paths
LOCAL_DIR = os.getcwd()
DATA_DIR = os.path.join(LOCAL_DIR, 'data')
filepath = os.path.join(DATA_DIR, 'test_df_inputs_MET_clearsky_tucson.csv')
RUN_FIXED_TILT = True
Explanation: Account for AOI reflection losses (in full mode only)
In this section, we will learn:
how pvfactors accounts for AOI losses by default
how to account for AOI-dependent reflection losses for direct, circumsolar, and horizon irradiance components
how to account for AOI-dependent reflection losses for isotropic and reflection irradiance components
how to run all of this using the pvfactors run functions
Imports and settings
End of explanation
# Helper functions for plotting and simulation
def plot_irradiance(df_report):
# Plot irradiance
f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 4))
# Plot back surface irradiance
df_report[['qinc_back', 'qabs_back']].plot(ax=ax[0])
ax[0].set_title('Back surface irradiance')
ax[0].set_ylabel('W/m2')
# Plot front surface irradiance
df_report[['qinc_front', 'qabs_front']].plot(ax=ax[1])
ax[1].set_title('Front surface irradiance')
ax[1].set_ylabel('W/m2')
plt.show()
def plot_aoi_losses(df_report):
# plotting AOI losses
f, ax = plt.subplots(figsize=(5.5, 4))
df_report[['aoi_losses_back_%']].plot(ax=ax)
df_report[['aoi_losses_front_%']].plot(ax=ax)
# Adjust axes
ax.set_ylabel('%')
ax.legend(['AOI losses back PV row', 'AOI losses front PV row'])
ax.set_title('AOI losses')
plt.show()
# Create a function that will build a simulation report
def fn_report(pvarray):
# Get irradiance values
report = {'qinc_back': pvarray.ts_pvrows[1].back.get_param_weighted('qinc'),
'qabs_back': pvarray.ts_pvrows[1].back.get_param_weighted('qabs'),
'qinc_front': pvarray.ts_pvrows[1].front.get_param_weighted('qinc'),
'qabs_front': pvarray.ts_pvrows[1].front.get_param_weighted('qabs')}
# Calculate AOI losses
report['aoi_losses_back_%'] = (report['qinc_back'] - report['qabs_back']) / report['qinc_back'] * 100.
report['aoi_losses_front_%'] = (report['qinc_front'] - report['qabs_front']) / report['qinc_front'] * 100.
# Return report
return report
Explanation: Let's define a few helper functions that will help clarify the notebook
End of explanation
def export_data(fp):
tz = 'US/Arizona'
df = pd.read_csv(fp, index_col=0)
df.index = pd.DatetimeIndex(df.index).tz_convert(tz)
return df
df = export_data(filepath)
df_inputs = df.iloc[:48, :]
# Plot the data
f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))
df_inputs[['dni', 'dhi']].plot(ax=ax1)
df_inputs[['solar_zenith', 'solar_azimuth']].plot(ax=ax2)
df_inputs[['surface_tilt', 'surface_azimuth']].plot(ax=ax3)
plt.show()
# Use a fixed albedo
albedo = 0.2
Explanation: Get timeseries inputs
End of explanation
pvarray_parameters = {
'n_pvrows': 3, # number of pv rows
'pvrow_height': 1, # height of pvrows (measured at center / torque tube)
'pvrow_width': 1, # width of pvrows
'axis_azimuth': 0., # azimuth angle of rotation axis
'gcr': 0.4, # ground coverage ratio
}
Explanation: Prepare PV array parameters
End of explanation
from pvfactors.geometry import OrderedPVArray
# Create PV array
pvarray = OrderedPVArray.init_from_dict(pvarray_parameters)
from pvfactors.engine import PVEngine
from pvfactors.irradiance import HybridPerezOrdered
# Create irradiance model
irradiance_model = HybridPerezOrdered(rho_front=0.03, rho_back=0.05)
# Create engine
engine = PVEngine(pvarray, irradiance_model=irradiance_model)
# Fit engine to data
engine.fit(df_inputs.index, df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
albedo)
# Plot pvarray shapely geometries
f, ax = plt.subplots(figsize=(8, 4))
pvarray.plot_at_idx(12, ax)
plt.title('Modeled PV array at {}'.format(df_inputs.index[12]))
plt.show()
# Run full mode simulation
report = engine.run_full_mode(fn_build_report=fn_report)
# Turn report into dataframe
df_report = pd.DataFrame(report, index=df_inputs.index)
plot_irradiance(df_report)
Explanation: Default AOI loss behavior
In pvfactors:
qinc is the total incident irradiance on a surface, and it does not account for reflection losses
but qabs, which is the total absorbed irradiance by a surface, does accounts for it.
By default, pvfactors assumes that all reflection losses (or AOI losses) are diffuse; i.e. they do not depend on angle of incidence (AOI). Here is an example.
Let's run a full mode simulation (reflection equilibrium) and compare the calculated incident and absorbed irradiance on both sides of a PV row in a modeled PV array. We'll use 3% reflection for PV row front surfaces, and 5% for the back surfaces.
End of explanation
plot_aoi_losses(df_report)
Explanation: Let's plot the back AOI losses
End of explanation
# import utility function
from pvfactors.viewfactors.aoimethods import faoi_fn_from_pvlib_sandia
# Choose a module name
module_name = 'SunPower_128_Cell_Module___2009_'
# Create an faoi function
faoi_function = faoi_fn_from_pvlib_sandia(module_name)
# Plot faoi function values
aoi_values = np.linspace(0, 180, 100)
faoi_values = faoi_function(aoi_values)
f, ax = plt.subplots()
ax.plot(aoi_values, faoi_values)
ax.set_title('fAOI values for pvlib\'s {}'.format(module_name))
ax.set_ylabel('fAOI values')
ax.set_xlabel('AOI angles measured from "horizontal" [deg]')
plt.show()
Explanation: As shown above, by default pvfactors apply constant values of AOI losses for all the surfaces in the system, and for all the incident irradiance components:
3% loss for the irradiance incident on front of PV rows, which corresponds to the chosen rho_front in the irradiance model
5% loss for the irradiance incident on back of PV rows, which corresponds to the chosen rho_back in the irradiance model
Use an fAOI function in the irradiance model
The next step that can improve the AOI loss calculation, especially for the PV row front surface that receives a lot of direct light, would be to use reflection losses that would be dependent on the AOI, and that would be applied to all the irradiance model components: direct, circumsolar, and horizon light components.
What is an fAOI function?
The fAOI function that the users need to provide takes an angle of incidence as input (AOI measured in degrees and against the surface horizontal - from 0 to 180 deg, not against the surface normal vector - which would have been from 0 to 90 deg), and it returns a transmission value for the incident light. So it's effectively a factor that removes reflection losses.
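For instance, a hand-rolled function with that signature might look like the following (a hypothetical sketch for illustration only, not a validated optical model): roughly 1% loss at normal incidence, growing towards grazing angles.
import numpy as np
def simple_faoi(aoi_deg):
    # aoi_deg is measured from the surface plane (0-180 deg); 90 deg is normal incidence
    aoi_from_normal = np.abs(90.0 - np.asarray(aoi_deg, dtype=float))
    transmission = 0.99 * np.cos(np.radians(aoi_from_normal)) ** 0.05
    return np.clip(transmission, 0.0, 1.0)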
Let's see what this looks like. First, let's create such a function using a pvfactors utility function, and then we'll plot it.
Given a pvlib module database name, you can create an fAOI function as follows using pvfactors.
End of explanation
# Create irradiance model with fAOI function
irradiance_model = HybridPerezOrdered(faoi_fn_front=faoi_function, faoi_fn_back=faoi_function)
Explanation: As expected, there are fewer reflection losses for light rays at normal incidence to the surface than at any other angle of incidence.
Use the fAOI function
It's then easy to use the created fAOI function in the irradiance models. It just has to be passed to the model at initialization.
For this example, we will use the same fAOI function for the front and back surfaces of the PV rows.
End of explanation
# Create engine
engine = PVEngine(pvarray, irradiance_model=irradiance_model)
# Fit engine to data
engine.fit(df_inputs.index, df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
albedo)
# Run full mode simulation
report = engine.run_full_mode(fn_build_report=fn_report)
# Turn report into dataframe
df_report = pd.DataFrame(report, index=df_inputs.index)
Explanation: Then pass the model to the PVEngine and run the simulation as usual.
End of explanation
plot_irradiance(df_report)
plot_aoi_losses(df_report)
Explanation: Let's now see what the irradiance and AOI losses look like.
End of explanation
# first let's discretize the PV row sides
pvarray_parameters.update({
'cut': {1: {'front': 5, 'back': 5}}
})
# Create a new pv array
pvarray = OrderedPVArray.init_from_dict(pvarray_parameters)
Explanation: We can now see the changes in AOI losses, which now use the fAOI function for the direct, circumsolar, and horizon light components. But it still uses the constant rho_front and rho_back values for the reflection and isotropic components of the incident light on the surfaces.
Advanced: use an fAOI function for the (ground and array) reflection and isotropic components
The more advanced use is to apply the fAOI losses to the reflection and isotropic component of the light incident on the PV row surfaces.
In order to do so you simply need to pass the fAOI function to the view factor calculator before initializing the PVEngine.
In this case, the simulation workflow will be as follows:
the PVEngine will still calculate the equilibrium of reflections assuming diffuse surfaces and constant reflection losses
it will then use the calculated radiosity values and apply the fAOI using an integral combining the AOI losses and the view factor integrands, as described in the theory section, and similarly to Marion, B., et al (2017)
A word of caution
The users should be careful when using fAOI losses with the view factor calculator for the following reasons:
in order to be fully consistent in the PVEngine calculations, it is wiser to re-calculate a global hemispherical reflectivity value using the fAOI function, which will be used in the reflection equilibrium calculation
the method used for accounting fAOI losses in reflections is physically valid only if the surfaces are "infinitesimal" because it uses view factor formulas only valid in this case (see http://www.thermalradiation.net/sectionb/B-71.html). So in order to make it work in pvfactors, you'll need to discretize the PV row sides into smaller segments
the method relies on the numerical calculation of an integral, and that calculation will converge only given a sufficient number of integral points (which can be provided to the pvfactors view factor calculator). Marion, B., et al (2017) seems to be using 180 points, but in pvfactors' implementation it doesn't look like it's enough for the integral to converge, so we'll use 1000 integral points in this example
the two points above slow down the computation time by an order of magnitude. 8760 simulations that normally take a couple of seconds to run with pvfactors's full mode can then take up to a minute
Apply fAOI losses to reflection terms
Discretize the PV row sides of the PV array:
End of explanation
from pvfactors.viewfactors import VFCalculator
vf_calculator = VFCalculator(faoi_fn_front=faoi_function, faoi_fn_back=faoi_function,
n_aoi_integral_sections=1000)
Explanation: Add fAOI losses to the view factor calculator, and use 1000 integration points
End of explanation
# For back PV row surface
is_back = True
rho_back = vf_calculator.vf_aoi_methods.rho_from_faoi_fn(is_back)
# For front PV row surface
is_back = False
rho_front = vf_calculator.vf_aoi_methods.rho_from_faoi_fn(is_back)
# Print results
print('Reflectivity values for front side: {}, and back side: {}'.format(rho_front, rho_back))
Explanation: Re-calculate global hemispherical reflectivity values based on fAOI function
End of explanation
irradiance_model = HybridPerezOrdered(rho_front=rho_front, rho_back=rho_back,
faoi_fn_front=faoi_function, faoi_fn_back=faoi_function)
Explanation: Since we're using the same fAOI function for front and back sides, we now get the same global hemispherical reflectivity values.
We can now create the irradiance model.
End of explanation
# Create engine
engine = PVEngine(pvarray, vf_calculator=vf_calculator,
irradiance_model=irradiance_model)
# Fit engine to data
engine.fit(df_inputs.index, df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
albedo)
# Plot pvarray shapely geometries
f, ax = plt.subplots(figsize=(8, 4))
ax = pvarray.plot_at_idx(12, ax, with_surface_index=True)
plt.title('Modeled PV array at {}'.format(df_inputs.index[14]))
plt.show()
Explanation: Simulations can then be run the usual way:
End of explanation
# Run full mode simulation
report = engine.run_full_mode(fn_build_report=fn_report)
# Turn report into dataframe
df_report = pd.DataFrame(report, index=df_inputs.index)
Explanation: Run the simulation:
End of explanation
plot_irradiance(df_report)
plot_aoi_losses(df_report)
Explanation: Let's now see what the irradiance and AOI losses look like.
End of explanation
# Define the parameters for the irradiance model and the view factor calculator
irradiance_params = {'rho_front': rho_front, 'rho_back': rho_back,
'faoi_fn_front': faoi_function, 'faoi_fn_back': faoi_function}
vf_calculator_params = {'faoi_fn_front': faoi_function, 'faoi_fn_back': faoi_function,
'n_aoi_integral_sections': 1000}
Explanation: This is the way to apply fAOI losses to all the irradiance components in a pvfactors simulation.
Doing all of the above using the "run functions"
When using the "run functions", you'll just need to define the parameters in advance and then pass it to
the functions.
End of explanation
from pvfactors.run import run_timeseries_engine
# run simulations in parallel mode
report_from_fn = run_timeseries_engine(fn_report, pvarray_parameters, df_inputs.index,
df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
albedo,
irradiance_model_params=irradiance_params,
vf_calculator_params=vf_calculator_params)
# Turn report into dataframe
df_report_from_fn = pd.DataFrame(report_from_fn, index=df_inputs.index)
plot_irradiance(df_report_from_fn)
plot_aoi_losses(df_report_from_fn)
Explanation: Using run_timeseries_engine()
End of explanation
class ReportBuilder(object):
    """Class for building the reports with multiprocessing"""
@staticmethod
def build(pvarray):
pvrow = pvarray.ts_pvrows[1]
report = {'qinc_front': pvrow.front.get_param_weighted('qinc'),
'qabs_front': pvrow.front.get_param_weighted('qabs'),
'qinc_back': pvrow.back.get_param_weighted('qinc'),
'qabs_back': pvrow.back.get_param_weighted('qabs')}
# Calculate AOI losses
report['aoi_losses_back_%'] = (report['qinc_back'] - report['qabs_back']) / report['qinc_back'] * 100.
report['aoi_losses_front_%'] = (report['qinc_front'] - report['qabs_front']) / report['qinc_front'] * 100.
# Return report
return report
@staticmethod
def merge(reports):
report = reports[0]
keys = report.keys()
for other_report in reports[1:]:
for key in keys:
report[key] = list(report[key])
report[key] += list(other_report[key])
return report
class FaoiClass(object):
    """Class for passing the faoi function to the engine"""
@staticmethod
def faoi(*args, **kwargs):
fn = faoi_fn_from_pvlib_sandia(module_name)
return fn(*args, **kwargs)
Explanation: Using run_parallel_engine()
Because of Python's multiprocessing, and because functions cannot be pickled in Python, the functions need to be wrapped up into classes.
End of explanation
# Define the parameters for the irradiance model and the view factor calculator
irradiance_params = {'rho_front': rho_front, 'rho_back': rho_back,
'faoi_fn_front': FaoiClass, 'faoi_fn_back': FaoiClass}
vf_calculator_params = {'faoi_fn_front': FaoiClass, 'faoi_fn_back': FaoiClass,
'n_aoi_integral_sections': 1000}
from pvfactors.run import run_parallel_engine
# run simulations in parallel mode
report_from_fn = run_parallel_engine(ReportBuilder, pvarray_parameters, df_inputs.index,
df_inputs.dni, df_inputs.dhi,
df_inputs.solar_zenith, df_inputs.solar_azimuth,
df_inputs.surface_tilt, df_inputs.surface_azimuth,
albedo,
irradiance_model_params=irradiance_params,
vf_calculator_params=vf_calculator_params)
# Turn report into dataframe
df_report_from_fn = pd.DataFrame(report_from_fn, index=df_inputs.index)
plot_irradiance(df_report_from_fn)
plot_aoi_losses(df_report_from_fn)
Explanation: Pass the objects through the dictionaries and run the simulation
End of explanation |
13,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Standardization of data from INEGI's 2017 Geostatistical Yearbooks (Anuarios Geoestadisticos)
1. Introduction
Parameters that come from this source
Step1: 2. Downloading the data
Each state has a page listing its geostatistical yearbooks. The quickest way to get the links to the yearbooks is to go to the INEGI library (http://www.beta.inegi.org.mx/app/publicaciones/) and search for the word "Anuario".
Step2: Extracting the indexes
Knowing what information each sheet of the geostatistical index contains can be very valuable, and it is necessary in order to write a function that iterates properly over the files of all the states, because each state's geostatistical yearbooks have slight variations that prevent a direct iteration.
Step3: The indexes obtained this way will be cleaned manually in Excel.
Standardization of data for parameters.
P0610 Electricity sales
Because the electricity parameter indexes lack structure, they had to be standardized manually in Excel. With the standardized indexes it is now possible to build an iterator
Step4: Select the rows corresponding to the volume of energy sales in MW/h
Step5: Extract data for all the cities
Using a function applied to the ventaselec dataframe, we obtain the electricity sales data for all the states. This function includes lines to verify the extracted datasets. | Python Code:
descripciones = {
'P0610': 'Ventas de electricidad',
'P0701': 'Longitud total de la red de carreteras del municipio (excluyendo las autopistas)'
}
# Librerias utilizadas
import pandas as pd
import sys
import urllib
import os
import csv
import zipfile
# Configuracion del sistema
print('Python {} on {}'.format(sys.version, sys.platform))
print('Pandas version: {}'.format(pd.__version__))
import platform; print('Running on {} {}'.format(platform.system(), platform.release()))
Explanation: Standardization of data from INEGI's 2017 Geostatistical Yearbooks (Anuarios Geoestadisticos)
1. Introduction
Parameters that come from this source:
ID |Description
---|:----------
P0610|Electricity sales
P0701|Total length of the municipality's road network (excluding highways)
End of explanation
raiz = 'http://internet.contenidos.inegi.org.mx/contenidos/Productos/prod_serv/contenidos/espanol/bvinegi/productos/nueva_estruc/anuarios_2017/'
# El diccionario tiene como llave la CVE_EDO y dirige hacia la liga de descarga del archivo zip con las tablas del
# Anuario Geoestadístico de cada estado
links = {
'01': raiz + '702825092078.zip',
'02': raiz + '702825094874.zip',
'03': raiz + '702825094881.zip',
'04': raiz + '702825095109.zip',
'05': raiz + '702825095406.zip',
'06': raiz + '702825092061.zip',
'07': raiz + '702825094836.zip',
'08': raiz + '702825092139.zip',
'09': raiz + '702825094683.zip',
'10': raiz + '702825092115.zip',
'11': raiz + '702825092146.zip',
'12': raiz + '702825094690.zip',
'13': raiz + '702825095093.zip',
'14': raiz + '702825092085.zip',
'15': raiz + '702825094706.zip',
'16': raiz + '702825092092.zip',
'17': raiz + '702825094713.zip',
'18': raiz + '702825092054.zip',
'19': raiz + '702825094911.zip',
'20': raiz + '702825094843.zip',
'21': raiz + '702825094973.zip',
'22': raiz + '702825092108.zip',
'23': raiz + '702825095130.zip',
'24': raiz + '702825092122.zip',
'25': raiz + '702825094898.zip',
'26': raiz + '702825094904.zip',
'27': raiz + '702825095123.zip',
'28': raiz + '702825094928.zip',
'29': raiz + '702825096212.zip',
'30': raiz + '702825094980.zip',
'31': raiz + '702825095116.zip',
'32': raiz + '702825092047.zip'
}
# Descarga de archivos a carpeta local
destino = r'D:\PCCS\00_RawData\01_CSV\AGEO\2017'
archivos = {} # Diccionario para guardar memoria de descarga
for k,v in links.items():
archivo_local = destino + r'\{}.zip'.format(k)
if os.path.isfile(archivo_local):
print('Ya existe el archivo: {}'.format(archivo_local))
archivos[k] = archivo_local
else:
print('Descargando {} ... ... ... ... ... '.format(archivo_local))
urllib.request.urlretrieve(v, archivo_local) #
archivos[k] = archivo_local
print('se descargó {}'.format(archivo_local))
# Descompresión de archivos de estado
unzipped = {}
for estado, comprimido in archivos.items():
target = destino + '\\' + estado
if os.path.isdir(target):
print('Ya existe el directorio: {}'.format(target))
unzipped[estado] = target
else:
print('Descomprimiendo {} ... ... ... ... ... '.format(target))
descomprimir = zipfile.ZipFile(comprimido, 'r')
descomprimir.extractall(target)
descomprimir.close
unzipped[estado] = target
Explanation: 2. Downloading the data
Each state has a page listing its geostatistical yearbooks. The quickest way to get the links to the yearbooks is to go to the INEGI library (http://www.beta.inegi.org.mx/app/publicaciones/) and search for the word "Anuario" in the search field.
CVE_EDO |Nombre|URL
---|:---|:----------
01|Aguascalientes|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092078
02|Baja California|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094874
03|Baja California Sur|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094881
04|Campeche|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095109
05|Coahuila de Zaragoza|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095406
06|Colima|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092061
07|Chiapas|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094836
08|Chihuahua|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092139
09|Ciudad de México|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094683
10|Durango|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092115
11|Guanajuato|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092146
12|Guerrero|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094690
13|Hidalgo|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095093
14|Jalisco|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092085
15|México|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094706
16|Michoacán de Ocampo|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092092
17|Morelos|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094713
18|Nayarit|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092054
19|Nuevo León|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094911
20|Oaxaca|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094843
21|Puebla|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094973
22|Querétaro|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092108
23|Quintana Roo|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095130
24|San Luis Potosí|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092122
25|Sinaloa|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094898
26|Sonora|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094904
27|Tabasco|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095123
28|Tamaulipas|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094928
29|Tlaxcala|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825096212
30|Veracruz de Ignacio de la Llave|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825094980
31|Yucatán|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825095116
32|Zacatecas|http://www.beta.inegi.org.mx/app/biblioteca/ficha.html?upc=702825092047
Dentro de cada página, se incluye una liga directa para descargar un archivo comprimido con las tablas de datos de cada anuario geoestadítico. La lista links contiene estas URL y se utilizará para sistematizar la descarga de datos.
End of explanation
unzipped
# Extraer indices
indices = {}
for estado, ruta in unzipped.items():
for file in os.listdir(ruta):
if file.endswith('.xls'):
path = ruta + '\\' + file
indice = pd.read_excel(path, sheetname='Índice', skiprows = 1) # Primera lectura al indice para sacar columnas
dtypes = list(indice)
tempdic = {}
for i in dtypes:
tempdic[i] = 'str'
indice = pd.read_excel(path,
sheetname='Índice',
skiprows = 1,
dtype = tempdic).dropna(how = 'all') # Segunda lectura al indice ya con dtypes
name = list(indice)[0] # Guarda el nombre del indice
cols = []
for i in range(len(list(indice))):
cols.append('col{}'.format(i)) # Crea nombres estandar de columna
indice.columns = cols # Asigna nombres de columna
indice['indice'] = name
indice['file'] = file
if estado not in indices.keys(): # Crea un diccionario para cada estado, si no existe
indices[estado] = {}
indices[estado][name] = indice
print('Procesado {} |||NOMBRE:||| {}; [{}]'.format(file, name, len(cols))) # Imprime los resultados del proceso
# Reordenar los dataframes por tipo
indices_2 = {}
for estado in indices.keys():
for indice in indices[estado].keys():
if indice not in indices_2.keys():
indices_2[indice] = {}
indices_2[indice][estado] = indices[estado][indice]
# Convertir indices en archivos unicos.
finalindexes = {}
for i in indices_2.keys():
print(i)
frameslist = []
for estado in indices_2[i].keys():
frame = indices_2[i][estado]
frame['estado'] = estado
frameslist.append(frame)
fullindex = pd.concat(frameslist)
finalindexes[i] = fullindex
print('Hecho: {}\n'.format(i))
# Escribir archivos xlsx
path = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017\indices'
for indice in finalindexes.keys():
file = path+'\\'+indice+'.xlsx'
writer = pd.ExcelWriter(file)
finalindexes[indice].to_excel(writer, sheet_name = 'Indice')
writer.save()
print('[{}] lineas - archivo {}'.format(len(finalindexes[indice]), file))
Explanation: Extracting the indexes
Knowing what information each sheet of the geostatistical index contains can be very valuable, and it is necessary in order to write a function that iterates properly over the files of all the states, because each state's geostatistical yearbooks have slight variations that prevent a direct iteration.
End of explanation
# Importar dataset de índices
f_indice = r'D:\PCCS\01_Dmine\Datasets\AGEO\2017\indices\Limpios\Electricidad.xlsx'
ds_indices = pd.read_excel(f_indice, dtype={'Numeral':'str', 'estado':'str'}).set_index('estado')
ds_indices.head()
Explanation: The indexes obtained this way will be cleaned manually in Excel.
Standardization of data for parameters.
P0610 Electricity sales
Because the electricity parameter indexes lack structure, they had to be standardized manually in Excel. With the standardized indexes it is now possible to build an iterator
End of explanation
# Dataframe con índice de hojas sobre el tema "Ventas de electricidad"
ventaselec = ds_indices[ds_indices['Units'] == '(Megawatts-hora)']
ventaselec.head()
len(ventaselec)
# Crear columna con rutas
path = r'D:\PCCS\00_RawData\01_CSV\AGEO\2017'
ventaselec['path'] = path+'\\'+ventaselec.index+'\\'+ventaselec['file']
# Definir función para traer datos a python
unnameds = set(['Unnamed: '+str(i) for i in range(0, 50)]) # Lista 'Unnamed: x' de 0 a 50
def get_ventas(path, sheet, estado):
temp = pd.ExcelFile(path)
temp = temp.parse(sheet, header = 6).dropna(axis = 0, how='all').dropna(axis = 1, how='all')
# Elimina las columnas unnamed
dropplets = set(temp.columns).intersection(unnameds)
temp = temp.drop(dropplets, axis = 1)
temp = temp.dropna(axis = 0, how='all')
temp = temp.reset_index().drop('index', axis = 1)
# Identifica los últimos renglones, que no contienen datos
col0 = temp.columns[0] # Nombre de la columna 0, para usarlo en un chingo de lugares. Bueno 3
try: tempnotas = temp[col0][temp[col0] == 'Nota:'].index[0] # Para las hojas que terminan en 'Notas'
except: tempnotas = temp[col0][temp[col0] == 'a/'].index[0] # Para las hojas que terminan en 'a/'
print(tempnotas)
# Aparta los renglones después de "a/"
trashes = temp.iloc[tempnotas:-1]
# Elimina los renglones después de "a/"
temp = temp.iloc[0:tempnotas]
# Crear columna de estado y renombrar la primera columna para poder concatenar datframes más tarde.
temp['CVE_EDO'] = estado
temp = temp.rename(columns={col0:'NOM_MUN'})
print(type(temp))
return temp, trashes
temp1.columns[0]
temp1[temp1.columns[0]]
Explanation: Select the rows corresponding to the volume of energy sales in MW/h
End of explanation
# Funcion para extraer datos
def getdata(serie, estado):
path = serie['path']
sheet = serie['Numeral']
print('{}\n{}'.format('-'*30, path)) # Imprime la ruta hacia el archivo
print('Hoja: {}'.format(sheet)) # Imprime el nombre de la hoja que se va a extraer
temp = get_ventas(path, sheet, estado)
print(temp.iloc[[0, -1]][temp.columns[0]])
print(list(temp))
print(('len = {}'.format(len(temp))))
return temp
ventasdic = {}
trashesdic = {}
for estado in ventaselec.index:
ventasdic[estado], trashesdic[estado] = getdata(ventaselec.loc[estado], estado)
ventasdic['09']
ventaselec['path']
Explanation: Extract data for all the cities
Using a function applied to the ventaselec dataframe, we obtain the electricity sales data for all the states. This function includes lines to verify the extracted datasets.
End of explanation |
13,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Baseline prediction for homework type
The baseline prediction method we use for predicting which homework the notebook came from uses the popular plagiarism detector JPlag.
We feed each notebook through our pipeline to eliminate variable names, string declarations, comments, and import names
Step1: Running Jplag
To run jplag, we need to write all of our files to a directory, and then setup the command with the .jar file that needs to be run on the command line
Step2: After we run the JPlag command
While JPlag produces a nice report that is human readable, we want the pairwise similarities, which are printed out by JPlag as it runs. By parsing the output file we can get these similarities that we will use for prediction
Step3: Inter and Intra Similarities
The first measure that we can use to determine if something reasonable is happening is to look at, for each homework, the average similarity of two notebooks both pulled from that homework, and the average similarity of a notebook pulled from that homework and any notebook in the corpus not pulled from that homework. These are printed below
Step4: Actual Prediction
While the above results are helpful, it is better to use a classifier that uses more information. The setup is as follows
Step5: Results
Below are the results of the prediction. We can see a good deal of predictive power, though there is room for improvement | Python Code:
# First step is to load a balanced dataset of homeworks
import sys
home_directory = '/dfs/scratch2/fcipollone'
sys.path.append(home_directory)
import numpy as np
from nbminer.notebook_miner import NotebookMiner
hw_filenames = np.load('../homework_names_jplag_combined_per_student.npy')
min_val = min([len(temp) for temp in hw_filenames])
print(min_val)
hw_notebooks = [[NotebookMiner(filename) for filename in temp[:min_val]] for temp in hw_filenames]
# Now we do the transformation, storing the results into the variable hw_code
from nbminer.pipeline.pipeline import Pipeline
from nbminer.features.features import Features
from nbminer.preprocess.get_ast_features import GetASTFeatures
from nbminer.preprocess.get_imports import GetImports
import tqdm
hw_code = []
for corp in tqdm.tqdm(hw_notebooks):
temp = []
for nb in corp:
a = Features([nb])
gastf = GetASTFeatures()
gi = GetImports()
pipe = Pipeline([gastf, gi])
a = pipe.transform(a)
code = a.get_notebook(0).get_all_asts()
lines = code.split('\n')
lines = [line for line in lines if line != '']
temp.append('\n\n'.join(lines))
hw_code.append(temp)
# Print an example to see what the result of the transformation looks like.
print(hw_code[0][0])
Explanation: Baseline prediction for homework type
The baseline prediction method we use for predicting which homework the notebook came from uses the popular plagiarism detector JPlag.
We feed each notebook through our pipeline to eliminate variable names, string declarations, comments, and import names
End of explanation
import os
for i in range(len(hw_code)):
if i < 2:
continue
base_name = 'plagiarism/homework_code_cleaned_hw2plus/hw' + str(i) + '_'
for j, code_body in enumerate(hw_code[i]):
fname = base_name + 'student_' + str(j) + ".py"
f = open(fname,'w')
f.write(code_body)
f.close()
import os
jar_file = 'plagiarism/jplag-2.11.9-SNAPSHOT-jar-with-dependencies.jar'
lang = 'python3'
results = 'plagiarism/results_cleaned_hw2plus'
students = 'plagiarism/homework_code_cleaned_hw2plus'
command = "java -jar " + jar_file + " -l " + lang + " -r " + results + " -s " + students + " -m 20"
print("nohup",command,"> plagiarism/experiment_cleaned_hw2plus.out &")
Explanation: Running Jplag
To run jplag, we need to write all of our files to a directory, and then setup the command with the .jar file that needs to be run on the command line
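If you prefer to launch the job from Python instead of pasting the printed nohup line into a shell, a rough sketch (assuming the same jar_file, lang, results, and students variables defined above) could use subprocess:

```python
import subprocess

# Hypothetical launcher for the JPlag command assembled above; stdout is
# captured to a file so the pairwise similarities can be parsed afterwards.
with open('plagiarism/experiment_cleaned_hw2plus.out', 'w') as log:
    subprocess.run(
        ['java', '-jar', jar_file, '-l', lang, '-r', results, '-s', students, '-m', '20'],
        stdout=log, stderr=subprocess.STDOUT, check=True,
    )
```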
End of explanation
output = open('plagiarism/experiment_cleaned_hw2plus.out','r')
lines = [line for line in output if line[:9] == 'Comparing']
len(lines)
# Create the dictionary of pairwise sims
my_dict = {}
for line in lines:
hw1 = line.split()[1].split('-')[0].split('.')[0]
hw2 = line.split()[1].split('-')[1].split('.')[0]
val = line.split()[2]
if hw1 not in my_dict:
my_dict[hw1] = {}
if hw2 not in my_dict:
my_dict[hw2] = {}
my_dict[hw1][hw2] = val
my_dict[hw2][hw1] = val
Explanation: After we run the JPlag command
While JPlag produces a nice report that is human readable, we want the pairwise similarities, which are printed out by JPlag as it runs. By parsing the output file we can get these similarities that we will use for prediction
End of explanation
import numpy as np
def get_avg_inter_intra_sims(sim_dict, hw):
cur_hw = 'hw' + str(hw)
in_vals = []
out_vals = []
for key in sim_dict.keys():
if key[:3] != cur_hw:
continue
for key2 in sim_dict[key].keys():
if key2[:3] != cur_hw:
out_vals.append(float(sim_dict[key][key2]))
else:
in_vals.append(float(sim_dict[key][key2]))
return in_vals, out_vals
for i in range(2,6):
intra_sims, inter_sims = get_avg_inter_intra_sims(my_dict, i)
print('Mean intra similarity for hw',i,'is',np.mean(intra_sims),'with std',np.std(intra_sims))
print('Mean inter similarity for hw',i,'is',np.mean(inter_sims),'with std',np.std(inter_sims))
print('----')
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 5, 10
def get_all_sims(sim_dict, hw):
cur_hw = 'hw' + str(hw)
sims = []
for key in sim_dict.keys():
for key2 in sim_dict[key].keys():
if key[:3] != cur_hw and key2[:3] != cur_hw:
continue
sims.append(float(sim_dict[key][key2]))
return sims
fig, axes = plt.subplots(6)
for i in range(2,6):
axes[i].hist(get_all_sims(my_dict,i), bins=50)
Explanation: Inter and Intra Similarities
The first measure that we can use to determine if something reasonable is happening is to look at, for each homework, the average similarity of two notebooks both pulled from that homework, and the average similarity of a notebook pulled from that homework and any notebook in the corpus not pulled from that homework. These are printed below
End of explanation
from sklearn.model_selection import train_test_split
features = [key for key in my_dict]
feature_map = {}
test_features = set()
indices = [i for i in range(len(features))]
#import pdb; pdb.set_trace()
train, test = train_test_split(indices, test_size=.2)
for i in test:
test_features.add(features[i])
train_features = []
for i in train:
train_features.append(features[i])
for i, el in enumerate(train_features):
feature_map[el] = i
X = np.zeros((len(train),len(train)))
y = []
X_test = np.zeros((len(test), len(train)))
y_test = []
for i, el in enumerate(train_features):
for key in my_dict[el]:
if key not in feature_map:
continue
loc = feature_map[key]
X[i, loc] = my_dict[el][key]
y.append(int(el[2]))
for i, el in enumerate(test_features):
for key in my_dict[el]:
if key not in feature_map:
continue
loc = feature_map[key]
X_test[i, loc] = my_dict[el][key]
y_test.append(int(el[2]))
import sklearn
from sklearn.ensemble import RandomForestClassifier
clf = sklearn.ensemble.RandomForestClassifier(n_estimators=400, max_depth=4)
clf.fit(X, y)
clf.predict(X_test)
Explanation: Actual Prediction
While the above results are helpful, it is better to use a classifier that uses more information. The setup is as follows:
Split the data into train and test
For each notebook, generate a feature vector that is calculated as the similarity between the notebook and each notebook of the train set
Build a random forest classifier that uses this feature representation, and measure the performance
End of explanation
import numpy as np
np.sum(clf.predict(X_test)==y_test)/len(y_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(clf.predict(X_test),y_test)
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(cm, cmap=plt.cm.Blues)
plt.show()
clfi = clf.feature_importances_
sa = []
for i in range(len(clfi)):
sa.append((clfi[i], train_features[i]))
sra = [el for el in reversed(sorted(sa))]
for i in range(100):
print(sra[i])
Explanation: Results
Below are the results of the prediction. We can see a good deal of predictive power, though there is room for improvement
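A small follow-up sketch (using the clf, X_test, and y_test objects from the cells above) that breaks the overall accuracy down per homework:

```python
from sklearn.metrics import classification_report

# Per-homework precision and recall on the same held-out split.
print(classification_report(y_test, clf.predict(X_test)))
```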
End of explanation |
13,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decision Analysis
The Price is Right problem
On November 1, 2007, contestants named Letia and Nathaniel appeared on
The Price is Right, an American game show. They competed
in a game called The Showcase, where the objective is to
guess the price of a showcase of prizes. The contestant who comes
closest to the actual price of the showcase, without going over, wins
the prizes.
Nathaniel went first. His showcase included a dishwasher, a wine
cabinet, a laptop computer, and a car. He bid \$26,000.
Letia’s showcase included a pinball machine, a video arcade game, a pool
table, and a cruise of the Bahamas. She bid \$21,500.
The actual price of Nathaniel’s showcase was \$25,347. His bid was too
high, so he lost.
The actual price of Letia’s showcase was \$21,578. She was only off by
\$78, so she won her showcase and, because her bid was off by less than
\$250, she also won Nathaniel’s showcase.
For a Bayesian thinker, this scenario suggests several questions
Step1: This shows the distribution of prices for these
showcases. The most common value for both showcases is around \$28,000,
but the first showcase has a second mode near \$50,000, and the second
showcase is occasionally worth more than \$70,000.
These distributions are based on actual data, but they have been
smoothed by Gaussian kernel density estimation (KDE). Before we go on, I
want to take a detour to talk about probability density functions and
KDE.
Probability density functions
So far we have been working with probability mass functions, or PMFs. A
PMF is a map from each possible value to its probability. In my
implementation, a Pmf object provides a method named Prob
that takes a value and returns a probability, also known as a
probability mass.
A probability density function, or PDF, is the
continuous version of a PMF, where the possible values make up a
continuous range rather than a discrete set.
In mathematical notation, PDFs are usually written as functions; for
example, here is the PDF of a Gaussian distribution with mean 0 and
standard deviation 1
Step2: Density takes a value, x, and returns the
corresponding density. MakePmf makes a discrete
approximation to the PDF.
Pdf provides an implementation of MakePmf, but
not Density, which has to be provided by a child class.
A concrete type is a child class that extends an
abstract type and provides an implementation of the missing methods. For
example, GaussianPdf extends Pdf and provides
Density
Step3: __init__ takes mu and sigma, which are the
mean and standard deviation of the distribution, and stores them as
attributes.
Density uses a function from scipy.stats to
evaluate the Gaussian PDF. The function is called norm.pdf
because the Gaussian distribution is also called the “normal”
distribution.
The Gaussian PDF is defined by a simple mathematical function, so it is
easy to evaluate. And it is useful because many quantities in the real
world have distributions that are approximately Gaussian.
But with real data, there is no guarantee that the distribution is
Gaussian or any other simple mathematical function. In that case we can
use a sample to estimate the PDF of the whole population.
For example, in The Price Is Right data, we have 313
prices for the first showcase. We can think of these values as a sample
from the population of all possible showcase prices.
This sample includes the following values (in order)
Step4: __init__ takes a sample and computes a kernel density estimate. The
result is a gaussian_kde object that provides an evaluate
method.
Density takes a value, calls gaussian_kde.evaluate, and
returns the resulting density.
Finally, here’s an outline of the code I used to generate
Figure [fig.price1]
Step5: pdf is a Pdf object, estimated by KDE.
pmf is a Pmf object that approximates the Pdf by evaluating
the density at a sequence of equally spaced values.
linspace stands for “linear space.” It takes a range,
low and high, and the number of points,
n, and returns a new numpy array with
n elements equally spaced between low and
high, including both.
And now back to The Price is Right.
Modeling the contestants
The PDFs in Figure [fig.price1] estimate the distribution of possible
prices. If you were a contestant on the show, you could use this
distribution to quantify your prior belief about the price of each
showcase (before you see the prizes).
To update these priors, we have to answer these questions
Step6: Again, we use the variance of diff to estimate the variance
of error. This estimate is not perfect because contestants’
bids are sometimes strategic; for example, if Player 2 thinks that
Player 1 has overbid, Player 2 might make a very low bid. In that case
diff does not reflect error. If this happens a
lot, the observed variance in diff might overestimate the
variance in error. Nevertheless, I think it is a reasonable
modeling decision.
As an alternative, someone preparing to appear on the show could
estimate their own distribution of error by watching
previous shows and recording their guesses and the actual prices.
Likelihood
Now we are ready to write the likelihood function. As usual, I define a
new class that extends thinkbayes.Suite
Step7: pmf represents the prior distribution and
player is a Player object as described in the previous
section. In Likelihood hypo is the hypothetical price of the showcase.
data is the contestant’s best guess at the price.
error is the difference, and like is the
likelihood of the data, given the hypothesis.
ErrorDensity is defined in Player
Step8: player and opponent are Player
objects.
GainCalculator provides ExpectedGains, which
computes a sequence of bids and the expected gain for each bid | Python Code:
from price import *
import matplotlib.pyplot as plt
player1, player2 = MakePlayers(path='../code')
MakePrice1(player1, player2)
plt.legend();
Explanation: Decision Analysis
The Price is Right problem
On November 1, 2007, contestants named Letia and Nathaniel appeared on
The Price is Right, an American game show. They competed
in a game called The Showcase, where the objective is to
guess the price of a showcase of prizes. The contestant who comes
closest to the actual price of the showcase, without going over, wins
the prizes.
Nathaniel went first. His showcase included a dishwasher, a wine
cabinet, a laptop computer, and a car. He bid \$26,000.
Letia’s showcase included a pinball machine, a video arcade game, a pool
table, and a cruise of the Bahamas. She bid \$21,500.
The actual price of Nathaniel’s showcase was \$25,347. His bid was too
high, so he lost.
The actual price of Letia’s showcase was \$21,578. She was only off by
\$78, so she won her showcase and, because her bid was off by less than
\$250, she also won Nathaniel’s showcase.
For a Bayesian thinker, this scenario suggests several questions:
Before seeing the prizes, what prior beliefs should the contestant
have about the price of the showcase?
After seeing the prizes, how should the contestant update those
beliefs?
Based on the posterior distribution, what should the contestant bid?
The third question demonstrates a common use of Bayesian analysis:
decision analysis. Given a posterior distribution, we can choose the bid
that maximizes the contestant’s expected return.
This problem is inspired by an example in Cameron Davidson-Pilon’s book,
Bayesian Methods for Hackers. The code I wrote for this
chapter is available from http://thinkbayes.com/price.py; it reads
data files you can download from
http://thinkbayes.com/showcases.2011.csv and
http://thinkbayes.com/showcases.2012.csv. For more information see
Section [download].
The prior
To choose a prior distribution of prices, we can take advantage of data
from previous episodes. Fortunately, fans of the show keep detailed
records. When I corresponded with Mr. Davidson-Pilon about his book, he
sent me data collected by Steve Gee at http://tpirsummaries.8m.com. It
includes the price of each showcase from the 2011 and 2012 seasons and
the bids offered by the contestants.
End of explanation
class Pdf(object):
def Density(self, x):
raise UnimplementedMethodException()
def MakePmf(self, xs):
pmf = Pmf()
for x in xs:
pmf.Set(x, self.Density(x))
pmf.Normalize()
return pmf
Explanation: This shows the distribution of prices for these
showcases. The most common value for both showcases is around \$28,000,
but the first showcase has a second mode near \$50,000, and the second
showcase is occasionally worth more than \$70,000.
These distributions are based on actual data, but they have been
smoothed by Gaussian kernel density estimation (KDE). Before we go on, I
want to take a detour to talk about probability density functions and
KDE.
Probability density functions
So far we have been working with probability mass functions, or PMFs. A
PMF is a map from each possible value to its probability. In my
implementation, a Pmf object provides a method named Prob
that takes a value and returns a probability, also known as a
probability mass.
A probability density function, or PDF, is the
continuous version of a PMF, where the possible values make up a
continuous range rather than a discrete set.
In mathematical notation, PDFs are usually written as functions; for
example, here is the PDF of a Gaussian distribution with mean 0 and
standard deviation 1:
$$f(x) = \frac{1}{\sqrt{2 \pi}} \exp(-x^2/2)$$
For a given value of $x$, this function computes a probability density. A
density is similar to a probability mass in the sense that a higher
density indicates that a value is more likely.
But a density is not a probability. A density can be 0 or any positive
value; it is not bounded, like a probability, between 0 and 1.
If you integrate a density over a continuous range, the result is a
probability. But for the applications in this book we seldom have to do
that.
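To make that point concrete, here is a quick numerical check (a sketch using scipy, which this chapter already relies on): integrating the standard Gaussian density over a range does give back a probability.

```python
from scipy.stats import norm
from scipy.integrate import quad

# Area under the standard Gaussian PDF between -1 and 1 is ~0.68,
# the familiar probability of falling within one standard deviation.
area, _ = quad(norm.pdf, -1, 1)
print(area, norm.cdf(1) - norm.cdf(-1))
```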
Instead we primarily use probability densities as part of a likelihood
function. We will see an example soon.
Representing PDFs
To represent PDFs in Python, thinkbayes.py provides a class
named Pdf. Pdf is an abstract type, which means that it defines the interface a Pdf is
supposed to have, but does not provide a complete implementation. The
Pdf interface includes two methods, Density
and MakePmf:
End of explanation
class GaussianPdf(Pdf):
def __init__(self, mu, sigma):
self.mu = mu
self.sigma = sigma
def Density(self, x):
return scipy.stats.norm.pdf(x, self.mu, self.sigma)
Explanation: Density takes a value, x, and returns the
corresponding density. MakePmf makes a discrete
approximation to the PDF.
Pdf provides an implementation of MakePmf, but
not Density, which has to be provided by a child class.
A concrete type is a child class that extends an
abstract type and provides an implementation of the missing methods. For
example, GaussianPdf extends Pdf and provides
Density:
End of explanation
class EstimatedPdf(Pdf):
def __init__(self, sample):
self.kde = scipy.stats.gaussian_kde(sample)
def Density(self, x):
return self.kde.evaluate(x)
Explanation: __init__ takes mu and sigma, which are the
mean and standard deviation of the distribution, and stores them as
attributes.
Density uses a function from scipy.stats to
evaluate the Gaussian PDF. The function is called norm.pdf
because the Gaussian distribution is also called the “normal”
distribution.
The Gaussian PDF is defined by a simple mathematical function, so it is
easy to evaluate. And it is useful because many quantities in the real
world have distributions that are approximately Gaussian.
But with real data, there is no guarantee that the distribution is
Gaussian or any other simple mathematical function. In that case we can
use a sample to estimate the PDF of the whole population.
For example, in The Price Is Right data, we have 313
prices for the first showcase. We can think of these values as a sample
from the population of all possible showcase prices.
This sample includes the following values (in order):
$$28800, 28868, 28941, 28957, 28958$$
In the sample, no values appear
between 28801 and 28867, but there is no reason to think that these
values are impossible. Based on our background information, we expect
all values in this range to be equally likely. In other words, we expect
the PDF to be fairly smooth.
Kernel density estimation (KDE) is an algorithm that takes a sample and
finds an appropriately smooth PDF that fits the data. You can read
details at http://en.wikipedia.org/wiki/Kernel_density_estimation.
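As a standalone illustration (using scipy directly, independent of the thinkbayes classes), KDE applied to the handful of sample prices quoted above already assigns density to the unobserved values in between:

```python
from scipy.stats import gaussian_kde

sample = [28800, 28868, 28941, 28957, 28958]
kde = gaussian_kde(sample)
# 28850 never appears in the sample, but the smoothed estimate still
# gives it non-zero density.
print(kde.evaluate([28850, 28900]))
```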
scipy provides an implementation of KDE and
thinkbayes provides a class called
EstimatedPdf that uses it:
End of explanation
data = ReadData(path='../code')
cols = zip(*data)
price1, price2, bid1, bid2, diff1, diff2 = cols
pdf = thinkbayes.EstimatedPdf(price1)
low, high = 0, 75000
n = 101
xs = numpy.linspace(low, high, n)
pdf.kde.evaluate([3, 3])
pmf = pdf.MakePmf(xs)
thinkplot.Pmfs([pmf])
Explanation: __init__ takes a sample and computes a kernel density estimate. The
result is a gaussian_kde object that provides an evaluate
method.
Density takes a value, calls gaussian_kde.evaluate, and
returns the resulting density.
Finally, here’s an outline of the code I used to generate
Figure [fig.price1]:
End of explanation
MakePrice2(player1, player2)
Explanation: pdf is a Pdf object, estimated by KDE.
pmf is a Pmf object that approximates the Pdf by evaluating
the density at a sequence of equally spaced values.
linspace stands for “linear space.” It takes a range,
low and high, and the number of points,
n, and returns a new numpy array with
n elements equally spaced between low and
high, including both.
And now back to The Price is Right.
Modeling the contestants
The PDFs in Figure [fig.price1] estimate the distribution of possible
prices. If you were a contestant on the show, you could use this
distribution to quantify your prior belief about the price of each
showcase (before you see the prizes).
To update these priors, we have to answer these questions:
What data should we consider and how should we quantify it?
Can we compute a likelihood function; that is, for each hypothetical
value of price, can we compute the conditional
likelihood of the data?
To answer these questions, I am going to model the contestant as a
price-guessing instrument with known error characteristics. In other
words, when the contestant sees the prizes, he or she guesses the price
of each prize—ideally without taking into consideration the fact that
the prize is part of a showcase—and adds up the prices. Let’s call this
total guess.
Under this model, the question we have to answer is, “If the actual
price is price, what is the likelihood that the
contestant’s estimate would be guess?”
Or if we define
python
error = price - guess
then we could ask, “What is the likelihood that the contestant’s
estimate is off by error?”
To answer this question, we can use the historical data again.
Figure [fig.price2] shows the cumulative distribution of
diff, the difference between the contestant’s bid and the
actual price of the showcase.
The definition of diff is
python
diff = price - bid
When diff is negative, the bid is too high. As an aside, we
can use this distribution to compute the probability that the
contestants overbid: the first contestant overbids 25% of the time; the
second contestant overbids 29% of the time.
We can also see that the bids are biased; that is, they are more likely
to be too low than too high. And that makes sense, given the rules of
the game.
Finally, we can use this distribution to estimate the reliability of the
contestants’ guesses. This step is a little tricky because we don’t
actually know the contestant’s guesses; we only know what they bid.
So we’ll have to make some assumptions. Specifically, I assume that the
distribution of error is Gaussian with mean 0 and the same
variance as diff.
The Player class implements this model:
```python
class Player(object):
def __init__(self, prices, bids, diffs):
self.pdf_price = thinkbayes.EstimatedPdf(prices)
self.cdf_diff = thinkbayes.MakeCdfFromList(diffs)
mu = 0
sigma = numpy.std(diffs)
self.pdf_error = thinkbayes.GaussianPdf(mu, sigma)
```
prices is a sequence of showcase prices, bids
is a sequence of bids, and diffs is a sequence of diffs,
where again diff = price - bid.
pdf_price is the smoothed PDF of prices, estimated by KDE. cdf_diff
is the cumulative distribution of diff, which we saw in
Figure [fig.price2]. And pdf_error is the PDF that characterizes the
distribution of errors; where error = price - guess.
End of explanation
class Price(thinkbayes.Suite):
def __init__(self, pmf, player):
thinkbayes.Suite.__init__(self, pmf)
self.player = player
def Likelihood(self, data, hypo):
price = hypo
guess = data
error = price - guess
like = self.player.ErrorDensity(error)
return like
Explanation: Again, we use the variance of diff to estimate the variance
of error. This estimate is not perfect because contestants’
bids are sometimes strategic; for example, if Player 2 thinks that
Player 1 has overbid, Player 2 might make a very low bid. In that case
diff does not reflect error. If this happens a
lot, the observed variance in diff might overestimate the
variance in error. Nevertheless, I think it is a reasonable
modeling decision.
As an alternative, someone preparing to appear on the show could
estimate their own distribution of error by watching
previous shows and recording their guesses and the actual prices.
Likelihood
Now we are ready to write the likelihood function. As usual, I define a
new class that extends thinkbayes.Suite:
End of explanation
class GainCalculator(object):
def __init__(self, player, opponent):
self.player = player
self.opponent = opponent
def ExpectedGains(self, low=0, high=75000, n=101):
bids = numpy.linspace(low, high, n)
gains = [self.ExpectedGain(bid) for bid in bids]
return bids, gains
def ExpectedGain(self, bid):
suite = self.player.posterior
total = 0
for price, prob in sorted(suite.Items()):
gain = self.Gain(bid, price)
total += prob * gain
return total
def Gain(self, bid, price):
# if you overbid, you get nothing
if bid > price:
return 0
# otherwise compute the probability of winning
diff = price - bid
prob = self.ProbWin(diff)
# if you are within 250 dollars, you win both showcases
if diff <= 250:
return 2 * price * prob
else:
return price * prob
def ProbWin(self, diff):
prob = (self.opponent.ProbOverbid() +
self.opponent.ProbWorseThan(diff))
return prob
Explanation: pmf represents the prior distribution and
player is a Player object as described in the previous
section. In Likelihood hypo is the hypothetical price of the showcase.
data is the contestant’s best guess at the price.
error is the difference, and like is the
likelihood of the data, given the hypothesis.
ErrorDensity is defined in Player:
```python
class Player:
def ErrorDensity(self, error):
return self.pdf_error.Density(error)
```
ErrorDensity works by evaluating pdf_error at the given
value of error. The result is a probability density, so it
is not really a probability. But remember that Likelihood
doesn’t need to compute a probability; it only has to compute something
proportional to a probability. As long as the constant of
proportionality is the same for all likelihoods, it gets canceled out
when we normalize the posterior distribution.
And therefore, a probability density is a perfectly good likelihood.
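A toy check of that claim (plain Python, separate from the book's code): scaling every likelihood by the same constant leaves the normalized posterior unchanged.

```python
priors = [0.5, 0.3, 0.2]
likes = [0.8, 0.1, 0.4]            # likelihoods
scaled = [10 * x for x in likes]   # same likelihoods times an arbitrary constant

def posterior(priors, likes):
    unnorm = [p * l for p, l in zip(priors, likes)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

print(posterior(priors, likes))    # identical to the line below
print(posterior(priors, scaled))
```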
Update
Player provides a method that takes the contestant’s guess
and computes the posterior distribution:
```python
class Player
def MakeBeliefs(self, guess):
pmf = self.PmfPrice()
self.prior = Price(pmf, self)
self.posterior = self.prior.Copy()
self.posterior.Update(guess)
```
PmfPrice generates a discrete approximation to the PDF of
price, which we use to construct the prior.
PmfPrice uses MakePmf, which evaluates
pdf_price at a sequence of values:
```python
class Player
n = 101
price_xs = numpy.linspace(0, 75000, n)
def PmfPrice(self):
return self.pdf_price.MakePmf(self.price_xs)
```
To construct the posterior, we make a copy of the prior and then invoke
Update, which invokes Likelihood for each
hypothesis, multiplies the priors by the likelihoods, and renormalizes.
So let’s get back to the original scenario. Suppose you are Player 1 and
when you see your showcase, your best guess is that the total price of
the prizes is \$20,000.
Figure [fig.price3] shows prior and posterior beliefs about the actual
price. The posterior is shifted to the left because your guess is on the
low end of the prior range.
On one level, this result makes sense. The most likely value in the
prior is \$27,750, your best guess is \$20,000, and the mean of the
posterior is somewhere in between: \$25,096.
On another level, you might find this result bizarre, because it
suggests that if you think the price is \$20,000, then
you should believe the price is \$24,000.
To resolve this apparent paradox, remember that you are combining two
sources of information, historical data about past showcases and guesses
about the prizes you see.
We are treating the historical data as the prior and updating it based
on your guesses, but we could equivalently use your guess as a prior and
update it based on historical data.
If you think of it that way, maybe it is less surprising that the most
likely value in the posterior is not your original guess.
Optimal bidding
Now that we have a posterior distribution, we can use it to compute the
optimal bid, which I define as the bid that maximizes expected return
(see http://en.wikipedia.org/wiki/Expected_return).
I’m going to present the methods in this section top-down, which means I
will show you how they are used before I show you how they work. If you
see an unfamiliar method, don’t worry; the definition will be along
shortly.
To compute optimal bids, I wrote a class called
GainCalculator:
End of explanation
player1.MakeBeliefs(20000)
player2.MakeBeliefs(40000)
calc1 = GainCalculator(player1, player2)
calc2 = GainCalculator(player2, player1)
bids, gains = calc1.ExpectedGains()
thinkplot.Plot(bids, gains, label='Player 1')
print('Player 1 optimal bid', max(zip(gains, bids)))
bids, gains = calc2.ExpectedGains()
thinkplot.Plot(bids, gains, label='Player 2')
plt.legend();
Explanation: player and opponent are Player
objects.
GainCalculator provides ExpectedGains, which
computes a sequence of bids and the expected gain for each bid:
low and high specify the range of possible
bids; n is the number of bids to try.
ExpectedGains calls ExpectedGain, which
computes expected gain for a given bid:
ExpectedGain loops through the values in the posterior and
computes the gain for each bid, given the actual prices of the showcase.
It weights each gain with the corresponding probability and returns the
total.
ExpectedGain invokes Gain, which takes a bid
and an actual price and returns the expected gain.
End of explanation |
13,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading files
The iterator notation is easiest.
Step1: (The comma at the end suppresses extra newline). Can also use the object-oriented interface.
Step2: Reading all of the lines at once
Step3: Writing files
Writing one line at a time
Step4: Writing all of the lines at once
This approach does not add newlines, so add them yourself if needed.
Step5: Binary files
Open and close similar to text files, but use read() and write().
Step6: Format-specific binary I/O is available through standard modules, like Image.
Step7: Pickels!
The pickle is an internal Python format for writing arbitrary data to a file in a way that allows it to be read in again, intact. | Python Code:
f = open('kaiju_movies.dat')
for movie in f:
print movie,
f.close()
Explanation: Reading files
The iterator notation is easiest.
End of explanation
f = file('kaiju_movies.dat')
for movie in f:
print movie,
f.close()
Explanation: (The comma at the end suppresses extra newline). Can also use the object-oriented interface.
End of explanation
f = open('kaiju_movies.dat')
movies = f.readlines()
print movies
f.close()
Explanation: Reading all of the lines at once
End of explanation
dumb_monsters = ('Hedorah', 'Megalon', 'Gigan', 'Minilla')
f = open('monsters.txt', 'w')
for monster in dumb_monsters:
f.write(monster + '\n')
f.close()
Explanation: Writing files
Writing one line at a time
End of explanation
dumb_monsters = ('Hedorah', 'Megalon', 'Gigan', 'Minilla')
f = open('monsters2.txt', 'w')
f.writelines(dumb_monsters)
f.close()
Explanation: Writing all of the lines at once
This approach does not add newlines, so add them yourself if needed.
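One common way to add them, shown here as a small sketch in the same style (not part of the original example):

```python
dumb_monsters = ('Hedorah', 'Megalon', 'Gigan', 'Minilla')
f = open('monsters3.txt', 'w')
f.writelines(monster + '\n' for monster in dumb_monsters)  # append newlines explicitly
f.close()
```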
End of explanation
f = open('nikki.jpg', 'rb')
my_dog = f.read()
f.close()
# Do arbitrary stuff with data.
f = open('new_nikki.jpg', 'wb')
f.write(my_dog)
f.close()
Explanation: Binary files
Open and close similar to text files, but use read() and write().
End of explanation
from IPython.display import Image
puppeh = Image(filename = 'nikki.jpg')
puppeh
Explanation: Format-specific binary I/O is available through standard modules, like Image.
End of explanation
movies = [{'title': 'Godzilla', 'year': 1954}, {'title': 'Godzilla 2000: Millennium', 'year': 1999}]
import pickle
f = open('pickled_kaiju.pkl', 'wb')
pickle.dump(movies, f)
f.close()
f = open('pickled_kaiju.pkl', 'rb')
pickled_movies = pickle.load(f)
f.close()
print pickled_movies
Explanation: Pickles!
The pickle is an internal Python format for writing arbitrary data to a file in a way that allows it to be read in again, intact.
End of explanation |
13,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating STIX Content
Creating STIX Domain Objects
To create a STIX object, provide keyword arguments to the type's constructor
Step1: Certain required attributes of all objects will be set automatically if not provided as keyword arguments
Step2: Passing a value for type that does not match the class being constructed will cause an error
Step3: If not provided, id will be generated randomly. If you provide an
id argument, it must begin with the correct prefix
Step4: For indicators, pattern and pattern_type are required and cannot be set automatically. Trying to create an indicator that is missing one of these properties will result in an error
Step5: However, the required valid_from attribute on Indicators will be set to the current time if not provided as a keyword argument.
Once created, the object acts like a frozen dictionary. Properties can be accessed using the standard Python dictionary syntax
Step6: Or access properties using the standard Python attribute syntax
Step7: <div class="alert alert-warning">
**Warning**
Note that there are several attributes on these objects used for method names. Accessing those will return a bound method, not the attribute value.
</div>
Attempting to modify any attributes will raise an error
Step8: To update the properties of an object, see the Versioning section.
Creating a Malware object follows the same pattern
Step9: As with indicators, the type, id, created, and modified properties will be set automatically if not provided. For Malware objects, the is_family property must be provided.
You can see the full list of SDO classes here.
Creating Relationships
STIX 2 Relationships are separate objects, not properties of the object on either side of the relationship. They are constructed similarly to other STIX objects. The type, id, created, and modified properties are added automatically if not provided. Callers must provide the relationship_type, source_ref, and target_ref properties.
Step10: The source_ref and target_ref properties can be either the ID's of other STIX objects, or the STIX objects themselves. For readability, Relationship objects can also be constructed with the source_ref, relationship_type, and target_ref as positional (non-keyword) arguments
Step11: Creating Bundles
STIX Bundles can be created by passing objects as arguments to the Bundle constructor. All required properties (type, id, and spec_version) will be set automatically if not provided, or can be provided as keyword arguments
Step12: Creating Cyber Observable References
Cyber Observable Objects have properties that can reference other Cyber Observable Objects. In order to create those references, either supply the ID string of the object being referenced, or pass in the object itself.
For example, the IPv4Address object has a resolves_to_refs property which must hold a list of references to MACAddress objects. We could specify the id string
Step13: Or we could create the MACAddress object(s) beforehand and then pass them in | Python Code:
from stix2 import Indicator
indicator = Indicator(name="File hash for malware variant",
pattern="[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']",
pattern_type="stix")
print(indicator.serialize(pretty=True))
Explanation: Creating STIX Content
Creating STIX Domain Objects
To create a STIX object, provide keyword arguments to the type's constructor:
End of explanation
indicator2 = Indicator(type='indicator',
pattern_type="stix",
pattern="[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']")
Explanation: Certain required attributes of all objects will be set automatically if not provided as keyword arguments:
If not provided, type will be set automatically to the correct type. You can also provide the type explicitly, but this is not necessary:
End of explanation
indicator3 = Indicator(type='xxx',
pattern_type="stix",
pattern="[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']")
Explanation: Passing a value for type that does not match the class being constructed will cause an error:
End of explanation
indicator4 = Indicator(id="campaign--63ce9068-b5ab-47fa-a2cf-a602ea01f21a",
pattern_type="stix",
pattern="[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']")
Explanation: If not provided, id will be generated randomly. If you provide an
id argument, it must begin with the correct prefix:
End of explanation
indicator = Indicator()
Explanation: For indicators, pattern and pattern_type are required and cannot be set automatically. Trying to create an indicator that is missing one of these properties will result in an error:
End of explanation
indicator['name']
Explanation: However, the required valid_from attribute on Indicators will be set to the current time if not provided as a keyword argument.
Once created, the object acts like a frozen dictionary. Properties can be accessed using the standard Python dictionary syntax:
End of explanation
indicator.name
Explanation: Or access properties using the standard Python attribute syntax:
End of explanation
indicator['name'] = "This is a revised name"
indicator.name = "This is a revised name"
Explanation: <div class="alert alert-warning">
**Warning**
Note that there are several attributes on these objects used for method names. Accessing those will return a bound method, not the attribute value.
</div>
Attempting to modify any attributes will raise an error:
End of explanation
from stix2 import Malware
malware = Malware(name="Poison Ivy",
is_family=False)
print(malware.serialize(pretty=True))
Explanation: To update the properties of an object, see the Versioning section.
Creating a Malware object follows the same pattern:
End of explanation
from stix2 import Relationship
relationship = Relationship(relationship_type='indicates',
source_ref=indicator.id,
target_ref=malware.id)
print(relationship.serialize(pretty=True))
Explanation: As with indicators, the type, id, created, and modified properties will be set automatically if not provided. For Malware objects, the is_family property must be provided.
You can see the full list of SDO classes here.
Creating Relationships
STIX 2 Relationships are separate objects, not properties of the object on either side of the relationship. They are constructed similarly to other STIX objects. The type, id, created, and modified properties are added automatically if not provided. Callers must provide the relationship_type, source_ref, and target_ref properties.
End of explanation
relationship2 = Relationship(indicator, 'indicates', malware)
print(relationship2.serialize(pretty=True))
Explanation: The source_ref and target_ref properties can be either the IDs of other STIX objects, or the STIX objects themselves. For readability, Relationship objects can also be constructed with the source_ref, relationship_type, and target_ref as positional (non-keyword) arguments:
End of explanation
from stix2 import Bundle
bundle = Bundle(indicator, malware, relationship)
print(bundle.serialize(pretty=True))
Explanation: Creating Bundles
STIX Bundles can be created by passing objects as arguments to the Bundle constructor. All required properties (type, id, and spec_version) will be set automatically if not provided, or can be provided as keyword arguments:
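As a quick aside (a sketch, not part of the original walkthrough): since serialize() emits standard STIX JSON, a bundle can be round-tripped back into Python objects with stix2.parse:

```python
import stix2

# Serialize the bundle created above, then parse it back into objects.
bundle_json = bundle.serialize()
reparsed = stix2.parse(bundle_json)
print(type(reparsed), len(reparsed.objects))
```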
End of explanation
from stix2 import IPv4Address
ip4 = IPv4Address(
value="177.60.40.7",
resolves_to_refs=["mac-addr--43f380fd-37c6-476d-8643-60849bf9240e"]
)
print(ip4.serialize(pretty=True))
Explanation: Creating Cyber Observable References
Cyber Observable Objects have properties that can reference other Cyber Observable Objects. In order to create those references, either supply the ID string of the object being referenced, or pass in the object itself.
For example, the IPv4Address object has a resolves_to_refs property which must hold a list of references to MACAddress objects. We could specify the id string:
End of explanation
from stix2 import MACAddress
mac_addr_a = MACAddress(value="a1:b2:c3:d4:e5:f6")
mac_addr_b = MACAddress(value="a7:b8:c9:d0:e1:f2")
ip4_valid_refs = IPv4Address(
value="177.60.40.7",
resolves_to_refs=[mac_addr_a.id, mac_addr_b.id]
)
print(ip4_valid_refs.serialize(pretty=True))
Explanation: Or we could create the MACAddress object(s) beforehand and then pass them in:
End of explanation |
13,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to use this Notebook
The development cycle intended here is to
Init an experiment
Reset the model and optimizer
Restart wandb
Run one epoch
Add any new logs for plots, as needed
Then repeat steps 4-5 as needed or 2-5 as needed.
Step1: Initialize the experiment
Step2: Reset the Model, Optimizer, and Wandb
Helpful to start new runs quickly
Step3: Run One Epoch
Be sure to run Functions to Run the Model and Add Wandb Plots via Hooks below to define the run functions and desired plots.
Step4: Functions to Run the Model
Step6: Add Wandb Plots via Hooks
DendriticNetwork2(
(segments) | Python Code:
from functools import partial
import torch
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
Explanation: How to use this Notebook
The development cycle intended here is to
Init an experiment
Reset the model and optimizer
Restart wandb
Run one epoch
Add any new logs for plots, as needed
Then repeat steps 4-5 as needed or 2-5 as needed.
End of explanation
# Load the exp
import os
import sys
sys.path.insert(0, os.path.expanduser("~/nta/nupic.research/projects/meta_cl"))
from experiments import CONFIGS
exp_name = "metacl_dendrites2"
config = CONFIGS[exp_name]
exp_cls = config["experiment_class"]
exp = exp_cls()
exp.setup_experiment(config)
Explanation: Initialize the experiment
End of explanation
# Reset the optimizer and model
exp.model.reset_params()
# Configure optimizer
group_decay, group_no_decay = [], []
for module in exp.model.modules():
for name, param in module.named_parameters(recurse=False):
if exp.should_decay_parameter(module, name, param, config):
group_decay.append(param)
else:
group_no_decay.append(param)
optimizer_class = config.get("optimizer_class", torch.optim.SGD)
optimizer_args = config.get("optimizer_args", {})
exp.optimizer = optimizer_class([dict(params=group_decay),
dict(params=group_no_decay,
weight_decay=0.)],
**optimizer_args)
# Reset the epoch
exp.current_epoch = 0
import wandb
if wandb.run is not None:
wandb.join()
wandb.init(name="metacl_dendrites2", project="metacl_dendrites_test", reinit=True)
Explanation: Reset the Model, Optimizer, and Wandb
Helpful to start new runs quickly
End of explanation
run_epoch(exp)
Explanation: Run One Epoch
Be sure to run Functions to Run the Model and Add Wandb Plots via Hooks below to define the run functions and desired plots.
End of explanation
def run_epoch(exp):
exp.pre_epoch()
exp.optimizer.zero_grad()
# Sample tasks for inner loop.
tasks_train = np.random.choice(
exp.fast_and_slow_classes,
size=exp.tasks_per_epoch,
replace=False
)
# Run pre_task; For instance, may reset parameters as needed.
exp.pre_task(tasks_train)
# Clone model - clone fast params and the slow params. The latter will be frozen
cloned_adaptation_net = exp.clone_model()
# Inner loop: Train over sampled tasks.
for task in tasks_train:
run_task(exp, task, cloned_adaptation_net)
# Sample from the replay set.
exp.train_replay_loader.sampler.set_active_tasks(exp.replay_classes)
replay_data, replay_target = next(iter(exp.train_replay_loader))
# Sample from the slow set.
slow_data, slow_target = [], []
for task in tasks_train:
exp.train_slow_loader.sampler.set_active_tasks(task)
x, y = next(iter(exp.train_slow_loader))
slow_data.append(x)
slow_target.append(y)
# Concatenate the slow and replay set.
slow_data = torch.cat(slow_data + [replay_data]).to(exp.device)
slow_target = torch.cat(slow_target + [replay_target]).to(exp.device)
# LOGGING
# cloned_adaptation_net.classifier.register_forward_hook(partial(fhook, name="classifier"))
cloned_adaptation_net.apply_dendrites = apply_dendrites_and_log
# Take step for outer loop. This will backprop through to the original
# slow and fast params.
output = cloned_adaptation_net(slow_data)
loss = exp._loss_function(output, slow_target)
loss.backward()
exp.optimizer.step()
# Report statistics for the outer loop
pred = output.max(1, keepdim=True)[1]
correct = pred.eq(slow_target.view_as(pred)).sum().item()
total = output.shape[0]
results = {
"total_correct": correct,
"total_tested": total,
"mean_loss": loss.item(),
"mean_accuracy": correct / total if total > 0 else 0,
"learning_rate": exp.get_lr()[0],
}
exp.logger.debug(results)
exp.post_epoch()
exp.current_epoch += 1
return results
def run_task(exp, task, cloned_adaptation_net):
exp.train_fast_loader.sampler.set_active_tasks(task)
# Meta-train training. Use no more than `num_fast_steps` sequential updates.
for i, (data, target) in enumerate(exp.train_fast_loader):
if i >= exp.num_fast_steps:
break
data = data.to(exp.device)
target = target.to(exp.device)
train_loss = exp._loss_function(
cloned_adaptation_net(data), target
)
# Update in place
exp.adapt(cloned_adaptation_net, train_loss)
# See if there are images to validate on. If 'train_train_sample_size'
# is equivalent to the number of images per class, then there won't be any.
if len(exp.val_fast_loader) == 0:
return
# Run and log validation for given task.
with torch.no_grad():
exp.val_fast_loader.sampler.set_active_tasks(task)
data, target = next(iter(exp.val_fast_loader))
data = data.to(exp.device)
target = target.to(exp.device)
preds = cloned_adaptation_net(data)
valid_error = exp._loss_function(preds, target)
valid_error /= len(data)
exp.logger.debug(f"Valid error meta train training: {valid_error}")
# calculate accuracy
preds = preds.argmax(dim=1).view(target.shape)
valid_accuracy = (preds == target).sum().float() / target.size(0)
exp.logger.debug(f"Valid accuracy meta train training: {valid_accuracy}")
Explanation: Functions to Run the Model
End of explanation
def fhook(module, x_tuple, y, name=None):
print(f"NAME: {name} ({module.__class__.__name__})")
print(" input :", x_tuple[0].shape)
print(" output:", y.shape)
print()
x = x_tuple[0].clone().detach().cpu().numpy()
wandb.log({
"pred_modulated_0": wandb.Histogram(x[0, :]),
"pred_modulated_20": wandb.Histogram(x[20, :]),
"sparsity_pred_modulated_0": (x[0, :] == 0).sum().item() / len(x[0, :])
})
def apply_dendrites_and_log(y, dendrite_activations):
"""Apply dendrites as a gating mechanism."""
# # Multiply by the sigmoid of the max along each segment.
# return y * torch.sigmoid(dendrite_activations.max(dim=2).values)
input_dendrite_activations = dendrite_activations.clone()
inds = dendrite_activations.abs().max(dim=2).indices
inds = inds.unsqueeze(dim=2)
dendrite_activations = torch.gather(dendrite_activations, dim=2, index=inds)
dendrite_activations = dendrite_activations.squeeze(dim=2)
dendrite_activations = torch.sigmoid(dendrite_activations)
out = y * dendrite_activations
log_modulated(y, out, input_dendrite_activations)
return out
def log_modulated(pre_modulated, post_modulated, input_dendrite_activations):
pre_modulated = to_numpy(pre_modulated)
post_modulated = to_numpy(post_modulated)
input_dendrite_activations = to_numpy(input_dendrite_activations)
print(input_dendrite_activations[:, 0, :])
wandb.log({
# Pre-modulated values
"pre_modulated_0": wandb.Histogram(pre_modulated[0, :]),
"pre_modulated_20": wandb.Histogram(pre_modulated[20, :]),
"sparsity_pre_modulated": (pre_modulated == 0).sum().item() / len(pre_modulated.flatten()),
# Post-modulated values
"post_modulated_0": wandb.Histogram(post_modulated[0, :]),
"post_modulated_20": wandb.Histogram(post_modulated[20, :]),
"sparsity_post_modulated": (post_modulated == 0).sum().item() / len(post_modulated.flatten()),
# Dendrite activations
"dendrite_activation_unit_0": plot_dendrite_activations(input_dendrite_activations[:, 0, :]),
})
def plot_dendrite_activations(dendrite_activations):
plt.cla()
activations = dendrite_activations.transpose(1, 0)[:, 0:84]
num_contexts = 84
num_dendrites = 4
x_labels = [
"context {}".format(j) for j in range(num_contexts)
]
y_labels = ["dendrite {}".format(j) for j in range(num_dendrites)]
# Find the range of activation values to anchor the colorbar
vmax = np.abs(activations).max()
vmin = -1.0 * vmax
# Use matplotlib to plot the activation heatmap
fig, ax = plt.subplots(figsize=(30, 10))
ax.imshow(activations, cmap="coolwarm_r", vmin=vmin, vmax=vmax)
ax.set_xticks(np.arange(num_contexts))
ax.set_yticks(np.arange(num_dendrites))
ax.set_xticklabels(x_labels)
ax.set_yticklabels(y_labels)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
plt.tight_layout()
# Annotate just the top absolute activation for each context
top_activation_dendrite_per_context = np.argmax(np.abs(activations), axis=0)
for j, i in enumerate(top_activation_dendrite_per_context):
val = np.round(activations[i, j], 2)
ax.text(j, i, val, ha="center", va="center", color="w")
figure = plt.gcf()
return fig
def to_numpy(tensor):
return tensor.clone().detach().cpu().numpy()
Explanation: Add Wandb Plots via Hooks
DendriticNetwork2(
(segments): DendriteSegments()
(classifier): Linear(in_features=2304, out_features=963, bias=True)
(prediction): Sequential(
(0): Conv2d(3, 256, kernel_size=(3, 3), stride=(1, 1))
(1): ReLU()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1))
(4): ReLU()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1))
(7): ReLU()
(8): AdaptiveAvgPool2d(output_size=(3, 3))
(9): Flatten()
)
(modulation): Sequential(
(0): Conv2d(3, 112, kernel_size=(3, 3), stride=(1, 1))
(1): ReLU()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(112, 112, kernel_size=(3, 3), stride=(1, 1))
(4): ReLU()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): Conv2d(112, 112, kernel_size=(3, 3), stride=(1, 1))
(7): ReLU()
(8): AdaptiveAvgPool2d(output_size=(3, 3))
(9): Flatten()
(10): Linear(in_features=1008, out_features=100, bias=True)
)
)
End of explanation |
13,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Storage To Table
Move using bucket and path prefix.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter Storage To Table Recipe Parameters
Specify a bucket and path prefix, * suffix is NOT required.
Every time the job runs it will overwrite the table.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute Storage To Table
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: Storage To Table
Move using bucket and path prefix.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'bucket':'', # Google cloud bucket.
'auth_write':'service', # Credentials used for writing data.
'path':'', # Path prefix to read from, no * required.
'dataset':'', # Existing BigQuery dataset.
'table':'', # Table to create from this query.
'schema':'[]', # Schema provided in JSON list format or empty list.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter Storage To Table Recipe Parameters
Specify a bucket and path prefix, * suffix is NOT required.
Every time the job runs it will overwrite the table.
Modify the values below for your use case, can be done multiple times, then click play.
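For illustration only (every name here is hypothetical), a filled-in parameter dictionary might look like this:

```python
FIELDS = {
    'auth_read': 'user',
    'bucket': 'my-example-bucket',   # hypothetical Cloud Storage bucket
    'auth_write': 'service',
    'path': 'exports/daily_',        # hypothetical path prefix, no * needed
    'dataset': 'analytics',          # hypothetical BigQuery dataset
    'table': 'daily_export',         # hypothetical destination table
    'schema': '[]',                  # empty list, per the recipe's default
}
```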
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'bigquery':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'from':{
'bucket':{'field':{'name':'bucket','kind':'string','order':1,'default':'','description':'Google cloud bucket.'}},
'path':{'field':{'name':'path','kind':'string','order':2,'default':'','description':'Path prefix to read from, no * required.'}}
},
'to':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'dataset','kind':'string','order':3,'default':'','description':'Existing BigQuery dataset.'}},
'table':{'field':{'name':'table','kind':'string','order':4,'default':'','description':'Table to create from this query.'}}
},
'schema':{'field':{'name':'schema','kind':'json','order':5,'default':'[]','description':'Schema provided in JSON list format or empty list.'}}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute Storage To Table
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
13,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http
Step1: Parameters
Step2: Data
We get the traditional MNIST dataset and add a new label to the existing one. For each digit we return a new label that stands for Odd or Even
Step3: We assign the transform to the original dataset
Step4: We load the datasets DataLoaders
Step5: Multi-task Network
The output of the featurization is passed to two different output layers
Step6: We can use two different losses, one for each output
Step7: We create and initialize the network
Step8: Evaluate Accuracy
We need to evaluate the accuracy of each task separately
Step9: Training Loop
We need to balance the contribution of each loss to the overall training and do so by tuning this alpha parameter within [0,1].
Step10: Testing | Python Code:
import logging
import random
import time
import matplotlib.pyplot as plt
import mxnet as mx
from mxnet import gluon, np, npx, autograd
import numpy as onp
Explanation: Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
Multi-Task Learning Example
This is a simple example to show how to use mxnet for multi-task learning.
The network is jointly going to learn whether a number is odd or even and to actually recognize the digit.
For example
1 : 1 and odd
2 : 2 and even
3 : 3 and odd
etc
In this example we don't expect the tasks to contribute to each other much, but multi-task learning has, for example, been successfully applied to the domain of image captioning. In A Multi-task Learning Approach for Image Captioning by Wei Zhao, Benyou Wang, Jianbo Ye, Min Yang, Zhou Zhao, Ruotian Luo, Yu Qiao, the authors train a network to jointly classify images and generate text captions.
End of explanation
batch_size = 128
epochs = 5
ctx = mx.gpu() if mx.device.num_gpus() > 0 else mx.cpu()
lr = 0.01
Explanation: Parameters
End of explanation
train_dataset = gluon.data.vision.MNIST(train=True)
test_dataset = gluon.data.vision.MNIST(train=False)
def transform(x,y):
x = x.transpose((2,0,1)).astype('float32')/255.
y1 = y
y2 = y % 2 #odd or even
return x, onp.float32(y1), onp.float32(y2)
Explanation: Data
We get the traditional MNIST dataset and add a new label to the existing one. For each digit we return a new label that stands for Odd or Even
End of explanation
train_dataset_t = train_dataset.transform(transform)
test_dataset_t = test_dataset.transform(transform)
Explanation: We assign the transform to the original dataset
End of explanation
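A quick optional sanity check (an illustrative addition, not part of the original tutorial) confirms the transform returns the image plus both labels:
sample_x, sample_digit, sample_parity = train_dataset_t[0]
print(sample_x.shape, sample_digit, sample_parity)  # expect a (1, 28, 28) image, the digit label, and a 0/1 parity label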
train_data = gluon.data.DataLoader(train_dataset_t, shuffle=True, last_batch='rollover', batch_size=batch_size, num_workers=5)
test_data = gluon.data.DataLoader(test_dataset_t, shuffle=False, last_batch='rollover', batch_size=batch_size, num_workers=5)
print("Input shape: {}, Target Labels: {}".format(train_dataset[0][0].shape, train_dataset_t[0][1:]))
Explanation: We load the datasets DataLoaders
End of explanation
class MultiTaskNetwork(gluon.HybridBlock):
def __init__(self):
super(MultiTaskNetwork, self).__init__()
self.shared = gluon.nn.HybridSequential()
self.shared.add(
gluon.nn.Dense(128, activation='relu'),
gluon.nn.Dense(64, activation='relu'),
gluon.nn.Dense(10, activation='relu')
)
        self.output1 = gluon.nn.Dense(10) # Digit recognition
self.output2 = gluon.nn.Dense(1) # odd or even
def forward(self, x):
y = self.shared(x)
output1 = self.output1(y)
output2 = self.output2(y)
return output1, output2
Explanation: Multi-task Network
The output of the featurization is passed to two different outputs layers
End of explanation
loss_digits = gluon.loss.SoftmaxCELoss()
loss_odd_even = gluon.loss.SigmoidBCELoss()
Explanation: We can use two different losses, one for each output
End of explanation
mx.np.random.seed(42)
random.seed(42)
net = MultiTaskNetwork()
net.initialize(mx.init.Xavier(), ctx=ctx)
net.hybridize() # hybridize for speed
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate':lr})
Explanation: We create and initialize the network
End of explanation
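As an optional check (illustrative only, not part of the original flow), a dummy batch can be pushed through the freshly initialized network to confirm the two output shapes:
dummy_batch = np.zeros((2, 1, 28, 28)).to_device(ctx)  # two fake single-channel 28x28 images
dummy_digits, dummy_parity = net(dummy_batch)
print(dummy_digits.shape, dummy_parity.shape)  # expected: (2, 10) and (2, 1)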
def evaluate_accuracy(net, data_iterator):
acc_digits = mx.gluon.metric.Accuracy(name='digits')
acc_odd_even = mx.gluon.metric.Accuracy(name='odd_even')
for i, (data, label_digit, label_odd_even) in enumerate(data_iterator):
data = data.to_device(ctx)
label_digit = label_digit.to_device(ctx)
label_odd_even = label_odd_even.to_device(ctx).reshape(-1,1)
output_digit, output_odd_even = net(data)
acc_digits.update(label_digit, npx.softmax(output_digit))
acc_odd_even.update(label_odd_even, npx.sigmoid(output_odd_even) > 0.5)
return acc_digits.get(), acc_odd_even.get()
Explanation: Evaluate Accuracy
We need to evaluate the accuracy of each task separately
End of explanation
alpha = 0.5 # Weighting factor used to combine the two losses
for e in range(epochs):
# Accuracies for each task
acc_digits = mx.gluon.metric.Accuracy(name='digits')
acc_odd_even = mx.gluon.metric.Accuracy(name='odd_even')
# Accumulative losses
l_digits_ = 0.
l_odd_even_ = 0.
for i, (data, label_digit, label_odd_even) in enumerate(train_data):
data = data.to_device(ctx)
label_digit = label_digit.to_device(ctx)
label_odd_even = label_odd_even.to_device(ctx).reshape(-1,1)
with autograd.record():
output_digit, output_odd_even = net(data)
l_digits = loss_digits(output_digit, label_digit)
l_odd_even = loss_odd_even(output_odd_even, label_odd_even)
# Combine the loss of each task
l_combined = (1-alpha)*l_digits + alpha*l_odd_even
l_combined.backward()
trainer.step(data.shape[0])
l_digits_ += l_digits.mean()
l_odd_even_ += l_odd_even.mean()
acc_digits.update(label_digit, npx.softmax(output_digit))
acc_odd_even.update(label_odd_even, npx.sigmoid(output_odd_even) > 0.5)
print("Epoch [{}], Acc Digits {:.4f} Loss Digits {:.4f}".format(
e, acc_digits.get()[1], l_digits_.item()/(i+1)))
print("Epoch [{}], Acc Odd/Even {:.4f} Loss Odd/Even {:.4f}".format(
e, acc_odd_even.get()[1], l_odd_even_.item()/(i+1)))
print("Epoch [{}], Testing Accuracies {}".format(e, evaluate_accuracy(net, test_data)))
Explanation: Training Loop
We need to balance the contribution of each loss to the overall training and do so by tuning this alpha parameter within [0,1].
End of explanation
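The split between the two objectives is worth experimenting with; a minimal illustration (not part of the training run) of how a few candidate alpha values weight each task:
# alpha = 0 trains on digit recognition only, alpha = 1 on odd/even only.
for candidate_alpha in (0.25, 0.5, 0.75):
    print("alpha={:.2f} -> digit weight {:.2f}, odd/even weight {:.2f}".format(
        candidate_alpha, 1 - candidate_alpha, candidate_alpha))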
def get_random_data():
    idx = random.randint(0, len(test_dataset) - 1)  # randint is inclusive at both ends
img = test_dataset[idx][0]
data, _, _ = test_dataset_t[idx]
data = np.expand_dims(data.to_device(ctx), axis=0)
plt.imshow(img.squeeze().asnumpy(), cmap='gray')
return data
data = get_random_data()
digit, odd_even = net(data)
digit = digit.argmax(axis=1)[0].asnumpy()
odd_even = (npx.sigmoid(odd_even)[0] > 0.5).asnumpy()
print("Predicted digit: {}, odd: {}".format(digit, odd_even))
Explanation: Testing
End of explanation |
13,648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-ll', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-LL
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
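For illustration only, the call with placeholder details, kept commented out so nothing fictitious is recorded:
# DOC.set_author("Jane Doe", "jane.doe@example.com")  # placeholder name/email -- replace with the real document author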
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe in general how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
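As an illustration of the fill-in pattern (kept commented out, since the actual choice depends on the model configuration), a value would be picked from the list in the cell above:
# DOC.set_value("vegetation types")  # example choice only -- not asserted for HADGEM3-GC31-LL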
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
13,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
If we're making the fin frames with a router/2-axis mill, we need to know what angle of chamfer cutter to use.
This is just the trig to make sure that the angle of the leading edge is $\leq$ the mach angle.
Step1: The left triangle is the upper half of a streamwise cross section of the leading edge. ($\mu$ is the mach angle)
The diagram in the middle shows the leading edge, as it would be shown in the planform.
The triangle on the right shows a cross section perpendicular to the leading edge of the fin.
Define a function to go from the design mach number to the minimum required included angle of the cutter
Step2: Ideally, the included angle of the cutter should be thetaI or more.
Any less (a blunter leading edge) and we get an oblique shock at the leading edge.
Step3: The plot above shows the relationship between mach number and the minimum required included angle of the cutter.
For a cutter with an included angle of 120 degrees, check that a 1/4" diameter is okay | Python Code:
import math as m
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
Explanation: If we're making the fin frames with a router/2-axis mill, we need to know what angle of chamfer cutter to use.
This is just the trig to make sure that the angle of the leading edge is $\leq$ the mach angle.
End of explanation
def thetaI(M=2, thick=1/8):
    thetaMu = m.asin(1/M)                  # mach angle for the design mach number
    deltaY = thick/(2*m.tan(thetaMu))      # streamwise chamfer length needed to stay on the mach angle
    thetaS = m.atan(9/6.42)                # leading edge sweep angle (9 and 6.42 are presumably planform dimensions)
    deltaE = deltaY*m.sin(m.pi/2-thetaS)   # that length projected perpendicular to the swept leading edge
    thetaE = m.atan(thick/(2*deltaE))      # half-angle of the edge cross section, perpendicular to the edge
    thetaI = m.pi-2*thetaE                 # minimum included angle of the chamfer cutter
    return thetaI
print(thetaI()*360/2/m.pi)
Explanation: The left triangle is the upper half of a streamwise cross section of the leading edge. ($\mu$ is the mach angle)
The diagram in the middle shows the leading edge, as it would be shown in the planform.
The triangle on the right shows a cross section perpendicular to the leading edge of the fin.
Define a function to go from the design mach number to the minimum required included angle of the cutter
End of explanation
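As a quick sanity check of the helper above (the Mach numbers below are arbitrary examples), the mach angle itself is just asin(1/M), and the minimum included cutter angle it implies grows with the design mach number:
for M in (1.5, 2.0, 3.0):
    mu_deg = m.degrees(m.asin(1/M))    # mach angle, e.g. 30 degrees at M = 2
    cutter_deg = m.degrees(thetaI(M))  # minimum included cutter angle for this fin geometry
    print("M = {}: mach angle {:.1f} deg, min cutter angle {:.1f} deg".format(M, mu_deg, cutter_deg))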
Ms = np.linspace(start=1.1, stop=3, num=1000)
thetas = [thetaI(M)*360/2/m.pi for M in Ms]  # minimum included cutter angle, in degrees
plt.plot(Ms, thetas)
plt.grid()
Explanation: Ideally, the included angle of the cutter should be thetaI or more.
Any less (a blunter leading edge) and we get an oblique shock at the leading edge.
End of explanation
thick= 1/8
thetaI= 120/360*m.pi*2
thetaE= m.pi/2-thetaI/2
# tan(thetaE)=thick/2/deltaE
deltaE= thick/2/m.tan(thetaE)
print(thick)
print(thetaI*360/2/m.pi)
print(thetaE*360/2/m.pi)
print(deltaE)
print(deltaE*2)
Explanation: The plot above shows the relationship between mach number and the minimum required included angle of the cutter.
For a cutter with an included angle of 120 degrees, check that a 1/4" diameter is okay:
End of explanation |
13,650 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Programming Paradigms
In the initial days we had only one type of programming paradigms, the paradigm of the developer
Step1: Functional
Step2: Functional
Step3: the above example is below implemented using lambda
Step4: Imperative
Step5: Procedural
Step6: Object-oriented
Step7: Design Patterns
Flyweight pattern
In computer programming, flyweight is a software design pattern. A flyweight is an object that minimizes memory usage by sharing as much data as possible with other similar objects; it is a way to use objects in large numbers when a simple repeated representation would use an unacceptable amount of memory. Often some parts of the object state can be shared, and it is common practice to hold them in external data structures and pass them to the objects temporarily when they are used.
A classic example usage of the flyweight pattern is the data structures for graphical representation of characters in a word processor. It might be desirable to have, for each character in a document, a glyph object containing its font outline, font metrics, and other formatting data, but this would amount to hundreds or thousands of bytes for each character. Instead, for every character there might be a reference to a flyweight glyph object shared by every instance of the same character in the document; only the position of each character (in the document and/or the page) would need to be stored internally.
REFERENCE | Python Code:
towns = ["Rio de Janeiro", "Bhopal", "Budd Lake", "New York", "São Paulo", "Curitiba"]
count = 0
for city in towns:
print(city)
count = count + 1
print()
print("Total number of cities:", count)
Explanation: Introduction to Programming Paradigms
In the initial days we had only one type of programming paradigm, the paradigm of the developer :). We wrote what we wanted and how we wanted, but as we matured and our problems grew in size & complexity, from adding two numbers to solving the issues of the entire universe like chatting and sharing (photos, texts {our own or other's copy pasted for the 1000<sup>th</sup> time}, videos, live streaming etc.), we need to make sure our code
- is easily understood by others
- is easily upgradable
- is easily modifiable
- execution is fast
- is aligned to the problem resolution
As the domain of solutions grew, so did the types of paradigms that can be used to build them. The most common paradigms are listed below.
Imperative: Imperative programming uses statements that change the state of the program. Its focus is on describing how a program operates, in the same way that the imperative mood in natural languages expresses commands. It consists of commands for the computer to perform.
It is useful in manipulating data structures and produces elegant & simple code.
The term is often used in contrast to declarative programming, which focuses on what the program should accomplish without specifying how the program should achieve the result.
End of explanation
L = [1, 2, 4 , 6, 5, 7, 3]
Explanation: Functional: Every statement in functional programming is treated as a mathematical equation, and state or mutable data are avoided. Its main advantages are
that any side effects due to shared state are avoided, and
that it lends itself well to parallel processing because there is no state to consider; it also favors recursion and lambda calculus.
Object-oriented: Relies on data fields that are treated as objects and manipulated only through prescribed methods. Python doesn’t fully support this coding form because it can’t implement features such as data hiding. However, this remains a useful coding style for complex applications because it supports encapsulation and polymorphism. This coding style also favors code reuse.
Procedural: Tasks are treated as step-by-step iterations where common tasks are placed in functions that are called as needed. This coding style favors iteration, sequencing, selection, and modularization. It is a type of imperative programming in which programs are created using one or more procedures (also known as subroutines or functions).
Examples
End of explanation
from functools import reduce
def add(x, y):
return x + y
sum = reduce(add, L)
print(sum)
Explanation: Functional
End of explanation
import functools
Sum = functools.reduce(lambda x, y: x + y, L)
print(Sum)
Explanation: the above example is below implemented using lambda
End of explanation
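For comparison (this is an addition, not part of the original example), the same fold can be written with operator.add, and the built-in sum is usually the clearest spelling; note that the name sum was rebound by an earlier cell, so the built-in is reached through the builtins module here.
import operator, builtins
print(functools.reduce(operator.add, L))  # same result as the lambda version
print(builtins.sum(L))                    # the built-in reduction; 'sum' was shadowed above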
sum = 0
for x in L:
sum += x
print(sum)
Explanation: Imperative
End of explanation
def add(a, b):
return a + b
sum = 0
for a in L:
sum = add(sum, a)
print(sum)
Explanation: Procedural
End of explanation
class Saving(object):
def __init__(self, list_data):
self.total_savings = list_data
def add(self):
sum = 0
for a in self.total_savings:
sum += a
return sum
s = Saving(L)
print(s.add())
Explanation: Object-oriented
End of explanation
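A small extension of the class above (the BonusSaving name is made up for illustration) showing the polymorphism mentioned earlier: a subclass can override add() while callers keep using the same interface.
class BonusSaving(Saving):
    def add(self):
        return super().add() + 10  # hypothetical flat bonus added to the total
print(BonusSaving(L).add())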
# Instances of CheeseBrand will be the Flyweights
class CheeseBrand(object):
def __init__(self, brand, cost):
self.brand = brand
self.cost = cost
self._immutable = True # Disables future attributions
def __setattr__(self, name, value):
if getattr(self, '_immutable', False): # Allow initial attribution
raise RuntimeError('This object is immutable')
else:
super(CheeseBrand, self).__setattr__(name, value)
class CheeseShop(object):
menu = {} # Shared container to access the Flyweights
def __init__(self):
self.orders = {} # per-instance container with private attributes
def stock_cheese(self, brand, cost):
cheese = CheeseBrand(brand, cost)
self.menu[brand] = cheese # Shared Flyweight
def sell_cheese(self, brand, units):
self.orders.setdefault(brand, 0)
self.orders[brand] += units # Instance attribute
def total_units_sold(self):
return sum(self.orders.values())
def total_income(self):
income = 0
for brand, units in self.orders.items():
income += self.menu[brand].cost * units
return income
shop1 = CheeseShop()
shop2 = CheeseShop()
shop1.stock_cheese('white', 1.25)
shop1.stock_cheese('blue', 3.75)
# Now every CheeseShop has 'white' and 'blue' in its inventory
# and they are the SAME 'white' and 'blue' CheeseBrand objects
shop1.sell_cheese('blue', 3) # Both can sell
shop2.sell_cheese('blue', 8) # But the units sold are stored per-instance
assert shop1.total_units_sold() == 3
assert shop1.total_income() == 3.75 * 3
assert shop2.total_units_sold() == 8
assert shop2.total_income() == 3.75 * 8
Explanation: Design Patterns
Flyweight pattern
In computer programming, flyweight is a software design pattern. A flyweight is an object that minimizes memory usage by sharing as much data as possible with other similar objects; it is a way to use objects in large numbers when a simple repeated representation would use an unacceptable amount of memory. Often some parts of the object state can be shared, and it is common practice to hold them in external data structures and pass them to the objects temporarily when they are used.
A classic example usage of the flyweight pattern is the data structures for graphical representation of characters in a word processor. It might be desirable to have, for each character in a document, a glyph object containing its font outline, font metrics, and other formatting data, but this would amount to hundreds or thousands of bytes for each character. Instead, for every character there might be a reference to a flyweight glyph object shared by every instance of the same character in the document; only the position of each character (in the document and/or the page) would need to be stored internally.
REFERENCE: https://en.wikipedia.org/wiki/Flyweight_pattern
End of explanation |
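To make the sharing described above visible (a small added check): CheeseShop.menu is a class attribute, so every shop instance refers to the very same CheeseBrand flyweights.
assert shop1.menu is shop2.menu                   # one shared lookup table for all shops
assert shop1.menu['blue'] is shop2.menu['blue']   # and the same flyweight object inside it
print(len(CheeseShop.menu), "shared flyweights")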
13,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Twitter + Watson Tone Analyzer Sample Notebook
In this sample notebook, we show how to load and analyze data from the Twitter + Watson Tone Analyzer Spark sample application (code can be found here https
Step1: Load the data
In this section, we load the data from a parquet file that has been saved from a scala notebook (see tutorial here...) and create a SparkSQL DataFrame that contains all the data.
Step2: Compute the distribution of tweets by sentiments > 60%
In this section, we demonstrate how to use SparkSQL queries to compute for each tone that number of tweets that are greater than 60%
Step3: Breakdown of the top 5 hashtags by sentiment scores
In this section, we demonstrate how to build a more complex analytic which decompose the top 5 hashtags by sentiment scores. The code below computes the mean of all the sentiment scores and visualize them in a multi-series bar chart | Python Code:
# Import SQLContext and data types
from pyspark.sql import SQLContext
from pyspark.sql.types import *
Explanation: Twitter + Watson Tone Analyzer Sample Notebook
In this sample notebook, we show how to load and analyze data from the Twitter + Watson Tone Analyzer Spark sample application (code can be found here https://github.com/ibm-watson-data-lab/spark.samples/tree/master/streaming-twitter). The tweets data has been enriched with scores from various Sentiment Tone (e.g Anger, Cheerfulness, etc...).
End of explanation
parquetFile = sqlContext.read.parquet("swift://notebooks.spark/tweetsFull.parquet")
print parquetFile
parquetFile.registerTempTable("tweets");
sqlContext.cacheTable("tweets")
tweets = sqlContext.sql("SELECT * FROM tweets")
print tweets.count()
tweets.cache()
Explanation: Load the data
In this section, we load the data from a parquet file that has been saved from a scala notebook (see tutorial here...) and create a SparkSQL DataFrame that contains all the data.
End of explanation
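As an optional check (not in the original notebook), printing the schema is a quick way to see the 13 tone columns that were added to the tweets:
tweets.printSchema()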
#create an array that will hold the count for each sentiment
sentimentDistribution=[0] * 13
#For each sentiment, run a sql query that counts the number of tweets for which the sentiment score is greater than 60%
#Store the data in the array
for i, sentiment in enumerate(tweets.columns[-13:]):
sentimentDistribution[i]=sqlContext.sql("SELECT count(*) as sentCount FROM tweets where " + sentiment + " > 60")\
.collect()[0].sentCount
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
ind=np.arange(13)
width = 0.35
bar = plt.bar(ind, sentimentDistribution, width, color='g', label = "distributions")
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2.5, plSize[1]*2) )
plt.ylabel('Tweet count')
plt.xlabel('Tone')
plt.title('Distribution of tweets by sentiments > 60%')
plt.xticks(ind+width, tweets.columns[-13:])
plt.legend()
plt.show()
from operator import add
import re
tagsRDD = tweets.flatMap( lambda t: re.split("\s", t.text))\
.filter( lambda word: word.startswith("#") )\
.map( lambda word : (word, 1 ))\
.reduceByKey(add, 10).map(lambda (a,b): (b,a)).sortByKey(False).map(lambda (a,b):(b,a))
top10tags = tagsRDD.take(10)
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
print(top10tags)
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2, plSize[1]*2) )
labels = [i[0] for i in top10tags]
sizes = [int(i[1]) for i in top10tags]
colors = ['yellowgreen', 'gold', 'lightskyblue', 'lightcoral', "beige", "paleturquoise", "pink", "lightyellow", "coral"]
plt.pie(sizes, labels=labels, colors=colors,autopct='%1.1f%%', shadow=True, startangle=90)
plt.axis('equal')
plt.show()
Explanation: Compute the distribution of tweets by sentiments > 60%
In this section, we demonstrate how to use SparkSQL queries to compute, for each tone, the number of tweets whose score is greater than 60%
End of explanation
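The same count can also be expressed with the DataFrame API instead of building a SQL string; a minimal sketch for a single tone column (whichever of the 13 happens to come first):
tone = tweets.columns[-13:][0]
aboveSixty = tweets.filter(tweets[tone] > 60).count()
print("%s > 60: %d tweets" % (tone, aboveSixty))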
cols = tweets.columns[-13:]
def expand( t ):
ret = []
for s in [i[0] for i in top10tags]:
if ( s in t.text ):
for tone in cols:
ret += [s.replace(':','').replace('-','') + u"-" + unicode(tone) + ":" + unicode(getattr(t, tone))]
return ret
def makeList(l):
return l if isinstance(l, list) else [l]
#Create RDD from tweets dataframe
tagsRDD = tweets.map(lambda t: t )
#Filter to only keep the entries that are in top10tags
tagsRDD = tagsRDD.filter( lambda t: any(s in t.text for s in [i[0] for i in top10tags] ) )
#Create a flatMap using the expand function defined above, this will be used to collect all the scores
#for a particular tag with the following format: Tag-Tone-ToneScore
tagsRDD = tagsRDD.flatMap( expand )
#Create a map indexed by Tag-Tone keys
tagsRDD = tagsRDD.map( lambda fullTag : (fullTag.split(":")[0], float( fullTag.split(":")[1]) ))
#Call combineByKey to format the data as follow
#Key=Tag-Tone
#Value=(count, sum_of_all_score_for_this_tone)
tagsRDD = tagsRDD.combineByKey((lambda x: (x,1)),
(lambda x, y: (x[0] + y, x[1] + 1)),
(lambda x, y: (x[0] + y[0], x[1] + y[1])))
#ReIndex the map to have the key be the Tag and value be (Tone, Average_score) tuple
#Key=Tag
#Value=(Tone, average_score)
tagsRDD = tagsRDD.map(lambda (key, ab): (key.split("-")[0], (key.split("-")[1], round(ab[0]/ab[1], 2))))
#Reduce the map on the Tag key, value becomes a list of (Tone,average_score) tuples
tagsRDD = tagsRDD.reduceByKey( lambda x, y : makeList(x) + makeList(y) )
#Sort the (Tone,average_score) tuples alphabetically by Tone
tagsRDD = tagsRDD.mapValues( lambda x : sorted(x) )
#Format the data as expected by the plotting code in the next cell.
#map the Values to a tuple as follow: ([list of tone], [list of average score])
#e.g. #someTag:([u'Agreeableness', u'Analytical', u'Anger', u'Cheerfulness', u'Confident', u'Conscientiousness', u'Negative', u'Openness', u'Tentative'], [1.0, 0.0, 0.0, 1.0, 0.0, 0.48, 0.0, 0.02, 0.0])
tagsRDD = tagsRDD.mapValues( lambda x : ([elt[0] for elt in x],[elt[1] for elt in x]) )
#Use custom sort function to sort the entries by order of appearance in top10tags
def customCompare( key ):
for (k,v) in top10tags:
if k == key:
return v
return 0
tagsRDD = tagsRDD.sortByKey(ascending=False, numPartitions=None, keyfunc = customCompare)
#Take the mean tone scores for the top 10 tags
top10tagsMeanScores = tagsRDD.take(10)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*3, plSize[1]*2) )
top5tagsMeanScores = top10tagsMeanScores[:5]
width = 0
ind=np.arange(13)
(a,b) = top5tagsMeanScores[0]
labels=b[0]
colors = ["beige", "paleturquoise", "pink", "lightyellow", "coral", "lightgreen", "gainsboro", "aquamarine","c"]
idx=0
for key, value in top5tagsMeanScores:
plt.bar(ind + width, value[1], 0.15, color=colors[idx], label=key)
width += 0.15
idx += 1
plt.xticks(ind+0.3, labels)
plt.ylabel('AVERAGE SCORE')
plt.xlabel('TONES')
plt.title('Breakdown of top hashtags by sentiment tones')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc='center',ncol=5, mode="expand", borderaxespad=0.)
plt.show()
Explanation: Breakdown of the top 5 hashtags by sentiment scores
In this section, we demonstrate how to build a more complex analytic which decomposes the top 5 hashtags by sentiment scores. The code below computes the mean of all the sentiment scores and visualizes them in a multi-series bar chart
End of explanation |
13,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Doppler shifts
Exploring doppler shift on precision and quality. Specifically in the Z-band.
This showed the largest change with application of dopplershifts (Fegueira 2016 Fig. C.3 and C.4)
Step1: For precision relative to 100 in the Z band.
Applying doppler shifts of $+/- 200$ km/s only produce changes of $< \pm0.01$ m/s for conditions 1 and 3, and $\pm 0.1-0.05$ m/s for condition 2.
There is a slight slope due to the shape of the input spectrum.
The large increase in the Z-band at +/-10km/s Figueria et al 2016 Appendix C is not observed here. This is due to the previous errors in condition #2.
Step2: Applying the telluric mask reduces the spectrum analyzed to 45%, This inclusion for barycentric shift reduces this to 25% of the original spectra so there is a large increase in RV error.
Cross correlations
Between a synthetic spectrum and atmospheric model
Step3: Auto Correletations | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from tqdm import tqdm
import PyAstronomy.pyasl as pyasl
from astropy import constants as const
import eniric
from eniric import config
# config.cache["location"] = None # Disable caching for these tests
config.cache["location"] = ".joblib" # Enable caching
from eniric.broaden import rotational_convolution, resolution_convolution
from eniric.utilities import band_limits, load_aces_spectrum, wav_selector
from scripts.phoenix_precision import convolve_and_resample
from eniric.snr_normalization import snr_constant_band
from eniric.precision import rv_precision, quality
from eniric.utilities import doppler_shift_wav, doppler_shift_flux
from eniric.atmosphere import Atmosphere
# Convolution settings
epsilon = 0.6
vsini = 10.0
R = 40000
wav1, flux1 = load_aces_spectrum([3900, 4.5, 0.0, 0])
zmin_, zmax_ = band_limits("Z")
# To avoid the strong telluric band limitations, shrink the limits
span = zmax_ - zmin_
zmin_ = zmin_ + 0.1 * span
zmax_ = zmax_ - 0.1 * span
from eniric.utilities import doppler_limits
# doppler resilient boundaries
rvmax = 10000 # km/s
zmin_dop, zmax_dop = doppler_limits(rvmax, zmin_, zmax_)
wav1, flux1 = wav_selector(wav1, flux1, zmin_dop, zmax_dop)
# PyAstronomy requires even spaced wavelength (eniric does not)
wav = np.linspace(wav1[0], wav1[-1], len(wav1))
flux = np.interp(wav, wav1, flux1)
# Normalization
const = snr_constant_band(wav, flux, snr=100, band="Z")
flux = flux / const
atm__ = Atmosphere.from_file(atmmodel="../../data/atmmodel/Average_TAPAS_2014.dat")
atm_ = atm__.copy()
atm_.wave_select(zmin_dop, zmax_dop)
atm_.mask_transmission(depth=2)
# atm_.barycenter_broaden(30,consecutive_test=False)
atm_2 = atm_.copy()
atm_.wave_select(zmin_dop, zmax_dop)
def qfunc(wav, flux):
# Func to calculate the 4 precision versions
atm = atm_.at(wav)
rva = rv_precision(wav, flux)
rvb = rv_precision(wav, flux, mask=atm.mask)
rvc = rv_precision(wav, flux, mask=atm.transmission ** 2)
q = quality(wav, flux)
return rva, rvb, rvc, q
shifts = np.arange(-200, 200, 1)
# shifts = [1000]
rv1s, rv2s, rv3s, qs = [], [], [], []
nwav, _ = wav_selector(wav, flux, zmin_, zmax_)
for shift in tqdm(shifts):
nflux = doppler_shift_flux(wav, flux, shift, new_wav=nwav)
a, b, c, d = qfunc(nwav, nflux)
rv1s.append(a.value)
rv2s.append(b.value)
rv3s.append(c.value)
qs.append(d)
# rv2 with bary shifted mask
atm_2.barycenter_broaden(30, consecutive_test=False)
def qfunc2(wav, flux):
# Func to calculate the 4 precision versions
atm = atm_2.at(wav)
rvb = rv_precision(wav, flux, mask=atm.mask)
return rvb
rv2s_bary = []
for shift in tqdm(shifts):
nflux = doppler_shift_flux(wav, flux, shift, new_wav=nwav)
b2 = qfunc2(nwav, nflux)
rv2s_bary.append(b2.value)
fig, axs = plt.subplots(5, 1, sharex=True, figsize=(10, 7))
axs[0].plot(shifts, rv1s, "o-", label="rv1")
axs[0].legend(loc=1)
axs[0].set_ylabel("Precision (m/s)")
axs[1].plot(shifts, rv2s, "x-", label="rv2 without Bary")
axs[1].legend(loc=1)
axs[1].set_ylabel("Precision (m/s)")
axs[2].plot(shifts, rv2s_bary, "x-", label="rv2 with Bary")
axs[2].legend(loc=1)
axs[2].set_ylabel("Precision (m/s)")
axs[3].plot(shifts, rv3s, ".-", label="rv3")
axs[3].legend(loc=1)
axs[3].set_ylabel("Precision (m/s)")
axs[4].plot(shifts, qs, "o-", label="quality")
axs[4].set_xlabel("RV (km/s)")
axs[4].set_ylabel("Quality")
plt.legend()
plt.show()
Explanation: Doppler shifts
Exploring doppler shift on precision and quality. Specifically in the Z-band.
This showed the largest change with the application of doppler shifts (Figueira et al. 2016, Fig. C.3 and C.4)
End of explanation
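For scale (a small added check using the astropy constant imported above): a shift of v multiplies each wavelength by roughly (1 + v/c), so even the largest shift applied here moves features by well under 0.1% of their wavelength, which is why only a modest doppler-resilient margin is needed on the band limits.
c_kms = const.c.to("km/s").value
for v in (10, 200):
    print("RV {:4d} km/s -> relative wavelength shift {:.1e}".format(v, v / c_kms))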
atm_spec = atm_.at(wav)
sum1, len1 = np.sum(atm_spec.mask), len(atm_spec.mask)
atm_spec2 = atm_2.at(wav) # With bary shift
sum2, len2 = np.sum(atm_spec2.mask), len(atm_spec2.mask)
print("Telluric mask: {0:d}/{1:d} = {2:.03}%".format(sum1, len1, 100 * sum1 / len1))
print(
"Mask with Bary shift:: {0:d}/{1:d} = {2:4.03}%".format(
sum2, len2, 100 * sum2 / len2
)
)
Explanation: For precision relative to a SNR of 100 in the Z band.
Applying doppler shifts of $+/- 200$ km/s only produces changes of $< \pm0.01$ m/s for conditions 1 and 3, and $\pm 0.1-0.05$ m/s for condition 2.
There is a slight slope due to the shape of the input spectrum.
The large increase in the Z-band at +/-10 km/s in Figueira et al. 2016, Appendix C, is not observed here. This is due to the previous errors in condition #2.
End of explanation
from PyAstronomy.pyasl import crosscorrRV
# Cross correlation of the spectrum with the telluric transmission
xwav = np.linspace(zmin_, zmax_, len(wav1))
xflux = np.interp(xwav, wav1, flux1)
# trans = atm_.at(xwav).transmission()
print(len(xwav), len(atm_.wl))
s, corr = crosscorrRV(
xwav,
xflux,
atm_.wl,
atm_.transmission,
rvmin=-200,
rvmax=200,
drv=2,
mode="doppler",
skipedge=10000,
)
# Same cross correlation with the roles swapped (spectrum used as the template)
# PyAstronomy requires even spaced wavelength (eniric does not)
xwav = np.linspace(zmin_, zmax_, len(wav1))
xflux = np.interp(xwav, wav1, flux1)
atm2 = atm_.at(xwav)
s2, corr2 = crosscorrRV(
atm2.wl,
atm2.transmission,
wav1,
flux1,
rvmin=-200,
rvmax=200,
drv=1,
mode="doppler",
skipedge=10000,
)
# Cross correlations
plt.plot(s, corr / np.mean(corr), label="template=atm")
plt.plot(s2, corr2 / np.mean(corr2), label="template=spectrum")
plt.xlabel("RV shift (km/s)")
plt.ylabel("Correlation")
plt.legend()
plt.show()
Explanation: Applying the telluric mask reduces the spectrum analyzed to 45% of the original; including the barycentric shift reduces this further to 25%, so there is a large increase in RV error.
Cross correlations
Between a synthetic spectrum and atmospheric model
End of explanation
wavauto, corr_wav_auto = crosscorrRV(
wav1,
flux1,
wav1,
flux1,
rvmin=-200,
rvmax=200,
drv=1,
mode="doppler",
skipedge=10000,
)
atmauto, corr_atm_auto = crosscorrRV(
atm2.wl,
atm2.transmission,
atm2.wl,
atm2.transmission,
rvmin=-200,
rvmax=200,
drv=1,
mode="doppler",
skipedge=10000,
)
plt.plot(wavauto, corr_wav_auto / max(corr_wav_auto), label="Spectrum Autocorrelation")
plt.plot(
atmauto, corr_atm_auto / max(corr_atm_auto), label="Atmosphere Autocorrelation"
)
plt.plot(s, corr / max(corr), label="Cross Correlation")
plt.legend()
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(s, corr / np.max(corr), "g", label="xcorr")
ax2.plot(shifts, rv3s, "b--", label="RV3")
ax1.set_xlabel("RV shift (km/s)")
ax1.set_ylabel("Xcorr", color="g")
ax2.set_ylabel("Precision", color="b")
# plt.legend()
plt.show()
Explanation: Autocorrelations
End of explanation |
13,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
if not isfile(dataset_filename):
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and back again, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1, and so on. The words are converted to integers and stored in the list int_words.
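A sketch of how such lookup tables can be built, sorted by frequency (illustrative, not necessarily identical to utils.create_lookup_tables):
from collections import Counter

def make_lookup_tables(words):
    counts = Counter(words)
    sorted_vocab = sorted(counts, key=counts.get, reverse=True)
    int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
    vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
    return vocab_to_int, int_to_vocab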
End of explanation
from collections import Counter
word_counts = Counter(int_words)
word_counts.most_common(3)
threshold = 1e-5
total_counts = len(int_words)
frequencies = {word: count / total_counts for word, count in word_counts.items()}
drop_prob = {word: 1 - np.sqrt(threshold / frequencies[word]) for word in int_words}
drop_prob[0]
## Your code here
import random
train_words = [word for word in int_words if drop_prob[word] < random.random()]
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
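As a quick numeric check of the formula (the frequencies below are made-up examples, not measured from this corpus):
import numpy as np

t = 1e-5
for freq in [0.05, 0.001, 0.0001]:
    print(freq, 1 - np.sqrt(t / freq))
# 0.05   -> ~0.986  (very frequent words are dropped almost every time)
# 0.001  -> ~0.900
# 0.0001 -> ~0.684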
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
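A quick sanity check of the function above on a toy list (the output varies because the window size R is drawn at random):
toy = list(range(10))
print(get_target(toy, idx=5, window_size=3))
# e.g. [3, 4, 6, 7] when R happens to be 2; order may vary since a set is used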
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
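A small check of the generator on toy data, just to see that the shapes line up (each input word is repeated once per target in its window):
toy = list(range(20))
x, y = next(get_batches(toy, batch_size=10, window_size=2))
print(len(x) == len(y))   # True: one row per (input, target) pair
print(x[:6])
print(y[:6])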
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, shape=[None], name="inputs")
labels = tf.placeholder(tf.int32, shape=[None, None], name="labels")
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform([n_vocab, n_embedding], minval=-1, maxval=1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal([n_vocab, n_embedding], stddev=0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros([n_vocab])) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
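A rough back-of-the-envelope count shows why this matters (the round numbers below are made up for illustration):
n_vocab, n_embedding, n_sampled = 63000, 200, 100

full_softmax = n_vocab * n_embedding        # output weights touched by a full softmax update
sampled = (1 + n_sampled) * n_embedding     # correct label plus the sampled negatives
print(full_softmax, sampled)                # 12600000 vs. 20200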
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from each of the ranges (0, 100) and (1000, 1100); lower ids imply more frequent words
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
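For intuition, the similarity being computed is just the cosine of the angle between two embedding vectors; in plain NumPy that is:
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 1.0])))   # ~0.707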
End of explanation
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
13,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project
Step3: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
Step5: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
Step7: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA dataset will be in the range of -0.5 to 0.5 of 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https
Step10: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step13: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
Step16: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
Step19: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented
Step22: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
Step25: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
Step27: Train
Implement train to build and train the GANs. Use the following functions you implemented
Step29: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
Step31: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces. | Python Code:
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
DON'T MODIFY ANYTHING IN THIS CELL
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Explanation: Face Generation
In this project, you'll use generative adversarial networks to generate new images of faces.
Get the Data
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the celebA dataset is complex and you're doing GANs in a project for the first time, we want you to test your neural network on MNIST before CelebA. Running the GANs on MNIST will allow you to see how well your model trains sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Explanation: Explore the Data
MNIST
As you're aware, the MNIST dataset contains images of handwritten digits. You can view the first number of examples by changing show_n_images.
End of explanation
show_n_images = 25
DON'T MODIFY ANYTHING IN THIS CELL
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(mnist_images, 'RGB'))
Explanation: CelebA
The CelebFaces Attributes Dataset (CelebA) dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first number of examples by changing show_n_images.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Preprocess the Data
Since the project's main focus is on building the GANs, we'll preprocess the data for you. The values of the MNIST and CelebA datasets will be in the range of -0.5 to 0.5 for 28x28 dimensional images. The CelebA images will be cropped to remove parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are black and white images with a single [color channel](https://en.wikipedia.org/wiki/Channel_(digital_image%29) while the CelebA images have [3 color channels (RGB color channel)](https://en.wikipedia.org/wiki/Channel_(digital_image%29#RGB_Images).
Build the Neural Network
You'll build the components necessary to build a GANs by implementing the following functions below:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
input_ = tf.placeholder(shape=(None, image_width, image_height, image_channels), dtype=tf.float32)
z = tf.placeholder(shape=(None, z_dim), dtype=tf.float32)
learning_rate = tf.placeholder(tf.float32)
return input_, z, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Input
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
- Z input placeholder with rank 2 using z_dim.
- Learning rate placeholder with rank 0.
Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate)
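A quick usage sketch of the function above, with sizes matching the 28x28 RGB images and 100-dimensional z used later (just an illustration):
with tf.Graph().as_default():
    image_input, z_input, lr = model_inputs(28, 28, 3, 100)
    print(image_input.get_shape().as_list())   # [None, 28, 28, 3]
    print(z_input.get_shape().as_list())       # [None, 100]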
End of explanation
# Helper functions for _discriminator_ and _generator_
from operator import mul
from functools import reduce
def leaky_relu(x, alpha = 0.2):
return tf.maximum(x,alpha*x)
def batch_norm(x, training):
return tf.layers.batch_normalization(x, training=True)
def flatten(x):
image_shape = x.get_shape()[1:4]
image_shape = map(int, image_shape)
image_size = reduce(mul, image_shape, 1)
return tf.reshape(x, (-1, image_size))
def deconv2d(x, n_filters, padding, kernel, stride, is_train=False, activation=leaky_relu, normalize=True):
x = tf.layers.conv2d_transpose(x, n_filters, kernel, stride, padding, use_bias=(not normalize))
if normalize == True:
x = batch_norm(x, is_train)
return activation(x)
def conv2d(x, n_filters, activation=leaky_relu, normalize=True):
x = tf.layers.conv2d(x, n_filters, (5,5), (2,2), 'same', use_bias=(not normalize))
if normalize == True:
# Discriminator is only used during training
x = batch_norm(x, training=True)
return activation(x)
def dense(x, width, height, n_units, is_train=False, activation=leaky_relu):
x = tf.layers.dense(x, width*height*n_units, use_bias=False)
x = tf.reshape(x, (-1, width, height, n_units))
x = batch_norm(x, is_train)
return activation(x)
def discriminator(images, reuse=False):
Create the discriminator network
:param image: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
with tf.variable_scope('discriminator', reuse=reuse):
# Convolutional input layer without batch normalization
out = conv2d(images, 32, normalize=False)
# Convolution hidden layers with leaky relu activation, no pooling
out = conv2d(out, 64)
out = conv2d(out, 128)
# Fully connected output layer with sigmoid activation
out = flatten(out)
logits = tf.layers.dense(out, 1)
out = tf.sigmoid(logits)
return out, logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_discriminator(discriminator, tf)
Explanation: Discriminator
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
End of explanation
def generator(z, n_out_channel, is_train=True):
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
with tf.variable_scope('generator', reuse=(not is_train)):
# Starting with 4x4x512 sized layer
out = dense(z, 4, 4, 512, is_train)
# Deconv layers
# 7x7x256
out = deconv2d(out, 256, 'valid', 4, 1, is_train)
# 14x14x128
out = deconv2d(out, 128, 'same', 5, 2, is_train)
# Output layer
# 28x28x3
out = deconv2d(out, n_out_channel, 'same', 5, 2, activation=tf.tanh, normalize=False)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_generator(generator, tf)
Explanation: Generator
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
End of explanation
def model_loss(input_real, input_z, out_channel_dim):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
smoothing = 0.1
g_model = generator(input_z, out_channel_dim, True)
d_model_fake, d_logits_fake = discriminator(g_model)
d_model_real, d_logits_real = discriminator(input_real, reuse=True)
labels = tf.ones_like(d_logits_real) * (1 - smoothing)
d_loss_real = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=d_logits_real)
labels = tf.zeros_like(d_logits_fake) + smoothing
d_loss_fake = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=d_logits_fake)
d_loss = tf.reduce_mean(d_loss_real) + tf.reduce_mean(d_loss_fake)
labels = tf.ones_like(d_logits_fake) * (1 - smoothing)
g_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=d_logits_fake)
g_loss = tf.reduce_mean(g_loss)
return d_loss, g_loss
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_loss(model_loss)
Explanation: Loss
Implement model_loss to build the GANs for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
trainables = tf.trainable_variables()
g_vars = [var for var in trainables if var.name.startswith('generator')]
d_vars = [var for var in trainables if var.name.startswith('discriminator')]
g_opt = tf.train.AdamOptimizer(learning_rate, beta1).minimize(g_loss, var_list=g_vars)
d_opt = tf.train.AdamOptimizer(learning_rate, beta1).minimize(d_loss, var_list=d_vars)
return d_opt, g_opt
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_opt(model_opt, tf)
Explanation: Optimization
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Explanation: Neural Network Training
Show Output
Use this function to show the current output of the generator during training. It will help you determine how well the GANs is training.
End of explanation
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
n_images, width, height, n_channels = data_shape
n_batches = n_images // batch_size
images, z, lr = model_inputs(width, height, n_channels, z_dim)
d_loss, g_loss = model_loss(images, z, n_channels)
d_opt, g_opt = model_opt(d_loss, g_loss, lr, beta1)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
epochs = range(epoch_count)
for epoch_i in epochs:
print("Epoch: ", epoch_i)
g_losses, d_losses = [], []
for batch_i, batch_images in enumerate(get_batches(batch_size), 1):
# Generating input for the generator
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
# Scaling for tanh domain, [-1,1]
batch_real = 2*batch_images
# Train
g_feed = {z: batch_z, lr: learning_rate}
d_feed = {z: batch_z, images: batch_real, lr: learning_rate}
sess.run( g_opt, g_feed )
sess.run( d_opt, d_feed )
print(".", end="",flush=True)
# Check Losses
if batch_i % 100 == 0:
generator = sess.run(g_loss, g_feed)
discriminator = sess.run(d_loss, d_feed)
print("\t{0}/{1} Losses:\tD={2:6.4f}\t\tG={3:6.4f}".format(
batch_i, n_batches, discriminator, generator))
g_losses.append(generator)
d_losses.append(discriminator)
print('\nAverage loss per epoch:','D: ', np.average(d_losses), 'G: ', np.average(g_losses), flush=True)
show_generator_output(sess, 9, z, n_channels, data_image_mode)
Explanation: Train
Implement train to build and train the GANs. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use the show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and increase the size of the notebook. It's recommended to print the generator output every 100 batches.
End of explanation
batch_size = 128
z_dim = 100
learning_rate = 0.001
beta1 = 0.
tf.reset_default_graph()
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
Explanation: MNIST
Test your GANs architecture on MNIST. After 2 epochs, the GANs should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
End of explanation
batch_size = 128
z_dim = 100
learning_rate = 0.0004
beta1 = 0.
tf.reset_default_graph()
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
Explanation: CelebA
Run your GANs on CelebA. It will take around 20 minutes on the average GPU to run one epoch. You can run the whole epoch or stop when it starts to generate realistic faces.
End of explanation |
13,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Crash Course In Linear Algebra For Data Scientists
Preamble
This notebook was made for the purposes of quickly introducing the theoretical groundwork of Linear Algebra for Data Scientists.
To that end, we will be primarily making the computer do the hard work of doing the tedious calculations for us.
Specifically, this notebook will be using numpy as the backend for the computations.
This notebook will just ensure that your environment is loaded and all basic operations are supported.
Contents
Basic Linear Algebra Operations
Vector Spaces
Inverse Matrix Theorem
Regression and PCA
L_1 and L_2 norms from Linear Algebra.
Graph Representations
Aim of this notebook.
This notebook simply aims to make sure you have the basic environment set up to work with the rest of the notebooks.
Step1: Creating and shaping numpy arrays.
Step2: For our purposes, a matrix will be a numpy array of shape (m, n) where m, n > 0 and are integers. The matrix may consist of fractions, floats, or complex numbers.
A vector will be a matrix of shape (m, 1).
Algebraically speaking a matrix of integers is a module and is a generalization of linear Algebra. We will not speak of such things again. If a matrix of integers is provided, presume we meant either floats, fractions, or complex numbers as deduced from context.
Step3: Numpy supports broadcasting of operations.
Step4: This is analogous to scalar multiplication with matrices.
2 * [[2,3],[4,5]] = [[4,6],[8,10]]
Step5: Transpositions of Matrices.
Step6: We will frequently write vectors as $[1,2]^T$ so we can write them in line.
Step7: Basic Axioms of Linear Algebra
Taken from Wikipedia
Vector spaces
The main structures of linear algebra are vector spaces. A vector space over a field F (often the field of the real numbers) is a set V equipped with two binary operations satisfying the following axioms. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The operations of addition and multiplication in a vector space must satisfy the following axioms.[15] In the list below, let u, v and w be arbitrary vectors in V, and a and b scalars in F.
The axioms, with their significations:
- Associativity of addition: u + (v + w) = (u + v) + w
- Commutativity of addition: u + v = v + u
- Identity element of addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
- Inverse elements of addition: for every v ∈ V, there exists an element −v ∈ V, called the additive inverse of v, such that v + (−v) = 0.
- Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
- Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
- Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
- Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.
The first four axioms are those of V being an abelian group under vector addition. Elements of a vector space may have various nature; for example, they can be sequences, functions, polynomials or matrices. Linear algebra is concerned with properties common to all vector spaces.
Lets see how these look in numpy | Python Code:
import numpy as np
Explanation: Crash Course In Linear Algebra For Data Scientists
Preamble
This notebook was made for the purposes of quickly introducing the theoretical groundwork of Linear Algebra for Data Scientists.
To that end, we will be primarily making the computer do the hard work of doing the tedious calculations for us.
Specifically, this notebook will be using numpy as the backend for the computations.
This notebook will just ensure that your environment is loaded and all basic operations are supported.
Contents
Basic Linear Algebra Operations
Vector Spaces
Inverse Matrix Theorem
Regression and PCA
L_1 and L_2 norms from Linear Algebra.
Graph Representations
Aim of this notebook.
This notebook simply aims to make sure you have the basic environment set up to work with the rest of the notebooks.
End of explanation
# Create numpy arrays like this.
print("A 1 row by 4 column numpy matrix\n{}".format(np.array(range(4))))
# Create a 2 by 2 matrix like this.
A = np.array(range(4)).reshape(2,2) # Reshape into an array of two rows and 2 columns.
print("A 2 row by 2 column numpy matrix\n{}".format(A))
# Create a 3 by 2 matrix.
A = np.array(range(6)).reshape(3,2)
print("A 3 row by 2 column numpy matrix\n{}".format(A))
# Note the ordering of how it reshapes. It places items item by item until the row is filled.
# Be careful to match the number of elements to the shape you give to reshape!
np.array(range(6)).reshape(2,4) # This raises a ValueError: the number of elements must equal the product of the dimensions.
Explanation: Creating and shaping numpy arrays.
End of explanation
# Create a (1,3) vector
np.array(range(3)).reshape(3,1)
# You can use this to create larger tensors, but we will not discuss tensors further in this tutorial.
print("A tensor!\n{}".format(np.array(range(24)).reshape(2,3,4)))
# Golly gee whiz! Looks like two 3 by 4 matrices!
Explanation: For our purposes, a matrix will be a numpy array of shape (m, n) where m, n > 0 and are integers. The matrix may consist of fractions, floats, or complex numbers.
A vector will be a matrix of shape (m, 1).
Algebraically speaking a matrix of integers is a module and is a generalization of linear Algebra. We will not speak of such things again. If a matrix of integers is provided, presume we meant either floats, fractions, or complex numbers as deduced from context.
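For instance, a float matrix and a column vector in this sense can be written as follows (a small illustrative snippet):
M = np.array([[1.0, 2.0], [3.0, 4.0]])        # a 2 by 2 matrix of floats
v = np.array([1.0, 2.0, 3.0]).reshape(3, 1)   # a vector, i.e. a (3, 1) matrix
print(M.shape, v.shape)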
End of explanation
A = np.array([2,3,4,5]).reshape(2,2)
2 * A
Explanation: Numpy supports broadcasting of operations.
End of explanation
# Numpy dynamically casts to floats, etc.
0.5 * A
# Careful to multiply matrices, vectors, etc with the np.matmul operation.
A = np.array(range(4)).reshape(2,2)
b = np.array([2,5]).reshape(2,1)
element_wise_multiplication = A * b
matrix_multiplication = np.matmul(A, b)
print("Element-wise multiplication:\nA .* b = \n{}\n".format(element_wise_multiplication))
print("Matrix-Multiplication:\nA * b = \n{}".format(matrix_multiplication))
Explanation: This is analogous to scalar multiplication with matrices.
2 * [[2,3],[4,5]] = [[4,6],[8,10]]
End of explanation
# Take a Matrix and use it's .transpose method.
print(A.transpose())
Explanation: Transpositions of Matrices.
End of explanation
a = np.array([1,2]).reshape(2,1) # Or declare them like this in numpy.
print(a)
Explanation: We will frequently write vectors as $[1,2]^T$ so we can write them in line.
End of explanation
# Fix vectors u, v, w
# Fix scalars a, b
u = np.array([2,3,4]).reshape(3,1)
v = np.array([-2,3,4.5]).reshape(3,1)
w = np.array([-3.2, 5, 12]).reshape(3,1)
zero = np.array([0,0,0]).reshape(3,1)
a = 3.2
b = 4
print("Check associativity:\n {} \n== \n{}\n True!".format(u + (v + w),(u + v) + w))
print("Check commutativity:\n {}\n ==\n {}".format(u + v, v + u))
print("Check inverse elements of addition:\n {}\n+\n{}\n==\n{}".format(v, -v, zero))
print("You're getting the picture...")
Explanation: Basic Axioms of Linear Algebra
Taken from Wikipedia
Vector spaces
The main structures of linear algebra are vector spaces. A vector space over a field F (often the field of the real numbers) is a set V equipped with two binary operations satisfying the following axioms. Elements of V are called vectors, and elements of F are called scalars. The first operation, vector addition, takes any two vectors v and w and outputs a third vector v + w. The second operation, scalar multiplication, takes any scalar a and any vector v and outputs a new vector av. The operations of addition and multiplication in a vector space must satisfy the following axioms.[15] In the list below, let u, v and w be arbitrary vectors in V, and a and b scalars in F.
The axioms, with their significations:
- Associativity of addition: u + (v + w) = (u + v) + w
- Commutativity of addition: u + v = v + u
- Identity element of addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
- Inverse elements of addition: for every v ∈ V, there exists an element −v ∈ V, called the additive inverse of v, such that v + (−v) = 0.
- Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
- Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
- Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
- Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.
The first four axioms are those of V being an abelian group under vector addition. Elements of a vector space may have various nature; for example, they can be sequences, functions, polynomials or matrices. Linear algebra is concerned with properties common to all vector spaces.
Let's see how these look in numpy
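The remaining axioms can be checked in exactly the same spirit; for example, a quick sketch of the distributivity, compatibility, and scalar-identity properties using the same u, v, a, and b defined above:
print("Check distributivity over vector addition:\n{}\n==\n{}".format(a * (u + v), a * u + a * v))
print("Check distributivity over field addition:\n{}\n==\n{}".format((a + b) * v, a * v + b * v))
print("Check compatibility of multiplications:\n{}\n==\n{}".format(a * (b * v), (a * b) * v))
print("Check identity element of scalar multiplication:\n{}\n==\n{}".format(1 * v, v))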
End of explanation |
13,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook
Roadmap
Data Generating Process
Objects of Interest
Processing Specification
Setting up Simulation
Conducting Estimation
Inspection of Results
Over the next couple of lectures, we will then constantly refine the basic code and explore elements of software engineering such as Object-Oriented Programming, Unit Testing, Debugging, and Profiling.
Before we get started, let us import some basic libraries.
Step1: Processing of Model Specification
We manage the model specification in an external text file, which is called init.ini. This file will turn out to be useful to provide the parameters for a simulation of a synthetic sample or the initialization of starting values for an estimation.
Step8: Now we will develop a set of functions that process the initialization file.
Step9: Let us check if it is all working.
Step13: Setting up the Simulation
Distributional Assumptions
Observables
\begin{align}
X & \sim \mathbb{N}(0, 1) \
Z & \sim \mathbb{N}(0, 1) \
\end{align}
Unobservables
\begin{eqnarray}
\begin{pmatrix}U_{1}\
U_{0}\
V
\end{pmatrix} & \sim & \mathbb{N}\left[\left(\begin{array}{c}
0\
0\
0
\end{array}\right),\left(\begin{array}{ccc}
\sigma_{U_1}^2 & 0 & \sigma_{U_1,V}\
0 & \sigma_{U_0}^2 & \sigma_{U_0,V}\
\sigma_{U_1,V} & \sigma_{U_0,V} & \sigma_{V}^2
\end{array}\right)\right]\
\end{eqnarray}
Step14: Let us check if it is all working.
Step15: Given our parametrization, let us revisit our objects of interest. We start with the individual-specific benefits. Please note the use of the StatsModels library, which provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests and statistical data exploration. Think of it as a replacement for using R. In general, rpy2 provides a low-level interface to R from Python.
Step23: Estimation
Now, we will perform Maximum Likelihood Estimation using alternative optimization algorithms. Here is the likelihood function
Step24: Let us evaluate the criterion function and see if that part is working.
Step26: Finally, we create a function for the estimation. It builds on all our previously defined elements.
Step27: Now, let us actually run some estimations. Our current implementation allow to easily switch between alternative optimzation algorithms and starting values.
Step29: Inspection of Results
After conducting an estimation, the focus shifts back to the economic interpretation of our results. As we often have to study the results from hundreds of different estimation runs, it is convenient to have a function set up that produces the main objects of interest.
Step30: Let us try it out.
Step31: Cleanup
Step32: Formatting | Python Code:
%%capture
# Notebook metods
from IPython.core.display import HTML, Image
# Unix Pattern Extensions
import glob
# Operating System Interfaces
import os
# Lexical Analysis
import shlex
# Copy operations
import copy
# Encoders and Decoders
import codecs
# Statistical Modeling and Econometrics
import statsmodels.api as sm
# Plotting
import matplotlib.pyplot as plt
%pylab inline --no-import-all
# Scientific Computing
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize
Explanation: Notebook
Roadmap
Data Generating Process
Objects of Interest
Processing Specification
Setting up Simulation
Conducting Estimation
Inspection of Results
Over the next couple of lectures, we will then constantly refine the basic code and explore elements of software engineering such as Object-Oriented Programming, Unit Testing, Debugging, and Profiling.
Before we get started, let us import some basic libraries.
End of explanation
Image(filename='material/images/init.png', width=1000)
Explanation: Processing of Model Specification
We manage the model specification in an external text file, which is called init.ini. This file will turn out to be useful to provide the parameters for a simulation of a synthetic sample or the initialization of starting values for an estimation.
End of explanation
def _process_bene(list_, dict_, keyword):
This function processes the BENE part of the initialization file.
# Distribute information
name, val_treated, val_untreated = list_[0], list_[1], list_[2]
# Initialize dictionary
if 'TREATED' not in dict_.keys():
for subgroup in ['TREATED', 'UNTREATED']:
dict_[subgroup] = {}
dict_[subgroup]['coeff'] = []
dict_[subgroup]['int'] = None
dict_[subgroup]['sd'] = None
# Type conversion
val_treated = float(val_treated)
val_untreated = float(val_untreated)
# Collect information
if name in ['coeff']:
dict_['TREATED'][name] += [val_treated]
dict_['UNTREATED'][name] += [val_untreated]
else:
dict_['TREATED'][name] = val_treated
dict_['UNTREATED'][name] = val_untreated
# Finishing
return dict_
def _process_not_bene(list_, dict_, keyword):
This function processes all of the initialization file, but the
BENE section.
# Distribute information
name, val = list_[0], list_[1]
# Prepare container.
if name not in dict_[keyword].keys():
if name in ['coeff']:
dict_[keyword][name] = []
# Type conversion
if name in ['agents', 'maxiter']:
val = int(val)
elif name in ['source', 'algorithm', 'start', 'version']:
val = str(val)
else:
val = float(val)
# Collect information
if name in ['coeff']:
dict_[keyword][name] += [val]
else:
dict_[keyword][name] = val
# Finishing.
return dict_
def _check_integrity_process(dict_):
Check integrity of initFile dict.
# Antibugging
assert (isinstance(dict_, dict))
# Check number of agents
assert (dict_['BASICS']['agents'] > 0)
assert (isinstance(dict_['BASICS']['agents'], int))
# Check optimizer
assert (dict_['ESTIMATION']['algorithm'] in ['bfgs', 'nm'])
# Check starting values
assert (dict_['ESTIMATION']['start'] in ['random', 'init'])
# Maximum iterations
assert (dict_['ESTIMATION']['maxiter'] >= 0)
# Finishing
return True
def _add_auxiliary(dict_):
Add some auxiliary objects.
# Antibugging
assert (isinstance(dict_, dict))
# Initialize container
dict_['AUX'] = {}
# Full set of coefficients.
for key_ in ['TREATED', 'UNTREATED', 'COST']:
dict_[key_]['all'] = [dict_[key_]['int']]
dict_[key_]['all'] += dict_[key_]['coeff']
dict_[key_]['all'] = np.array(dict_[key_]['all'])
# Number of covariates
num_covars_out = len(dict_['TREATED']['all'])
num_covars_cost = len(dict_['COST']['all'])
dict_['AUX']['num_covars_out'] = num_covars_out
dict_['AUX']['num_covars_cost'] = num_covars_cost
# Number of parameters
dict_['AUX']['num_paras'] = 2 * num_covars_out + num_covars_cost + 2 + 2
# Starting values
dict_['AUX']['init_values'] = []
for key_ in ['TREATED', 'UNTREATED', 'COST']:
dict_['AUX']['init_values'] += dict_[key_]['all'].tolist()
dict_['AUX']['init_values'] += [dict_['TREATED']['sd']]
dict_['AUX']['init_values'] += [dict_['UNTREATED']['sd']]
dict_['AUX']['init_values'] += [dict_['DIST']['rho1']]
dict_['AUX']['init_values'] += [dict_['DIST']['rho0']]
# Finishing
return dict_
def _process_cases(list_):
Process cases and determine whether keyword or empty
line.
# Antibugging
assert (isinstance(list_, list))
# Get information
is_empty = (len(list_) == 0)
if not is_empty:
is_keyword = list_[0].isupper()
else:
is_keyword = False
# Antibugging
assert (is_keyword in [True, False])
assert (is_empty in [True, False])
# Finishing
return is_empty, is_keyword
def process(file_):
Process initialization file.
# Initialization
dict_ = {}
for line in open(file_).readlines():
# Remove UTF-3 marker
if line.startswith(codecs.BOM_UTF8):
line = line[3:]
# Split line
list_ = shlex.split(line)
# Determine special cases
is_empty, is_keyword = _process_cases(list_)
# Applicability
if is_empty:
continue
if is_keyword:
keyword = list_[0]
dict_[keyword] = {}
continue
if keyword not in ['BENE']:
dict_ = _process_not_bene(list_, dict_, keyword)
else:
dict_ = _process_bene(list_, dict_, keyword)
# Remove BENE
del dict_['BENE']
# Add auxiliary objects
dict_ = _add_auxiliary(dict_)
# Check quality.
_check_integrity_process(dict_)
# Finishing.
return dict_
Explanation: Now we will develop a set of functions that process the initialization file.
End of explanation
init_dict = process('material/msc/init.ini')
Explanation: Let us check if it is all working.
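One simple way to eyeball the processed dictionary is to confirm that the expected blocks are present (an illustrative check; the exact contents depend on init.ini):
for key_ in ['BASICS', 'TREATED', 'UNTREATED', 'COST', 'DIST', 'ESTIMATION', 'AUX']:
    print key_, key_ in init_dict.keys()
print 'Number of parameters', init_dict['AUX']['num_paras']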
End of explanation
init_dict = process('material/msc/init.ini')
def _check_integrity_simulate(Y1, Y0, Y, D):
Check quality of simulated sample.
assert (np.all(np.isfinite(Y1)))
assert (np.all(np.isfinite(Y0)))
assert (np.all(np.isfinite(Y)))
assert (np.all(np.isfinite(D)))
assert (Y1.dtype == 'float')
assert (Y0.dtype == 'float')
assert (Y.dtype == 'float')
assert (D.dtype == 'float')
assert (D.all() in [1.0, 0.0])
def _write_out(Y, D, X, Z, source, unobserved=False, Y1=None, Y0=None):
Write out simulated data to file.
if not unobserved:
np.savetxt(source, np.column_stack((Y, D, X, Z)), fmt='%8.3f')
else:
assert (isinstance(Y1, np.ndarray))
assert (isinstance(Y0, np.ndarray))
np.savetxt(source, np.column_stack((Y, D, X, Z, Y1, Y0)),
fmt='%8.3f')
def simulate(init_dict, unobserved=False):
Simulate a model based on the initialization file.
# Antibugging
assert (isinstance(init_dict, dict))
assert (unobserved in [True, False])
# Ensure recomputability
np.random.seed(123)
# Distribute information
num_agents = init_dict['BASICS']['agents']
source = init_dict['BASICS']['source']
Y1_coeffs = init_dict['TREATED']['all']
Y0_coeffs = init_dict['UNTREATED']['all']
C_coeffs = np.array(init_dict['COST']['all'])
U1_sd = init_dict['TREATED']['sd']
U0_sd = init_dict['UNTREATED']['sd']
V_sd = init_dict['COST']['sd']
U1V_rho = init_dict['DIST']['rho1']
U0V_rho = init_dict['DIST']['rho0']
# Auxiliary objects
U1V_cov = U1V_rho * U1_sd * V_sd
U0V_cov = U0V_rho * U0_sd * V_sd
num_covars_out = Y1_coeffs.shape[0]
num_covars_cost = C_coeffs.shape[0]
# Simulate observables
means = np.tile(0.0, num_covars_out)
covs = np.identity(num_covars_out)
X = np.random.multivariate_normal(means, covs, num_agents)
means = np.tile(0.0, num_covars_cost)
covs = np.identity(num_covars_cost)
Z = np.random.multivariate_normal(means, covs, num_agents)
# Add intercepts. The first column of the X and Z matrix always contains
# the intercept term. This is exploited throughout the code.
Z[:,0], X[:, 0] = 1.0, 1.0
# Construct index of observable characteristics
Y1_level = np.dot(Y1_coeffs, X.T)
Y0_level = np.dot(Y0_coeffs, X.T)
C_level = np.dot(C_coeffs, Z.T)
# Simulate unobservables
means = np.tile(0.0, 3)
vars_ = [U1_sd**2, U0_sd**2, V_sd**2]
covs = np.diag(vars_)
covs[0, 2] = U1V_cov
covs[2, 0] = covs[0, 2]
covs[1, 2] = U0V_cov
covs[2, 1] = covs[1, 2]
U = np.random.multivariate_normal(means, covs, num_agents)
# Simulate endogenous variables
Y1 = np.tile(np.nan, num_agents)
Y0 = np.tile(np.nan, num_agents)
Y = np.tile(np.nan, num_agents)
D = np.tile(np.nan, num_agents)
for i in range(num_agents):
# Select individual unobservables and observables
u1, u0, v = U[i, 0], U[i, 1], U[i, 2]
y1_idx, y0_idx, c_idx = Y1_level[i], Y0_level[i], C_level[i]
# Decision Rule
expected_benefits = y1_idx - y0_idx
cost = c_idx + v
d = np.float((expected_benefits - cost > 0))
# Potential outcomes
y1, y0 = y1_idx + u1, y0_idx + u0
# Observed outcomes
y = d * y1 + (1.0 - d) * y0
# Collect data matrices
Y[i], Y0[i], Y1[i], D[i] = y, y1, y0, d
# Check integrity of simulated data
_check_integrity_simulate(Y1, Y0, Y, D)
# Save to disk
_write_out(Y, D, X, Z, source, unobserved, Y1, Y0)
# Return selected features of data
return Y1, Y0, D
Explanation: Setting up the Simulation
Distributional Assumptions
Observables
\begin{align}
X & \sim \mathbb{N}(0, 1) \
Z & \sim \mathbb{N}(0, 1) \
\end{align}
Unobservables
\begin{eqnarray}
\begin{pmatrix}U_{1}\
U_{0}\
V
\end{pmatrix} & \sim & \mathbb{N}\left[\left(\begin{array}{c}
0\
0\
0
\end{array}\right),\left(\begin{array}{ccc}
\sigma_{U_1}^2 & 0 & \sigma_{U_1,V}\
0 & \sigma_{U_0}^2 & \sigma_{U_0,V}\
\sigma_{U_1,V} & \sigma_{U_0,V} & \sigma_{V}^2
\end{array}\right)\right]\
\end{eqnarray}
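For the unobservables to have a valid joint normal distribution, the implied covariance matrix must be positive semi-definite; a quick sketch of that check with illustrative parameter values (not necessarily those in init.ini):
U1_sd, U0_sd, V_sd = 1.0, 1.0, 1.0
rho1, rho0 = 0.2, 0.1
covs = np.array([[U1_sd ** 2, 0.0, rho1 * U1_sd * V_sd],
                 [0.0, U0_sd ** 2, rho0 * U0_sd * V_sd],
                 [rho1 * U1_sd * V_sd, rho0 * U0_sd * V_sd, V_sd ** 2]])
print np.all(np.linalg.eigvalsh(covs) >= 0)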
End of explanation
init_dict = process('material/msc/init.ini')
Y1, Y0, D = simulate(init_dict)
Explanation: Let us check if it is all working.
End of explanation
# Auxiliary variables
B = Y1 - Y0
# Create histogram and density estimate of benefits.
kde = sm.nonparametric.KDEUnivariate(B)
kde.fit()
# Initialize canvas
ax = plt.figure(figsize=(12,8)).add_subplot(111)
# Plot histogram and density
ax.hist(B, bins=50, normed=True, color='blue')
ax.plot(kde.support, kde.density, lw=2, color='black')
# Set axis labels
ax.set_xlabel('Individual-Specific Benefits', fontsize=18)
ax.set_ylabel('Density and Histogram', fontsize=18)
# Change background color
ax.set_axis_bgcolor('white')
# Remove first element on y-axis
ax.yaxis.get_major_ticks()[0].set_visible(False)
# Calcuate the average treatment effects
ate, tt, tut = np.mean(B), np.mean(B[D==1]), np.mean(B[D==0])
# Pretty formatting of strings and output
fmt = ' {0:<5}{1:10.2f}\n'
print '\nAverage Treatment Effects\n'
print fmt.format('ATE ', ate)
print fmt.format('TT', tt)
print fmt.format('TUT ', tut)
# Let us add them to our plot.
plt.axvline(x=ate, ymin=0, ymax=5, linewidth=2, color='g')
plt.axvline(x=tt, ymin=0, ymax=5, linewidth=2, color='b')
plt.axvline(x=tut, ymin=0, ymax=5, linewidth=2, color='y')
# Add title
plt.suptitle('Distribution of Individual-Specific Benefits', fontsize=20)
Explanation: Given our parametrization, let us revisit our objects of interest. We start with the individual-specific benefits. Please note the use of the StatsModels library, which provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests and statistical data exploration. Think of it as a replacement for using R. In general, rpy2 provides a low-level interface to R from Python.
End of explanation
def _distribute_parameters(x, init_dict, num_covars_out):
Distribute the parameters.
# Antibugging
assert (isinstance(x, np.ndarray))
assert (isinstance(num_covars_out, int))
assert (num_covars_out > 0)
# Initialize containers
rslt = dict()
rslt['TREATED'] = dict()
rslt['UNTREATED'] = dict()
rslt['COST'] = dict()
rslt['DIST'] = dict()
# Distribute parameters
rslt['TREATED']['all'] = x[:num_covars_out]
rslt['UNTREATED']['all'] = x[num_covars_out:(2 * num_covars_out)]
rslt['COST']['all'] = x[(2 * num_covars_out):(-4)]
rslt['COST']['sd'] = init_dict['COST']['sd']
rslt['TREATED']['sd'] = np.exp(x[(-4)])
rslt['UNTREATED']['sd'] = np.exp(x[(-3)])
rslt['DIST']['rho1'] = -1.0 + 2.0 / (1.0 + float(np.exp(-x[-2])))
rslt['DIST']['rho0'] = -1.0 + 2.0 / (1.0 + float(np.exp(-x[-1])))
# Update auxiliary versions
rslt['AUX'] = dict()
rslt['AUX']['x_internal'] = x.copy()
rslt['AUX']['x_internal'][-4] = np.exp(x[(-4)])
rslt['AUX']['x_internal'][-3] = np.exp(x[(-3)])
rslt['AUX']['x_internal'][-2] = -1.0 + 2.0 / (1.0 + float(np.exp(-x[-2])))
rslt['AUX']['x_internal'][-1] = -1.0 + 2.0 / (1.0 + float(np.exp(-x[-1])))
rslt['AUX']['init_values'] = init_dict['AUX']['init_values']
# Finishing.
return rslt
def _max_interface(x, Y, D, X, Z, version, init_dict):
Interface to the SciPy maximization routines.
# Auxiliary objects
num_covars_out = X.shape[1]
# Collect maximization arguments
rslt = _distribute_parameters(x, init_dict, num_covars_out)
# Calculate likelihood
likl = _negative_log_likelihood(rslt, Y, D, X, Z)
# Finishing.
return likl
def _negative_log_likelihood(args, Y, D, X, Z):
Negative Log-likelihood function of the generalized Roy model.
# Distribute arguments
Y1_coeffs, Y0_coeffs, C_coeffs, choice_coeffs, U1_sd, U0_sd, U1V_rho, \
U0V_rho, V_sd = _distribute_arguments(args)
# Auxiliary objects
num_agents = Y.shape[0]
# Initialize containers
likl = np.tile(np.nan, num_agents)
choice_idx = np.tile(np.nan, num_agents)
# Likelihood construction.
for i in range(num_agents):
g = np.concatenate((X[i, :], Z[i,:]))
choice_idx[i] = np.dot(choice_coeffs, g)
# Select outcome information
if D[i] == 1.00:
coeffs, rho, sd = Y1_coeffs, U1V_rho, U1_sd
else:
coeffs, rho, sd = Y0_coeffs, U0V_rho, U0_sd
arg_one = (Y[i] - np.dot(coeffs, X[i, :])) / sd
arg_two = (choice_idx[i] - rho * V_sd * arg_one) / \
np.sqrt((1.0 - rho ** 2) * V_sd**2)
pdf_evals, cdf_evals = norm.pdf(arg_one), norm.cdf(arg_two)
if D[i] == 1.0:
contrib = (1.0 / float(sd)) * pdf_evals * cdf_evals
else:
contrib = (1.0 / float(sd)) * pdf_evals * (1.0 - cdf_evals)
likl[i] = contrib
# Transformations.
likl = -np.mean(np.log(np.clip(likl, 1e-20, np.inf)))
# Quality checks.
assert (isinstance(likl, float))
assert (np.isfinite(likl))
# Finishing.
return likl
def _get_start(which, init_dict, Y, D, X, Z):
Get different kind of starting values.
# Antibugging.
assert (which in ['random', 'init', 'auto'])
# Distribute auxiliary objects
num_paras = init_dict['AUX']['num_paras']
num_covars_cost = init_dict['AUX']['num_covars_cost']
# Construct auxiliary objects
G = np.concatenate((X, Z[:, 1:]), axis=1)
# Select relevant values.
if which == 'random':
x0 = np.random.uniform(size=num_paras)
# Variances
x0[(-4)] = max(x0[(-4)], 0.01)
x0[(-3)] = max(x0[(-3)], 0.01)
# Correlations
x0[(-2)] -= 0.5
x0[(-1)] -= 0.5
elif which == 'init':
x0 = np.array(init_dict['AUX']['init_values'][:])
elif which == 'auto':
# Subsetting
Y1, X1 = Y[D == 1], X[(D == 1), :]
olsRslt = sm.OLS(Y1, X1).fit()
# Extract results
coeffs_treated = olsRslt.params
sd_treated = np.array(np.sqrt(olsRslt.scale))
# Subsetting
Y0, X0 = Y[D == 0], X[(D == 0), :]
olsRslt = sm.OLS(Y0, X0).fit()
# Extract results
coeffs_untreated = olsRslt.params
sd_untreated = np.array(np.sqrt(olsRslt.scale))
# Estimate choice model
probitRslt = sm.Probit(D, G).fit()
sd = init_dict['COST']['sd']
coeffs = probitRslt.params*sd
# Special treatment of cost intercept
cost_int = coeffs_treated[0] - coeffs_untreated[0] - coeffs[0]
# Collect results
x0 = np.concatenate((coeffs_treated, coeffs_untreated))
x0 = np.concatenate((x0, [cost_int], -coeffs[-(num_covars_cost - 1):]))
x0 = np.concatenate((x0, [sd_treated, sd_untreated]))
x0 = np.concatenate((x0, [0.00, 0.00]))
else:
raise AssertionError
# Document starting values
init_dict['AUX']['start_values'] = x0.copy()
# Transform to real line
x0 = _transform_start(x0)
# Type conversion
x0 = np.array(x0)
# Quality assurance.
assert (np.all(np.isfinite(x0)))
# Finishing.
return x0
def _transform_start(x):
    """Transform starting values to cover the whole real line."""
# Coefficients
x[:(-4)] = x[:(-4)]
# Variances
x[(-4)] = np.log(x[(-4)])
x[(-3)] = np.log(x[(-3)])
# Correlations
transform = (x[(-2)] + 1) / 2
x[(-2)] = np.log(transform / (1.0 - transform))
transform = (x[(-1)] + 1) / 2
x[(-1)] = np.log(transform / (1.0 - transform))
# Finishing
return x
def _load_data(init_dict):
    """Load dataset."""
# Auxiliary objects
num_covars_out = init_dict['AUX']['num_covars_out']
num_covars_cost = init_dict['AUX']['num_covars_cost']
num_agents = init_dict['BASICS']['agents']
# Read dataset
data = np.genfromtxt(init_dict['BASICS']['source'])
    # Reshaping ensures that the program also runs with just one agent,
    # as otherwise only a vector is created, which creates problems for
    # subsetting the overall data into its components.
data = np.array(data, ndmin=2)
# Distribute data
Y, D = data[:, 0], data[:, 1]
X, Z = data[:, 2:(num_covars_out + 2)], data[:, -num_covars_cost:]
# Finishing
return Y, D, X, Z
def _distribute_arguments(args):
    """Distribute arguments for evaluation of criterion function and some
    auxiliary parameters."""
Y1_coeffs = np.array(args['TREATED']['all'])
Y0_coeffs = np.array(args['UNTREATED']['all'])
C_coeffs = np.array(args['COST']['all'])
U1_sd = args['TREATED']['sd']
U0_sd = args['UNTREATED']['sd']
U1V_rho = args['DIST']['rho1']
U0V_rho = args['DIST']['rho0']
V_sd = args['COST']['sd']
choice_coeffs = np.concatenate((Y1_coeffs - Y0_coeffs, - C_coeffs))
# Finishing
return Y1_coeffs, Y0_coeffs, C_coeffs, choice_coeffs, U1_sd, U0_sd, \
U1V_rho, U0V_rho, V_sd
Explanation: Estimation
Now, we will perform Maximum Likelihood Estimation using alternative optimization algorithms. Here is the likelihood function:
\begin{align}
\mathcal{L}(\theta; X, Z) =\sum^N_{i=1} D_i\mathcal{L}_{i,1} + (1 - D_i)\mathcal{L}_{i,0},
\end{align}
where
\begin{align}
\mathcal{L}_{i,1} = & \log\left(\frac{1}{\sigma_{U_1}}\phi\left(\frac{Y_i - X_i\beta_1}{\sigma_{U_1}}\right)\Phi\left(\frac{Z_i\gamma - \rho_{U_1,V}\,\sigma_V/\sigma_{U_1}\,(Y_i - X_i\beta_1)}{\sqrt{(1 - \rho^2_{U_1,V})\,\sigma^2_{V}}}\right)\right) \\
\mathcal{L}_{i,0} = & \log\left(\frac{1}{\sigma_{U_0}}\phi\left(\frac{Y_i - X_i\beta_0}{\sigma_{U_0}}\right)\left(1 - \Phi\left(\frac{Z_i\gamma - \rho_{U_0,V}\,\sigma_V/\sigma_{U_0}\,(Y_i - X_i\beta_0)}{\sqrt{(1 - \rho^2_{U_0,V})\,\sigma^2_{V}}}\right)\right)\right)
\end{align}
End of explanation
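To make the likelihood concrete, here is a minimal numerical sketch of a single treated observation's contribution, mirroring the arithmetic in _negative_log_likelihood above; all numbers (the outcome, the indices X_i beta_1 and Z_i gamma, and the variance parameters) are hypothetical and chosen purely for illustration.
from scipy.stats import norm
import numpy as np

y_i = 1.5                      # hypothetical outcome of a treated agent
xb1, zg = 1.2, 0.3             # hypothetical X_i beta_1 and Z_i gamma
sd_u1, sd_v, rho_1 = 0.8, 1.0, 0.2
arg_one = (y_i - xb1) / sd_u1
arg_two = (zg - rho_1 * sd_v * arg_one) / np.sqrt((1.0 - rho_1 ** 2) * sd_v ** 2)
# contribution of one treated observation before taking logs
contrib = (1.0 / sd_u1) * norm.pdf(arg_one) * norm.cdf(arg_two)
print 'Likelihood contribution of a single treated observation:', contrib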
# Load model information
init_dict = process('material/msc/init.ini')
Y, D, X, Z = _load_data(init_dict)
likl = _negative_log_likelihood(init_dict, Y, D, X, Z)
print 'Evaluation of criterion function', likl
Explanation: Let us evaluate the criterion function and see if that part is working.
End of explanation
def estimate(init_dict):
    """Estimate our version of the generalized Roy model."""
# Antibugging
assert (isinstance(init_dict, dict))
# Load dataset
Y, D, X, Z = _load_data(init_dict)
# Create auxiliary objects
start = init_dict['ESTIMATION']['start']
maxiter = init_dict['ESTIMATION']['maxiter']
optimizer = init_dict['ESTIMATION']['algorithm']
num_covars_out = init_dict['AUX']['num_covars_out']
# Initialize different starting values
x0 = _get_start(start, init_dict, Y, D, X, Z)
# Select optimizer
if optimizer == 'nm':
optimizer = 'Nelder-Mead'
elif optimizer == 'bfgs':
optimizer = 'BFGS'
# Provide additional arguments to the optimizer
opts = dict()
opts['maxiter'] = maxiter
# Run optimization or just evaluate function at starting values
if maxiter == 0:
# Collect maximization arguments.
rslt = _distribute_parameters(np.array(x0), init_dict, num_covars_out)
# Calculate likelihood according to user's request
likl = _negative_log_likelihood(rslt, Y, D, X, Z)
# Compile results
x_rslt, fun, success = x0, likl, False
else:
        # Check out the SciPy documentation for details on the `minimize'
        # function, which provides a convenient interface to a variety of
        # alternative optimization algorithms, and for a description of
        # the result object it returns.
opt_rslt = minimize(_max_interface, x0,
args=(Y, D, X, Z, init_dict),
method=optimizer, options=opts)
# Compile results
x_rslt, fun = opt_rslt['x'], opt_rslt['fun']
success = opt_rslt['success']
    # Transformation to internal parameters
rslt = _distribute_parameters(x_rslt, init_dict, num_covars_out)
rslt['fval'], rslt['success'] = fun, success
# Finishing
return rslt
Explanation: Finally, we create a function for the estimation. It builds on all our previously defined elements.
End of explanation
# Process model specification
init_dict = process('material/msc/init.ini')
# Simulate a synthetic sample
simulate(init_dict)
# Estimate the generalized Roy model
for algorithm in ['bfgs', 'nm']:
init_dict['ESTIMATION']['algorithm'] = algorithm
for start in ['random', 'init']:
init_dict['ESTIMATION']['start'] = start
# Monitoring
print '\n\n Current Request \n'
print ' Algorithm: ', algorithm
print ' Start: ', start
# Run estimation
rslt = estimate(init_dict)
# Inspect subset of results
print ' Variances: ', rslt['TREATED']['sd'], rslt['UNTREATED']['sd']
Explanation: Now, let us actually run some estimations. Our current implementation allows us to switch easily between alternative optimization algorithms and starting values.
End of explanation
def inspect(rslt, init_dict):
    """This function simulates a sample from the estimates of the model
    and reports the average effects of treatment in a file."""
# Antibugging
assert (isinstance(rslt, dict))
assert (isinstance(init_dict, dict))
# Update results
modified_init = copy.deepcopy(init_dict)
for key_ in rslt.keys():
if key_ in ['fval', 'success']:
continue
for subkey in rslt[key_].keys():
modified_init[key_][subkey] = rslt[key_][subkey]
# Modified dataset
modified_init['BASICS']['file'] = 'simulated.grm.txt'
# Simulate from estimation results
Y1, Y0, D = simulate(modified_init, True)
    # Calculate the average treatment effects
B = Y1 - Y0
effects = []
effects += [np.mean(B)]
effects += [np.mean(B[D == 1])]
effects += [np.mean(B[D == 0])]
# Print selected results to file
with open('results.grm.txt', 'w') as file_:
file_.write('\n softEcon: Generalized Roy Model')
file_.write('\n -------------------------------\n')
# Average effects of treatment
fmt = ' {0:<5}{1:10.2f}\n\n'
file_.write('\n Average Treatment Effects\n\n')
for i, label in enumerate(['ATE', 'TT', 'TUT']):
str_ = fmt.format(label, effects[i])
file_.write(str_)
file_.write('\n Parameters\n\n')
file_.write(' Start Finish\n\n')
num_paras = init_dict['AUX']['num_paras']
# Structural parameters
x0, x = init_dict['AUX']['start_values'], rslt['AUX']['x_internal']
fmt = '{0:10.2f}{1:10.2f}\n'
for i in range(num_paras):
str_ = fmt.format(x0[i], x[i])
file_.write(str_)
Explanation: Inspection of Results
After conducting an estimation, the focus shifts back to the economic interpretation of our results. As we often have to study the results from hundreds of different estimation runs, it is convenient to have a function set up that produces the main objects of interest.
End of explanation
# Process initialization file
init_dict = process('material/msc/init.ini')
# Simulate synthetic sample
simulate(init_dict)
# Estimate model
rslt = estimate(init_dict)
# Write results
inspect(rslt, init_dict)
# Inspect the results
%cat results.grm.txt
Explanation: Let us try it out.
End of explanation
# Create list of all files generated the notebook
files = glob.glob('*.grm.*')
# Remove files
for file_ in files:
os.remove(file_)
Explanation: Cleanup
End of explanation
import urllib; from IPython.core.display import HTML
HTML(urllib.urlopen('http://bit.ly/1OKmNHN').read())
Explanation: Formatting
End of explanation |
13,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pick one of these to explore re
Step1: Run Decision Trees, Prune, and consider False Positives
Step2: As a check, consider Feature selection
Step3: Find the Principal Components
Step4: Seeing if I can get anything interesting out of KNN given above
Lecture 10, look at Confusion matrix and ROC curve. Fiddle with the thresholds and AUC
Step5: Cross Validation and Random Forest | Python Code:
# Look only at train IDs
features = df.columns.values
X = train_id_dummies
y = df['ord_del']
# Non Delay Specific
features = df.columns.values
target_cols = ['temp','precipiation',
'visability','windspeed','humidity','cloudcover',
'is_bullet','is_limited','t_northbound',
'd_monday','d_tuesday','d_wednesday','d_thursday','d_friday','d_saturday']
X = df[target_cols]
# del X['is_delay']
# del X['tweet_id']
# X['timestamp'] = X['timestamp'].apply(lambda x: (np.datetime64(x).astype('uint64') / 1e6).astype('uint32'))
# y = df['ord_del']
y = df['is_delay']
# Including train IDs
features = df.columns.values
target_cols = ['temp','precipiation',
'visability','windspeed','humidity','cloudcover',
'is_bullet','is_limited','t_northbound',
'd_monday','d_tuesday','d_wednesday','d_thursday','d_friday','d_saturday'] + list(tid_col)
X = df[target_cols]
# del X['is_delay']
# del X['tweet_id']
# X['timestamp'] = X['timestamp'].apply(lambda x: (np.datetime64(x).astype('uint64') / 1e6).astype('uint32'))
# y = df['ord_del']
y = df['is_delay']
# If there IS a delay...
features = df.columns.values
X = only_delay[['is_backlog', 'is_canceled',
'is_passing', 'is_accident', 'is_medical', 'is_mechanical',
'is_customer', 'is_event']]
# del X['is_delay']
# del X['tweet_id']
# X['timestamp'] = X['timestamp'].apply(lambda x: (np.datetime64(x).astype('uint64') / 1e6).astype('uint32'))
y = df['ord_del']
# X['timestamp'] = X['timestamp'].apply(lambda x:int(x))
# X['stop_pa'] = X['stop_pa'].apply(lambda x:int(x))
# X['train_id'] = X['train_id'].apply(lambda x:int(x))
X['t_northbound'] = X['t_northbound'].apply(lambda x:int(x))
X['cloudcover'] = X['cloudcover'].fillna(X['cloudcover'].mean())
# X.isnull().sum()
# df.plot.scatter(x='timestamp',y='del_ord',figsize=[15,5])
X_y = only_delay[['is_delay','ord_del','temp','precipiation',
'visability','windspeed','humidity','cloudcover',
'is_bullet','is_limited','t_northbound',
'd_monday','d_tuesday','d_wednesday','d_thursday','d_friday','d_saturday']]
cr = X_y.corr()
np.round(cr, 4)
#
X_y.sum()
Explanation: Pick one of these to explore re: below models
End of explanation
from sklearn.tree import DecisionTreeClassifier
TreeClass = DecisionTreeClassifier(
max_depth = 2,
min_samples_leaf = 5)
TreeClass.fit(X,y)
from sklearn.cross_validation import cross_val_score
scores = cross_val_score(TreeClass, X, y, cv=10)
print(scores.mean()) # Score = More is better, error is 1-score
from sklearn.metrics import confusion_matrix
y_hat = TreeClass.predict(X)
cmat = confusion_matrix(y, y_hat)
print cmat
from sklearn.metrics import roc_curve, auc,roc_auc_score
y_hat_probability = TreeClass.predict_proba(X).T[1]
print(y_hat_probability)
print(roc_auc_score(y, y_hat_probability))
vals = roc_curve(y, y_hat_probability)
Roc_DataFrame = pd.DataFrame({'False_Positive_Rate':vals[0],'True_Positive_Rate':vals[1]})
Roc_DataFrame.plot(x = 'False_Positive_Rate' , y = 'True_Positive_Rate' )
Explanation: Run Decision Trees, Prune, and consider False Positives
End of explanation
from sklearn import feature_selection
pvals = feature_selection.f_regression(X,y)[1]
sorted(zip(X.columns.values,np.round(pvals,4)),key=lambda x:x[1],reverse=True)
X_lr=df[['windspeed','t_northbound','precipiation','d_friday']]
# localize your search around the maximum value you found
c_list = np.logspace(-1,1,21)
c_index = np.linspace(-1,1,21)
#C is just the inverse of Lambda - the smaller the C - the stronger the
#regulatization. The smaller C's choose less variables
cv_scores = []
for c_score in c_list:
lm = LogisticRegression(C = c_score, penalty = "l1")
cv_scores.append(cross_val_score(lm,X,y,cv=10).mean())
C_Choice_df = pd.DataFrame({'cv_scores': cv_scores ,'Log_C': c_index })
C_Choice_df.plot(x ='Log_C',y = 'cv_scores' )
# it sounds like our best choice is C = -0.1 (we chose the most restrictive option)
Explanation: As a check, consider Feature selection
End of explanation
X = only_delay[['temp','precipiation',
'visability','windspeed','humidity','cloudcover',
'is_bullet','is_limited','t_northbound',
'd_monday','d_tuesday','d_wednesday','d_thursday','d_friday','d_saturday']]
from sklearn.decomposition import PCA
clf = PCA(.99)
X_trans = clf.fit_transform(X)
X_trans.shape
print "Exp Var ratio:",clf.explained_variance_ratio_
print "PCA Score:",clf.score(X,y)
plt.scatter(X_trans[:, 0], X_trans[:, 1],c=y, alpha=0.2)
plt.colorbar();
from sklearn.linear_model import LogisticRegression
lm = LogisticRegression()
lm.fit(X_trans,y)
print(lm.intercept_)
print(lm.coef_)
from sklearn.cross_validation import cross_val_score
print(cross_val_score(lm,X_trans,y,cv=10).mean())
MisClassificationError = 1 - (cross_val_score(lm,X_trans,y,cv=10).mean())
print(MisClassificationError)
Explanation: Find the Principal Components
End of explanation
print df['windspeed'].max()
print df['windspeed'].min()
df['windspeed_st'] = df['windspeed'].apply(lambda x:x/15.0) # Ballparking
X_reg = df[['precipiation','d_friday','t_northbound','windspeed_st']]
y_reg = df['is_delay']
from sklearn import cross_validation
from sklearn import neighbors, metrics
kf = cross_validation.KFold(len(X_reg), n_folds = 10, shuffle = True) #10 fold CV
Score_KNN_CV = []
RangeOfK = range(1,20)
scores = []
for k in RangeOfK:
knn = neighbors.KNeighborsClassifier(n_neighbors=k, weights='uniform')
scores = []
for train_index, test_index in kf:
knn.fit(X_reg.iloc[train_index], y_reg.iloc[train_index])
scores.append(knn.score(X_reg.iloc[test_index],y_reg.iloc[test_index]))
Score_KNN_CV.append(np.mean(scores))
Score_KNN_CV_df = pd.DataFrame({'Score_KNN_CV': Score_KNN_CV ,'K': RangeOfK })
Score_KNN_CV_df.plot(x = 'K',y = 'Score_KNN_CV',figsize=[15,5])
Explanation: Seeing if I can get anything interesting out of KNN given above
Lecture 10, look at Confusion matrix and ROC curve. Fiddle with the thresholds and AUC
End of explanation
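To follow up on "fiddle with the thresholds", one possible sketch is to threshold the predicted class probabilities ourselves instead of relying on the default 0.5 cut-off; the value of k and the candidate thresholds below are arbitrary choices for illustration.
from sklearn.metrics import confusion_matrix
knn = neighbors.KNeighborsClassifier(n_neighbors=5, weights='uniform')
knn.fit(X_reg, y_reg)
y_prob = knn.predict_proba(X_reg)[:, 1]  # predicted probability of a delay
for threshold in [0.3, 0.5, 0.7]:        # illustrative cut-offs
    y_hat_t = (y_prob >= threshold).astype(int)
    print 'Threshold:', threshold
    print confusion_matrix(y_reg, y_hat_t)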
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import cross_val_score
RFClass = RandomForestClassifier(n_estimators = 10000,
max_features = 4, # You can set it to a number or 'sqrt', 'log2', etc
min_samples_leaf = 5,
oob_score = True)
RFClass.fit(X,y)
print(RFClass.oob_score_)
scores = cross_val_score(RFClass, X, y, cv=10)
print(scores.mean())
#out of bag error = 25% , CV_error is 35%
RFClass.fit(X,y)
ImportanceDataFrame = pd.DataFrame({'feature':X.columns.values, 'importance':RFClass.feature_importances_})
ImportanceDataFrame.sort_values(by = ['importance'],ascending = 0)
Depth_Choice_df = pd.DataFrame({'cv_scores': score,'Number of Features': Features})
Depth_Choice_df.plot(x ='Number of Features',y = 'cv_scores')
Explanation: Cross Validation and Random Forest
End of explanation |
13,658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
H2O Tutorial
Author
Step1: Enable inline plotting in the Jupyter Notebook
Step2: Intro to H2O Data Munging
Read csv data into H2O. This loads the data into the H2O column compressed, in-memory, key-value store.
Step3: View the top of the H2O frame.
Step4: View the bottom of the H2O Frame
Step5: Select a column
fr["VAR_NAME"]
Step6: Select a few columns
Step7: Select a subset of rows
Unlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection.
Step8: Key attributes
Step9: Select rows based on value
Step10: Boolean masks can be used to subselect rows based on a criteria.
Step11: Get summary statistics of the data and additional data distribution information.
Step12: Set up the predictor and response column names
Using H2O algorithms, it's easier to reference predictor and response columns
by name in a single frame (i.e., don't split up X and y)
Step13: Machine Learning With H2O
H2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented.
Unlike Scikit-learn, H2O allows for categorical and missing data.
The basic work flow is as follows
Step14: The performance of the model can be checked using the holdout dataset
Step15: Train-Test Split
Instead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data.
Step16: There was a massive jump in the R^2 value. This is because the original data is not shuffled.
Cross validation
H2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits).
In conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to either
Step17: However, you can still make use of the cross_val_score from Scikit-Learn
Cross validation
Step18: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is similar to the scikit-learn RandomForestRegressor object with its own train method.
Step19: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage.
Since the progress bar print out gets annoying let's disable that
Step20: Grid Search
Grid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties)
Randomized grid search
Step21: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions).
The steps to perform a randomized grid search
Step24: We might be tempted to think that we just had a large improvement; however we must be cautious. The function below creates a more detailed report.
Step25: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs
Step26: Transformations
Rule of machine learning
Step27: Normalize Data
Step28: Then, we can apply PCA and keep the top 5 components. A user warning is expected here.
Step29: Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers.
Pipelines
"Tranformers unite!"
If your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple.
Steps
Step30: This is so much easier!!!
But, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score.
Combining randomized grid search and pipelines
"Yo dawg, I heard you like models, so I put models in your models to model models."
Steps
Step31: Currently Under Development (drop-in scikit-learn pieces) | Python Code:
import pandas as pd
import numpy
from numpy.random import choice
from sklearn.datasets import load_boston
from h2o.estimators.random_forest import H2ORandomForestEstimator
import h2o
h2o.init()
# transfer the boston data from pandas to H2O
boston_data = load_boston()
X = pd.DataFrame(data=boston_data.data, columns=boston_data.feature_names)
X["Median_value"] = boston_data.target
X = h2o.H2OFrame.from_python(X.to_dict("list"))
# select 10% for valdation
r = X.runif(seed=123456789)
train = X[r < 0.9,:]
valid = X[r >= 0.9,:]
h2o.export_file(train, "Boston_housing_train.csv", force=True)
h2o.export_file(valid, "Boston_housing_test.csv", force=True)
Explanation: H2O Tutorial
Author: Spencer Aiello
Contact: [email protected]
This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce, through a complete example, H2O's capabilities from Python. Also, to help those that are accustomed to Scikit-Learn and Pandas, the demo will include specific call-outs for differences between H2O and those packages; this is intended to help anyone that needs to do machine learning on really Big Data make the transition. It is not meant to be a tutorial on machine learning or algorithms.
Detailed documentation about H2O's and the Python API is available at http://docs.h2o.ai.
Setting up your system for this demo
The following code creates two csv files using data from the Boston Housing dataset which is built into scikit-learn and adds them to the local directory
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Enable inline plotting in the Jupyter Notebook
End of explanation
fr = h2o.import_file("Boston_housing_train.csv")
Explanation: Intro to H2O Data Munging
Read csv data into H2O. This loads the data into the H2O column compressed, in-memory, key-value store.
End of explanation
fr.head()
Explanation: View the top of the H2O frame.
End of explanation
fr.tail()
Explanation: View the bottom of the H2O Frame
End of explanation
fr["CRIM"].head() # Tab completes
Explanation: Select a column
fr["VAR_NAME"]
End of explanation
columns = ["CRIM", "RM", "RAD"]
fr[columns].head()
Explanation: Select a few columns
End of explanation
fr[2:7,:] # explicitly select all columns with :
Explanation: Select a subset of rows
Unlike in Pandas, columns may be identified by index or column name. Therefore, when subsetting by rows, you must also pass the column selection.
End of explanation
# The columns attribute is exactly like Pandas
print("Columns:", fr.columns, "\n")
print("Columns:", fr.names, "\n")
print("Columns:", fr.col_names, "\n")
# There are a number of attributes to get at the shape
print("length:", str( len(fr) ), "\n")
print("shape:", fr.shape, "\n")
print("dim:", fr.dim, "\n")
print("nrow:", fr.nrow, "\n")
print("ncol:", fr.ncol, "\n")
# Use the "types" attribute to list the column types
print("types:", fr.types, "\n")
Explanation: Key attributes:
* columns, names, col_names
* len, shape, dim, nrow, ncol
* types
Note:
Since the data is not in local python memory
there is no "values" attribute. If you want to
pull all of the data into the local python memory
then do so explicitly with h2o.export_file and
reading the data into python memory from disk.
End of explanation
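Following the note above, this is a small sketch of pulling the data into local Python memory by exporting to disk and reading it back with pandas; the file name is an arbitrary choice.
# Export the H2OFrame to disk, then load it into a local pandas DataFrame
h2o.export_file(fr, "boston_local_copy.csv", force=True)
local_df = pd.read_csv("boston_local_copy.csv")
print(local_df.shape)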
fr.shape
Explanation: Select rows based on value
End of explanation
mask = fr["CRIM"]>1
fr[mask,:].shape
Explanation: Boolean masks can be used to subselect rows based on a criteria.
End of explanation
fr.describe()
Explanation: Get summary statistics of the data and additional data distribution information.
End of explanation
x = fr.names[:]
y="Median_value"
x.remove(y)
Explanation: Set up the predictor and response column names
Using H2O algorithms, it's easier to reference predictor and response columns
by name in a single frame (i.e., don't split up X and y)
End of explanation
# Define and fit first 400 points
model = H2ORandomForestEstimator(seed=42)
model.train(x=x, y=y, training_frame=fr[:400,:])
model.predict(fr[400:fr.nrow,:]) # Predict the rest
Explanation: Machine Learning With H2O
H2O is a machine learning library built in Java with interfaces in Python, R, Scala, and Javascript. It is open source and well-documented.
Unlike Scikit-learn, H2O allows for categorical and missing data.
The basic work flow is as follows:
* Fit the training data with a machine learning algorithm
* Predict on the testing data
Simple model
End of explanation
perf = model.model_performance(fr[400:fr.nrow,:])
perf.r2() # get the r2 on the holdout data
perf.mse() # get the mse on the holdout data
perf # display the performance object
Explanation: The performance of the model can be checked using the holdout dataset
End of explanation
r = fr.runif(seed=12345) # build random uniform column over [0,1]
train= fr[r<0.75,:] # perform a 75-25 split
test = fr[r>=0.75,:]
model = H2ORandomForestEstimator(seed=42)
model.train(x=x, y=y, training_frame=train, validation_frame=test)
perf = model.model_performance(test)
perf.r2()
Explanation: Train-Test Split
Instead of taking the first 400 observations for training, we can use H2O to create a random test train split of the data.
End of explanation
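As a side note, H2O also provides a one-line alternative for this kind of split via H2OFrame.split_frame; the sketch below assumes that method is available in the installed H2O version and mirrors the 75-25 split above.
# Equivalent random split using split_frame (seed and ratio are illustrative)
train2, test2 = fr.split_frame(ratios=[0.75], seed=12345)
print(train2.nrow, test2.nrow)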
model = H2ORandomForestEstimator(nfolds=10) # build a 10-fold cross-validated model
model.train(x=x, y=y, training_frame=fr)
scores = numpy.array([m.r2() for m in model.xvals]) # iterate over the xval models using the xvals attribute
print("Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96))
print("Scores:", scores.round(2))
Explanation: There was a massive jump in the R^2 value. This is because the original data is not shuffled.
Cross validation
H2O's machine learning algorithms take an optional parameter nfolds to specify the number of cross-validation folds to build. H2O's cross-validation uses an internal weight vector to build the folds in an efficient manner (instead of physically building the splits).
In conjunction with the nfolds parameter, a user may specify the way in which observations are assigned to each fold with the fold_assignment parameter, which can be set to either:
* AUTO: Perform random assignment
* Random: Each row has an equal (1/nfolds) chance of being in any fold.
* Modulo: Observations are assigned to folds by taking the row index modulo nfolds
End of explanation
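A quick sketch of requesting a specific assignment scheme through the fold_assignment parameter described above (here Modulo); the other arguments simply mirror the cross-validated model above.
model_mod = H2ORandomForestEstimator(nfolds=10, fold_assignment="Modulo", seed=42)
model_mod.train(x=x, y=y, training_frame=fr)
print(model_mod.r2(xval=True))  # cross-validated R^2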
from sklearn.model_selection import cross_val_score
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
Explanation: However, you can still make use of the cross_val_score from Scikit-Learn
Cross validation: H2O and Scikit-Learn
End of explanation
model = H2ORandomForestEstimator(seed=42)
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=10, seed=42) # make a cv
scores = cross_val_score(model, fr[x], fr[y], scoring=scorer, cv=custom_cv)
print("Expected R^2: %.2f +/- %.2f \n" % (scores.mean(), scores.std()*1.96))
print("Scores:", scores.round(2))
Explanation: You still must use H2O to make the folds. Currently, there is no H2OStratifiedKFold. Additionally, the H2ORandomForestEstimator is similar to the scikit-learn RandomForestRegressor object with its own train method.
End of explanation
h2o.__PROGRESS_BAR__=False
h2o.no_progress()
Explanation: There isn't much difference in the R^2 value since the fold strategy is exactly the same. However, there was a major difference in terms of computation time and memory usage.
Since the progress bar print out gets annoying let's disable that
End of explanation
from sklearn import __version__
sklearn_version = __version__
print(sklearn_version)
Explanation: Grid Search
Grid search in H2O is still under active development and it will be available very soon. However, it is possible to make use of Scikit's grid search infrastructure (with some performance penalties)
Randomized grid search: H2O and Scikit-Learn
End of explanation
%%time
from sklearn.model_selection import RandomizedSearchCV # Import grid search
from scipy.stats import randint, uniform
model = H2ORandomForestEstimator(seed=42) # Define model
params = {"ntrees": randint(20,30),
"max_depth": randint(1,10),
"min_rows": randint(1,10), # scikit's min_samples_leaf
"mtries": randint(2,fr[x].shape[1]),} # Specify parameters to test
scorer = make_scorer(h2o_r2_score) # make h2o_r2_score into a scikit_learn scorer
custom_cv = H2OKFold(fr, n_folds=5, seed=42) # make a cv
random_search = RandomizedSearchCV(model, params,
n_iter=10,
scoring=scorer,
cv=custom_cv,
random_state=42,
n_jobs=1) # Define grid search object
random_search.fit(fr[x], fr[y])
print("Best R^2:", random_search.best_score_, "\n")
print("Best params:", random_search.best_params_)
Explanation: If you have 0.16.1, then your system can't handle complex randomized grid searches (it works in every other version of sklearn, including the soon to be released 0.16.2 and the older versions).
The steps to perform a randomized grid search:
1. Import model and RandomizedSearchCV
2. Define model
3. Specify parameters to test
4. Define grid search object
5. Fit data to grid search object
6. Collect scores
All the steps will be repeated from above.
Because 0.16.1 is installed, we use scipy to define specific distributions
ADVANCED TIP:
Turn off reference counting for spawning jobs in parallel (n_jobs=-1, or n_jobs > 1).
We'll turn it back on again in the aftermath of a Parallel job.
If you don't want to run jobs in parallel, don't turn off the reference counting.
Pattern is:
>>> h2o.turn_off_ref_cnts()
>>> .... parallel job ....
>>> h2o.turn_on_ref_cnts()
End of explanation
def report_grid_score_detail(random_search, charts=True):
    """Input fit grid search estimator. Returns df of scores with details"""
df_list = []
for line in random_search.grid_scores_:
results_dict = dict(line.parameters)
results_dict["score"] = line.mean_validation_score
results_dict["std"] = line.cv_validation_scores.std()*1.96
df_list.append(results_dict)
result_df = pd.DataFrame(df_list)
result_df = result_df.sort("score", ascending=False)
if charts:
for col in get_numeric(result_df):
if col not in ["score", "std"]:
plt.scatter(result_df[col], result_df.score)
plt.title(col)
plt.show()
for col in list(result_df.columns[result_df.dtypes == "object"]):
cat_plot = result_df.score.groupby(result_df[col]).mean()[0]
cat_plot.sort()
cat_plot.plot(kind="barh", xlim=(.5, None), figsize=(7, cat_plot.shape[0]/2))
plt.show()
return result_df
def get_numeric(X):
    """Return list of numeric dtypes variables"""
return X.dtypes[X.dtypes.apply(lambda x: str(x).startswith(("float", "int", "bool")))].index.tolist()
report_grid_score_detail(random_search).head()
Explanation: We might be tempted to think that we just had a large improvement; however we must be cautious. The function below creates a more detailed report.
End of explanation
%%time
params = {"ntrees": randint(30,35),
"max_depth": randint(5,8),
"mtries": randint(4,6),}
custom_cv = H2OKFold(fr, n_folds=5, seed=42) # In small datasets, the fold size can have a big
# impact on the std of the resulting scores. More
random_search = RandomizedSearchCV(model, params, # folds --> Less examples per fold --> higher
n_iter=5, # variation per sample
scoring=scorer,
cv=custom_cv,
random_state=43,
n_jobs=1)
random_search.fit(fr[x], fr[y])
print("Best R^2:", random_search.best_score_, "\n")
print("Best params:", random_search.best_params_)
report_grid_score_detail(random_search)
Explanation: Based on the grid search report, we can narrow the parameters to search and rerun the analysis. The parameters below were chosen after a few runs:
End of explanation
from h2o.transforms.preprocessing import H2OScaler
from h2o.estimators.pca import H2OPrincipalComponentAnalysisEstimator as H2OPCA
Explanation: Transformations
Rule of machine learning: Don't use your testing data to inform your training data. Unfortunately, this happens all the time when preparing a dataset for the final model. But on smaller datasets, you must be especially careful.
At the moment, there are no classes for managing data transformations. On the one hand, this requires the user to tote around some extra state, but on the other, it allows the user to be more explicit about transforming H2OFrames.
Basic steps:
Remove the response variable from transformations.
Import transformer
Define transformer
Fit train data to transformer
Transform test and train data
Re-attach the response variable.
First let's normalize the data using the means and standard deviations of the training data.
Then let's perform a principal component analysis on the training data and select the top 5 components.
Using these components, let's use them to reduce the train and test design matrices.
End of explanation
y_train = train.pop("Median_value")
y_test = test.pop("Median_value")
norm = H2OScaler()
norm.fit(train)
X_train_norm = norm.transform(train)
X_test_norm = norm.transform(test)
print(X_test_norm.shape)
X_test_norm
Explanation: Normalize Data: Use the means and standard deviations from the training data.
End of explanation
pca = H2OPCA(k=5)
pca.fit(X_train_norm)
X_train_norm_pca = pca.transform(X_train_norm)
X_test_norm_pca = pca.transform(X_test_norm)
# prop of variance explained by top 5 components?
print(X_test_norm_pca.shape)
X_test_norm_pca[:5]
model = H2ORandomForestEstimator(seed=42)
model.train(x=X_train_norm_pca.names, y=y_train.names, training_frame=X_train_norm_pca.cbind(y_train))
y_hat = model.predict(X_test_norm_pca)
h2o_r2_score(y_test,y_hat)
Explanation: Then, we can apply PCA and keep the top 5 components. A user warning is expected here.
End of explanation
from h2o.transforms.preprocessing import H2OScaler
from h2o.estimators.pca import H2OPrincipalComponentAnalysisEstimator as H2OPCA
from sklearn.pipeline import Pipeline # Import Pipeline <other imports not shown>
model = H2ORandomForestEstimator(seed=42)
pipe = Pipeline([("standardize", H2OScaler()), # Define pipeline as a series of steps
("pca", H2OPCA(k=5)),
("rf", model)]) # Notice the last step is an estimator
pipe.fit(train, y_train) # Fit training data
y_hat = pipe.predict(test) # Predict testing data (due to last step being an estimator)
h2o_r2_score(y_test, y_hat) # Notice the final score is identical to before
Explanation: Although this is MUCH simpler than keeping track of all of these transformations manually, it gets to be somewhat of a burden when you want to chain together multiple transformers.
Pipelines
"Tranformers unite!"
If your raw data is a mess and you have to perform several transformations before using it, use a pipeline to keep things simple.
Steps:
Import Pipeline, transformers, and model
Define pipeline. The first and only argument is a list of tuples where the first element of each tuple is a name you give the step and the second element is a defined transformer. The last step is optionally an estimator class (like a RandomForest).
Fit the training data to pipeline
Either transform or predict the testing data
End of explanation
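One more small usage note on the pipeline above: after fitting, the individual steps can be inspected through scikit-learn's named_steps mapping, e.g. to look at the fitted PCA step (a sketch, assuming the pipeline has already been fit as above).
fitted_pca = pipe.named_steps["pca"]
print(fitted_pca)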
pipe = Pipeline([("standardize", H2OScaler()),
("pca", H2OPCA()),
("rf", H2ORandomForestEstimator(seed=42))])
params = {"standardize__center": [True, False], # Parameters to test
"standardize__scale": [True, False],
"pca__k": randint(2, 6),
"rf__ntrees": randint(10,20),
"rf__max_depth": randint(4,10),
"rf__min_rows": randint(5,10), }
# "rf__mtries": randint(1,4),} # gridding over mtries is
# problematic with pca grid over
# k above
from sklearn.model_selection import RandomizedSearchCV
from h2o.cross_validation import H2OKFold
from h2o.model.regression import h2o_r2_score
from sklearn.metrics.scorer import make_scorer
custom_cv = H2OKFold(fr, n_folds=5, seed=42)
random_search = RandomizedSearchCV(pipe, params,
n_iter=5,
scoring=make_scorer(h2o_r2_score),
cv=custom_cv,
random_state=42,
n_jobs=1)
random_search.fit(fr[x],fr[y])
results = report_grid_score_detail(random_search)
results.head()
Explanation: This is so much easier!!!
But, wait a second, we did worse after applying these transformations! We might wonder how different hyperparameters for the transformations impact the final score.
Combining randomized grid search and pipelines
"Yo dawg, I heard you like models, so I put models in your models to model models."
Steps:
Import Pipeline, grid search, transformers, and estimators <Not shown below>
Define pipeline
Define parameters to test in the form: "(Step name)__(argument name)" A double underscore separates the two words.
Define grid search
Fit to grid search
End of explanation
best_estimator = random_search.best_estimator_ # fetch the pipeline from the grid search
h2o_model = h2o.get_model(best_estimator._final_estimator._id) # fetch the model from the pipeline
save_path = h2o.save_model(h2o_model, path=".", force=True)
print(save_path)
# assumes new session
my_model = h2o.load_model(path=save_path)
my_model.predict(X_test_norm_pca)
Explanation: Currently Under Development (drop-in scikit-learn pieces):
* Richer set of transforms (only PCA and Scale are implemented)
* Richer set of estimators (only RandomForest is available)
* Full H2O Grid Search
Other Tips: Model Save/Load
It is useful to save constructed models to disk and reload them between H2O sessions. Here's how:
End of explanation |
13,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: Define analysis parameters
Step2: Step 2
Step3: Create function to Display Query results in bar chart
Step4: Step 3 | Python Code:
# Import all necessary libs
from google.colab import auth
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from IPython.display import display, HTML
# Authenticate the user to query datasets in Google BigQuery
auth.authenticate_user()
%matplotlib inline
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Important
This content is intended for educational and informational purposes only.
Conversion Blockers Analysis
<br>
In this analysis we will be looking into main user characteristics captured by Google Analytics which can affect website UX and how they impact e-commerce transaction rate.
<br>
Key notes / assumptions
<br>
For the following analysis, we will call specific data properties (i.e. Browser version) a FEATURE, and each value of a feature (i.e. <i>Chrome V10.1</i>), a LABEL
Step 1: Setup
Install all dependencies and authorize bigQuery access
End of explanation
#@title Define the data source in BigQuery:
project_id = 'bigquery-public-data' #@param
dataset_name = 'google_analytics_sample' #@param
table_name = 'ga_sessions_*'#@param
start_date = '2014-10-01'#@param {type:"date"}
end_date = '2019-12-12'#@param{type:"date"}
billing_project_id = 'my-project' #@param
Explanation: Define analysis parameters
End of explanation
#assemble dynamic content dictionary
dc = {}
dc['project_id'] = project_id
dc['dataset_name'] = dataset_name
dc['table_name'] = table_name
dc['start_date'] = start_date.replace('-','')
dc['end_date'] = end_date.replace('-','')
#render final query function
def render_final_query(dc, display = False):
q1 = '''
#fetch # of transaction, sessions and transaction rate for each feature value
WITH t0 AS
(SELECT
{feature} AS feature,
SUM(IFNULL(sessions.totals.transactions, 0)) AS transactions,
COUNT(sessions.visitStartTime) AS count_sessions,
SUM(IFNULL(sessions.totals.transactions, 0))/COUNT(sessions.visitStartTime) AS transaction_rate
FROM
`{project_id}.{dataset_name}.{table_name}` as sessions,
UNNEST(hits) AS hits
WHERE
hits.hitNumber = 1 AND
date BETWEEN '{start_date}'
AND '{end_date}'
GROUP BY 1
),
#calculate % of total sessions of each feature value and global (avg) transaction rate
t1 AS
(
SELECT
*,
SUM(count_sessions) OVER() AS total_sessions,
SUM(transactions) OVER() AS total_transaction,
AVG(transaction_rate) OVER() AS average_transaction_rate,
count_sessions/SUM(count_sessions) OVER() AS sessions_percentage
FROM t0
ORDER BY transaction_rate
)
    #limit results to only values that represent over 1% of all sessions
    #and, for the remaining rows, flag those whose transaction rate falls below 20% of the average
SELECT *,
IF(transaction_rate < average_transaction_rate * 0.2, true, false) AS bellow_limit
from t1
WHERE sessions_percentage > 0.01
'''.format(**dc)
if display:
print('Final BigQuery SQL:')
print(q1)
return q1
#run bigQuery query function
def run_big_query(q):
return pd.io.gbq.read_gbq(q, project_id=billing_project_id, verbose=False, dialect='standard')
Explanation: Step 2: Create analysis building blocks
On the following coding blocks, we will create functions that will allow us to easily run the analysis multiple times, one for each feature
Create query builder function based on a template
End of explanation
def plot_graph(df, title):
#define column colors:
colors = []
for index, row in df.iterrows():
bellow_limit = df['bellow_limit'][index]
if(bellow_limit):
colors.append('r') #set color to red
else:
colors.append('b') #set color to blue
# Specify this list of colors as the `color` option to `plot`.
df.plot(x='feature', y='transaction_rate', kind='bar', stacked=False, color = colors, title = title, yticks=[])
Explanation: Create function to Display Query results in bar chart
End of explanation
#uncomment each line to enable that analysis
features = [
("Operating System","CONCAT(sessions.device.operatingSystem, ' ', sessions.device.operatingSystemVersion)"),
("Browser","CONCAT( sessions.device.browser, ' ', sessions.device.browserversion)"),
("Language","sessions.device.language"),
#("Device Type","sessions.device.deviceCategory"),
#("Country","sessions.geoNetwork.country"),
#("Region","sessions.geoNetwork.region"),
#("City","sessions.geoNetwork.city"),
#("Landing Page","CONCAT(hits.page.hostname, hits.page.pagePath)"),
#("Screen Pixels (e5)","IF(ARRAY_LENGTH(SPLIT(sessions.device.screenResolution,'x')) = 2,ROUND(CAST(SPLIT(sessions.device.screenResolution,'x')[OFFSET(0)] AS INT64) * CAST(SPLIT(sessions.device.screenResolution,'x')[OFFSET(1)] AS INT64)/100000), Null)")
]
#for each feature Tuple
for item in features:
#define custom values for SQL Query generation
dc['feature'] = item[1]
#generate sql
q = render_final_query(dc, display=True)
# REMOVE LINE BELLOW to execute query (this might result in bigQuery costs)
#run query in BQ
df = run_big_query(q)
#print query results
print("Results for " + item[0])
display(df)
print(" ")
#plot graph
plot_graph(df, item[0])
Explanation: Step 3: Run entire pipeline for each feature and plot results
End of explanation |
13,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
Hour 1
Tuples
Dictionaries
Take Up Midterm
Hours 2 and 3
Work Time
Lists
A list is an object that contains multiple data items
Lists are mutable
Lists can be indexed and sliced
Lists have methods
Syntax
Square brackets
Commas as separators
Step1: Tuple
A tuple is an immutable version of a list
Immutable means that the contents cannot change after creation
Syntax
Parentheses
Commas as separators
Step2: [] brackets or square brackets
() parentheses
{} curly braces or braces
Converting Between Lists and Tuples
list()
tuple()
Step3: Why?
Performance
Tuples are faster
Safety
Can't be modified
Looking ahead
Step4: Dictionaries
A dictionary is a collection data structure. Each element in a dictionary has two parts
Step5: Retrieving a Value from a Dictionary
dictionary[key]
Step6: Testing for Value in a Dictionary
Step7: Adding Elements to a Dictionary
Step8: Deleting Elements
del dictionary[key]
Step9: Putting it together
Test for presence of key before deletion
Step10: Iterating over a Dictionary
Step11: Dictionary Methods
Step12: Midterm Discussion | Python Code:
number_list = [1, 2, 4, 8, 16, 32]
the_pythons = ["Graham", "Terry", "Michael", "Eric", "Terry", "John"]
mixed = [1, "Terry", 4]
print (mixed)
Explanation: Overview
Hour 1
Tuples
Dictionaries
Take Up Midterm
Hours 2 and 3
Work Time
Lists
A list is an object that contains multiple data items
Lists are mutable
Lists can be indexed and sliced
Lists have methods
Syntax
Square brackets
Commas as separators
End of explanation
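Since the overview points out that lists are mutable and have methods, here is a quick sketch using the list defined above; the particular methods (append and sort) are just illustrative choices.
# Lists are mutable and have methods such as append and sort
number_list.append(64)
number_list.sort()
print (number_list)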
monty = ("Graham", "Terry", "Michael", "Eric", "Terry", "John")
# the entire tuple
print (monty)
# one element at a time
for name in monty:
print(name)
# indexing
print(the_pythons[2])
print(monty[2])
Explanation: Tuple
A tuple is an immutable version of a list
Immutable means that the contents cannot change after creation
Syntax
Parentheses
Commas as separators
End of explanation
# monty is the tuple, and the_pythons is a list
print (monty)
print (list(monty))
print (the_pythons)
print (tuple(the_pythons))
Explanation: [] brackets or square brackets
() parentheses
{} curly braces or braces
Converting Between Lists and Tuples
list()
tuple()
End of explanation
# Safer than constants, because it is enforced by interpreter
CONVERSION_CONSTANT = 5/9
Explanation: Why?
Performance
Tuples are faster
Safety
Can't be modified
Looking ahead: tuples can be keys in a dictionary, but lists can't
End of explanation
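A quick sketch of the "looking ahead" point: a tuple can serve as a dictionary key, while a list raises a TypeError because it is mutable (unhashable).
coords = {}
coords[(1, 2)] = "a point"      # tuples work as keys
try:
    coords[[1, 2]] = "a point"  # lists do not
except TypeError as error:
    print(error)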
my_dict = {}
my_dict[3.14] = "pi"
my_dict["pi"] = 3.14159
my_dict[(1,2)] = "x,y coordinates"
my_dict[(2,3)] = "x,y coordinates"
print my_dict
my_dict[(1,2)] = [4, 5, 6, 7]
print my_dict
len(my_dict)
phone_book = {"Graham":"555-111",
"Terry": "555-2222",
"Michael": "555-3333"}
Explanation: Dictionaries
A dictionary is a collection data structure. Each element in a dictionary has two parts: a key and a value. You use a key to locate a specific value.
Shorthand description: a dictionary is like a list, but instead of the index being a number, the index is any value, e.g. int, float, string, tuple, etc.
Think dictionaries and phone books
End of explanation
phone_book
phone_book['Michael']
my_dict[(1,2)]
phone_book['Wanda']
Explanation: Retrieving a Value from a Dictionary
dictionary[key]
End of explanation
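As a side note on retrieval, dictionaries also offer the get method, which returns a default value instead of raising a KeyError for a missing key (a small sketch using the phone book above):
print(phone_book.get('Wanda', 'no number on file'))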
# Using 'in'
if "Michael" in phone_book:
print phone_book["Michael"]
# Using 'not in'
if "Wanda" not in phone_book:
print("Fish don't need phone numbers")
Explanation: Testing for Value in a Dictionary
End of explanation
print(phone_book)
# Eric, Terry, John
phone_book["Eric"] = "555-4444"
phone_book["Terry"] = "555-5555"
phone_book["John"] = "555-6666"
Explanation: Adding Elements to a Dictionary
End of explanation
del phone_book["John"]
print(phone_book)
Explanation: Deleting Elements
del dictionary[key]
End of explanation
if 'Michael' in phone_book:
del phone_book['Michael']
print(phone_book)
# Note: 'in' on a dictionary tests the keys, not the values, so this does NOT print
if '555-4444' in phone_book:
    print ("Can match values too!")
Explanation: Putting it together
Test for presence of key before deletion
End of explanation
for name in phone_book:
print (name, phone_book[name])
Explanation: Iterating over a Dictionary
End of explanation
phone_book.items()
phone_book.keys()
phone_book.values()
if '555-4444' in phone_book.values():
print("We can match values too")
Explanation: Dictionary Methods
End of explanation
even = False
if even = True:
print("It is even!")
154 >= 300 != False
def is_equal(t1, t2):
# return t1 == t2
return t1.sort() == t2.sort()
list1 = ["name", "age", "temp"]
list2 = ["name", "temp", "age"]
if is_equal(list1, list2):
print("Same!")
else:
print ("Different!")
Explanation: Midterm Discussion
End of explanation |
13,661 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installation tips
Create Anaconda virtual environment with ipython notebook support
conda create -n tf ipython-notebook --yes
The set up as explained in the official site failed for me. Something to do with failure to update setup tools. The remedy was doing as explained in here
Step1: Constants
Step2: Variables
Step3: Scopes
Step4: Training and visualization
To see the graphs invoke the command | Python Code:
import tensorflow as tf
#----------------------------------------------------------
# Basic graph structure and operations
# tf.add , tf.sub , tf.mul , tf.div , tf.mod , tf.pow
# tf.less , tf.greater , tf.less_equal , tf.greater_equal
# tf.logical_and , tf.logical_or , tf.logical_xor
#------------------------------------------------------------
tf.reset_default_graph()
print tf.add(1,2)
print tf.mul(7,9)
graph = tf.get_default_graph()
for op in graph.get_operations():
print op.name
sess = tf.Session() # For regular python code
tf.initialize_all_variables()
print 'Addition is: {} + {} = {} '.format(sess.run('Add/x:0'),sess.run('Add/y:0'),sess.run('Add:0'))
print 'Multiplication: {} * {} = {}'.format(sess.run('Mul/x:0'),sess.run('Mul/y:0'),sess.run('Mul:0'))
Explanation: Installation tips
Create Anaconda virtual environment with ipython notebook support
conda create -n tf ipython-notebook --yes
The setup as explained on the official site failed for me; something to do with a failure to update setuptools. The remedy was doing as explained here:
pip install --ignore-installed --upgrade pip setuptools
Hello TensorFlow
Basic graph creation and how to inspect the elements of the graph
End of explanation
tf.reset_default_graph()
m1 = tf.constant([[1., 2.], [3.,4]])
m2 = tf.constant([[5.,6.],[7.,8.]])
m3 = tf.matmul(m1, m2)
# have to run the graph using a session
sess = tf.Session()
sess.run(tf.initialize_all_variables())
print 'm3 = ',sess.run(m3)
sess.close()
Explanation: Constants
End of explanation
tf.reset_default_graph()
v1 = tf.Variable(1, name="my_variable")
v2 = tf.Variable(tf.zeros([3,5]),name='5_zeros') # Variable with initializer
c1 = tf.random_normal([4, 4], mean=0.0, stddev=1.0) # 4x4 matrix with normal random variables
v3 = tf.Variable(c1,name='RandomMatrix')
v4 = tf.Variable(tf.ones(6))
counter = tf.Variable(0)
sess = tf.Session()
sess.run(tf.initialize_all_variables())
print 'v1 =',sess.run(v1)
print 'v2 =',sess.run(v2)
print 'v3=',sess.run(v3)
print 'v4=',sess.run(v4)
# Changing the value of a variable
print 'Changed v1 =',sess.run(v1.assign(v1 + 7))
print 'v1 new val =',sess.run(v1)
print sess.run(counter.assign_add(1))
print sess.run(counter.assign_add(1))
sess.close()
Explanation: Variables
End of explanation
tf.reset_default_graph()
v1 = tf.add(1,2,name='add')
with tf.name_scope("Scope1"):
with tf.name_scope("Scope_nested"):
vs = tf.mul(5, 5,name='mul')
print v1.name
print vs.name
tf.reset_default_graph()
graph = tf.get_default_graph()
graph.get_operations()
# Model of a simple neuron: y <-- x * w
x = tf.constant(1.0,name='x')
w = tf.Variable(0.8,name='w')
y = tf.mul(w , x, name='y')
y_ = tf.constant(0.0,name='y_train')
loss = (y-y_)**2
tf.reset_default_graph()
graph = tf.get_default_graph()
graph.get_operations()
# Model of a simple neuron: y <-- x * w
x = tf.constant(1.0,name='x')
w = tf.Variable(0.8,name='w')
y = tf.mul(w , x, name='y')
y_ = tf.constant(0.0,name='y_train')
loss = (y-y_)**2
#--------------------------------------------------------------
# Print the nodes of the graph, also called 'operations' or 'ops'
#--------------------------------------------------------------
print 'Operations in graph \n==========================='
for op in graph.get_operations():
print op.name
Explanation: Scopes
End of explanation
import tensorflow as tf
x = tf.constant(1.0, name='input')
w = tf.Variable(0.8, name='weight')
y = tf.mul(w, x, name='output')
y_ = tf.constant(0.0, name='correct_value')
loss = tf.pow(y - y_, 2, name='loss')
train_step = tf.train.GradientDescentOptimizer(0.025).minimize(loss)
for value in [x, w, y, y_, loss]:
tf.scalar_summary(value.op.name, value)
summaries = tf.merge_all_summaries()
sess = tf.Session()
summary_writer = tf.train.SummaryWriter('log_simple_stats', sess.graph)
sess.run(tf.initialize_all_variables())
for i in range(100):
summary_writer.add_summary(sess.run(summaries), i)
sess.run(train_step)
Explanation: Training and visualization
To see the graphs invoke the command:
tensorboard --logdir=log_simple_stats
which can then be viewed in the browser at
localhost:6006/#events
End of explanation |
13,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Precision vs Accuracy Precision is the accuracy of basic arithmetic operations used in the computation Accuracy is the absolute or relative error of the approximate quantity* NOTE
Step1: The last definition avoids issues that the previous two had, but it's still non-ideal
Step2: Absolute and relative error
Step3: Relative Error for Non-ScalarsFor non-scalars, calculating normalized value ||x|| = max / sum, implies smaller components of x are bound by absolute error only. Consider compensative relative error
Step4: Measuring Accuracy## Backward and Forward Errors Forward error - error of the computed value Backward error
Step5: Quoting Wikipedia
Step6: Rounding Multiple Times Can Accumulate Error
Step7: (Optional) Differences between round and np.roundThere are few differences between built-in Python 2.7 round function and numpy (a)round
Step8: Loss of significanceError in floating point arithmetic when an operation increases relative error substantially more than absolute error
Step9: CancellationCancellation is an example of loss of significance and it happens when two nearly equal numbers are subtracted and can lead to significant inaccuracies. As an example let's look at function
Step10: Fortunately, we can rewrite f(x) in a form that is less prone to cancellation
Step11: Summing numbers
Step12: Calculations without subtractions are fine, right?Hint
Step13: Sometimes the rounding errors cancel out and produce a result more accurate than the intermediate calculations
Step14: Rounding errors are not random
Step15: Examples## Variance CalculationThere are two concepts that we refer to as variance | Python Code:
# Definition 1: Round down to p-sig. digit number
x = 0.90
x1 = 0.99 # 2 correct significant digits, actual difference 0.09
x2 = 0.89 # 1 correct significant digit, actual difference 0.01
# Definition 2: Round to nearest p-sig. digit number
y = 0.9951 # --> 0.10
y1 = 0.9499 # --> 0.90 , only 1 correct sig. digit
y2 = 1.0000 # --> 0.10 , 3 correct sig. digits
Explanation: Precision vs Accuracy
Precision is the accuracy of basic arithmetic operations used in the computation
Accuracy is the absolute or relative error of the approximate quantity*
NOTE: Accuracy is not limited by precision, finite precision arithmetic can simulate any precision with more computation
Measuring Precision
Significant Digits
The number of significant digits may be imprecise, prefer to use relative error.
End of explanation
# Definition 3: Numbers x and x' match to p-sig. digits if x - x' < half a unit in p-th sig. digit of x
x1 = 0.123
x2 = 0.127
# 0.004 < (0.01 / 2) => x1 and x2 match in 2 significant digits according to this definition, which may be slightly confusing
Explanation: The last definition avoids issues that the previous two had, but it's still non-ideal
End of explanation
def absolute_error(true_value, approx_value):
    return abs(true_value - approx_value)

print 'Absolute error: {0:.9f}'.format(absolute_error(10.951, 10.949))

def relative_error(true_value, approx_value):
    return absolute_error(true_value, approx_value) / abs(true_value)

print 'Relative error: {0:.9f}'.format(relative_error(10.951, 10.949))
Explanation: Absolute and relative error
End of explanation
import numpy as np

def relative_error(true_value, approx_value):
    return np.max(np.fabs((true_value - approx_value) / true_value))

x_value = np.array([10000, 0.01])
x_approx = np.array([9999, 0.02])
print relative_error(x_value, x_approx)
Explanation: Relative Error for Non-Scalars
For non-scalars, normalizing with a single norm ||x|| (e.g. the max or the sum of the components) implies that the smaller components of x are bound by absolute error only. Consider the componentwise relative error: max_i |x_i - x_i'| / |x_i|, which puts all components on equal footing.
End of explanation
# Wilkinson's polynomial is defined as p(x) = (x - 1)(x - 2)...(x - 20)
import numpy as np
x = np.linspace(0, 20, 4000)
y = 1
for i in range(1, 21): # all 20 factors, (x - 1) through (x - 20)
    y *= (x - i)

import matplotlib.pyplot as plt
%matplotlib inline
plt.ylim([-4e11, 4e11])
plt.plot(x, y)
Explanation: Measuring Accuracy
Backward and Forward Errors
Forward error - error of the computed value
Backward error:
* Let y = f(x), given x we approximate f(x) with y'
* Let dx be the smallest quantity where y' = f(x + dx) in exact computation
* Then dx is the backward error
Benefits of using backward error:
* Unifies error w/ perturbation in the data
* Removes the need to calculate forward error
Forward-backward error:
* f(x + dx) = y + dy
* Used to define stability of computation where just using backward error isn't possible, e.g. sin, cos
If rounding errors are the dominant source of errors, we call an algorithm numerically stable if it is stable in the forward-backward error sense.
Condition Number
The condition number of a function with respect to its arguments is used to measure how much the output of the function will change for a small change in the input.
As a rule of thumb, if the condition number kappa(A) = 10^k, then you may lose up to k digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods.
A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned.
For example, the condition number associated with the linear equation Ax = b gives a bound on how inaccurate the solution x will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not the algorithm or floating point accuracy of the computer used to solve the corresponding system.
An important example of an ill-conditioned problem is finding roots of a polynomial. Let's look at Wilkinson's polynomial as an example.
End of explanation
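To see condition numbers in practice, here is a small sketch (not part of the original notebook) using SciPy's Hilbert-matrix helper, a classic family of ill-conditioned matrices:
# Sketch: condition numbers grow explosively for Hilbert matrices
import numpy as np
from scipy.linalg import hilbert

for n in [3, 6, 9, 12]:
    A = hilbert(n)
    # kappa(A) ~ 10^k means up to k digits of accuracy can be lost
    print 'n = {0:2d}  kappa(A) = {1:.3e}'.format(n, np.linalg.cond(A))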
import numpy as np

x = np.float64(0.1)
y = np.float32(0.1)

print '32- vs 64-bit representation difference:', abs(x - y)
Explanation: Quoting Wikipedia: "If the coefficient of x^19 is decreased from −210 by 2^−23 to −210.0000001192, then the polynomial value w(20) decreases from 0 to −6.25×10^17, and the root at x = 20 grows to x ≈ 20.8. The roots at x = 18 and x = 19 collide into a double root at x ≈ 18.62 which turns into a pair of complex conjugate roots at x ≈ 19.5±1.9i as the perturbation increases further."
"Wilkinson's polynomial is often used to illustrate the undesirability of naively computing eigenvalues of a matrix by first calculating the coefficients of the matrix's characteristic polynomial and then finding its roots, since using the coefficients as an intermediate step may introduce an extreme ill-conditioning even if the original problem was well conditioned."
# Sources of Error
## Truncation error (discretization error)
Error coming from representing a function or continuous variable using a finite number of evaluations - outside the scope of this notebook, mentioned for completeness only.
## Round-off error
* Difference between the calculated approximation and the exact value due to rounding
* Related to representation error, which is due to representing numbers with a finite number of digits
End of explanation
import numpy as np

# For an explanation of why we use np.round rather than the default Python 2.7.3 round function, see below
x = 9.945309

print np.round(x, 2), np.round(np.round(x, 2), 1)
print np.round(x, 1)
Explanation: Rounding Multiple Times Can Accumulate Error
End of explanation
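The standard library decimal module makes the rounding mode explicit, which is handy for experimenting with this effect. A small sketch (not from the original notebook):
# Sketch: one-step vs two-step rounding with an explicit rounding mode
from decimal import Decimal, ROUND_HALF_EVEN

x = Decimal('9.945309')
print x.quantize(Decimal('0.1'), rounding=ROUND_HALF_EVEN)   # rounded once, straight to one decimal
print x.quantize(Decimal('0.01'), rounding=ROUND_HALF_EVEN).quantize(Decimal('0.1'), rounding=ROUND_HALF_EVEN)  # rounded twice, error accumulates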
import numpy as np

for i in range(13):
    x = -3 + 0.5 * i
    print '\t{0:5.1f}\t{1:5.1f}\t{2:5.1f}'.format(x, round(x), np.round(x))
Explanation: (Optional) Differences between round and np.round
There are a few differences between the built-in Python 2.7 round function and numpy (a)round:
* The built-in function rounds away from zero
* Numpy round rounds to even, which tends to skew the results less and is a commonly accepted rounding method
* From my (limited) experience it looks like numpy round is much better behaved in dealing with decimal-to-binary float rounding errors
Note that Python 3 has a different round function that behaves more similarly to numpy round.
End of explanation
x_value = 0.123123
y_value = 0.123000

# We want to learn the value of d = x - y:
d_value = x_value - y_value
print 'Actual d value:', d_value

# Assuming we're approximating the above calculation with 4 decimal digits of precision:
import numpy as np

d_approx = np.round(x_value, 4) - np.round(y_value, 4)
print 'Approx d value:', d_approx

print 'Absolute error: {0:.9f}'.format(abs(d_value - d_approx))
print 'Relative error: {0:.9f}'.format(abs((d_value - d_approx) / d_value))
Explanation: Loss of significanceError in floating point arithmetic when an operation increases relative error substantially more than absolute error
End of explanation
import numpy as np
from math import sin, cos

near_zero = 1.2e-8

def f(x):
    return (1 - np.cos(x)) / x**2

print 'Value of f near zero:', f(near_zero)

x = np.linspace(near_zero, 10, 100)
y = f(x)

import matplotlib.pyplot as plt
%matplotlib inline

plt.ylim([0, 1])
plt.plot(x, y)
Explanation: CancellationCancellation is an example of loss of significance and it happens when two nearly equal numbers are subtracted and can lead to significant inaccuracies. As an example let's look at function: f(x) = (1 - cos x) / x^2 We can see that calculating f(x) near zero may lead to issues, since cos(0) = 1.
End of explanation
def g(x):
    return 0.5 * (2 * np.sin(x / 2) / x)**2

print 'Value of g near zero:', g(near_zero)

x = np.linspace(near_zero, 10, 100)
y = g(x)

import matplotlib.pyplot as plt
%matplotlib inline

plt.ylim([0, 1])
plt.plot(x, y)
Explanation: Fortunately, we can rewrite f(x) in a form that is less prone to cancellation:
End of explanation
from math import fsum

print '{0:0.20f}'.format(sum([0.1] * 10))
print '{0:0.20f}'.format(fsum([0.1] * 10))
Explanation: Summing numbers:
* https://docs.python.org/2/library/math.html
* https://en.m.wikipedia.org/wiki/Kahan_summation_algorithm
Interestingly, IPython Notebook appears to do the right thing by default, what?
IPython session:
In [6]: from math import fsum
In [7]: sum([0.1] * 10)
Out[7]: 0.9999999999999999
In [8]: fsum([0.1] * 10)
Out[8]: 1.0
End of explanation
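Since the Kahan summation algorithm is linked above, here is a minimal sketch of it (added for illustration, not part of the original notebook):
# Sketch: Kahan (compensated) summation
def kahan_sum(values):
    total = 0.0
    compensation = 0.0  # running compensation for lost low-order bits
    for value in values:
        y = value - compensation
        t = total + y
        compensation = (t - total) - y
        total = t
    return total

print '{0:0.20f}'.format(kahan_sum([0.1] * 10))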
def naive_e(n):
    return (1 + 1.0 / n)**n

from math import exp
e = exp(1)

for i in range(5, 20):
    print naive_e(10**i) - e
from math import sqrt

def identity(x, n):
    for i in xrange(n):
        x = sqrt(x)
    for i in xrange(n):
        x = x**2
    return x

x = 2
for i in xrange(35, 60):
    print x - identity(x, i)
Explanation: Calculations without subtractions are fine, right?
Hint: No, we can still over/under-flow finite precision floating point numbers.
End of explanation
# Computing f(x) = (exp(x) - 1) / x == sum(x^i / (i + 1)!)
from math import exp, log

def f1(x):
    if 0 == x:
        return 1
    return (exp(x) - 1) / x

def f2(x):
    if 0 == x:
        return 1
    y = exp(x)
    return (y - 1) / log(y)

# f(epsilon) ~= 1
for i in range(8, 15):
    epsilon = 1.0 / (10**i)
    print 'epsilon:', epsilon
    print '|1 - f1(epsilon)|:', abs(1 - f1(epsilon))
    print '|1 - f2(epsilon)|:', abs(1 - f2(epsilon))
    print

# NOTE: Above doesn't hold if we calculate for powers of 2!
# for i in range(30, 40):
#     print abs(1 - f1(1.0 / (2**i))), abs(1 - f2(1.0 / (2**i)))
Explanation: Sometimes the rounding errors cancel out and produce a result more accurate than the intermediate calculations
End of explanation
import numpy as np

def r(x):
    # Calculate the value of a rational function using Horner's rule (https://en.wikipedia.org/wiki/Horner%27s_method)
    # More on the function can be looked up on Wolfram Alpha:
    # http://www.wolframalpha.com/input/?i=f(x)+%3D+(622.0+-+x+*+(751.0+-+x+*+(324.0+-+x+*+(59.0+-+4+*+x))))+%2F+(112+-+x+*+(151+-+x+*+(72+-+x+*+(14+-+x))))&t=crmtb01
    p = 622.0 - x * (751.0 - x * (324.0 - x * (59.0 - 4 * x)))
    q = 112 - x * (151 - x * (72 - x * (14 - x)))
    return p / q

def calc_f(a):
    t = np.array([a + k * 2**-52 for k in xrange(400)])
    t = r(t)
    t -= t[0]
    t *= 1.0 / max(abs(t))
    return t

import matplotlib.pyplot as plt
%matplotlib inline

def plot(t):
    plt.plot(t, linestyle='--', marker='o')
    plt.show()

for a in [1.606, 4, 8, 16, 32]:
    plot(calc_f(a))
Explanation: Rounding errors are not random
End of explanation
# Calculate the variance of X = sum of dice in N throws. The probabilities of each side of the dice are given.
import numpy as np

# Dice side probabilities, p[i] = probability of throwing i. We use fractions to be able to compare our calculations to
# 'exact' values:
from fractions import Fraction as F
from numpy.random import randint as ri

max = 1000000
pf = np.array([0] + [F(ri(max), 1) for _ in range(6)])
pf /= sum(pf)
p = np.array([float(f) for f in pf])

# Number of throws
N = 10000  # 30000

# Dynamic program, we're holding the probability of each possible sum of the dice thrown so far.
# First iteration:
dp = np.ones(1)
for i in range(N):
    dp = np.convolve(dp, p)
print dp

# Let's calculate variance using both ways.
dice = np.arange(len(dp))

# var1 = E[(X - E[X])^2]
ex = (p * np.arange(7)).sum() * N
var1 = (dp * (dice - ex)**2).sum()
print 'Variance calculated using: E[(X - E[X])^2]: ', var1

# var2 = E[X^2] - (E[X])^2
ex2 = (dice**2 * dp).sum()
var2 = ex2 - ex**2
print 'Variance calculated using: E[X^2] - (E[X])^2: ', var2
print 'E[X^2]: ', ex2
print '(E[X])^2: ', ex**2

# There is a simpler way to calculate variance in this particular problem: since all N throws are independent we can simply
# calculate the variance of a single throw and multiply it by N. To make sure this calculation is as precise as possible, we use
# Python's representation of rational numbers, so either formula gives the exact value.
rangef = np.array([F(i, 1) for i in range(7)])
exf = (rangef * pf).sum()
ex2f = (rangef**2 * pf).sum()
varf = ex2f - exf**2
varf *= N
print 'Variance calculated with fractions: ', float(varf)

print 'Relative error of: E[(X - E[X])^2]: ', abs((var1 - varf) / varf)
print 'Relative error of: E[X^2] - (E[X])^2: ', abs((var2 - varf) / varf)
Explanation: Examples
## Variance Calculation
There are two concepts that we refer to as variance:
1. A property of a distribution
2. A characteristic of a set of observations
The variance of a random variable X is defined as:
Var(X) = E[(X - E[X])^2] = ... = E[X^2] - (E[X])^2
where E[Z] is the expected value of a random variable Z.
The second form (E[X^2] - (E[X])^2) should be avoided when performing calculations on a fixed precision machine. Although it has the nice property that, for sample variance, it is easy to implement while traversing the data just once, naive implementations usually suffer from extreme cancellation.
End of explanation |
13,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Intelligence II - Team MensaNord
Sheet 08
Nikolai Zaki
Alexander Moore
Johannes Rieke
Georg Hoelger
Oliver Atanaszov
Step1: Exercise 1
Step2: Simulation with M=1
Step3: Simulation with M=500
Step4: All possible states
Step5: Exercise 2 | Python Code:
from __future__ import division, print_function
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats
import numpy as np
Explanation: Machine Intelligence II - Team MensaNord
Sheet 08
Nikolai Zaki
Alexander Moore
Johannes Rieke
Georg Hoelger
Oliver Atanaszov
End of explanation
def E(W, s):
N = len(s)
return -0.5 * np.sum(W[i, j] * s[i] * s[j] for i, j in np.ndindex(N, N))
N = 6
beta_0 = 0.007
tau = 1.06
epsilon = 1e-20
t_max = 150
W = np.random.random(size=(N, N))
W = (W + W.T) / 2 # make symmetric
for i in range(N):
W[i, i] = 0
plt.imshow(W)
Explanation: Exercise 1
End of explanation
M = 1
beta = beta_0
s = np.random.choice([-1, 1], N)
temperatures = np.zeros(t_max)
energies = np.zeros(t_max)
%%time
validation_min = E(W, s)
for t in range(t_max):
for m in range(M):
i = np.random.randint(0, 6)
s_local = np.copy(s)
s_local[i] *= -1
E_1 = E(W, s)
E_2 = E(W, s_local)
E_d = E_2 - E_1
P = 1 / (1 + np.exp(beta*E_d))
# print("\nt:", t, " i:", i, "\n s1:", s, "\tE1:", E_1, "\n s2:", s_local, "\tE2:", E_2)
if np.random.random() < P:
s = np.copy(s_local)
# print("new s")
if E(W, s) < validation_min:
validation_min = E(W, s)
temperatures[t] = 1 / beta
energies[t] = E(W, s)
beta *= tau
plt.figure(figsize=(10, 5))
plt.plot(temperatures)
plt.xlabel('t')
plt.ylabel('Temperature')
plt.figure(figsize=(10, 5))
plt.plot(energies, '.-')
plt.xlabel('t')
plt.ylabel('Energy')
s
Explanation: Simulation with M=1
End of explanation
M = 500
beta = beta_0
s = np.random.choice([-1, 1], N)
temperatures = np.zeros(t_max)
energies = np.zeros(t_max)
%%time
validation_min = E(W, s)
for t in range(t_max):
for m in range(M):
i = np.random.randint(0, 6)
s_local = np.copy(s)
s_local[i] *= -1
E_1 = E(W, s)
E_2 = E(W, s_local)
E_d = E_2 - E_1
P = 1 / (1 + np.exp(beta*E_d))
# print("\nt:", t, " i:", i, "\n s1:", s, "\tE1:", E_1, "\n s2:", s_local, "\tE2:", E_2)
if np.random.random() < P:
s = np.copy(s_local)
# print("new s")
if E(W, s) < validation_min:
validation_min = E(W, s)
temperatures[t] = 1 / beta
energies[t] = E(W, s)
beta *= tau
plt.figure(figsize=(10, 5))
plt.plot(temperatures)
plt.xlabel('t')
plt.ylabel('Temperature')
plt.figure(figsize=(10, 5))
plt.plot(energies, '.-')
plt.xlabel('t')
plt.ylabel('Energy')
s
Explanation: Simulation with M=500
End of explanation
# generate all posible states & energies
all_states = [[0, 0, 0, 0, 0, 0] for i in range(2**6)]
all_energies = [0.0 for i in range(2**6)]
for si in range(2**6):
    all_states[si] = [2 * int(x) - 1 for x in '{0:06b}'.format(si)]  # map 0/1 bits to -1/+1 spins, matching s elsewhere
all_energies[si] = E(W, all_states[si])
plt.figure(figsize=(10, 5))
plt.scatter(range(2**6), all_energies)
plt.title('histogram of all possible energies')
plt.grid()
plt.show()
probab_beta = [0.005, 1, 3]
for beta in probab_beta:
Z = 0
for en in all_energies:
Z += np.exp(-beta * en)
all_probabilities = [0.0 for i in range(2**6)]
for si in range(2**6):
        all_probabilities[si] = np.exp(-beta * all_energies[si]) / Z  # normalize by the partition function Z
plt.figure(figsize=(10, 5))
plt.scatter(range(2**6), all_probabilities)
plt.title('histogram of all possible probabilities for beta {}'.format(beta))
plt.grid()
plt.show()
Explanation: All possible states
End of explanation
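As a quick sanity check (a short sketch added here, assuming the simulation cells above have already been run), the brute-force enumeration lets us compare the lowest reachable energy with what the annealing found:
# Sketch (assumes the cells above were executed)
print("Brute-force minimum energy:", min(all_energies))
print("Best energy found by annealing:", validation_min)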
# Other parameters and W from exercise 1.
epsilon = 1e-50
s = np.random.choice([-1., 1.], N)
e = np.zeros_like(s)
beta = beta_0
temperatures = np.zeros(t_max)
energies = np.zeros(t_max)
%%time
for t in range(t_max):
#print('t =', t, '- beta =', beta)
distance = np.inf
while distance >= epsilon:
e_old = e.copy()
for i in range(N):
neighbors = range(N)
neighbors.remove(i)
e[i] = -np.sum(W[i, j] * s[j] for j in neighbors)
s[i] = np.tanh(-beta * e[i])
#print(distance)
distance = np.linalg.norm(e - e_old)
temperatures[t] = 1 / beta
energies[t] = E(W, s)
beta *= tau
#print('-'*10)
plt.figure(figsize=(10, 5))
plt.plot(temperatures)
plt.xlabel('t')
plt.ylabel('Temperature')
plt.figure(figsize=(10, 5))
plt.plot(energies, '.-')
plt.xlabel('t')
plt.ylabel('Energy')
s
Explanation: Exercise 2
End of explanation |
13,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<H1>PrimerDesign</H1>
We need to define a sequence of 17 bases with the following requirements
Step1: The function product is what we need to obtain a sequence of x elements with the four nucleotides A, G, C and T. This will give us $4^{x}$. To compute the product of an iterable with itself, specify the lenght of the sequence with the optional repeat keyword argument. For example,
product(A, repeat=4) means the same as product(A, A, A, A) will be taken.
Step2: In the first and last 5 bases, we need zero, one or two G or C
Step3: <H2> Generate sequences with around 50% of GC</H2>
We will insert about 50% of GC content in a 17 pairbase sequence. For that, we will fill the sequence with 9 nucleotides containing either G or C. This with first give us 2^9 sequence combinations.
Step5: For every GC sequence, we will add a AT sequence with the same combinatorial procedure
Step7: we will apply now more restrictions to the sequences
<H2>GC Clamp</H2>
This is the number of G or C in the last 5 bases of the sequence
Step9: Count all the posibilities with score less than 10 | Python Code:
%pylab inline
from itertools import product, permutations
from math import pow
Explanation: <H1>PrimerDesign</H1>
We need to define a sequence of 17 bases with the following requirements:
<ul>
<li>Total GC content: 40-60%</li>
<li>GC Clamp: < 3 in the last 5 bases at the 3' end of the primer.</li>
</ul>
End of explanation
pow(4,17) # all possible ATGC combinations in sequences of 17 bases
pow(4,7) # all possible ATCG combinations in sequences of 7 elements
pow(4,5) # all possible ATCG combinations in sequences of 5 elements
Explanation: The function product is what we need to obtain a sequence of x elements with the four nucleotides A, G, C and T. This will give us $4^{x}$. To compute the product of an iterable with itself, specify the lenght of the sequence with the optional repeat keyword argument. For example,
product(A, repeat=4) means the same as product(A, A, A, A) will be taken.
End of explanation
perms = [''.join(p) for p in permutations('ATCG')]
perms
pow(2,5) # only AT combinations in 5 elements
print list(permutations(['A', 'T', 'C']))
pow(2,5) + 5*pow(2,4) + 5*pow(2,4)
mySeq = Seq
x = [i for i in list(product('GCAT', repeat=7))]
x[0]
x = [i for i in list(product('GCAT', repeat=5))]
mybase = ('A', 'T', 'C','G')
product('ATCG', repeat=2)
from Bio.Seq import Seq # Biopython
from Bio.SeqUtils import GC
from Bio import pairwise2
from Bio.pairwise2 import format_alignment
mySeq = Seq('ATCG')
GC(mySeq) # returns % of GC content
mySeq
Explanation: In the first and last 5 bases, we need zero, one or two G or C
End of explanation
# this is the number of all possible C or G combinations in a sequence of 8 elements.
pow(2,9)
Explanation: <H2> Generate sequences with around 50% of GC</H2>
We will insert about 50% of GC content in a 17 base-pair sequence. For that, we will fill the sequence with 9 nucleotides containing either G or C. This will first give us 2^9 sequence combinations.
End of explanation
256*512
# example of joining list
myGC = [''.join(i) for i in list(product('GC', repeat=9))] # 512 sequences
myAT = [''.join(j) for j in list(product('AT', repeat=8))] # 256 sequences
print(myGC[0],myAT[0])
zip(myGC[0],myAT[0])
mystring = str()
for i,j in zip(myGC[0],myAT[100]):
mystring +=i+j
mystring
def generateSeq(listGC, listAT):
Create all possible combinations of the sequences in
list GC and listAT. The only requirement is that
the
Arguments:
==========
listGC -- a list of strings containing GC nucleotides
listAT -- a list of strings containing AT nucleotides
Returns
=======
A list of Seq objects
mySeqList = list()
for list1 in listGC:
for list2 in listAT:
mystring = str()
for i,j in zip(list1, list2):
mystring += i+j
mystring +=list1[-1]# add last element from listGC
mySeqList.append(Seq(mystring))
return mySeqList
generateSeq(myGC[:3],myAT[:3]) #dummy test
mySeq = generateSeq(myGC,myAT)
len(mySeq)
Explanation: For every GC sequence, we will add an AT sequence with the same combinatorial procedure
End of explanation
def GCClamp(seq):
returns the number of G or C within the last five bases of the sequence
return seq[-5:].count('G') + seq[-5:].count('C')
mySeq[0]
GCClamp(mySeq[0])
GC
# count the number of sequences with a GC Clamp below three
len([seq for seq in mySeq if GCClamp(seq) < 3])
mySeq[0][-5:].count('G')
'G' in mySeq[0][-5:]
mySeq[0][-5:]
print 'original = ' + mySeq[100000]
print 'complement = ' + mySeq[100000].complement()
alignments = pairwise2.align.globalxx(mySeq[100000], mySeq[100000])
alignments
%timeit
for a in pairwise2.align.globalxx(mySeq[100000].complement(), mySeq[100000].complement()):
print(format_alignment(*a))
al1,al2, score, begin, end = a
print score
Explanation: We will now apply more restrictions to the sequences
<H2>GC Clamp</H2>
This is the number of G or C in the last 5 bases of the sequence
End of explanation
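As a sketch of how the two design criteria from the introduction could be combined (added here for illustration; it reuses GC from Bio.SeqUtils and the GCClamp helper defined above, and assumes mySeq from the earlier cell is still in memory):
# Sketch: keep sequences with 40-60% GC content and a GC clamp below 3
candidates = [seq for seq in mySeq if 40 <= GC(seq) <= 60 and GCClamp(seq) < 3]
print 'Candidate primers:', len(candidates)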
def countScores(seqList, threshold=None):
    Counts the number of sequences whose complement gives
    an average alignment score less than
    the threshold given as an argument.
Argument:
=========
seqList -- list, this is a list of Seq objects
threshod -- int, the number of complementary bases that binds
Returns:
========
    An integer with the number of sequences that fulfill that requirement
#generate complement list
compSeq = [i.complement() for i in seqList]
counter = 0
for seq in seqList:
average = list()
for comp in compSeq:
a = pairwise2.align.globalxx(seq, comp)
average.append(a[0][2]) # append score
if np.mean(average)<threshold:
counter +=1
return counter
countScores(mySeq[:3], threshold=10) # test for a list of three seq three
countScores(mySeq, threshold=10)
for a in pairwise2.align.globalxx(mySeq[0], mySeq[0].complement()):
print(format_alignment(*a))
al1,al2, score, begin, end = a
print score
alignments = pairwise2.align.globalxx("ACCGT", "ACG")
for a in pairwise2.align.globalxx("ACCGT", "ACG"):
print(format_alignment(*a))
print(mylist[0])
print(mylist[1])
for a in pairwise2.align.globalxx(mylist[0], mylist[1]):
print(format_alignment(*a))
myseq = 'ATCG'
print list(product(myseq, repeat=2))
256*256
Explanation: Count all the possibilities with score less than 10
End of explanation |
13,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 DeepMind Technologies Limited.
```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: Figure 4b
To generate the q-learning results
Step2: Figure 4c
To generate the regressed w results
Step3: Figure 5a
To generate the result for each set of policies
Step4: Figure 5b
To generate the result for each set of policies
Step5: Figure 6
To generate the q-learning results | Python Code:
#@title Util functions
import csv
import os
from matplotlib import pyplot as plt
import pandas as pd
import seaborn as sns
import tensorflow.compat.v1 as tf
from tensorflow.compat.v1.io import gfile
def read_csv_as_dataframe(path):
with gfile.GFile(path, "r") as file:
reader = csv.reader(file, delimiter=" ")
rows = [row for row in reader]
rows[1:] = [[float(v) for v in row] for row in rows[1:]]
cols = rows[0]
rows = dict(zip(cols, (zip(*rows[1:]))))
return pd.DataFrame(rows)
def read_data(path, num_seeds, verbose=False):
all_dfs = []
for seed in range(num_seeds):
seed_path = path.format(seed)
if verbose:
print(f"Reading {seed_path}")
df = read_csv_as_dataframe(seed_path)
df["seed"] = seed
all_dfs.append(df)
return pd.concat(all_dfs)
#@title Unpack precomputed training curves
!wget -q --no-check-certificate https://storage.googleapis.com/option_keyboard/gpe_gpi_experiments.zip -P /tmp
!unzip -o /tmp/gpe_gpi_experiments.zip -d /tmp
DATA_DIR = "/tmp"
Explanation: Copyright 2020 DeepMind Technologies Limited.
```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
End of explanation
#@title Load Data
dqn_path = os.path.join(DATA_DIR, "fig4_dqn_{}.csv")
dqn_df = read_data(dqn_path, num_seeds=10)
dqn_df["method"] = "Q-Learning"
regressed_w_path = os.path.join(DATA_DIR, "fig4_regressed_w_{}.csv")
regressed_w_df = read_data(regressed_w_path, num_seeds=10)
regressed_w_df["method"] = "GPE+GPI with regressed w"
true_w_path = os.path.join(DATA_DIR, "fig4_true_w_{}.csv")
true_w_df = read_data(true_w_path, num_seeds=10)
true_w_df["method"] = "GPE+GPI with true w"
fig4b_df = pd.concat([dqn_df, regressed_w_df])
#@title Plot
fig, ax = plt.subplots(figsize=(12,6))
sns.tsplot(fig4b_df, time="episode", unit="seed", value="eval_0", condition="method", ci=95, color=["r", "b"], linestyle="--", ax=ax)
ax.axhline(dqn_df.groupby("seed").tail(1).mean()["eval_0"], color='r', linestyle='--')
ax.axhline(true_w_df.mean()["return"], color='b', linestyle='-', label="GPE+GPI with true w")
ax.text(
800,
4.3,
r"$Q$-learning after $10^6$ sample transitions",
fontdict=dict(fontsize=15))
ax.set_xlim([0, 3000])
ax.legend();
Explanation: Figure 4b
To generate the q-learning results:
python3 ../run_dqn.py --num_episodes=20000 --report_every=5 --output_path=/tmp/fig4_dqn.csv
To generate the regressed w results:
python3 train_keybooard.py --num_pretrain_episodes=20000 --policy_weights_name=12 --export_path=/tmp/fig4_keyboard
python3 run_regressed_w_fig4b.py --num_episodes=4000 --report_every=5 --keyboard_path=/tmp/fig6_keyboard/tfhub \
--output_path=/tmp/fig4b_regressed_w.csv
To generate the results with true w:
```
Make use of a pretrained keyboard.
python3 run_true_w_fig4.py --num_episodes=1000 --keyboard_path=/tmp/fig4_keyboard/tfhub -- output_path=/tmp/fig4b_true_w.csv
```
Repeat the above steps for multiple runs. Below shows the results for 10 runs.
End of explanation
#@title Load Data
dqn_path = os.path.join(DATA_DIR, "fig4_dqn_{}.csv")
dqn_df = read_data(dqn_path, num_seeds=10)
dqn_df["method"] = "Q-Learning"
true_w_path = os.path.join(DATA_DIR, "fig4_true_w_{}.csv")
true_w_df = read_data(true_w_path, num_seeds=10)
true_w_df["method"] = "GPE+GPI with true w"
regressed_w_path = os.path.join(DATA_DIR, "fig4c_regressed_w_{}.csv")
regressed_w_df = read_data(regressed_w_path, num_seeds=10)
regressed_w_df["method"] = "GPE+GPI with regressed w"
regressed_w_with_phi_2d_path = os.path.join(DATA_DIR, "fig4c_regressed_w_with_phi_{}_2d.csv")
regressed_w_with_phi_2d_df = read_data(regressed_w_with_phi_2d_path, num_seeds=10)
regressed_w_with_phi_2d_df["method"] = "GPE+GPI with regressed w and 2d phi"
regressed_w_with_phi_3d_path = os.path.join(DATA_DIR, "fig4c_regressed_w_with_phi_{}_3d.csv")
regressed_w_with_phi_3d_df = read_data(regressed_w_with_phi_3d_path, num_seeds=10)
regressed_w_with_phi_3d_df["method"] = "GPE+GPI with regressed w and 3d phi"
regressed_w_with_phi_4d_path = os.path.join(DATA_DIR, "fig4c_regressed_w_with_phi_{}_4d.csv")
regressed_w_with_phi_4d_df = read_data(regressed_w_with_phi_4d_path, num_seeds=10)
regressed_w_with_phi_4d_df["method"] = "GPE+GPI with regressed w and 4d phi"
fig4c_df = pd.concat([regressed_w_df, regressed_w_with_phi_2d_df, regressed_w_with_phi_3d_df, regressed_w_with_phi_4d_df])
#@title Plot
fig, ax = plt.subplots(figsize=(12,6))
sns.tsplot(fig4c_df, time="episode", unit="seed", value="eval_0", condition="method", ci=95, ax=ax)
ax.axhline(dqn_df.groupby("seed").tail(1).mean()["eval_0"], color='r', linestyle='--')
ax.axhline(true_w_df.mean()["return"], color='b', linestyle='-', label="GPE+GPI with true w")
ax.text(
8,
4.3,
r"$Q$-learning after $10^6$ sample transitions",
fontdict=dict(fontsize=15))
ax.set_xlim([0, 30])
ax.legend();
Explanation: Figure 4c
To generate the regressed w results:
python3 train_keybooard.py --num_pretrain_episodes=20000 --policy_weights_name=12 --export_path=/tmp/fig4_keyboard
python3 run_regressed_w_fig4c.py --num_episodes=100 --report_every=1 --keyboard_path=/tmp/fig6_keyboard/tfhub \
--output_path=/tmp/fig4b_regressed_w.csv
To generate the regressed w with learned phi results:
```
First train a phi model. Change num_phis to phi of different dimensions e.g. 3 or 4.
python3 train_phi_model.py --export_path=/tmp/phi_model_2d --num_phis=2
Then train a keyboard.
python3 train_keybooard_with_phi.py --num_pretrain_episodes=20000 --phi_model_phi=/tmp/phi_model_2d \
--export_path=/tmp/fig4_keyboard_with_phi
Finally regress w with both models.
python3 run_regressed_w_with_phi_fig4c.py --num_episodes=100 --report_every=1 --keyboard_path=/tmp/fig4_keyboard_with_phi/tfhub \
--output_path=/tmp/fig4c_regressed_w.csv
```
(Note that training of the phi model can converge to a poor local minima, so it maybe necessary to rerun it if the eval loss is too high, or use a larger set of random training tasks.)
Repeat the above steps for multiple runs. Below shows the results for 10 runs.
End of explanation
#@title Load Data
policy_12_path = os.path.join(DATA_DIR, "fig5_polar_{}_12.csv")
policy_12_df = read_data(policy_12_path, num_seeds=10)
policy_34_path = os.path.join(DATA_DIR, "fig5_polar_{}_34.csv")
policy_34_df = read_data(policy_34_path, num_seeds=10)
policy_5_path = os.path.join(DATA_DIR, "fig5_polar_{}_5.csv")
policy_5_df = read_data(policy_5_path, num_seeds=10)
#@title Plot
use_polar = True
plt.figure(figsize=(10, 10))
ax = plt.subplot(111, polar=use_polar)
policy_5_mean_df = policy_5_df.groupby("angle").mean()
ax.plot(
policy_5_mean_df.index,
policy_5_mean_df["return"],
".-",
linewidth=5, color='r')
policy_12_mean_df = policy_12_df.groupby("angle").mean()
ax.plot(
policy_12_mean_df.index,
policy_12_mean_df["return"],
".-",
linewidth=5, color='g')
policy_34_mean_df = policy_34_df.groupby("angle").mean()
ax.plot(
policy_34_mean_df.index,
policy_34_mean_df["return"],
".-",
linewidth=5, color='b')
legend = ax.legend([
r"GPE + GPI with $\Pi_{5}$", r"GPE + GPI with $\Pi_{12}$", r"GPE + GPI with $\Pi_{34}$",
r"$Q$-learning"
],
fontsize="22",
loc="lower left")
ax.set_theta_zero_location("N")
ax.set_theta_direction(-1)
lines, labels = ax.set_thetagrids(
(0, 45, 90, 135, 315),
(r"$\mathbf{w}_2 = [0,1]$", r"$\mathbf{w}_5 = [1,1]$",
r"$\qquad \mathbf{w}_1 = [1,0]$", r"$\mathbf{w}_3 = [1,-1]$",
r"$\mathbf{w}_4 = [-1,1]$"),
fontweight="bold",
fontsize=15)
Explanation: Figure 5a
To generate the result for each set of policies:
```
Train a keyboard for a set of policies, i.e. replace {POLICY} with 5, 12 or 34
python3 train_keyboard.py --num_pretrain_episodes=20000 --policy_weights_name={POLICY} --export_path=/tmp/fig5a_keyboard_{POLICY}
Evaluate the trained keyboard at regular interval between [-1, 0] to [0, -1]
python3 eval_keyboard_fig5.py --num_episodes=1000 --keyboard_paths=/tmp/fig5a_keyboard_{POLICY}/tfhub \
--output_path=/tmp/fig5_polar_{POLICY}.csv
```
Repeat the above steps for multiple runs. Below shows the results for 10 runs.
End of explanation
#@title Load Data
policy_42513_path = os.path.join(DATA_DIR, "fig5_polar_{}_42513.csv")
policy_42513_df = read_data(policy_42513_path, num_seeds=10)
policy_4251_path = os.path.join(DATA_DIR, "fig5_polar_{}_4251.csv")
policy_4251_df = read_data(policy_4251_path, num_seeds=10)
policy_425_path = os.path.join(DATA_DIR, "fig5_polar_{}_425.csv")
policy_425_df = read_data(policy_425_path, num_seeds=10)
policy_42_path = os.path.join(DATA_DIR, "fig5_polar_{}_42.csv")
policy_42_df = read_data(policy_42_path, num_seeds=10)
policy_4_path = os.path.join(DATA_DIR, "fig5_polar_{}_4.csv")
policy_4_df = read_data(policy_4_path, num_seeds=10)
#@title Plot
use_polar = True
plt.figure(figsize=(10, 10))
ax = plt.subplot(111, polar=use_polar)
policy_42513_mean_df = policy_42513_df.groupby("angle").mean()
ax.plot(
policy_42513_mean_df.index,
policy_42513_mean_df["return"],
".-",
linewidth=5, color='y')
policy_4251_mean_df = policy_4251_df.groupby("angle").mean()
ax.plot(
policy_4251_mean_df.index,
policy_4251_mean_df["return"],
".-",
linewidth=5, color='k')
policy_425_mean_df = policy_425_df.groupby("angle").mean()
ax.plot(
policy_425_mean_df.index,
policy_425_mean_df["return"],
".-",
linewidth=5, color='b')
policy_42_mean_df = policy_42_df.groupby("angle").mean()
ax.plot(
policy_42_mean_df.index,
policy_42_mean_df["return"],
".-",
linewidth=5, color='g')
policy_4_mean_df = policy_4_df.groupby("angle").mean()
ax.plot(
policy_4_mean_df.index,
policy_4_mean_df["return"],
".-",
linewidth=5, color='r')
legend = ax.legend([
r"GPE + GPI with $\Pi_{4}$",
r"GPE + GPI with $\Pi_{42}$",
r"GPE + GPI with $\Pi_{425}$",
r"GPE + GPI with $\Pi_{4251}$",
r"GPE + GPI with $\Pi_{42513}$",
r"$Q$-learning",
],
fontsize="15",
loc="best")
ax.set_theta_zero_location("N")
ax.set_theta_direction(-1)
lines, labels = ax.set_thetagrids(
(0, 45, 90, 135, 315),
(r"$\mathbf{w}_2 = [0,1]$", r"$\mathbf{w}_5 = [1,1]$",
r"$\qquad \mathbf{w}_1 = [1,0]$", r"$\mathbf{w}_3 = [1,-1]$",
r"$\mathbf{w}_4 = [-1,1]$"),
fontweight="bold",
fontsize=15)
Explanation: Figure 5b
To generate the result for each set of policies:
```
Train a keyboard for a set of policies, i.e. replace {POLICY} with 4, 42, 425, 4251 or 42513
python3 train_keyboard.py --num_pretrain_episodes=20000 --policy_weights_name={POLICY} --export_path=/tmp/fig5a_keyboard_{POLICY}
Evaluate the trained keyboard at regular interval between [-1, 0] to [0, -1]
python3 eval_keyboard_fig5.py --num_episodes=1000 --keyboard_paths=/tmp/fig5a_keyboard_{POLICY}/tfhub \
--output_path=/tmp/fig5_polar_{POLICY}.csv
```
Repeat the above steps for multiple runs. Below shows the results for 10 runs.
End of explanation
#@title Load Data
dqn_path = os.path.join(DATA_DIR, "fig6_dqn_{}.csv")
dqn_df = read_data(dqn_path, num_seeds=10)
dqn_df["method"] = "Q-Learning"
ok_path = os.path.join(DATA_DIR, "fig6_ok_{}.csv")
ok_df = read_data(ok_path, num_seeds=10)
ok_df["method"] = "GPE + GPI with varying w"
fig6_df = pd.concat([dqn_df, ok_df])
test_ws = [[1, 1], [1, 0], [1, -1]]
test_dfs = []
for test_w in test_ws:
test_w_str = "|".join([str(x) for x in test_w])
path = os.path.join(DATA_DIR, "fig6_true_w=" + test_w_str + "_{}.csv")
test_dfs.append(read_data(path, num_seeds=10))
#@title Plot
fig, ax = plt.subplots(figsize=(12,6))
sns.tsplot(fig6_df, time="episode", unit="seed", value="eval_0", condition="method", ci=95, ax=ax)
ax.axhline(test_dfs[0]["return"].mean(), color='g', linestyle='--', label=test_ws[0])
ax.axhline(test_dfs[1]["return"].mean(), color='g', linestyle='-.', label=test_ws[1])
ax.axhline(test_dfs[2]["return"].mean(), color='g', linestyle=':', label=test_ws[2])
ax.legend();
Explanation: Figure 6
To generate the q-learning results:
python3 ../run_dqn.py --num_episodes=20000 --report_every=100 --output_path=/tmp/fig6_dqn.csv
To generate the OK results:
python3 train_keybooard.py --num_pretrain_episodes=20000 --policy_weights_name=12 --export_path=/tmp/fig6_keyboard
python3 ../run_ok.py --num_episodes=20000 --report_every=100 --keyboard_path=/tmp/fig6_keyboard/tfhub --output_path=/tmp/fig6_ok.csv
To generate the results with fixed w:
```
Make use of a pretrained keyboard. Change test_w to evaluate other policies such as [1,0] and [1,-1].
python3 run_true_w_fig6.py --num_episodes=1000 --keyboard_path=/tmp/fig6_keyboard/tfhub --test_w=1,1
```
Repeat the above steps for multiple runs. Below shows the results for 10 runs.
End of explanation |
13,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Annotating Public Resources Using I-Python Notebook
<h2>A Linear Algebra Example</h2>
Step1: Yes, you may embed Youtubes in your I-Python Notebooks, meaning you may follow up on a presentation with some example interactive code (or static code for display purposes).
Consider the Khan Academy video above. He's looking for eigenvectors of a matrix and follows some time-worn and trusted algebraic techniques.
NumPy and SciPy come with their own linear algebra components. NumPy's matrix object will transpose, for example, and below we test the example Hermitian matrix from Wikipedia, proving it obey's the definition of Hermitian in equalling it's own conjugate transpose.
A and A.H may not look the same at first glance, but remember the zero terms (e.g. 0.j) don't matter.
Step2: Now let's return to Khan's example. He actually starts his solution in an earlier video, defining matrix A and seeking eigenvalues as a first step....
Step3: Of course the SciPy docs comes with it's own documentation on how the eigen-stuff is found.
Are the above solutions and Khan's really the same?
We may show that the 2nd and 3rd solutions obey the rule | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo("3Md5KCCQX-0")
Explanation: Annotating Public Resources Using I-Python Notebook
<h2>A Linear Algebra Example</h2>
End of explanation
import numpy as np
from scipy import linalg
# https://en.wikipedia.org/wiki/Hermitian_matrix
A = np.matrix('2, 2+1j, 4; 2-1j, 3, 1j; 4, -1j, 1')
assert (A == A.H).all() # expect True
print("A", A, sep='\n')
print("A.H", A.H, sep='\n')
Explanation: Yes, you may embed Youtubes in your I-Python Notebooks, meaning you may follow up on a presentation with some example interactive code (or static code for display purposes).
Consider the Khan Academy video above. He's looking for eigenvectors of a matrix and follows some time-worn and trusted algebraic techniques.
NumPy and SciPy come with their own linear algebra components. NumPy's matrix object will transpose, for example, and below we test the example Hermitian matrix from Wikipedia, proving it obey's the definition of Hermitian in equalling it's own conjugate transpose.
A and A.H may not look the same at first glance, but remember the zero terms (e.g. 0.j) don't matter.
End of explanation
YouTubeVideo("11dNghWC4HI")
A = np.array(
[[-1, 2, 2],
[2, 2, -1],
[2, -1, 2]]) # ordinary numpy Array
M_A = np.matrix(A) # special matrix version
la, v = linalg.eig(A) # get eigenvalues la and eigenvectors v
l1, l2, l3 = list(map(lambda c: c.real, la))
print("Eigenvalues :", l1, l2, l3)
print("Eigenvector 1:", v[:,0])
print("Eigenvector 2:", v[:,1])
print("Eigenvector 3:", v[:,2])
Explanation: Now let's return to Khan's example. He actually starts his solution in an earlier video, defining matrix A and seeking eigenvalues as a first step....
End of explanation
eigen1 = v[:,0].reshape(3, 1)
print("Scaling E1", (M_A * eigen1)/eigen1, sep="\n") # show the scale factor
eigen2 = v[:,1].reshape(3, 1)
print("Scaling E2", (M_A * eigen2)/eigen2, sep="\n") # show the scale factor
eigen3 = v[:,2].reshape(3, 1)
print("Scaling E3", (M_A * eigen3)/eigen3, sep="\n") # show the scale factor
Explanation: Of course the SciPy docs come with their own documentation on how the eigen-stuff is found.
Are the above solutions and Khan's really the same?
We may show that the 2nd and 3rd solutions obey the rule:
{a * [1/2, 0, 1] + b * [1/2, 1, 0], a,b both floats}
per Khan's algebraic solution.
To show this, divide through by x in [x,y,z] to get [1.0, 3.2500000543426637, -1.2500000181142212] i.e. ratios [4.0, 13.0, -5.0]. So a=-5, b=13 in Khan's equation of the eigenspace (back to top video). Likewise [1, 1, 1] (same ratios as Eigenvector 2) is obtained with a = b = 1.
Now say you want to prove that the original matrix, applied to any of the above eigenvectors, simply scales each one by some linear amount (the definition of an eigenvector):
End of explanation |
13,667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kubeflow pipelines
Learning Objectives
Step1: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Import libraries and define constants
Step2: Setup a Kubeflow cluster on GCP
TODO 1
To deploy a Kubeflow cluster
in your GCP project, use the AI Platform pipelines
Step3: Authenticate your KFP cluster with a Kubernetes secret
If you run pipelines that requires calling any GCP services, you need to set the application default credential to a pipeline step by mounting the proper GCP service account token as a Kubernetes secret.
First point your kubectl current context to your cluster. Go back to your Kubeflow cluster dashboard or navigate to Navigation menu > AI Platform > Pipelines and look to see the cluster name, zone and namespace for the pipeline you deployed above. It's likely called cluster-1 if this is the first AI Pipelines you've created.
Step4: We'll create a service account called kfpdemo with the necessary IAM permissions for our cluster secret. We'll give this service account permissions for any GCP services it might need. This taxifare pipeline needs access to Cloud Storage, so we'll give it the storage.admin role and ml.admin. Open a Cloud Shell and copy/paste this code in the terminal there.
```bash
PROJECT=$(gcloud config get-value project)
Create service account
gcloud iam service-accounts create kfpdemo \
--display-name kfpdemo --project $PROJECT
Grant permissions to the service account by binding roles
gcloud projects add-iam-policy-binding $PROJECT \
--member=serviceAccount
Step5: Create an experiment
TODO 2
We will start by creating a Kubeflow client to pilot the Kubeflow cluster
Step6: Let's look at the experiments that are running on this cluster. Since you just launched it, you should see only a single "Default" experiment
Step7: Now let's create a 'taxifare' experiment where we could look at all the various runs of our taxifare pipeline
Step8: Let's make sure the experiment has been created correctly
Step9: Packaging your code into Kubeflow components
We have packaged our taxifare ml pipeline into three components
Step10: Now that the container images are pushed to the registry in your project, we need to create yaml files describing to Kubeflow how to use these containers. It boils down essentially to
* describing what arguments Kubeflow needs to pass to the containers when it runs them
* telling Kubeflow where to fetch the corresponding Docker images
In the cells below, we have three of these "Kubeflow component description files", one for each of our components.
TODO 3
IMPORTANT
Step12: Create a Kubeflow pipeline
The code below creates a kubeflow pipeline by decorating a regular function with the
@dsl.pipeline decorator. Now the arguments of this decorated function will be
the input parameters of the Kubeflow pipeline.
Inside the function, we describe the pipeline by
* loading the yaml component files we created above into a Kubeflow op
* specifying the order in which the Kubeflow ops should be run
Step13: The pipeline function above is then used by the Kubeflow compiler to create a Kubeflow pipeline artifact that can be either uploaded to the Kubeflow cluster from the UI, or programatically, as we will do below
Step14: If you untar and uzip this pipeline artifact, you'll see that the compiler has transformed the
Python description of the pipeline into yaml description!
Now let's feed Kubeflow with our pipeline and run it using our client | Python Code:
!pip3 install --user kfp --upgrade
Explanation: Kubeflow pipelines
Learning Objectives:
1. Learn how to deploy a Kubeflow cluster on GCP
1. Learn how to create a experiment in Kubeflow
1. Learn how to package you code into a Kubeflow pipeline
1. Learn how to run a Kubeflow pipeline in a repeatable and traceable way
Introduction
In this notebook, we will first setup a Kubeflow cluster on GCP.
Then, we will create a Kubeflow experiment and a Kubflow pipeline from our taxifare machine learning code. At last, we will run the pipeline on the Kubeflow cluster, providing us with a reproducible and traceable way to execute machine learning code.
End of explanation
from os import path
import kfp
import kfp.compiler as compiler
import kfp.components as comp
import kfp.dsl as dsl
import kfp.gcp as gcp
import kfp.notebook
Explanation: Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages.
Import libraries and define constants
End of explanation
HOST = "" # TODO: fill in the HOST information for the cluster
Explanation: Setup a Kubeflow cluster on GCP
TODO 1
To deploy a Kubeflow cluster
in your GCP project, use the AI Platform pipelines:
Go to AI Platform Pipelines in the GCP Console.
Create a new instance
Hit "Configure"
Check the box "Allow access to the following Cloud APIs"
Hit "Create Cluster"
Hit "Deploy"
When the cluster is ready, go back to the AI Platform pipelines page and click on "SETTINGS" entry for your cluster.
This will bring up a pop up with code snippets on how to access the cluster
programmatically.
Copy the "host" entry and set the "HOST" variable below with that.
End of explanation
# Change below if necessary
PROJECT = !gcloud config get-value project # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT # change if needed
CLUSTER = "cluster-1" # change if needed
ZONE = "us-central1-a" # change if needed
NAMESPACE = "default" # change if needed
%env PROJECT=$PROJECT
%env CLUSTER=$CLUSTER
%env ZONE=$ZONE
%env NAMESPACE=$NAMESPACE
# Configure kubectl to connect with the cluster
!gcloud container clusters get-credentials "$CLUSTER" --zone "$ZONE" --project "$PROJECT"
Explanation: Authenticate your KFP cluster with a Kubernetes secret
If you run pipelines that requires calling any GCP services, you need to set the application default credential to a pipeline step by mounting the proper GCP service account token as a Kubernetes secret.
First point your kubectl current context to your cluster. Go back to your Kubeflow cluster dashboard or navigate to Navigation menu > AI Platform > Pipelines and look to see the cluster name, zone and namespace for the pipeline you deployed above. It's likely called cluster-1 if this is the first AI Pipelines you've created.
End of explanation
%%bash
gcloud iam service-accounts keys create application_default_credentials.json \
--iam-account kfpdemo@$PROJECT.iam.gserviceaccount.com
# Check that the key was downloaded correctly.
!ls application_default_credentials.json
# Create a k8s secret. If already exists, override.
!kubectl create secret generic user-gcp-sa \
--from-file=user-gcp-sa.json=application_default_credentials.json \
-n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
Explanation: We'll create a service account called kfpdemo with the necessary IAM permissions for our cluster secret. We'll give this service account permissions for any GCP services it might need. This taxifare pipeline needs access to Cloud Storage, so we'll give it the storage.admin role and ml.admin. Open a Cloud Shell and copy/paste this code in the terminal there.
```bash
PROJECT=$(gcloud config get-value project)
Create service account
gcloud iam service-accounts create kfpdemo \
--display-name kfpdemo --project $PROJECT
Grant permissions to the service account by binding roles
gcloud projects add-iam-policy-binding $PROJECT \
--member=serviceAccount:kfpdemo@$PROJECT.iam.gserviceaccount.com \
--role=roles/storage.admin
gcloud projects add-iam-policy-binding $PROJECT \
--member=serviceAccount:kfpdemo@$PROJECT.iam.gserviceaccount.com \
--role=roles/ml.admin
```
Then, we'll create and download a key for this service account and store the service account credential as a Kubernetes secret called user-gcp-sa in the cluster.
End of explanation
client = kfp.Client(host=HOST)
Explanation: Create an experiment
TODO 2
We will start by creating a Kubeflow client to pilot the Kubeflow cluster:
End of explanation
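As an optional sanity check (a small sketch added here; list_pipelines is a standard call on the kfp client), we can confirm the client can actually reach the cluster before creating experiments:
# Optional sanity check (illustrative): confirm the client can talk to the cluster
client.list_pipelines()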
client.list_experiments()
Explanation: Let's look at the experiments that are running on this cluster. Since you just launched it, you should see only a single "Default" experiment:
End of explanation
exp = client.create_experiment(name="taxifare")
Explanation: Now let's create a 'taxifare' experiment where we could look at all the various runs of our taxifare pipeline:
End of explanation
client.list_experiments()
Explanation: Let's make sure the experiment has been created correctly:
End of explanation
# Builds the taxifare trainer container in case you skipped the optional part
# of lab 1
!taxifare/scripts/build.sh
# Pushes the taxifare trainer container to gcr/io
!taxifare/scripts/push.sh
# Builds the KF component containers and push them to gcr/io
!cd pipelines && make components
Explanation: Packaging your code into Kubeflow components
We have packaged our taxifare ml pipeline into three components:
* ./components/bq2gcs that creates the training and evaluation data from BigQuery and exports it to GCS
* ./components/trainjob that launches the training container on AI-platform and exports the model
* ./components/deploymodel that deploys the trained model to AI-platform as a REST API
Each of these components has been wrapped into a Docker container, in the same way we did with the taxifare training code in the previous lab.
If you inspect the code in these folders, you'll notice that the main.py or main.sh files contain the code we previously executed in the notebooks (loading the data to GCS from BQ, or launching a training job to AI-platform, etc.). The last line in the Dockerfile tells you that these files are executed when the container is run.
So we just packaged our ml code into light container images for reproducibility.
We have made it simple for you to build the container images and push them to the Google Cloud image registry gcr.io in your project:
End of explanation
%%writefile bq2gcs.yaml
name: bq2gcs
description: |
This component creates the training and
validation datasets as BiqQuery tables and export
them into a Google Cloud Storage bucket at
gs://qwiklabs-gcp-00-568a75dfa3e1/taxifare/data.
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/qwiklabs-gcp-00-568a75dfa3e1/taxifare-bq2gcs
args: ["--bucket", {inputValue: Input Bucket}]
%%writefile trainjob.yaml
name: trainjob
description: |
This component trains a model to predict that taxi fare in NY.
It takes as argument a GCS bucket and expects its training and
eval data to be at gs://<BUCKET>/taxifare/data/ and will export
the trained model at gs://<BUCKET>/taxifare/model/.
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/qwiklabs-gcp-00-568a75dfa3e1/taxifare-trainjob
args: [{inputValue: Input Bucket}]
%%writefile deploymodel.yaml
name: deploymodel
description: |
This component deploys a trained taxifare model on GCP as taxifare:dnn.
It takes as argument a GCS bucket and expects the model to deploy
to be found at gs://<BUCKET>/taxifare/model/export/savedmodel/
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/qwiklabs-gcp-00-568a75dfa3e1/taxifare-deploymodel
args: [{inputValue: Input Bucket}]
Explanation: Now that the container images are pushed to the registry in your project, we need to create yaml files describing to Kubeflow how to use these containers. It boils down essentially to
* describing what arguments Kubeflow needs to pass to the containers when it runs them
* telling Kubeflow where to fetch the corresponding Docker images
In the cells below, we have three of these "Kubeflow component description files", one for each of our components.
TODO 3
IMPORTANT: Modify the image URI in the cell
below to reflect that you pushed the images into the gcr.io associated with your project.
End of explanation
# TODO 3
PIPELINE_TAR = "taxifare.tar.gz"
BQ2GCS_YAML = "./bq2gcs.yaml"
TRAINJOB_YAML = "./trainjob.yaml"
DEPLOYMODEL_YAML = "./deploymodel.yaml"
@dsl.pipeline(
name="Taxifare",
description="Train a ml model to predict the taxi fare in NY",
)
def pipeline(gcs_bucket_name="<bucket where data and model will be exported>"):
bq2gcs_op = comp.load_component_from_file(BQ2GCS_YAML)
bq2gcs = bq2gcs_op(
input_bucket=gcs_bucket_name,
)
trainjob_op = comp.load_component_from_file(TRAINJOB_YAML)
trainjob = trainjob_op(
input_bucket=gcs_bucket_name,
)
deploymodel_op = comp.load_component_from_file(DEPLOYMODEL_YAML)
deploymodel = deploymodel_op(
input_bucket=gcs_bucket_name,
)
trainjob.after(bq2gcs)
deploymodel.after(trainjob)
Explanation: Create a Kubeflow pipeline
The code below creates a kubeflow pipeline by decorating a regular function with the
@dsl.pipeline decorator. Now the arguments of this decorated function will be
the input parameters of the Kubeflow pipeline.
Inside the function, we describe the pipeline by
* loading the yaml component files we created above into a Kubeflow op
* specifying the order in which the Kubeflow ops should be run
End of explanation
compiler.Compiler().compile(pipeline, PIPELINE_TAR)
ls $PIPELINE_TAR
Explanation: The pipeline function above is then used by the Kubeflow compiler to create a Kubeflow pipeline artifact that can be either uploaded to the Kubeflow cluster from the UI, or programatically, as we will do below:
End of explanation
# TODO 4
run = client.run_pipeline(
experiment_id=exp.id,
job_name="taxifare",
pipeline_package_path="taxifare.tar.gz",
params={
"gcs_bucket_name": BUCKET,
},
)
Explanation: If you untar and unzip this pipeline artifact, you'll see that the compiler has transformed the
Python description of the pipeline into yaml description!
Now let's feed Kubeflow with our pipeline and run it using our client:
End of explanation |
13,668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
Step5: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
Explanation: A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.
In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
End of explanation
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
End of explanation
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
image_size = mnist.train.images.shape[1]
inputs_ = tf.placeholder(tf.float32, shape=(None, image_size))
targets_ = tf.placeholder(tf.float32, shape=(None, image_size))
# Output of hidden layer
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits
logits = tf.layers.dense(encoded, image_size, activation=None)
# Sigmoid output from
decoded = tf.nn.sigmoid(logits, name='output')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
Explanation: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Exercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
End of explanation
# Create the session
sess = tf.Session()
Explanation: Training
End of explanation
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Checking out the results
Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
End of explanation |
13,669 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ApJdataFrames Erickson2011
Title
Step1: Table 2- Optical Properties of Candidate Young Stellar Objects
Step2: Table 3 - Association Members with Optical Spectra
Step3: The code to merge the tables isn't working
python
on_F_ap = ["F", "Ap"]
on_name = "Alt_Names"
erickson2011 = pd.merge(tbl2, tbl3, on=on_F_ap, how="right")
erickson2011 = pd.merge(tbl2, erickson2011, on="Alt_Names", how="right")
message = "Table 2
Step4: Another thing to do would be to filter out the "Possible dwarfs", etc...
Save the data tables locally. | Python Code:
%pylab inline
import seaborn as sns
sns.set_context("notebook", font_scale=1.5)
#import warnings
#warnings.filterwarnings("ignore")
import pandas as pd
Explanation: ApJdataFrames Erickson2011
Title: THE INITIAL MASS FUNCTION AND DISK FREQUENCY OF THE Rho OPHIUCHI CLOUD: AN EXTINCTION-LIMITED SAMPLE
Authors: Erickson et al.
Data is from this paper:
http://iopscience.iop.org/1538-3881/142/4/140/
End of explanation
addr = "http://iopscience.iop.org/1538-3881/142/4/140/suppdata/aj403656t2_ascii.txt"
names = ['F', 'Ap', 'Alt_Names', 'X-Ray ID', 'RA', 'DEC', 'Li', 'EW_Ha', 'I', 'R-I',
'SpT_Lit', 'Spectral_Type', 'Adopt', 'Notes', 'blank']
tbl2 = pd.read_csv(addr, sep='\t', skiprows=[0,1,2,3,4], skipfooter=7, engine='python', na_values=" ... ",
index_col=False, names = names, usecols=range(len(names)-1))
tbl2.head()
Explanation: Table 2- Optical Properties of Candidate Young Stellar Objects
End of explanation
addr = "http://iopscience.iop.org/1538-3881/142/4/140/suppdata/aj403656t3_ascii.txt"
names = ['F', 'Ap', 'Alt_Names', 'WMR', 'Spectral_Type', 'A_v', 'M_I',
'log_T_eff', 'log_L_bol', 'Mass', 'log_age', 'Criteria', 'Notes', 'blank']
tbl3 = pd.read_csv(addr, sep='\t', skiprows=[0,1,2,3,4], skipfooter=9, engine='python', na_values=" ... ",
index_col=False, names = names, usecols=range(len(names)-1))
tbl3.head()
! mkdir ../data/Erickson2011
Explanation: Table 3 - Association Members with Optical Spectra
End of explanation
plt.plot(10**tbl3.log_T_eff, 10**tbl3.log_L_bol, '.')
plt.yscale("log")
plt.xlim(5000, 2000)
plt.ylim(1.0E-4, 1.0E1)
plt.xlabel(r"$T_{eff}$")
plt.ylabel(r"$L/L_{sun}$")
plt.title("Erickson et al. 2011 Table 3 HR Diagram")
Explanation: The code to merge the tables isn't working
python
on_F_ap = ["F", "Ap"]
on_name = "Alt_Names"
erickson2011 = pd.merge(tbl2, tbl3, on=on_F_ap, how="right")
erickson2011 = pd.merge(tbl2, erickson2011, on="Alt_Names", how="right")
message = "Table 2: {} entries \nTable 3: {} entries \nMerge: {} entries"
print message.format(len(tbl2), len(tbl3), len(erickson2011))
End of explanation
tbl2.to_csv("../data/Erickson2011/tbl2.csv", sep="\t", index=False)
tbl3.to_csv("../data/Erickson2011/tbl3.csv", sep="\t", index=False)
Explanation: Another thing to do would be to filter out the "Possible dwarfs", etc...
Save the data tables locally.
End of explanation |
13,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Abstractions for representing environments
Environmental models can be represented either through a GridMeshModel or a TriMeshModel, using a grid and a triangular based representation of the environment, respectively. Here we will document how to use the GridMeshModel representation. Several ways exist to define the environment, we will start with the simplest one, which is based on having an array of elevations where each entry to the array represents the elevation at a coordinate corresponding to the row and the column.
Step1: This dataset is wrapped around numpy so we can access can easily access entries
Step2: Or access several entries, and even interpolate
Step3: 1.1 GridMesh
If we want to anchor the mesh to a geographical location, we will use the class GridMesh, and supply the coordinate of the upper left point of the terrain. Its important that this representation is agnostic of what the dataset contains; so far this just represents an abstract dataset with resolution, rows and columns (doesnt have to be a terrain)
Step4: We can read out some basic properties of the mesh, and plot it
Step5: The upper left corner can be accessed, and returns our original anchoring point. The lower right corner is also accessible for convenience
Step6: We can also access the original terrain dataset through the dataset keyword
Step7: GridMesh also stores the local coordinate system of the grid, which can then be converted back and forth to other representations. The two coordinate systems are called ROW_COL and COL_ROW, which allows to define a point given the row and the column.
Step8: 1.2 GridMeshModel
This is when we transform a terrain into a representation that the path planner can use. This adds a large set of methods to terrain useful for its analysis. The easiest way to generate the model is to just derive it from the GridMesh we already have. To do so, we can load a smaller part of the dataset we have so far, say for example in a given envelope.
All the methods from GridMesh are still available in GridMeshModel
Step9: If no envelope is passed as an argument, the entire dataset is processed. Be careful with this, as it might take up significant memory if a very large dataset is being used.
Step10: Before we used dataset to access the underlying heightmap. For the model we will use the data property, which carries the raw representation of the data.
Step11: 1.3 Model info
The model contains information on the slopes, and an obstacle map, which is currently set to terrain steeper than 35 degrees
Step12: The code below demonstrates a more advanced usage of GeoMesh
2. Importing DEMs
2.1 GeoTiff
SEXTANT comes with handy helper objects to help import GeoTiffs, which are the preferred datatype for Data Elevation Maps (DEM). This is done with a library called GDAL, the Geospatial Data Abstraction Library, and hence the class is called GDALMesh. The GDALMesh class inherits from GridMesh, meaning that we can play around with it in the exact same way we did with the earlier example.
Step13: We will use a 0.5 resolution DEM of NASA Ames Roverscape site
Step14: Whats different in this representation from when we had a GridMesh with a numpy array, is that if we access the dataset we wont get an array. This is because the DEM is still encoded, and wont be decoded until loadSubSection has been called; this is done to limit memory used when larger DEMs (100s of MB or GB size) are being used.
Since we see its a small dataset, let's just load it fully
Step15: We can display it, including the obstacles in red
Step16: We notice that some of the areas are white; these represent masked locations, that are points with a no data value such as -9999.
Now that we are dealing with real data, let's also do a hillshade visualization in matplotlib
Step17: 2.2 From text file
Legacy terrain for SEXTANT was stored in text files, and there is a simpler helper function that can load it as a GridMeshModel(so dont need to loadSubSection)
Step18: LOLA(Lunar Orbiter Laser Altimeter) instrument recently generated a 2m DEM of the Lunar terrain. The data has been post processed into the format of the legacy code, and is displayed below as an example.
Step19: 3. Other Representations | Python Code:
from pextant.mesh.abstractmesh import NpDataset
import numpy as np
xx,yy= np.mgrid[0:5,0:5]
basic_terrain = NpDataset(0.1*(xx**2+yy**2), resolution=1)
basic_terrain
Explanation: 1. Abstractions for representing environments
Environmental models can be represented either through a GridMeshModel or a TriMeshModel, using a grid and a triangular based representation of the environment, respectively. Here we will document how to use the GridMeshModel representation. Several ways exist to define the environment, we will start with the simplest one, which is based on having an array of elevations where each entry to the array represents the elevation at a coordinate corresponding to the row and the column.
End of explanation
basic_terrain[1,1]
Explanation: This dataset is wrapped around numpy so we can access can easily access entries:
End of explanation
basic_terrain.get_datapoint(np.array(([1,1],[1.5,1.5])))
Explanation: Or access several entries, and even interpolate
End of explanation
from pextant.EnvironmentalModel import GridMesh
from pextant.lib.geoshapely import GeoPoint, LAT_LONG
upper_left_corner = GeoPoint(LAT_LONG, 0, 0) # this will be the north-west corner of the dataset
basic_mesh = GridMesh(upper_left_corner, basic_terrain)
Explanation: 1.1 GridMesh
If we want to anchor the mesh to a geographical location, we will use the class GridMesh, and supply the coordinate of the upper left point of the terrain. It's important that this representation is agnostic of what the dataset contains; so far this just represents an abstract dataset with resolution, rows and columns (it doesn't have to be a terrain)
End of explanation
print basic_mesh
Explanation: We can read out some basic properties of the mesh, and plot it
End of explanation
upper_left_corner, lower_right_corner = basic_mesh.nw_geo_point, basic_mesh.se_geo_point
Explanation: The upper left corner can be accessed, and returns our original anchoring point. The lower right corner is also accessible for convenience:
End of explanation
import matplotlib.pyplot as plt
plt.matshow(basic_mesh.dataset, cmap='gray_r')
plt.show()
Explanation: We can also access the original terrain dataset through the dataset keyword
End of explanation
point_in_mesh = GeoPoint(basic_mesh.ROW_COL, 1, 1)
Explanation: GridMesh also stores the local coordinate system of the grid, which can then be converted back and forth to other representations. The two coordinate systems are called ROW_COL and COL_ROW, which allow a point to be defined by its row and column.
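As a quick illustration (a sketch based on the .to() conversion used later in this notebook; the COL_ROW usage is assumed to mirror ROW_COL), the same point can be read back in either system:
print(point_in_mesh.to(basic_mesh.ROW_COL))   # expected: the (row, col) pair (1, 1)
print(point_in_mesh.to(basic_mesh.COL_ROW))   # expected: the same point expressed as (col, row)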
End of explanation
from pextant.lib.geoshapely import GeoEnvelope
model_envelope = GeoEnvelope(point_in_mesh, lower_right_corner)
terrain_model = basic_mesh.loadSubSection(model_envelope)
Explanation: 1.2 GridMeshModel
This is when we transform a terrain into a representation that the path planner can use. This adds a large set of methods to terrain useful for its analysis. The easiest way to generate the model is to just derive it from the GridMesh we already have. To do so, we can load a smaller part of the dataset we have so far, say for example in a given envelope.
All the methods from GridMesh are still available in GridMeshModel
End of explanation
import matplotlib.patches as patches
Explanation: If no envelope is passed as an argument, the entire dataset is processed. Be careful with this, as it might take up significant memory if a very large dataset is being used.
End of explanation
plt.matshow(basic_mesh.data, cmap='gray_r')
plt.gca().add_patch(patches.Rectangle(point_in_mesh.to(basic_mesh.ROW_COL)-np.array([0.5,0.5]),basic_mesh.y_size,basic_mesh.x_size, fill=False, hatch='/'))
plt.legend(["terrain_model area"])
plt.show()
Explanation: Before we used dataset to access the underlying heightmap. For the model we will use the data property, which carries the raw representation of the data.
End of explanation
terrain_model.slopes
plt.matshow(terrain_model.dataset, cmap='gray_r')
plt.imshow(terrain_model.obstacle_mask(), alpha=0.5, cmap='bwr_r')
plt.text(1.2,2.3,"Steep terrain \n in red", size=15, color="white")
plt.show()
Explanation: 1.3 Model info
The model contains information on the slopes, and an obstacle map, which is currently set to terrain steeper than 35 degrees
End of explanation
from pextant.EnvironmentalModel import GDALMesh
Explanation: The code below demonstrates a more advanced usage of GeoMesh
2. Importing DEMs
2.1 GeoTiff
SEXTANT comes with handy helper objects to help import GeoTiffs, which are the preferred datatype for Data Elevation Maps (DEM). This is done with a library called GDAL, the Geospatial Data Abstraction Library, and hence the class is called GDALMesh. The GDALMesh class inherits from GridMesh, meaning that we can play around with it in the exact same way we did with the earlier example.
End of explanation
ames_gridmesh = GDALMesh('Ames.tif')
print ames_gridmesh
Explanation: We will use a 0.5 resolution DEM of NASA Ames Roverscape site
End of explanation
ames_model = ames_gridmesh.loadSubSection()
Explanation: What's different in this representation from when we had a GridMesh with a numpy array is that if we access the dataset we won't get an array. This is because the DEM is still encoded, and won't be decoded until loadSubSection has been called; this is done to limit the memory used when larger DEMs (100s of MB or GB in size) are being used.
Since we see it's a small dataset, let's just load it fully:
End of explanation
plt.matshow(ames_model.data, cmap='gray_r')
obstacle_transparent = np.ma.masked_array(np.ones_like(ames_model.data), ames_model.slopes<15)
plt.imshow(obstacle_transparent, alpha=0.5, cmap='bwr_r')
plt.show()
Explanation: We can display it, including the obstacles in red
End of explanation
from pextant.viz.utils import hillshade
hillshade(ames_model, 5) #5 is used to exaggerate the effect of the hillshade
plt.show()
Explanation: We notice that some of the areas are white; these represent masked locations, that are points with a no data value such as -9999.
Now that we are dealing with real data, let's also do a hillshade visualization in matplotlib:
End of explanation
from pextant.EnvironmentalModel import load_legacy
Explanation: 2.2 From text file
Legacy terrain for SEXTANT was stored in text files, and there is a simpler helper function that can load it directly as a GridMeshModel (so there is no need to call loadSubSection)
End of explanation
apollo14_model = load_legacy('Apollo14.txt')
print(apollo14_model)
hillshade(apollo14_model, 1)
plt.show()
Explanation: LOLA(Lunar Orbiter Laser Altimeter) instrument recently generated a 2m DEM of the Lunar terrain. The data has been post processed into the format of the legacy code, and is displayed below as an example.
End of explanation
from pextant.mesh.triangularmesh import grid_to_tri
apollo14_tri = grid_to_tri(apollo14_model, accuracy=3)
tri = apollo14_tri.data
plt.gca().invert_yaxis()
plt.tripcolor(tri.vertices[:,0], tri.vertices[:,1], tri.faces, facecolors=tri.triangles_center[:,2], cmap='gray_r', alpha=1.)
plt.axis('equal')
plt.show()
Explanation: 3. Other Representations: TriMesh
SEXTANT can also triangulate the terrain and represent it as a TriMesh. It triangulates the grid, and then uses an algorithm developed by Garland to decimate it and generate triangles in areas that need a larger density of triangles to accurately describe the terrain.
End of explanation |
13,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
9.试作下图所示电力系统的阻抗图,并将参数注载图上(不计线路和变压器的电阻和导纳)。
1.计算时取6KV电压为基本级。
2.计算时取10KV电压为基本级。
3.计算时取110KV电压为基本级。
<img src="./第9、10题图.png" />
1.解:
先计算各参数的实际值。
Step1: 计算各变压器变比:
Step2: 将实际值归算成6kv为基准的归算值:
Step3: 2.10kv为基准,因为之前算出了实际值,因此只需要重算k
Step4: 3.以110kv为基值
Step5: 10. 对上题所示电力系统,试作以标幺值表示的阻抗图。并将参数注在图上。取基准功率$S_{B}=100MVA$ ;$110KV$级的基准电压$U_{B}=110kV$ 。
有两种做法,一种将各实际值归算到110kv等级然后除以110kv的基值。另一种,将110kv基值归算到各电压等级,得到各电压等级的基值。
因为110的值已经算出很容易得出结论,因此在此我先采取第一种做法.
Step6: 第二种做法 | Python Code:
x1=0.4
L1=100
X_L1=x1*L1
x2=0.4
L2=80
X_L2=x2*L2
#T1 SF7-16000/110
Sn_T1=16 #MVA
Uk1=10.5 #%
Un_T1=121#KV
X_T1=Uk1*Un_T1**2/(100*Sn_T1)
#T2 S
Sn_T2=31.5 #MVA
Uk2=10.5 #%
Un_T2=121#KV
X_T2=Uk2*Un_T2**2/(100*Sn_T2)
X_T1
Explanation: 9. Draw the impedance diagram of the power system shown in the figure below, and mark the parameters on the diagram (neglect the resistance and shunt admittance of the lines and transformers).
1. Take the 6 kV voltage level as the base level for the calculation.
2. Take the 10 kV voltage level as the base level for the calculation.
3. Take the 110 kV voltage level as the base level for the calculation.
<img src="./第9、10题图.png" />
1. Solution:
First, calculate the actual values of each parameter.
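For reference (an added note; these are the standard short-circuit-test relations that the code above implements), the line and transformer reactances are
$$ X_{L} = x_{1} L, \qquad X_{T} = \frac{U_{k}\%}{100}\cdot\frac{U_{N}^{2}}{S_{N}}, \qquad \text{e.g. } X_{T1} = \frac{10.5}{100}\cdot\frac{121^{2}}{16}\ \Omega \approx 96.1\ \Omega $$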
End of explanation
k1=6.3/121
k2=110/11
Explanation: Calculate the turns ratio of each transformer:
End of explanation
imp_reduction=lambda z,k:z*(k**2)
X_L2x=imp_reduction(X_L2,k1*k2)
X_L1x=imp_reduction(X_L1,k1)
X_T1x=imp_reduction(X_T1,k1)
X_T2x=imp_reduction(X_T2,k1*k2)
print("X_L2=%.3f"%X_L2x)
print("X_L1=%.3f"%X_L1x)
print("X_T1=%.3f"%X_T1x)
print("X_T2=%.3f"%X_T2x)
Explanation: Refer the actual values to the 6 kV base level:
End of explanation
#10kv
k1=121/6.3
k2=11/110
X_L2x=imp_reduction(X_L2,1)
X_L1x=imp_reduction(X_L1,k2)
X_T1x=imp_reduction(X_T1,k2)
X_T2x=imp_reduction(X_T2,k2)
print("X_L2=%.3f"%X_L2x)
print("X_L1=%.3f"%X_L1x)
print("X_T1=%.3f"%X_T1x)
print("X_T2=%.3f"%X_T2x)
Explanation: 2. With 10 kV as the base level: since the actual values were already computed, only the ratios k need to be recalculated:
End of explanation
#110kv
k1=121/6.3
k2=110/11
X_L2x=imp_reduction(X_L2,k2)
X_L1x=imp_reduction(X_L1,1)
X_T1x=imp_reduction(X_T1,1)
X_T2x=imp_reduction(X_T2,1)
print("X_L2=%.3f"%X_L2x)
print("X_L1=%.3f"%X_L1x)
print("X_T1=%.3f"%X_T1x)
print("X_T2=%.3f"%X_T2x)
Explanation: 3. With 110 kV as the base level:
End of explanation
# per-unit calculation, method 1 (refer everything to 110 kV, then divide by the 110 kV base)
UB=110#kv
SB=100#MVA
XB=UB**2/SB
puv=lambda x,xb:x/xb
X_L2b=puv(X_L2x,XB)
X_L1b=puv(X_L1x,XB)
X_T1b=puv(X_T1x,XB)
X_T2b=puv(X_T2x,XB)
print("X_L2=%.3f"%X_L2b)
print("X_L1=%.3f"%X_L1b)
print("X_T1=%.3f"%X_T1b)
print("X_T2=%.3f"%X_T2b)
Explanation: 10. For the power system of the previous problem, draw the impedance diagram expressed in per-unit values and mark the parameters on the diagram. Take the base power $S_{B}=100MVA$ and, for the $110KV$ level, the base voltage $U_{B}=110kV$.
There are two ways to do this: one refers all actual values to the 110 kV level and then divides by the 110 kV base; the other refers the 110 kV base to each voltage level, giving a base for every level.
Since the 110 kV values have already been calculated, the conclusion follows easily, so I will take the first approach here.
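As a reminder (added note), the per-unit reactance used in the code is
$$ X_{*} = \frac{X_{\Omega}}{X_{B}}, \qquad X_{B} = \frac{U_{B}^{2}}{S_{B}} $$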
End of explanation
# per-unit calculation, method 2 (refer the 110 kV base to each voltage level)
k1=6.3/121
k2=11/110
U_6B=UB*k1
U_11B=UB*k2
X_6B=U_6B**2/SB
X_11B=U_11B**2/SB
X_L2b=puv(X_L2,X_11B)
print("X_L2=%.3f"%X_L2b)
print("X_L1=%.3f"%X_L1b)
print("X_T1=%.3f"%X_T1b)
print("X_T2=%.3f"%X_T2b)
Explanation: The second approach
End of explanation |
13,672 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pandas 3
자료 안내
Step1: 분석을 위한 테스트 데이터를 만들어 보자.
Step2: 위의 함수를 이용하여 테스트 데이터를 만들고, 이를 다시 데이터프레임으로 만들어보자.
Step3: 위의 데이터프레임을 Excel 파일로 저장하자. 이 때 인덱스 값은 원래의 테스트 데이터셋의 일부가 아니기 때문에 저장하지 않는다.
Step4: 1. Excel로부터 데이터 가져오기
read_excel 함수를 이용하여 Excel 파일을 읽을 수 있다. 이 함수는 특정한 이름 또는 위치의 탭(tab)을 읽을 수 있다.
Step5: 2. 데이터 준비하기
분석을 위해서 데이터에 다음과 같은 전처리를 해보자.
1) state 열의 값이 모두 대문자인지를 확인
2) status 값이 1인 레코드만 선택
3) state열에서 NJ를 NY으로 변경
4) 이상치 제거
1) state 열의 값이 모두 대문자인지를 확인
Step6: State 열의 값을 모두 대문자로 변경하기 위해서 upper() 함수와 데이터프레임의 apply을 이용한다. apply 메소드를 통해서 각 로우(row)나 칼럼(column)의 1차원 배열에 함수를 적용할 수 있다. 그리고 lambda함수는 간단하게 State 열의 각 값을 대문자로 변경하도록 해준다.
먼저 lambda 함수에 대해서 간단히 알아보자.
[익명 함수 또는 lambda 함수]
파이썬은 익명 함수 또는 lambda 함수라고 하는, 값을 반환하는 단순한 한 문장으로 이루어진 함수를 지원한다. 람다 함수는 데이터 분석에서 특히 편리한데, 이는 람다 함수를 사용하면 코드를 적게 쓰며, 코드도 더 간결해지기 때문이다.
Step7: 이제 State 열의 값을 대문자로 변경해 보자.
Step8: 2) status 값이 1인 레코드만 선택
Step9: 3) state열에서 NJ를 NY으로 변경
[df.State == 'NJ'] - State 열의 값이 NJ 인 모든 레코드를 찾기
df.State[df.State == 'NJ'] = 'NY' - State 열의 값이 NJ인 모든 레코드의 NJ를 NY으로 변경.
Step10: 이제 정리된 데이터의 State의 열의 유일한 값들을 확인해 보자.
Step11: 4) 이상치 제거
본 절에서는 데이터프레임을 State와 StatusDate의 연도를 기준으로 그룹을 분리한 후, 각 그룹에 있는 CustomeCount에 대해서 사분위수를 이용하여 이상치 제거를 하려고 한다.
먼저 GroupBy과 apply, transform 메소드를 간단하게 살펴보자.
[GroupBy]
pandas는 데이터셋을 자연스럽게 나누고 요약할 수 있는 groupby라는 유연한 방법을 제공한다.
그룹연산(분리-적용-결합)의 첫 번째 단계는 데이터프레임에 들어있는 데이터를 하나 이상의 색인을 기준으로 분리한다. 예를 들어, 데이터프레임은 로우(axis = 0)로 분리하거나 칼럼(axis = 1)로 분리할 수 있다. 분리하고 나면 함수를 각 그룹에 적용시켜 새로운 값을 얻어낸다. 그리고 마지막으로 함수를 적용한 결과를 하나의 객체로 결합한다.
[그림 9-1]은 그룹 연산의 예시이다.
Step12: 실제로 데이터프레임을 만들어 그룹 연산을 시행해 보자.
Step13: [apply 과 transform]
위에서 생성한 데이터프레임 dftest에 apply와 transform 메소드로 그룹 연산을 수행해보자.
Step14: apply의 결과는 병합된 것을 볼 수 있는 반면 transform 메소드는 데이터프레임의 크기를 유지하는 것을 볼 수 있다.
이제 State와 StatusDate를 기준으로 CustomerCount 값을 합해보자. 이때, 데이터프레임 df에는 StatusDate가 index이므로 StatusDate를 기준으로 그룹화하기 위해서 이를 일반열로 보내야 한다. 이를 위해 reset_index()를 이용한다.
Step15: Status의 값은 필요가 없으므로, 아래와 같이 삭제한다.
Step16: 데이터프레임 Daily의 인덱스를 확인해 보자.
Step17: 다음과 같이 각각의 인덱스도 확인할 수 있다.
Step18: 이제 데이터프레임을 State와 StatusDate의 연도를 기준으로 그룹을 분리해 보자.
Step19: StateYear의 각 그룹에 있는 CustomerCount에 대해서 사분위수를 이용하여 이상치를 제거를 시행해 보고자 한다. 이를 위해 먼저 사분위수를 이용하여 이상치를 제거하는 방법에 대해서 간단하게 살펴보자.
[사분위수를 이용하여 이상치를 제거하는 방법]
(a) 사분위수
전체 관측값을 작은 순서로 배열하였을 때, 사분위수는 전체를 사등분하는 값이다. 전체의 사분의 1, 사분의 2, 사분의 3은 각각 전체의 25%, 50%, 75%이고, 이를 제 1사분위수(Q1), 제 2사분위수(Q2) = 중앙값, 제 3사분위수(Q3)라고 한다.
(c) 사분위수 범위
제 3 사분위수와 제 1사분위수 사이의 거리를 퍼진 정도의 측도로 사용할 수 있는데, 이를 사분위수 범위(IQR)이라고 한다. 즉, IQR = 제 3사분위수 - 제 1사분위수 = Q3 - Q1
(d) 사분위수를 이용하여 이상치를 제거하는 방법
관측값이 Q1 - 1.5 IQR 보다 작거나 Q3 + 1.5 IQR 보다 크면, 이 값을 이상치라고 한다.
예제로 살펴보자.
Step20: dftest1의 A열의 자료 중 100은 Upper보다 크므로 이상치라고 할 수 있다.
이제 StateYear의 각 그룹에 있는 CustomerCount에 대해서 사분위수를 이용하여 이상치를 제거 해보자. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import numpy.random as np
# 쥬피터 노트북에서 그래프를 직접 나타내기 위해 사용하는 코드
# 파이썬 전문 에디터에서는 사용하지 않음
%matplotlib inline
Explanation: pandas 3
Source notes:
This lesson covers the material in 03-Lesson of "Lessons for new pandas users" from the pandas library tutorials.
The explanations of anonymous (lambda) functions, GroupBy, apply, and transform draw on the Python tutorial,
the pandas tutorial, and parts of the Hanbit Media book <Python for Data Analysis>.
The material on quartiles comes from the <Statistics> textbook published by Jayu Academy.
End of explanation
# seed 값을 111
np.seed(111)
# 테스트 데이터를 생성하는 함수 정의
def CreateDataSet(Number=1):
Output = []
for i in range(Number):
# 2009년 1월 1일부터 2012년 12월 31일 사이에 있는 월요일에 해당하는 날짜를 생성
rng = pd.date_range(start='1/1/2009', end='12/31/2012', freq='W-MON')
# rng의 길이와 같은 크기의 랜덤한 수에 대한 리스트 만들기
# 이때, 랜덤수는 25와 1000 사이에 있는 정수
data = np.randint(low=25,high=1000,size=len(rng))
# Status에 대한 리스트 만들기
status = [1,2,3]
# rng의 길이와 같은 크기의 랜덤한 statuses 리스트 만들기
random_status = [status[np.randint(low=0,high=len(status))] for i in range(len(rng))]
# State에 대한 리스트 만들기
states = ['GA','FL','fl','NY','NJ','TX']
# rng의 길이와 같은 크기의 랜덤한 states 리스트 만들기
random_states = [states[np.randint(low=0,high=len(states))] for i in range(len(rng))]
Output.extend(zip(random_states, random_status, data, rng))
return Output
Explanation: Let's create some test data for the analysis.
End of explanation
dataset = CreateDataSet(4)
df = pd.DataFrame(data=dataset, columns=['State','Status','CustomerCount','StatusDate'])
df.info()
df.head()
Explanation: Using the function above, let's generate the test data and turn it into a DataFrame.
End of explanation
df.to_excel('Lesson3.xlsx', index=False)
print('Done')
Explanation: Save the DataFrame above to an Excel file. The index is not saved because it is not part of the original test dataset.
End of explanation
# 파일의 위치
Location = 'Lesson3.xlsx'
# 아래의 코드에서 0은 첫번째 시트를 의미.
# index_col = 'StatusDate'는 StatusDate를 인덱스로 가져오라는 의미
df = pd.read_excel(Location, 0, index_col='StatusDate')
df.dtypes
# 데이터프레임의 인덱스를 확인
df.index
df.head()
Explanation: 1. Importing data from Excel
The read_excel function reads an Excel file; it can read a sheet (tab) by a specific name or position.
End of explanation
df['State'].unique()
Explanation: 2. Preparing the data
For the analysis, let's apply the following preprocessing to the data:
1) check that the values in the State column are all uppercase
2) select only the records whose Status value is 1
3) change NJ to NY in the State column
4) remove outliers
1) Check that the values in the State column are all uppercase
: Let's quickly check whether the State column values are uppercase or lowercase.
End of explanation
# A lambda function is used as follows
# lambda arguments : expression
# For example, the code below returns the sum of its two arguments
x = lambda a, b : a + b
x(3, 5)
Explanation: To change the values in the State column to uppercase, we use the upper() function together with the DataFrame's apply method. apply lets us run a function over each row or column as a 1-D array, and a lambda function gives a compact way to uppercase each value in the State column.
First, a quick look at lambda functions.
[Anonymous functions, a.k.a. lambda functions]
Python supports anonymous functions, also called lambda functions: simple one-expression functions that return a value. Lambda functions are especially convenient in data analysis because they keep the code short and concise.
End of explanation
# Convert the State column values to uppercase
df['State'] = df.State.apply(lambda x: x.upper())
df['State'].unique()
Explanation: Now let's convert the values in the State column to uppercase.
End of explanation
# Only grab where Status == 1
mask = df['Status'] == 1
df = df[mask]
Explanation: 2) Select only the records whose Status value is 1
End of explanation
mask = df.State == 'NJ'
df['State'][mask] = 'NY'
Explanation: 3) Change NJ to NY in the State column
[df.State == 'NJ'] - find every record whose State value is NJ
df.State[df.State == 'NJ'] = 'NY' - change NJ to NY for every record whose State value is NJ.
End of explanation
df['State'].unique()
Explanation: Now let's check the unique values left in the State column of the cleaned data.
End of explanation
from IPython.display import Image
Image("python_for_data_analysis_p346.png")
Explanation: 4) Removing outliers
In this section we split the DataFrame into groups by State and by the year of StatusDate, and then remove outliers from CustomerCount within each group using quartiles.
First, a brief look at GroupBy and the apply and transform methods.
[GroupBy]
pandas provides groupby, a flexible way to split and summarize a dataset naturally.
The first stage of a group operation (split-apply-combine) splits the data in the DataFrame by one or more keys; for example, a DataFrame can be split by rows (axis = 0) or by columns (axis = 1). After splitting, a function is applied to each group to produce new values, and finally those results are combined into a single object.
[Figure 9-1] shows an example of a group operation.
End of explanation
dftest = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C' ], 'data' : [0, 5, 10, 5, 10, 15, 10, 15, 20]})
dftest
# Split into groups by the key column and apply sum() to each group
dftest.groupby('key').sum()
Explanation: Let's actually build a DataFrame and run a group operation.
End of explanation
dftest.groupby('key')['data'].apply(lambda x : x.sum())
dftest.groupby('key')['data'].transform(lambda x : x.sum())
Explanation: [apply and transform]
Let's run group operations with the apply and transform methods on the dftest DataFrame created above.
End of explanation
df.reset_index().head()
Daily = df.reset_index().groupby(['State','StatusDate']).sum()
Daily.head()
Explanation: The apply result is aggregated, while the transform method preserves the shape of the DataFrame.
Now let's sum CustomerCount by State and StatusDate. Since StatusDate is the index of df, we first move it back to a regular column with reset_index() so that we can group by it.
End of explanation
del Daily['Status']
Daily.head()
Explanation: The Status values are no longer needed, so delete the column as shown below.
End of explanation
Daily.index
Explanation: Let's check the index of the Daily DataFrame.
End of explanation
# State 인덱스 확인
Daily.index.levels[0]
# StatusDate 인덱스 확인
Daily.index.levels[1]
Explanation: Each index level can also be inspected individually, as shown below.
End of explanation
StateYear = Daily.groupby([Daily.index.get_level_values(0), Daily.index.get_level_values(1).year])
Explanation: Now let's split the DataFrame into groups by State and by the year of StatusDate.
End of explanation
dftest1 = pd.DataFrame({'A' : [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100]})
dftest1
# The first quartile of column A is 3.5
Q1 = dftest1.quantile(q = 0.25)
Q1
# The second quartile (median) of column A is 5.5
Q2 = dftest1.quantile(q = 0.5)
Q2
# The third quartile of column A is 8.5
Q3 = dftest1.quantile(q = 0.75)
Q3
# Lower = Q1 - 1.5 IQR
Lower = Q1 - 1.5*(Q3 - Q1)
Lower
# Upper = Q3 + 1.5 IQR
Upper = Q3 + 1.5*(Q3 - Q1)
Upper
Explanation: We now want to remove outliers from the CustomerCount values in each StateYear group using quartiles. First, a quick review of the quartile-based outlier rule.
[Removing outliers using quartiles]
(a) Quartiles
When all observations are arranged in increasing order, the quartiles are the values that split them into four equal parts: 25%, 50% and 75% of the data, called the first quartile (Q1), the second quartile (Q2, the median) and the third quartile (Q3).
(b) Interquartile range
The distance between the third and first quartiles can be used as a measure of spread; this is the interquartile range, IQR = Q3 - Q1.
(c) Outlier rule
An observation is called an outlier if it is smaller than Q1 - 1.5 IQR or larger than Q3 + 1.5 IQR.
Let's look at an example.
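Equivalently (a small helper added here for reference), the same bounds can be computed in one function:
def iqr_bounds(s):
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr
print(iqr_bounds(dftest1['A']))  # expected: (-4.0, 16.0)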
End of explanation
Daily['Lower'] = StateYear['CustomerCount'].transform( lambda x: x.quantile(q=.25) - 1.5*(x.quantile(q=.75)-x.quantile(q=.25)))
Daily['Upper'] = StateYear['CustomerCount'].transform( lambda x: x.quantile(q=.75) + 1.5*(x.quantile(q=.75)-x.quantile(q=.25)))
Daily['Outlier'] = (Daily['CustomerCount'] < Daily['Lower']) | (Daily['CustomerCount'] > Daily['Upper'])
# Drop the outliers
Daily = Daily[Daily['Outlier'] == False]
Daily.head()
Explanation: In dftest1, the value 100 in column A is larger than Upper, so it can be called an outlier.
Now let's remove the outliers from CustomerCount in each StateYear group using the same quartile rule.
End of explanation |
13,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Structure Data Example
Step1: Please Download
https
Step2: Dealing with NaN
There are many approaches possibles for NaN values in the data, here we just changing it to " " or 0 depending of the data type. This is the simplest way, but for sure is not the best in most cases, so in practice you should try some other ways to use the NaN data. Some approaches are
Step3: Standardize features
Step4: Separating training data from testing data
Step5: Using Tensorflow
Defining input function
Step6: Defining a Linear Estimator
Step7: Training
Step8: Evaluating
Step9: Predicting
Step10: Defining a DNN Estimator
Step11: Training
Step12: Evaluating
Step13: Predicting
Step14: Creating an Experiment | Python Code:
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
# We're using pandas to read the CSV file. This is easy for small datasets, but for large and complex datasets,
# tensorflow parsing and processing functions are more powerful
import pandas as pd
import numpy as np
# TensorFlow
import tensorflow as tf
print('please make sure that version >= 1.2:')
print(tf.__version__)
print('@monteirom: I made changes so it also works with 1.1.0 that is the current pip install version')
print('@monteirom: The lines that were changed have @1.2 as comment')
# Layers that will define the features
#
# real_value_column: real values, float32
# sparse_column_with_hash_bucket: Use this when your sparse features are in string or integer format,
# but you don't have a vocab file that maps each value to an integer ID.
# output_id = Hash(input_feature_string) % bucket_size
# sparse_column_with_keys: Look up logic is as follows:
# lookup_id = index_of_feature_in_keys if feature in keys else default_value.
# You should use this when you know the vocab file for the feature
# one_hot_column: Creates an _OneHotColumn for a one-hot or multi-hot repr in a DNN.
# The input can be a _SparseColumn which is created by `sparse_column_with_*`
# or crossed_column functions
from tensorflow.contrib.layers import real_valued_column, sparse_column_with_keys, sparse_column_with_hash_bucket
from tensorflow.contrib.layers import one_hot_column
Explanation: Structure Data Example: Automobile dataset
https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data
End of explanation
# The CSV file does not have a header, so we have to fill in column names.
names = [
'symboling',
'normalized-losses',
'make',
'fuel-type',
'aspiration',
'num-of-doors',
'body-style',
'drive-wheels',
'engine-location',
'wheel-base',
'length',
'width',
'height',
'curb-weight',
'engine-type',
'num-of-cylinders',
'engine-size',
'fuel-system',
'bore',
'stroke',
'compression-ratio',
'horsepower',
'peak-rpm',
'city-mpg',
'highway-mpg',
'price',
]
# We also have to specify dtypes.
dtypes = {
'symboling': np.int32,
'normalized-losses': np.float32,
'make': str,
'fuel-type': str,
'aspiration': str,
'num-of-doors': str,
'body-style': str,
'drive-wheels': str,
'engine-location': str,
'wheel-base': np.float32,
'length': np.float32,
'width': np.float32,
'height': np.float32,
'curb-weight': np.float32,
'engine-type': str,
'num-of-cylinders': str,
'engine-size': np.float32,
'fuel-system': str,
'bore': np.float32,
'stroke': np.float32,
'compression-ratio': np.float32,
'horsepower': np.float32,
'peak-rpm': np.float32,
'city-mpg': np.float32,
'highway-mpg': np.float32,
'price': np.float32,
}
# Read the file.
df = pd.read_csv('data/imports-85.data', names=names, dtype=dtypes, na_values='?')
# Some rows don't have price data, we can't use those.
df = df.dropna(axis='rows', how='any', subset=['price'])
Explanation: Please Download
https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data
And move it to data/
So: data/imports-85.data is expected to exist!
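If you prefer to fetch it from code, a possible convenience (a sketch assuming Python 3; the notebook itself expects a manual download):
import os, urllib.request
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data'
os.makedirs('data', exist_ok=True)
if not os.path.exists('data/imports-85.data'):
    urllib.request.urlretrieve(url, 'data/imports-85.data')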
Preparing the data
End of explanation
# Fill missing values in continuous columns with zeros instead of NaN.
float_columns = [k for k,v in dtypes.items() if v == np.float32]
df[float_columns] = df[float_columns].fillna(value=0., axis='columns')
# Fill missing values in continuous columns with '' instead of NaN (NaN mixed with strings is very bad for us).
string_columns = [k for k,v in dtypes.items() if v == str]
df[string_columns] = df[string_columns].fillna(value='', axis='columns')
Explanation: Dealing with NaN
There are many possible approaches for NaN values in the data; here we just change them to '' or 0 depending on the data type. This is the simplest way, but certainly not the best in most cases, so in practice you should try other ways to use the NaN data. Some approaches are:
use the mean of the row
use the mean of the column
if/else substitution (e.g. if a lot of NaN do this, else do this other thing)
...
google others
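For example, column-mean imputation (one of the options above) is a one-liner in pandas; a sketch, not what this notebook actually does:
df[float_columns] = df[float_columns].fillna(df[float_columns].mean())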
End of explanation
# We have too many variables let's just use some of them
df = df[['num-of-doors','num-of-cylinders', 'horsepower', 'make', 'price', 'length', 'height', 'width']]
# Since we're possibly dealing with parameters of different units and scales. We'll need to rescale our data.
# There are two main ways to do it:
# * Normalization, which scales all numeric variables in the range [0,1].
# Example:
# * Standardization, it will then transform it to have zero mean and unit variance.
# Example:
# Which is better? It depends on your data and your features.
# But one disadvantage of normalization over standardization is that it loses
# some information in the data. Since normalization loses more info it can make it harder
# for gradient descent to converge, so we'll use standardization.
# In practice: please analyse your data and see what gives you better results.
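# For comparison (added sketch, not used below): min-max normalization to [0, 1].
def normalize(x):
    return (x - x.min()) / (x.max() - x.min())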
def std(x):
return (x - x.mean()) / x.std()
before = df.length[0]
df.length = std(df.length)
df.width = std(df.width)
df.height = std(df.height)
df.horsepower = std(df.horsepower)
after = df.length[0]
print('before:', before, 'after:', after)
Explanation: Standardize features
End of explanation
TRAINING_DATA_SIZE = 160
TEST_DATA_SIZE = 10
LABEL = 'price'
# Split the data into a training set, eval set and test set
training_data = df[:TRAINING_DATA_SIZE]
eval_data = df[TRAINING_DATA_SIZE: TRAINING_DATA_SIZE + TEST_DATA_SIZE]
test_data = df[TRAINING_DATA_SIZE + TEST_DATA_SIZE:]
# Separate input features from labels
training_label = training_data.pop(LABEL)
eval_label = eval_data.pop(LABEL)
test_label = test_data.pop(LABEL)
Explanation: Separating training data from testing data
End of explanation
BATCH_SIZE = 64
# Make input function for training:
# num_epochs=None -> will cycle through input data forever
# shuffle=True -> randomize order of input data
training_input_fn = tf.estimator.inputs.pandas_input_fn(x=training_data,
y=training_label,
batch_size=BATCH_SIZE,
shuffle=True,
num_epochs=None)
# Make input function for evaluation:
# shuffle=False -> do not randomize input data
eval_input_fn = tf.estimator.inputs.pandas_input_fn(x=eval_data,
y=eval_label,
batch_size=BATCH_SIZE,
shuffle=False)
# Make input function for testing:
# shuffle=False -> do not randomize input data
test_input_fn = tf.estimator.inputs.pandas_input_fn(x=test_data,
                                                    y=test_label,
                                                    batch_size=1,
                                                    shuffle=False)
Explanation: Using Tensorflow
Defining input function
End of explanation
# Describe how the model should interpret the inputs. The names of the feature columns have to match the names
# of the series in the dataframe.
# @1.2.0 tf.feature_column.numeric_column -> tf.contrib.layers.real_valued_column
horsepower = real_valued_column('horsepower')
width = real_valued_column('width')
height = real_valued_column('height')
length = real_valued_column('length')
# @1.2.0 tf.feature_column.categorical_column_with_hash_bucket -> tf.contrib.layers.sparse_column_with_hash_bucket
make = sparse_column_with_hash_bucket('make', 50)
# @1.2.0 tf.feature_column.categorical_column_with_vocabulary_list -> tf.contrib.layers.sparse_column_with_keys
fuel_type = sparse_column_with_keys('fuel-type', keys=['diesel', 'gas'])
num_of_doors = sparse_column_with_keys('num-of-doors', keys=['two', 'four'])
num_of_cylinders = sparse_column_with_keys('num-of-cylinders', ['eight', 'five', 'four', 'six', 'three', 'twelve', 'two'])
linear_features = [horsepower, make, num_of_doors, num_of_cylinders, length, width, height]
regressor = tf.contrib.learn.LinearRegressor(feature_columns=linear_features, model_dir='tensorboard/linear_regressor/')
Explanation: Defining a Linear Estimator
End of explanation
regressor.fit(input_fn=training_input_fn, steps=10000)
Explanation: Training
End of explanation
regressor.evaluate(input_fn=eval_input_fn)
Explanation: Evaluating
End of explanation
preds = list(regressor.predict(input_fn=test_input_fn))
for i in range(TEST_DATA_SIZE):
print('prediction:', preds[i], 'real value:', test_label.iloc[i])
Explanation: Predicting
End of explanation
# @1.2.0 tf.feature_column.indicator_column -> tf.contrib.layers.one_hot_column(tf.contrib.layers.sparse_column_with_keys(...))
dnn_features = [
#numerical features
length, width, height, horsepower,
# densify categorical features:
one_hot_column(make),
one_hot_column(num_of_doors)
]
dnnregressor = tf.contrib.learn.DNNRegressor(feature_columns=dnn_features,
hidden_units=[50, 30, 10], model_dir='tensorboard/DNN_regressor/')
Explanation: Defining a DNN Estimator
End of explanation
dnnregressor.fit(input_fn=training_input_fn, steps=10000)
Explanation: Training
End of explanation
dnnregressor.evaluate(input_fn=eval_input_fn)
Explanation: Evaluating
End of explanation
preds = list(dnnregressor.predict(input_fn=test_input_fn))
for i in range(TEST_DATA_SIZE):
print('prediction:', preds[i], 'real value:', test_label.iloc[i])
Explanation: Predicting
End of explanation
# @1.2.0 experiment_fn(run_config, params) - > experiment_fn(output_dir)
def experiment_fn(output_dir):
# This function makes an Experiment, containing an Estimator and inputs for training and evaluation.
# You can use params and config here to customize the Estimator depending on the cluster or to use
# hyperparameter tuning.
# Collect information for training
# @1.2.0 config=run_config -> ''
return tf.contrib.learn.Experiment(estimator=tf.contrib.learn.LinearRegressor(
feature_columns=linear_features, model_dir=output_dir),
train_input_fn=training_input_fn,
train_steps=10000,
eval_input_fn=eval_input_fn)
import shutil
# @1.2.0 tf.contrib.learn.learn_runner(exp, run_config=tf.contrib.learn.RunConfig(model_dir="/tmp/output_dir")
# -> tf.contrib.learn.python.learn.learm_runner.run(exp, output_dir='/tmp/output_dir')
shutil.rmtree("/tmp/output_dir", ignore_errors=True)
from tensorflow.contrib.learn.python.learn import learn_runner
learn_runner.run(experiment_fn, output_dir='/tmp/output_dir')
Explanation: Creating an Experiment
End of explanation |
13,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Su-Schrieffer–Heeger (SSH) model
Saumya Biswas ([email protected])
The celebrated SSH model is analyzed with QuTiP's lattice module below.
The above figure shows a SSH model with 6 sites with periodic boundary condition. The same lattice with hardwall/aperiodic boundary condition would be the folloowing.
In the secod quantized formalism, the periodic lattice Hamiltonian can be written as
\begin{eqnarray}
H_{per} = -t_{intra} (c_{-2}^{\dagger} c_{-1} + c_{-1}^{\dagger} c_{-2} ) -t_{intra} (c_{0}^{\dagger} c_{1} + c_{1}^{\dagger} c_{0} ) -t_{intra} (c_{2}^{\dagger} c_{3} + c_{3}^{\dagger} c_{2} ) \nonumber \
-t_{inter} (c_{-1}^{\dagger} c_{0} + c_{0}^{\dagger} c_{-1} ) -t_{inter} (c_{1}^{\dagger} c_{2} + c_{2}^{\dagger} c_{1} ) -t_{inter} (c_{3}^{\dagger} c_{-2} + c_{-2}^{\dagger} c_{3} ) \nonumber
\end{eqnarray}
The aperiodic lattice Hamiltonian can be obtained by discarding the very last term.
\begin{eqnarray}
H_{aper} = -t_{intra} (c_{-2}^{\dagger} c_{-1} + c_{-1}^{\dagger} c_{-2} ) -t_{intra} (c_{0}^{\dagger} c_{1} + c_{1}^{\dagger} c_{0} ) -t_{intra} (c_{2}^{\dagger} c_{3} + c_{3}^{\dagger} c_{2} ) \nonumber \
-t_{inter} (c_{-1}^{\dagger} c_{0} + c_{0}^{\dagger} c_{-1} ) -t_{inter} (c_{1}^{\dagger} c_{2} + c_{2}^{\dagger} c_{1} ) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber
\end{eqnarray}
The representation in terms of unit cell blocks become obvious once we resolve the terms into unit cell operators.
\begin{eqnarray}
H_{per}= \begin{bmatrix}
c_{-2}^{\dagger} & c_{-1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
o & -t_1 \
-t_1 & 0
\end{bmatrix}
\begin{bmatrix}
c_{-2} \
c_{-1}
\end{bmatrix} +
\begin{bmatrix}
c_{0}^{\dagger} & c_{1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
o & -t_1 \
-t_1 & 0
\end{bmatrix}
\begin{bmatrix}
c_{0} \
c_{1}
\end{bmatrix}
\nonumber \
+ \begin{bmatrix}
c_{2}^{\dagger} & c_{3}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
o & -t_1 \
-t_1 & 0
\end{bmatrix}
\begin{bmatrix}
c_{2} \
c_{3}
\end{bmatrix} \nonumber \
+ \begin{bmatrix}
c_{-2}^{\dagger} & c_{-1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
o & 0 \
-t_2 & 0
\end{bmatrix}
\begin{bmatrix}
c_{0} \
c_{1}
\end{bmatrix} +
\begin{bmatrix}
c_{0}^{\dagger} & c_{1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
o & -t_2 \
0 & 0
\end{bmatrix}
\begin{bmatrix}
c_{-2} \
c_{-1}
\end{bmatrix}
\nonumber \
+ \begin{bmatrix}
c_{0}^{\dagger} & c_{1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
o & 0 \
-t_2 & 0
\end{bmatrix}
\begin{bmatrix}
c_{2} \
c_{3}
\end{bmatrix} +
\begin{bmatrix}
c_{2}^{\dagger} & c_{3}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
o & -t_2 \
0 & 0
\end{bmatrix}
\begin{bmatrix}
c_{0} \
c_{1}
\end{bmatrix}
\nonumber \
+ \begin{bmatrix}
c_{2}^{\dagger} & c_{3}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
o & 0 \
-t_2 & 0
\end{bmatrix}
\begin{bmatrix}
c_{-2} \
c_{-1}
\end{bmatrix} +
\begin{bmatrix}
c_{-2}^{\dagger} & c_{-1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
o & -t_2 \
0 & 0
\end{bmatrix}
\begin{bmatrix}
c_{2} \
c_{3}
\end{bmatrix} \nonumber
\end{eqnarray}
Hence, $H_{TB}$ can be succinctly put in the form
Step1: Guided by cell_H_form and inter_cell_T_form, we can set values to cell_H and inter_cell_T which were initialized to all zero elements by cell_structures().
Step2: Using cell_structures() is completely optional. The user could equally well have defined cell_H and inter_cell_T directly.
Step3: For our SSH lattice with 3 unit cells, 2 sites in each unit cell, and [1] degree of freedom per each site, we can initiate an instance of the Lattice1d class at this stage.
Step4: The model can be visualized with the display functions.
Step5: The Hamiltonian of the lattice can be obtained with the method Hamiltonian()
Step6: Sublattice Projectors and Chiral Symmetry of the SSH model
Step7: Hence, it is verified that $\hat{\Sigma}_z$ and H indeed anticommute and the SSH Hamiltonian has chiral symmetry.
The dispersion relationship for the lattice can be obtained with plot_dispersion() method.
Step8: plot_dispersion() plots the 3(since number of unit cells in 3) points in k-space
(the first and last one are the same) over the dispersion relation of an infinite
crystal.
Step9: First, the eigen-values are the same as the ones obtained from the dispersion calculation. Second, they are symmetric about the value 0.
The second is a consequence of the chiral symmetry of the Hamiltonian, as we explain now.
\begin{eqnarray}
\hat{\bf{1}} = \hat{P}_A + \hat{P}_B, \ \ \ \ \, \hat{P}_A = \frac{1}{2}(\hat{\bf{1}}+\hat{\Sigma}_z), \ \ \ \ \, \hat{P}_B = \frac{1}{2}(\hat{\bf{1}}-\hat{\Sigma}_z) \nonumber \
H |\psi_n\rangle = E_n | \psi_n \rangle \implies H (\hat{\Sigma}_z | \psi_n \rangle) = - \hat{\Sigma}_z H | \psi_n \rangle = -E_n (\hat{\Sigma}_z|\psi_n\rangle)
\end{eqnarray}
So, if $|\psi_n\rangle$ is an eigenstate with eigenenergy $E_n$, $\hat{\Sigma}_z | \psi_n \rangle$ is also an eigenstate with energy $-E_n$ and the eigen-spectrum is symmetric about 0.
Here, S0 is the eigenvector with eigenvalue -1 and S5 is the eigenvector with eigenvalue +1. So, we can verify if S5 is the same eigenvector(withinn a phase factor) as ($\hat{\Sigma}_z*$S0).
Step10: Clearly, S5 is the same eigenvector as ($\hat{\Sigma}_z*$S0).
Since, $\hat{\Sigma}_z | \psi_n \rangle$ and $| \psi_n \rangle$ are eigenvectors of a Hermitian oerator,H with distinct eigenvalues, they must be orthogonal.
\begin{eqnarray}
E_n \ne 0 \implies 0 = \langle \phi_n| \hat{\Sigma}_z | \phi_n \rangle = \langle \phi_n \hat{P}_A |\phi_n \rangle - \langle \phi_n| \hat{P}_B |\phi_n \rangle
\end{eqnarray}
i.e., an eigenstate with nonzero eigenvalue has equal support on both sublattices. We, now check this for S5.
Step11: We discuss the implications of $E_n = 0$ for an eigenvector later in the context of edge states later.
Unsurprisingly, diagonalizing the Hamiltonian gives the same spectrum of eigen-values
as the one obtained from the plot_dispersion() function. We shall soon illustrate, translational symmetry is a very useful computational hack. Here, we see how they can produce the eigen-values and eigen-vectors of the Hamiltonian ($6\times6$, in our example) from diagonalizing a $2\times2$ matrix. The reduction in size by a factor of 3 comes from the fact that the lattice of 3 cells repeats itself infinitely on both ends.
Using Translational Symmetry
Step12: The array of $|u_n(k) \rangle$(in terms of its expansion in {a(k),b(k)}) at the good quantum numbers, k can be produced with the method array_of_unk().
Step13: In both cases, knxA is simply an array containing the valid values of k, in units of $2\pi/a$, a being the length of the unit cell i.e. 1. a is always 1 in all methods().
Step14: bloch_wave_functions() yields an ordered array for the eigenvalues and eigenvectors(which are bloch wave functions) for the Hailtonian of the lattice.
Step15: Knowing the cell periodic part of a bloch wavefunction, $u_n(k)$ suffices to calculate it. The translational symmetry enables calculation of the eigenstates of a $6\times6$ matrix through diagonalizing a $2\times2$ matrix.
Topology
Step16: Unsurprisingly, we see a gap in the dispersion relationship and also in the spectrum of
eigen values of the Hamiltonian.
However, a calculation of the winding number yields 0, since it has trivial topology.
Step17: The Topologically Nontrivial Insulator
The other limit ($|t_{inter}| < |t_{intra}| $) is the interesting one, although the same
calculations on a lattice with periodic boundary condition reveal nothing interesting.
Step18: Topologically Nontrivial Insulator
Step19: We observe the much celebrated mid-gap edge states, who appear at mid-gaps. The nontriviality of the topological insulator is revealed with their placement next to a boundary. The edge states are the consequences of the bulk-bundary correspondence. Plotting the absolute values of the eigen-functions reveal that they are localized at the edges. We plot the absolute values of the eigenfunctions corresponding to the two eigenvalues found at mid-gaps.
Step20: It is quite obvious, the edge states are localized towards the edges.
The eigenstates are not unique, they are only unique within a phase factor. The routine used in QuTiP only yields eigenstates that are one of the possibilities within the phase factor.
Let us check out the two edge state eigenvalues as well as the three preceding and three succeeding ones.
Step21: It is obvious that, n_D[99] and n_D[100] are values 0 within numerical precision. There is an interesting property of the eigenstates of a chiral Hamiltonian with eigenvalue 0.
\begin{eqnarray}
H|\psi_n \rangle = 0 \implies H(\hat{P}_{A/B}| \psi_n\rangle) = \frac{1}{2}H(\hat{\bf{1}} \pm \Sigma_z )| \psi_n \rangle = \frac{1}{2}(\hat{\bf{1}} \mp \Sigma_z )H| \psi_n \rangle = 0
\end{eqnarray}
Therefore, $\hat{P}_{A/B} |\psi_n\rangle$ is an eigenstate with eigenvalue 0 as well. As a result, the edge-states can be chosen to have support in only one sublattice.
Step22: $\hat{P}{A} |\psi_n\rangle$ and $\hat{P}{B} |\psi_n\rangle$ are orthogonal eigenstates with eigenvalue 0, and we can choose the two edge states to be localized at two edges of the lattice. Just to be sure, we also check if $H\hat{P}_{A/B} |\psi_n\rangle $ is indeed 0. | Python Code:
from qutip import *
import matplotlib.pyplot as plt
import numpy as np
val_s = ['site0','site1']
(H_cell_form,T_inter_cell_form,H_cell,T_inter_cell) = cell_structures( val_s)
H_cell_form
T_inter_cell_form
Explanation: The Su-Schrieffer–Heeger (SSH) model
Saumya Biswas ([email protected])
The celebrated SSH model is analyzed with QuTiP's lattice module below.
The above figure shows an SSH model with 6 sites and periodic boundary conditions. The same lattice with hardwall/aperiodic boundary conditions would be the following.
In the second-quantized formalism, the periodic lattice Hamiltonian can be written as
\begin{eqnarray}
H_{per} = -t_{intra} (c_{-2}^{\dagger} c_{-1} + c_{-1}^{\dagger} c_{-2} ) -t_{intra} (c_{0}^{\dagger} c_{1} + c_{1}^{\dagger} c_{0} ) -t_{intra} (c_{2}^{\dagger} c_{3} + c_{3}^{\dagger} c_{2} ) \nonumber \
-t_{inter} (c_{-1}^{\dagger} c_{0} + c_{0}^{\dagger} c_{-1} ) -t_{inter} (c_{1}^{\dagger} c_{2} + c_{2}^{\dagger} c_{1} ) -t_{inter} (c_{3}^{\dagger} c_{-2} + c_{-2}^{\dagger} c_{3} ) \nonumber
\end{eqnarray}
The aperiodic lattice Hamiltonian can be obtained by discarding the very last term.
\begin{eqnarray}
H_{aper} = -t_{intra} (c_{-2}^{\dagger} c_{-1} + c_{-1}^{\dagger} c_{-2} ) -t_{intra} (c_{0}^{\dagger} c_{1} + c_{1}^{\dagger} c_{0} ) -t_{intra} (c_{2}^{\dagger} c_{3} + c_{3}^{\dagger} c_{2} ) \nonumber \
-t_{inter} (c_{-1}^{\dagger} c_{0} + c_{0}^{\dagger} c_{-1} ) -t_{inter} (c_{1}^{\dagger} c_{2} + c_{2}^{\dagger} c_{1} ) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \nonumber
\end{eqnarray}
The representation in terms of unit cell blocks become obvious once we resolve the terms into unit cell operators.
\begin{eqnarray}
H_{per}= \begin{bmatrix}
c_{-2}^{\dagger} & c_{-1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & -t_1 \
-t_1 & 0
\end{bmatrix}
\begin{bmatrix}
c_{-2} \
c_{-1}
\end{bmatrix} +
\begin{bmatrix}
c_{0}^{\dagger} & c_{1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & -t_1 \
-t_1 & 0
\end{bmatrix}
\begin{bmatrix}
c_{0} \
c_{1}
\end{bmatrix}
\nonumber \
+ \begin{bmatrix}
c_{2}^{\dagger} & c_{3}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & -t_1 \
-t_1 & 0
\end{bmatrix}
\begin{bmatrix}
c_{2} \
c_{3}
\end{bmatrix} \nonumber \
+ \begin{bmatrix}
c_{-2}^{\dagger} & c_{-1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & 0 \
-t_2 & 0
\end{bmatrix}
\begin{bmatrix}
c_{0} \
c_{1}
\end{bmatrix} +
\begin{bmatrix}
c_{0}^{\dagger} & c_{1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & -t_2 \
0 & 0
\end{bmatrix}
\begin{bmatrix}
c_{-2} \
c_{-1}
\end{bmatrix}
\nonumber \
+ \begin{bmatrix}
c_{0}^{\dagger} & c_{1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & 0 \
-t_2 & 0
\end{bmatrix}
\begin{bmatrix}
c_{2} \
c_{3}
\end{bmatrix} +
\begin{bmatrix}
c_{2}^{\dagger} & c_{3}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & -t_2 \
0 & 0
\end{bmatrix}
\begin{bmatrix}
c_{0} \
c_{1}
\end{bmatrix}
\nonumber \
+ \begin{bmatrix}
c_{2}^{\dagger} & c_{3}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & 0 \
-t_2 & 0
\end{bmatrix}
\begin{bmatrix}
c_{-2} \
c_{-1}
\end{bmatrix} +
\begin{bmatrix}
c_{-2}^{\dagger} & c_{-1}^{\dagger}
\end{bmatrix}
\begin{bmatrix}
0 & -t_2 \
0 & 0
\end{bmatrix}
\begin{bmatrix}
c_{2} \
c_{3}
\end{bmatrix} \nonumber
\end{eqnarray}
Hence, $H_{TB}$ can be succinctly put in the form:
\begin{eqnarray}
H_{per/aper} = \sum_i \psi_i^{\dagger} D \psi_i + \sum_{i} \left( \psi_i^{\dagger} T \psi_{i+1} + \psi_{i+1}^{\dagger} T^{\dagger} \psi_i \right) \label{eq:TB_block}
\end{eqnarray}
where $D = \begin{bmatrix}
0 & -t_{intra} \
-t_{intra} & 0
\end{bmatrix}$ and $T = \begin{bmatrix}
0 & 0 \
-t_{inter} & 0
\end{bmatrix}$. The $\psi_i = \begin{bmatrix}
c^0_{i} \
c^1_{i}
\end{bmatrix}$ is associated with $\vec{R}=\vec{R}i$ and $\psi{i+1}$ is associated with $\vec{R}=\vec{R}_i + \hat{x}$, with $\hat{x}$ being the unit vector along the x direction.
The equation above can be put into the alternate form (also changing the summation variables to m,n to distinguish them from the imaginary i):
\begin{eqnarray}
H_{per/aper} = \sum_{m,n} \psi_m^{\dagger} (D \delta_{m,n} + T \delta_{m,n-1} + T^{\dagger} \delta_{m,n+1} ) \psi_n \label{eq:TB_block_1}
\end{eqnarray}
So far, we have see that a SSH lattice can be put into the unit cell representations and the lattice can be completely defied with two matrices D and T. In declaring an instance of Qutip.lattice.Lattice1d we only need to input the two matrices.
In declaring an instance of Lattice1d class, the two arguments cell_Hamiltonian
and inter_hop are set to D and T respectively. The matrix structure of D and T can be
obtained from the aide function cell_structures().
End of explanation
t_intra = -0.5
t_inter = -0.5
H_cell[0,1] = t_intra
H_cell[1,0] = t_intra
T_inter_cell[1,0] = t_inter
H_cell = Qobj(H_cell)
T_inter_cell = Qobj(T_inter_cell)
Explanation: Guided by cell_H_form and inter_cell_T_form, we can set values to cell_H and inter_cell_T which were initialized to all zero elements by cell_structures().
End of explanation
H_cell = Qobj( np.array( [[ 0, t_intra ],[t_intra,0]] ) )
T_inter_cell = Qobj( np.array( [[ 0, 0 ],[t_inter,0]] ) )
Explanation: Using cell_structures() is completely optional. The user could equally well have defined cell_H and inter_cell_T directly.
End of explanation
boundary_condition = "periodic"
cells = 3
cell_sites = 2
site_dof = [1]
SSH_lattice = Lattice1d(num_cell=cells, boundary = boundary_condition, cell_num_site = cell_sites, cell_site_dof = site_dof, Hamiltonian_of_cell = H_cell, inter_hop = T_inter_cell )
Explanation: For our SSH lattice with 3 unit cells, 2 sites in each unit cell, and [1] degree of freedom per site, we can initiate an instance of the Lattice1d class at this stage.
End of explanation
H = SSH_lattice.display_unit_cell(label_on = True)
T = SSH_lattice.display_lattice()
Explanation: The model can be visualized with the display functions.
End of explanation
SSH_Haml = SSH_lattice.Hamiltonian()
SSH_Haml
Explanation: The Hamiltonian of the lattice can be obtained with the method Hamiltonian()
End of explanation
chiral_op = SSH_lattice.distribute_operator(sigmaz())
anti_commutator_chi_H = chiral_op * SSH_Haml + SSH_Haml * chiral_op
is_null = (np.abs(anti_commutator_chi_H) < 1E-10).all()
print(is_null)
Explanation: Sublattice Projectors and Chiral Symmetry of the SSH model:
The Hamiltonian of the SSH model is bipartite: the sites can be split into two partitions, each consisting of every other site starting from site 0 (or site 1). Since the Hamiltonian only connects sites of different sublattices, never sites of the same sublattice, it is said to be bipartite.
\begin{eqnarray}
\hat{P}{A} = \sum\limits{m=1}^{N} |m,A\rangle \langle n, A|,\ \ \ \ \ \ \hat{P}{B} = \sum\limits{m=1}^{N} |m,B\rangle \langle n, B|, \ \ \ \ \ \
\hat{\Sigma}z = \hat{P}{A} - \hat{P}_{B}
\end{eqnarray}
The projectors $\hat{P}A$(or $\hat{P}_B$) project out a ket vector into its projection into sublattice A(or B). Because, the Hamiltonian is bipartite,
\begin{eqnarray}
\hat{P}{A} H \hat{P}{A} = \hat{P}{B} H \hat{P}_{B} = 0
\end{eqnarray}
The SSH Hamiltonian is said to have chiral symmetry, since
\begin{eqnarray}
\hat{\Sigma}_z H \hat{\Sigma}_z = -H
\end{eqnarray}
which is a generalization of the symmetry concept of a Hamiltonian, since, ordinarily, a Hamiltonian is said to have a symmetry if an unitary operator leaves it invariant.
\begin{eqnarray}
\hat{U} H \hat{U}^{\dagger} = H
\end{eqnarray}
Now, we form the operators $\hat{P}_A$,$\hat{P}_B$ and $\hat{\Sigma}_z$, and verify the properties of the SSH Hamiltonian.
End of explanation
SSH_lattice.plot_dispersion()
Explanation: Hence, it is verified that $\hat{\Sigma}_z$ and H indeed anticommute and the SSH Hamiltonian has chiral symmetry.
The dispersion relationship for the lattice can be obtained with plot_dispersion() method.
End of explanation
[V,[S0,S1,S2,S3,S4,S5]]=SSH_Haml.eigenstates()
V
Explanation: plot_dispersion() plots the 3 (since the number of unit cells is 3) points in k-space
(the first and last one are the same) over the dispersion relation of an infinite
crystal.
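The raw arrays behind this plot can also be pulled out directly; a possible usage sketch (assuming the get_dispersion() method mentioned later in this notebook):
k_points, energies = SSH_lattice.get_dispersion()
print(k_points)
print(energies)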
End of explanation
print(S0)
print(chiral_op*S0)
print(S5)
Explanation: First, the eigen-values are the same as the ones obtained from the dispersion calculation. Second, they are symmetric about the value 0.
The second is a consequence of the chiral symmetry of the Hamiltonian, as we explain now.
\begin{eqnarray}
\hat{\bf{1}} = \hat{P}_A + \hat{P}_B, \ \ \ \ \, \hat{P}_A = \frac{1}{2}(\hat{\bf{1}}+\hat{\Sigma}_z), \ \ \ \ \, \hat{P}_B = \frac{1}{2}(\hat{\bf{1}}-\hat{\Sigma}_z) \nonumber \
H |\psi_n\rangle = E_n | \psi_n \rangle \implies H (\hat{\Sigma}_z | \psi_n \rangle) = - \hat{\Sigma}_z H | \psi_n \rangle = -E_n (\hat{\Sigma}_z|\psi_n\rangle)
\end{eqnarray}
So, if $|\psi_n\rangle$ is an eigenstate with eigenenergy $E_n$, $\hat{\Sigma}_z | \psi_n \rangle$ is also an eigenstate with energy $-E_n$ and the eigen-spectrum is symmetric about 0.
Here, S0 is the eigenvector with eigenvalue -1 and S5 is the eigenvector with eigenvalue +1. So we can verify that S5 is the same eigenvector (within a phase factor) as ($\hat{\Sigma}_z*$S0).
End of explanation
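A quantitative version of this comparison (a small sketch, not part of the original notebook, using the objects defined above): if S5 equals $\hat{\Sigma}_z$S0 up to a phase factor, the overlap magnitude should be 1.
# Sketch: |<S5| Sigma_z |S0>| should be ~1 if S5 and Sigma_z*S0
# agree up to a phase factor (both states are normalized).
overlap = (S5.dag() * (chiral_op * S0)).full()[0, 0]
print(abs(overlap))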
is_null = (np.abs(S0.dag()*S5) < 1E-10).all()
print(is_null) # Are S0 and S5 orthogonal?
dimH = chiral_op.dims
identity_H = Qobj(np.identity(6), dims=dimH)
identity_H
PA = 0.5*( identity_H + chiral_op)
PB = 0.5*( identity_H - chiral_op)
support_S5_A = S5.dag() * PA * S5
support_S5_B = S5.dag() * PB * S5
print(support_S5_A == support_S5_B) # Does S5 have equal support on A and B?
Explanation: Clearly, S5 is the same eigenvector as ($\hat{\Sigma}_z*$S0).
Since $\hat{\Sigma}_z | \psi_n \rangle$ and $| \psi_n \rangle$ are eigenvectors of the Hermitian operator $H$ with distinct eigenvalues, they must be orthogonal.
\begin{eqnarray}
E_n \ne 0 \implies 0 = \langle \phi_n| \hat{\Sigma}_z | \phi_n \rangle = \langle \phi_n| \hat{P}_A |\phi_n \rangle - \langle \phi_n| \hat{P}_B |\phi_n \rangle
\end{eqnarray}
i.e., an eigenstate with a nonzero eigenvalue has equal support on both sublattices. We now check this for S5.
End of explanation
(knxA, qH_ks) = SSH_lattice.bulk_Hamiltonians()
qH_ks
Explanation: We discuss the implications of $E_n = 0$ for an eigenvector later, in the context of edge states.
Unsurprisingly, diagonalizing the Hamiltonian gives the same spectrum of eigenvalues
as the one obtained from the plot_dispersion() function. As we shall soon illustrate, translational symmetry is a very useful computational shortcut. Here, it produces the eigenvalues and eigenvectors of the ($6\times6$, in our example) Hamiltonian from diagonalizing a $2\times2$ matrix. The reduction in size by a factor of 3 comes from the translational symmetry of the 3-cell periodic lattice.
Using Translational Symmetry:
Any periodic lattice Hamiltonian can be diagonalized more easily by exploiting translational symmetry; this feature is not specific to the SSH model alone.
\begin{eqnarray}
|\psi_n(k) \rangle = |k \rangle \otimes | u_{n}(k) \rangle \nonumber \
| u_{n}(k) \rangle = a_n(k)|a\rangle + b_n(k)|b\rangle \nonumber \
\end{eqnarray}
The vectors $| u_{n}(k) \rangle \in H_{internal}$ are the eigenstates of the bulk momentum space Hamiltonian $H(k)$, defined as
\begin{eqnarray}
\langle k | H_{bulk} | k \rangle = \sum\limits_{\alpha,\beta \in \{A,B\}} \langle k, \alpha | H_{bulk} | k, \beta \rangle \, | \alpha \rangle \langle \beta | \nonumber \\
H(k)|u_{n}(k) \rangle = E(k)| u_{n}(k) \rangle
\end{eqnarray}
In a lattice with N unit cells, $|\psi_n(k) \rangle$ is required to be invariant under a translation by N cells, so the valid (good) quantum numbers for k are $k_m = \frac{2\pi m}{N}$ with $m = 0,1,\dots,N-1$.
The dispersion $E(k)$ can be obtained with the get_dispersion() method. A list of $H(k)$ at the valid quantum numbers can be produced with the bulk_Hamiltonians() method.
End of explanation
(knxA, vec_kns) = SSH_lattice.cell_periodic_parts()
vec_kns
Explanation: The array of $|u_n(k) \rangle$ (in terms of its expansion coefficients in the $\{|a\rangle, |b\rangle\}$ basis) at the good quantum numbers k can be produced with the cell_periodic_parts() method.
End of explanation
knxA
Explanation: In both cases, knxA is simply an array containing the valid values of k, in units of $2\pi/a$, where a is the length of the unit cell; a is always taken to be 1 in all the methods.
End of explanation
eigen_states = SSH_lattice.bloch_wave_functions()
eigen_states
Explanation: bloch_wave_functions() yields an ordered array of the eigenvalues and eigenvectors (which are Bloch wave functions) of the Hamiltonian of the lattice.
End of explanation
t_intra = -0.5
t_inter = -0.35
H_cell = Qobj( np.array( [[ 0, t_intra ],[t_intra,0]] ) )
T_inter_cell = Qobj( np.array( [[ 0, 0 ],[t_inter,0]] ) )
SSH_lattice_TrI = Lattice1d(num_cell=100, boundary = "periodic", cell_num_site = 2, cell_site_dof = [1], Hamiltonian_of_cell = H_cell, inter_hop = T_inter_cell )
SSH_lattice_TrI.plot_dispersion()
SSH_H_t = SSH_lattice_TrI.Hamiltonian()
D = SSH_H_t.eigenenergies()
plt.plot(D,'ro')
plt.xlabel('index of eigen values')
plt.ylabel('eigen values')
plt.show()
plt.close()
Explanation: Knowing the cell-periodic part $u_n(k)$ of a Bloch wavefunction suffices to reconstruct the full wavefunction. Translational symmetry thus enables the calculation of the eigenstates of a $6\times6$ matrix by diagonalizing a $2\times2$ matrix.
Topology: Winding Number:
Due to the chiral symmetry, the bulk momentum space Hamiltonian,$H(k)$ can be written in terms of $\sigma_x$ and $\sigma_y$ components alone.
\begin{eqnarray}
H(k)= h_x(k)\sigma_x + h_y(k)\sigma_y
\end{eqnarray}
For this specific model, where $\bf{h}(k)$ moves about in the 2d ($h_x$-$h_y$) plane, the winding number is a topological invariant characterizing the topology of the model. It counts the number of times $\bf{h}(k)$ winds around the origin in the positive sense as k is varied from 0 to $2\pi$.
The method winding_number() evaluates the following integral to determine it as well as plots the trajectory of $\bf{h}(k)$ in the $h_x-h_y$ plane.
\begin{eqnarray}
\nu = \frac{1}{2\pi i}\int\limits_{-\pi}^{\pi} dk \frac{d}{dk}Log(h(k)) \ \ \ \text{,where} \ \ \ \ \ \ \ \ h(k) = h_x(k)- ih_y(k)
\end{eqnarray}
The trivial Insulator:
The lattice becomes a gapped system when $t_{inter}$ does not equal $t_{intra}$. For the case $|t_{inter}| < |t_{intra}|$, the lattice is a topologically trivial insulator.
End of explanation
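For intuition, the winding number integral above can also be evaluated with a rough numerical sketch (an illustration only, not the library's implementation), under the assumption that the SSH bulk Hamiltonian gives $h(k) = t_{intra} + t_{inter}\,e^{-ik}$:
# Rough numerical sketch of nu = (1/2*pi*i) * int dk d/dk log h(k),
# assuming h(k) = t_intra + t_inter*exp(-1j*k); illustrative only.
ks = np.linspace(-np.pi, np.pi, 2001)
h_k = t_intra + t_inter * np.exp(-1j * ks)
nu = np.sum(np.diff(np.unwrap(np.angle(h_k)))) / (2 * np.pi)
print(round(nu))  # 0 for the trivial parameters defined above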
SSH_lattice_TrI.winding_number()
Explanation: Unsurprisingly, we see a gap in the dispersion relation and also in the spectrum of
eigenvalues of the Hamiltonian.
Moreover, the calculation of the winding number yields 0, as expected for a topologically trivial phase.
End of explanation
t_intra = -0.5
t_inter = -0.65
H_cell = Qobj( np.array( [[ 0, t_intra ],[t_intra,0]] ) )
T_inter_cell = Qobj( np.array( [[ 0, 0 ],[t_inter,0]] ) )
pSSH_lattice_nTrI = Lattice1d(num_cell=100, boundary = "periodic", cell_num_site = 2, cell_site_dof = [1], Hamiltonian_of_cell = H_cell, inter_hop = T_inter_cell )
pSSH_lattice_nTrI.plot_dispersion()
pSSH_H_nt = pSSH_lattice_nTrI.Hamiltonian()
nD = pSSH_H_nt.eigenenergies()
plt.plot(nD,'ro')
plt.ylabel('eigen values')
plt.show()
plt.close()
Explanation: The Topologically Nontrivial Insulator
The other limit ($|t_{inter}| > |t_{intra}|$) is the interesting one, although the same
calculations on a lattice with periodic boundary conditions reveal nothing out of the ordinary.
End of explanation
t_intra = -0.5
t_inter = -0.65
cell_H = Qobj( np.array( [[ 0, t_intra ],[t_intra,0]] ) )
inter_cell_T = Qobj( np.array( [[ 0, 0 ],[t_inter,0]] ) )
apSSH_lattice_nTrI = Lattice1d(num_cell=100, boundary = "aperiodic", cell_num_site = 2, cell_site_dof = [1], cell_Hamiltonian = cell_H, inter_hop = inter_cell_T )
apSSH_lattice_nTrI.winding_number()
apSSH_H_nt = apSSH_lattice_nTrI.Hamiltonian()
[n_D,Vx] = apSSH_H_nt.eigenstates()
plt.plot(n_D,'ro')
plt.ylabel('eigen values')
plt.show()
plt.close()
Explanation: Topologically Nontrivial Insulator: Hardwall Boundary Condition
To reveal the topological nontriviality, we have to put the insulator next to a
boundary, i.e. use a hardwall (aperiodic) boundary condition. The spectrum of eigenvalues of
the Hamiltonian then shows conspicuous mid-gap edge states which, plotted as eigenvectors,
are strongly concentrated towards the edges. The dispersion, however, looks completely the
same.
End of explanation
xA = [i for i in range(200)]
Es0 = np.abs(Vx[99])
Es1 = np.abs(Vx[100])
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
ax1.plot(xA, Es0, label="State0")
ax2.plot(xA, Es1, label="State1")
ax1.legend()
ax2.legend()
plt.show()
fig.suptitle('Mid-gap Edge states')
plt.close()
Explanation: We observe the much celebrated mid-gap edge states. The nontriviality of the topological insulator is revealed by placing it next to a boundary; the edge states are a consequence of the bulk-boundary correspondence. Plotting the absolute values of the eigenfunctions reveals that they are localized at the edges. We plot the absolute values of the eigenfunctions corresponding to the two eigenvalues found at mid-gap.
End of explanation
n_D[96:104]
Explanation: It is quite obvious that the edge states are localized towards the edges.
The eigenstates are only unique up to a phase factor; the eigensolver used in QuTiP returns one particular choice of that phase.
Let us check out the two edge state eigenvalues as well as the three preceding and three succeeding ones.
End of explanation
chiral_op_nTrI = apSSH_lattice_nTrI.distribute_operator(sigmaz())
dimH_nTrI = chiral_op_nTrI.dims
identity_H_nTrI = Qobj(np.identity(200), dims=dimH_nTrI)
identity_H_nTrI
PA_200 = 0.5*( identity_H_nTrI + chiral_op_nTrI)
PB_200 = 0.5*( identity_H_nTrI - chiral_op_nTrI)
xA = [i for i in range(200)]
Es0 = np.abs(PA_200*Vx[99])
Es1 = np.abs(PB_200*Vx[99])
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
ax1.plot(xA, Es0, label="State0")
ax2.plot(xA, Es1, label="State1")
ax1.legend()
ax2.legend()
plt.show()
fig.suptitle('Mid-gap Edge states')
plt.close()
Explanation: It is obvious that n_D[99] and n_D[100] are 0 within numerical precision. There is an interesting property of the eigenstates of a chiral Hamiltonian with eigenvalue 0.
\begin{eqnarray}
H|\psi_n \rangle = 0 \implies H(\hat{P}_{A/B}| \psi_n\rangle) = \frac{1}{2}H(\hat{\bf{1}} \pm \Sigma_z )| \psi_n \rangle = \frac{1}{2}(\hat{\bf{1}} \mp \Sigma_z )H| \psi_n \rangle = 0
\end{eqnarray}
Therefore, $\hat{P}_{A/B} |\psi_n\rangle$ is an eigenstate with eigenvalue 0 as well. As a result, the edge-states can be chosen to have support in only one sublattice.
End of explanation
is_null = (np.abs(apSSH_H_nt*PA_200*Vx[99]) < 1E-10).all()
print(is_null)
qutip.about()
qutip.cite()
Explanation: $\hat{P}_{A} |\psi_n\rangle$ and $\hat{P}_{B} |\psi_n\rangle$ are orthogonal eigenstates with eigenvalue 0, and we can choose the two edge states to be localized at two edges of the lattice. Just to be sure, we also check if $H\hat{P}_{A/B} |\psi_n\rangle$ is indeed 0.
End of explanation |
13,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sets implemented as AVL Trees
This notebook implements <em style="color
Step1: Given an ordered binary tree $t$, the expression $t.\texttt{isEmpty}()$ checks whether $t$ is the empty tree.
Step2: Given an ordered binary tree $t$ and a key $k$, the expression $t.\texttt{member}(k)$ returns True if the key $k$ is stored in the tree $t$.
The method member is defined inductively as follows
Step3: The method $\texttt{insert}()$ is specified via recursive equations.
- $\texttt{Nil}.\texttt{insert}(k) = \texttt{Node}(k, \texttt{Nil}, \texttt{Nil})$,
- $\texttt{Node}(k, l, r).\texttt{insert}(k) = \texttt{Node}(k, l, r)$,
- $k_1 < k_2 \rightarrow
\texttt{Node}(k_2, l, r).\texttt{insert}(k_1) =
\texttt{Node}\bigl(k_2, l.\texttt{insert}(k_1), r\bigr).\texttt{restore}()$,
- $k_1 > k_2 \rightarrow
\texttt{Node}(k_2, l, r).\texttt{insert}\bigl(k_1\bigr) =
\texttt{Node}\bigl(k_2, l, r.\texttt{insert}(k_1)\bigr).\texttt{restore}()$.
The function $\texttt{restore}$ is an auxiliary function that is defined below. This function restores the balancing condition if it is violated after an insertion.
Step4: The method $\texttt{self}.\texttt{delete}(k)$ removes the key $k$ from the tree $\texttt{self}$. It is defined as follows
Step5: The method $\texttt{self}.\texttt{delMin}()$ removes the smallest key from the given tree $\texttt{self}$
and returns a pair of the form
$$ (\texttt{self}, k_m) $$
where $\texttt{self}$ is the tree that remains after removing the smallest key, while $k_m$ is the smallest key that has been found.
The function is defined as follows
Step6: Given two ordered binary trees $s$ and $t$, the expression $s.\texttt{update}(t)$ overwrites the attributes of $s$ with the corresponding attributes of $t$.
Step7: The function $\texttt{restore}(\texttt{self})$ restores the balancing condition of the given binary tree
at the root node and recompute the variable $\texttt{mHeight}$.
The method $\texttt{restore}$ is specified via conditional equations.
$\texttt{Nil}.\texttt{restore}() = \texttt{Nil}$,
because the empty tree already is an AVL tree.
- $|l.\texttt{height}() - r.\texttt{height}()| \leq 1 \rightarrow
\texttt{Node}(k,l,r).\texttt{restore}() = \texttt{Node}(k,l,r)$.
If the balancing condition is satisfied, then nothing needs to be done.
- $\begin{array}[t]{cl}
& l_1.\texttt{height}() = r_1.\texttt{height}() + 2 \
\wedge & l_1 = \texttt{Node}(k_2,l_2,r_2) \
\wedge & l_2.\texttt{height}() \geq r_2.\texttt{height}() \[0.2cm]
\rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
\texttt{Node}\bigl(k_2,l_2,\texttt{Node}(k_1,r_2,r_1)\bigr)
\end{array}
$
- $\begin{array}[t]{cl}
& l_1.\texttt{height}() = r_1.\texttt{height}() + 2 \
\wedge & l_1 = \texttt{Node}(k_2,l_2,r_2) \
\wedge & l_2.\texttt{height}() < r_2.\texttt{height}() \
\wedge & r_2 = \texttt{Node}(k_3,l_3,r_3) \
\rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
\texttt{Node}\bigl(k_3,\texttt{Node}(k_2,l_2,l_3),\texttt{Node}(k_1,r_3,r_1) \bigr)
\end{array}
$
- $\begin{array}[t]{cl}
& r_1.\texttt{height}() = l_1.\texttt{height}() + 2 \
\wedge & r_1 = \texttt{Node}(k_2,l_2,r_2) \
\wedge & r_2.\texttt{height}() \geq l_2.\texttt{height}() \[0.2cm]
\rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
\texttt{Node}\bigl(k_2,\texttt{Node}(k_1,l_1,l_2),r_2\bigr)
\end{array}
$
- $\begin{array}[t]{cl}
& r_1.\texttt{height}() = l_1.\texttt{height}() + 2 \
\wedge & r_1 = \texttt{Node}(k_2,l_2,r_2) \
\wedge & r_2.\texttt{height}() < l_2.\texttt{height}() \
\wedge & l_2 = \texttt{Node}(k_3,l_3,r_3) \
\rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
\texttt{Node}\bigl(k_3,\texttt{Node}(k_1,l_1,l_3),\texttt{Node}(k_2,r_3,r_2) \bigr)
\end{array}
$
Step8: The function $\texttt{self}.\texttt{_setValues}(k, l, r)$ overwrites the member variables of the node $\texttt{self}$ with the given values.
Step9: The function $\texttt{createNode}(k, l, r)$ creates an AVL-tree of that has the key $k$ stored at its root,
left subtree $l$ and right subtree $r$.
Step10: The method $t.\texttt{pop}()$ take an AVL tree $t$ and removes and returns the smallest key that is present in $t$. It is specified as follows
Step11: Display Code
Step12: Given an ordered binary tree, this function renders the tree graphically using graphviz.
Step13: This method assigns a unique identifier with each node. The dictionary NodeDict maps these identifiers to the nodes where they occur.
Step14: Testing
The function $\texttt{demo}()$ creates a small ordered binary tree.
Step15: Let's generate an ordered binary tree with random keys.
Step16: This tree looks more or less balanced. Lets us try to create a tree by inserting sorted numbers because that resulted in linear complexity for ordered binary trees.
Step17: Next, we compute the set of prime numbers $\leq 100$. Mathematically, this set is given as follows | Python Code:
class Set:
def __init__(self):
self.mKey = None
self.mLeft = None
self.mRight = None
self.mHeight = 0
Explanation: Sets implemented as AVL Trees
This notebook implements <em style="color:blue;">sets</em> as <a href="https://en.wikipedia.org/wiki/AVL_tree">AVL trees</a>. The set $\mathcal{A}$ of <em style="color:blue;">AVL trees</em> is defined inductively:
$\texttt{Nil} \in \mathcal{A}$.
$\texttt{Node}(k,l,r) \in \mathcal{A}\quad$ iff
$\texttt{Node}(k,l,r) \in \mathcal{B}_<$,
$l, r \in \mathcal{A}$, and
$|l.\texttt{height}() - r.\texttt{height}()| \leq 1$.
According to this definition, an AVL tree is an <em style="color:blue;">ordered binary tree</em>
such that for every node $\texttt{Node}(k,l,r)$ in this tree the height of the left subtree $l$ and the right
subtree $r$ differ at most by one.
The class Set represents the nodes of an AVL tree. This class has the following member variables:
mKey is the key stored at the root of the tree,
mLeft is the left subtree,
mRight is the right subtree, and
mHeight is the height.
The constructor __init__ creates the empty tree.
End of explanation
def isEmpty(self):
return self.mKey == None
Set.isEmpty = isEmpty
Explanation: Given an ordered binary tree $t$, the expression $t.\texttt{isEmpty}()$ checks whether $t$ is the empty tree.
End of explanation
def member(self, key):
if self.isEmpty():
return
elif self.mKey == key:
return True
elif key < self.mKey:
return self.mLeft.member(key)
else:
return self.mRight.member(key)
Set.member = member
Explanation: Given an ordered binary tree $t$ and a key $k$, the expression $t.\texttt{member}(k)$ returns True if the key $k$ is stored in the tree $t$.
The method member is defined inductively as follows:
- $\texttt{Nil}.\texttt{member}(k) = \Omega$,
because the empty tree is interpreted as the empty set, so there is nothing to find.
- $\texttt{Node}(k, l, r).\texttt{member}(k) = \texttt{True}$,
because the key $k$ is stored at the root of the node $\texttt{Node}(k,l,r)$.
- $k_1 < k_2 \rightarrow \texttt{Node}(k_2, l, r).\texttt{member}(k_1) = l.\texttt{member}(k_1)$,
because if $k_1$ is less than $k_2$, then $k_1$, if present at all, has to be stored in the left subtree $l$.
- $k_1 > k_2 \rightarrow \texttt{Node}(k_2, l, r).\texttt{member}(k_1) = r.\texttt{member}(k_1)$,
because if $k_1$ is greater than $k_2$, then $k_1$, if present at all, has to be stored in the right subtree $r$.
End of explanation
def insert(self, key):
if self.isEmpty():
self.mKey = key
self.mLeft = Set()
self.mRight = Set()
self.mHeight = 1
elif self.mKey == key:
pass
elif key < self.mKey:
self.mLeft.insert(key)
self._restore()
else:
self.mRight.insert(key)
self._restore()
Set.insert = insert
Explanation: The method $\texttt{insert}()$ is specified via recursive equations.
- $\texttt{Nil}.\texttt{insert}(k) = \texttt{Node}(k, \texttt{Nil}, \texttt{Nil})$,
- $\texttt{Node}(k, l, r).\texttt{insert}(k) = \texttt{Node}(k, l, r)$,
- $k_1 < k_2 \rightarrow
\texttt{Node}(k_2, l, r).\texttt{insert}(k_1) =
\texttt{Node}\bigl(k_2, l.\texttt{insert}(k_1), r\bigr).\texttt{restore}()$,
- $k_1 > k_2 \rightarrow
\texttt{Node}(k_2, l, r).\texttt{insert}\bigl(k_1\bigr) =
\texttt{Node}\bigl(k_2, l, r.\texttt{insert}(k_1)\bigr).\texttt{restore}()$.
The function $\texttt{restore}$ is an auxiliary function that is defined below. This function restores the balancing condition if it is violated after an insertion.
End of explanation
def delete(self, key):
if self.isEmpty():
return
if key == self.mKey:
if self.mLeft.isEmpty():
self._update(self.mRight)
elif self.mRight.isEmpty():
self._update(self.mLeft)
else:
self.mRight, self.mKey = self.mRight._delMin()
elif key < self.mKey:
self.mLeft.delete(key)
else:
self.mRight.delete(key)
Set.delete = delete
Explanation: The method $\texttt{self}.\texttt{delete}(k)$ removes the key $k$ from the tree $\texttt{self}$. It is defined as follows:
$\texttt{Nil}.\texttt{delete}(k) = \texttt{Nil}$,
$\texttt{Node}(k,\texttt{Nil},r).\texttt{delete}(k) = r$,
$\texttt{Node}(k,l,\texttt{Nil}).\texttt{delete}(k) = l$,
$l \not= \texttt{Nil} \,\wedge\, r \not= \texttt{Nil} \,\wedge\,
\langle r',k_{min} \rangle := r.\texttt{delMin}() \;\rightarrow\;
\texttt{Node}(k,l,r).\texttt{delete}(k) = \texttt{Node}(k_{min},l,r')$
$k_1 < k_2 \rightarrow \texttt{Node}(k_2,l,r).\texttt{delete}(k_1) =
\texttt{Node}\bigl(k_2,l.\texttt{delete}(k_1),r\bigr)$,
$k_1 > k_2 \rightarrow \texttt{Node}(k_2,l,r).\texttt{delete}(k_1) =
\texttt{Node}\bigl(k_2,l,r.\texttt{delete}(k_1)\bigr)$.
End of explanation
def _delMin(self):
if self.mLeft.isEmpty():
return self.mRight, self.mKey
else:
ls, km = self.mLeft._delMin()
self.mLeft = ls
self._restore()
return self, km
Set._delMin = _delMin
Explanation: The method $\texttt{self}.\texttt{delMin}()$ removes the smallest key from the given tree $\texttt{self}$
and returns a pair of the form
$$ (\texttt{self}, k_m) $$
where $\texttt{self}$ is the tree that remains after removing the smallest key, while $k_m$ is the smallest key that has been found.
The function is defined as follows:
$\texttt{Node}(k, \texttt{Nil}, r).\texttt{delMin}() = \langle r, k \rangle$,
$l\not= \texttt{Nil} \wedge \langle l',k_{min}\rangle := l.\texttt{delMin}()
\;\rightarrow\;
\texttt{Node}(k, l, r).\texttt{delMin}() =
\langle \texttt{Node}(k, l', r).\texttt{restore}(), k_{min} \rangle
$
End of explanation
def _update(self, t):
self.mKey = t.mKey
self.mLeft = t.mLeft
self.mRight = t.mRight
self.mHeight = t.mHeight
Set._update = _update
Explanation: Given two ordered binary trees $s$ and $t$, the expression $s.\texttt{update}(t)$ overwrites the attributes of $s$ with the corresponding attributes of $t$.
End of explanation
def _restore(self):
if abs(self.mLeft.mHeight - self.mRight.mHeight) <= 1:
self._restoreHeight()
return
if self.mLeft.mHeight > self.mRight.mHeight:
k1, l1, r1 = self.mKey, self.mLeft, self.mRight
k2, l2, r2 = l1.mKey, l1.mLeft, l1.mRight
if l2.mHeight >= r2.mHeight:
self._setValues(k2, l2, createNode(k1, r2, r1))
else:
k3, l3, r3 = r2.mKey, r2.mLeft, r2.mRight
self._setValues(k3, createNode(k2, l2, l3),
createNode(k1, r3, r1))
elif self.mRight.mHeight > self.mLeft.mHeight:
k1, l1, r1 = self.mKey, self.mLeft, self.mRight
k2, l2, r2 = r1.mKey, r1.mLeft, r1.mRight
if r2.mHeight >= l2.mHeight:
self._setValues(k2, createNode(k1, l1, l2), r2)
else:
k3, l3, r3 = l2.mKey, l2.mLeft, l2.mRight
self._setValues(k3, createNode(k1, l1, l3),
createNode(k2, r3, r2))
self._restoreHeight()
Set._restore = _restore
Explanation: The function $\texttt{restore}(\texttt{self})$ restores the balancing condition of the given binary tree
at the root node and recompute the variable $\texttt{mHeight}$.
The method $\texttt{restore}$ is specified via conditional equations.
$\texttt{Nil}.\texttt{restore}() = \texttt{Nil}$,
because the empty tree already is an AVL tree.
- $|l.\texttt{height}() - r.\texttt{height}()| \leq 1 \rightarrow
\texttt{Node}(k,l,r).\texttt{restore}() = \texttt{Node}(k,l,r)$.
If the balancing condition is satisfied, then nothing needs to be done.
- $\begin{array}[t]{cl}
& l_1.\texttt{height}() = r_1.\texttt{height}() + 2 \
\wedge & l_1 = \texttt{Node}(k_2,l_2,r_2) \
\wedge & l_2.\texttt{height}() \geq r_2.\texttt{height}() \[0.2cm]
\rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
\texttt{Node}\bigl(k_2,l_2,\texttt{Node}(k_1,r_2,r_1)\bigr)
\end{array}
$
- $\begin{array}[t]{cl}
& l_1.\texttt{height}() = r_1.\texttt{height}() + 2 \
\wedge & l_1 = \texttt{Node}(k_2,l_2,r_2) \
\wedge & l_2.\texttt{height}() < r_2.\texttt{height}() \
\wedge & r_2 = \texttt{Node}(k_3,l_3,r_3) \
\rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
\texttt{Node}\bigl(k_3,\texttt{Node}(k_2,l_2,l_3),\texttt{Node}(k_1,r_3,r_1) \bigr)
\end{array}
$
- $\begin{array}[t]{cl}
& r_1.\texttt{height}() = l_1.\texttt{height}() + 2 \
\wedge & r_1 = \texttt{Node}(k_2,l_2,r_2) \
\wedge & r_2.\texttt{height}() \geq l_2.\texttt{height}() \[0.2cm]
\rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
\texttt{Node}\bigl(k_2,\texttt{Node}(k_1,l_1,l_2),r_2\bigr)
\end{array}
$
- $\begin{array}[t]{cl}
& r_1.\texttt{height}() = l_1.\texttt{height}() + 2 \
\wedge & r_1 = \texttt{Node}(k_2,l_2,r_2) \
\wedge & r_2.\texttt{height}() < l_2.\texttt{height}() \
\wedge & l_2 = \texttt{Node}(k_3,l_3,r_3) \
\rightarrow & \texttt{Node}(k_1,l_1,r_1).\texttt{restore}() =
\texttt{Node}\bigl(k_3,\texttt{Node}(k_1,l_1,l_3),\texttt{Node}(k_2,r_3,r_2) \bigr)
\end{array}
$
End of explanation
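As a small illustrative aside (not part of the original notebook), the balancing condition discussed above can also be checked programmatically; this sketch assumes only the attributes and the isEmpty() method defined above. Calling t._isAVL()[0] on any tree built below should return True if restore works as intended.
def _isAVL(self):
    # Sketch: returns (is_avl, height) and checks the balancing condition recursively.
    if self.isEmpty():
        return True, 0
    left_ok,  left_height  = self.mLeft._isAVL()
    right_ok, right_height = self.mRight._isAVL()
    balanced = abs(left_height - right_height) <= 1
    return left_ok and right_ok and balanced, max(left_height, right_height) + 1

Set._isAVL = _isAVL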
def _setValues(self, k, l, r):
self.mKey = k
self.mLeft = l
self.mRight = r
Set._setValues = _setValues
def _restoreHeight(self):
self.mHeight = max(self.mLeft.mHeight, self.mRight.mHeight) + 1
Set._restoreHeight = _restoreHeight
Explanation: The function $\texttt{self}.\texttt{_setValues}(k, l, r)$ overwrites the member variables of the node $\texttt{self}$ with the given values.
End of explanation
def createNode(key, left, right):
node = Set()
node.mKey = key
node.mLeft = left
node.mRight = right
node.mHeight = max(left.mHeight, right.mHeight) + 1
return node
Explanation: The function $\texttt{createNode}(k, l, r)$ creates an AVL tree that has the key $k$ stored at its root,
left subtree $l$, and right subtree $r$.
End of explanation
def pop(self):
if self.mKey == None:
raise KeyError
if self.mLeft.mKey == None:
key = self.mKey
self._update(self.mRight)
return key
return self.mLeft.pop()
Set.pop = pop
Explanation: The method $t.\texttt{pop}()$ take an AVL tree $t$ and removes and returns the smallest key that is present in $t$. It is specified as follows:
- $\texttt{Nil}.\texttt{pop}() = \Omega$
- $\texttt{Node}(k,\texttt{Nil}, r).\texttt{pop}() = \langle k, r\rangle$
- $l \not=\texttt{Nil} \wedge \langle k',l'\rangle := l.\texttt{pop}() \rightarrow
\texttt{Node}(k, l, r).\texttt{pop}() = \langle k', \texttt{Node}(k, l', r)\rangle$
End of explanation
import graphviz as gv
Explanation: Display Code
End of explanation
def toDot(self):
Set.sNodeCount = 0 # this is a static variable of the class Set
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
NodeDict = {}
self._assignIDs(NodeDict)
for n, t in NodeDict.items():
if t.mKey != None:
dot.node(str(n), label=str(t.mKey))
else:
dot.node(str(n), label='', shape='point')
for n, t in NodeDict.items():
if not t.mLeft == None:
dot.edge(str(n), str(t.mLeft.mID))
if not t.mRight == None:
dot.edge(str(n), str(t.mRight.mID))
return dot
Set.toDot = toDot
Explanation: Given an ordered binary tree, this function renders the tree graphically using graphviz.
End of explanation
def _assignIDs(self, NodeDict):
Set.sNodeCount += 1
self.mID = Set.sNodeCount
NodeDict[self.mID] = self
if self.isEmpty():
return
self.mLeft ._assignIDs(NodeDict)
self.mRight._assignIDs(NodeDict)
Set._assignIDs = _assignIDs
Explanation: This method assigns a unique identifier with each node. The dictionary NodeDict maps these identifiers to the nodes where they occur.
End of explanation
def demo():
m = Set()
m.insert("anton")
m.insert("hugo")
m.insert("gustav")
m.insert("jens")
m.insert("hubert")
m.insert("andre")
m.insert("philipp")
m.insert("rene")
return m
t = demo()
t.toDot()
while not t.isEmpty():
print(t.pop())
display(t.toDot())
Explanation: Testing
The function $\texttt{demo}()$ creates a small ordered binary tree.
End of explanation
import random as rnd
t = Set()
for k in range(30):
k = rnd.randrange(100)
t.insert(k)
display(t.toDot())
while not t.isEmpty():
print(t.pop(), end=' ')
display(t.toDot())
Explanation: Let's generate an ordered binary tree with random keys.
End of explanation
t = Set()
for k in range(30):
t.insert(k)
display(t.toDot())
while not t.isEmpty():
print(t.pop(), end=' ')
display(t.toDot())
Explanation: This tree looks more or less balanced. Let us try to create a tree by inserting sorted numbers, because that resulted in linear complexity for plain ordered binary trees.
End of explanation
S = Set()
for k in range(2, 101):
S.insert(k)
display(S.toDot())
for i in range(2, 101):
for j in range(2, 101):
S.delete(i * j)
display(S.toDot())
while not S.isEmpty():
print(S.pop(), end=' ')
display(S.toDot())
Explanation: Next, we compute the set of prime numbers $\leq 100$. Mathematically, this set is given as follows:
$$ \bigl\{2, \cdots, 100 \bigr\} - \bigl\{ i \cdot j \bigm| i, j \in \{2, \cdots, 100 \}\bigr\}$$
End of explanation |
13,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import data
Step1: Set parameters
Step2: Preprocessing
Step6: 1. Distribution regression
Kernel mean embedding
Instead of fitting a model to the instances, the idea of distribution regression is to find a regression on the underlying probability distributions the instances come from. It is based on the assumption that the data is ${(x_i, y_i)}_{i=1}^{n}$ with
Step7: Kernel selection
A kernel is characterised by a parameter we will call $\theta$ and the ridge regression depends on the L2 regularisation $\lambda$. Through cross-validation, we selected the kernels giving the most stable validation loss. They are given below with their associated parameters
Step8: Second stage prediction with SVR
In the second stage, a Support Vector Regression is fed with the different predictions from the kernel ridge regression to predict the target value $y$.
The SVR uses these predictions to compute the optimal weights assigned to each kernel regression and we might hope to find a better optimum to approximate the true regression function $f$.
Step9: Tuning SVR's parameters
Step10: Prediction | Python Code:
def load_data():
full_data = pd.read_csv("Data/X.csv")
train_y = pd.read_csv("Data/y_train.csv")
# Rename columns to something more interpretable
columns = (["reflectance_" + str(i) for i in range(7)]
+ ["solar_" + str(i) for i in range(5)] + ["id"])
full_data.columns = columns
# Move ID column to the beginning
id_column = full_data["id"]
full_data.drop(labels=["id"], axis=1, inplace = True)
full_data.insert(0, "id", id_column)
# Add the target value column to the training part
# in full_data
split = 98000
y_id_dict = train_y.set_index("id")["y"].to_dict()
full_data.loc[:(split-1), "y"] = full_data.loc[:(split-1), "id"].map(y_id_dict)
# Split into training and testing data
train, test = full_data[:split], full_data[split:]
return (train, test)
train, test = load_data()
Explanation: Import data
End of explanation
random_seed = 8
# Set folds for out-of-fold prediction
n_folds = 5
Explanation: Set parameters
End of explanation
cols_excl = ["id", "y"]
cols_orig = [c for c in train.columns if c not in cols_excl]
# Standardise data can make training faster and reduce
# the chances of getting stuck in local optima
train[cols_orig] = scale(train[cols_orig])
test[cols_orig] = scale(test[cols_orig])
Explanation: Preprocessing
End of explanation
class Kernel(object):
Kernel class from Zoltan Szabo
giving the kernel mean embedding.
def __init__(self, par=None):
Initialization.
Parameters
----------
par : dictionary, optional
Name of the kernel and its parameters (default is
{"name": "RBF", "sigma": 1}). The name of the kernel comes
from "RBF", "exponential", "Cauchy", "student", "Matern3p2",
"Matern5p2", "polynomial", "ratquadr" (rational quadratic),
"invmquadr" (inverse multiquadr).
if par is None:
par = {"name": "RBF", "sigma": 1}
name = par["name"]
self.name = name
# other attributes:
if name == "RBF" or name == "exponential" or name == "Cauchy":
self.sigma = par["sigma"]
elif name == "student":
self.d = par["d"]
elif name == "Matern3p2" or name == "Matern5p2":
self.l = par["l"]
elif name == "polynomial":
self.c = par["c"]
self.exponent = par["exponent"]
elif name == "ratquadr" or name == "invmquadr":
self.c = par["c"]
else:
raise Exception("kernel=?")
def gram_matrix(self, y1, y2):
Compute the Gram matrix = [k(y1[i,:], y2[j,:])]; i, j: running.
Parameters
----------
y1 : (number of samples1, dimension)-ndarray
One row of y1 corresponds to one sample.
y2 : (number of samples2, dimension)-ndarray
One row of y2 corresponds to one sample.
Returns
-------
g : ndarray.
Gram matrix of y1 and y2.
if self.name == "RBF":
sigma = self.sigma
g = cdist(y1, y2)
g = exp(-g ** 2 / (2 * sigma ** 2))
elif self.name == "exponential":
sigma = self.sigma
g = cdist(y1, y2)
g = exp(-g / (2 * sigma ** 2))
elif self.name == "Cauchy":
sigma = self.sigma
g = cdist(y1, y2)
g = 1 / (1 + g ** 2 / sigma ** 2)
elif self.name == "student":
d = self.d
g = cdist(y1, y2)
g = 1 / (1 + g ** d)
elif self.name == "Matern3p2":
l = self.l
g = cdist(y1, y2)
g = (1 + sqrt(3) * g / l) * exp(-sqrt(3) * g / l)
elif self.name == "Matern5p2":
l = self.l
g = cdist(y1, y2)
g = (1 + sqrt(5) * g / l + 5 * g ** 2 / (3 * l ** 2)) * \
exp(-sqrt(5) * g / l)
elif self.name == "polynomial":
c = self.c
exponent = self.exponent
g = (dot(y1, y2.T) + c) ** exponent
elif self.name == "ratquadr":
c = self.c
g = cdist(y1, y2) ** 2
g = 1 - g / (g + c)
elif self.name == "invmquadr":
c = self.c
g = cdist(y1, y2)
g = 1 / sqrt(g ** 2 + c ** 2)
else:
raise Exception("kernel=?")
return g
# Compute the linear kernel product of
# the mean embedding of X1 and X2
# denoted as K(i, j) above
def mean_embedding(X1, X2, kernel):
k = Kernel(kernel)
gram_mat = k.gram_matrix(X1, X2)
# Number of instances in the bag
N = float(gram_mat.shape[0])
mu_X1_X2 = gram_mat.ravel().sum() / N**2
return (mu_X1_X2)
# Return a symmetrised matrix
def symmetrise(A):
return(A + A.T - np.diag(A.diagonal()))
# Compute the Gram matrix K given the kernel and
# the smoothing parameter theta
def compute_gram(df, kernel, theta):
nb_bag = df["id"].nunique()
K_matrix = np.zeros((nb_bag, nb_bag))
print("Computing {0} Gram matrix for theta={1}:".format(kernel, theta))
for i in range(nb_bag):
if (i%50 == 0):
print("Bag number: {0}". format(i))
for j in range(i+1):
# Compute mean embedding
X1 = df.loc[train["id"] == (i+1), cols_orig].values
X2 = df.loc[train["id"] == (j+1), cols_orig].values
K_matrix[i, j] = mean_embedding(X1, X2, {'name': kernel, 'sigma': theta})
return symmetrise(K_matrix)
#K_cauchy = compute_gram(train, "Cauchy", 2**4)
# Class for kernel ridge regression
class RidgeRegression(object):
def __init__(self, l2_reg):
self.l2_reg = l2_reg
def fit(self, G, y):
# Train size
n_train = G.shape[0]
ridge_mat = G + (self.l2_reg * n_train) * np.identity(n_train)
self.ridge_mat = ridge_mat
# Shape of y_train is (1, n_train)
self.y_train = y
def predict(self, G_test):
y_test_hat = self.y_train.dot(np.linalg.solve(self.ridge_mat, G_test))
return y_test_hat
Explanation: 1. Distribution regression
Kernel mean embedding
Instead of fitting a model to the instances, the idea of distribution regression is to find a regression on the underlying probability distributions the instances come from. It is based on the assumption that the data is $\{(x_i, y_i)\}_{i=1}^{n}$ with:
$n$ the number of bags in the dataset ;
$x_i$ the probability distribution of bag $i$ ;
$y_i$ is the aerosol optical depth of bag $i$.
However, $x_i$ is not observed: for each bag $i$, the $100$ instances $x_{i,l}$, $l=1,...,100$, are samples from the distribution $x_i$. Our dataset is thus $\{(\{x_{i,l}\}_{l=1}^{100}, y_i)\}_{i=1}^{n}$ and we want to find a mapping $\hat{f}$ that will best predict unseen bags.
The mapping $\hat{f}$ on $\{(\{x_{i,l}\}_{l=1}^{100}, y_i)\}_{i=1}^{n}$ will try to learn the relationship between the true distributions $\{x_i\}_{i=1}^{n}$ and the target values $\{y_i\}_{i=1}^{n}$. To achieve that, the information of the 100 instances in each bag has to be summarised while losing as little information as possible. The aggregated approach that simply computes the mean of the features for each bag is one such summary, yet plenty of information is lost that way.
A better way to represent each bag is via kernel mean embedding:
$$\mu_{\hat{x}_i} = \frac{1}{100}\sum_{l=1}^{100} k(\cdot, x_{i,l})$$
Each bag is represented as a linear combination of kernels, and with the right choice of kernel, the lost information can be very negligible.
Kernel Ridge Regression
We now want to find $\hat{f}$ that minimises the following regularised least square problem:
$$ \underset{f}{arg min} \sum_{i=1}^{n} (f(\mu_{\hat{x}_i}) - y_i)^2 + \lambda \Vert f \Vert^2$$
with $\lambda>0$ the L2 regularisation parameter.
In kernel ridge regression, $f$ is interpreted as a linear combination of feature space mappings $\phi$ of the data points $\mu_{\hat{x}_i}$:
$$ f = \sum_{i=1}^{n} \alpha_i \phi(\mu_{\hat{x}_i} ) $$
The equation thus becomes:
$$ \underset{\alpha}{arg min} (\Vert y -K\alpha \Vert^2 + \lambda \alpha^T K \alpha)$$
with :
* $K(i,j) = k'(\mu_{\hat{x}i} , \mu{\hat{x}_j})$ for $i,j=1..n$ ;
* $k'$ another kernel.
By differentiating with respect to $\alpha$ and setting it to zero:
$$ \alpha^{*} = (K + \lambda I_n)^{-1}y $$
For the sake of simplicity and because the results proved to be reasonably good, we set $k'$ as the linear kernel and as a result:
$$ K(i,j) = \frac{1}{100^2} \sum_{l,k=1}^{100} k(x_{i,l} , x_{j,k})$$
End of explanation
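The RidgeRegression class above implements this prediction without forming $\alpha$ explicitly. As an illustration (not part of the original notebook), the closed-form coefficients can be spelled out on a tiny synthetic Gram matrix; the numbers below are made up for the example.
import numpy as np

# Illustration only: a tiny made-up Gram matrix and target values.
K_toy = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])
y_toy = np.array([0.5, 0.1, 0.9])
l2_reg, n_train = 1e-3, K_toy.shape[0]

# alpha* = (K + lambda*n*I)^{-1} y, with the same lambda*n scaling as RidgeRegression.fit
alpha = np.linalg.solve(K_toy + l2_reg * n_train * np.identity(n_train), y_toy)

# In-sample prediction: f(mu_i) = sum_j alpha_j K(i, j)
print(K_toy.dot(alpha))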
# G_train and G_test are pandas dataframes
# krr is a kernel ridge regression
def oof_prediction(krr, G_train, y_train, G_test, n_folds, random_seed):
kf = KFold(n_splits=n_folds, shuffle=True, random_state=random_seed)
n_train = G_train.shape[0]
n_test = G_test.shape[1]
oof_train = np.zeros(n_train)
oof_test = np.zeros(n_test)
oof_test_folds = np.zeros((n_test, n_folds))
for i, (train_index, test_index) in enumerate(kf.split(G_train)):
G_tr = G_train.loc[train_index, train_index].values
y_tr = y_train[train_index].reshape((1, -1))
G_te = G_train.loc[train_index, test_index].values
krr.fit(G_tr, y_tr)
oof_train[test_index] = krr.predict(G_te)
G_test_partial = G_test.loc[train_index, :]
oof_test_folds[:, i] = krr.predict(G_test_partial.values)
oof_test = oof_test_folds.mean(axis=1)
return oof_train, oof_test
nb_bags_train = 980
# Create a vector with the unique values of y for each ID.
y_train = train.groupby("id")["y"].median().values
# Load Gram matrices
def load_gram(csv_file, nb_bags_train):
# Import data
G = pd.read_csv(csv_file, header=None)
idx_train = nb_bags_train - 1
idx_test = nb_bags_train
G_train = G.loc[:idx_train, :idx_train]
G_test = G.loc[:idx_train, idx_test:]
return (G_train, G_test)
# Define models and import Gram matrices
# Cauchy
l2_reg_cauchy = 2**(-23)
cauchy = RidgeRegression(l2_reg_cauchy)
G_train_cauchy, G_test_cauchy = load_gram("kernels_me/Cauchy_16.csv", nb_bags_train)
# Matern 5/2
l2_reg_matern_52 = 2**(-31)
matern_52 = RidgeRegression(l2_reg_matern_52)
G_train_matern_52, G_test_matern_52 = load_gram("kernels_me/Matern_52_64.csv", nb_bags_train)
# Rational quadratic
l2_reg_rquadr = 2**(-26)
rquadr = RidgeRegression(l2_reg_rquadr)
G_train_rquadr, G_test_rquadr = load_gram("kernels_me/rquadr_512.csv", nb_bags_train)
# Create OOF train and test predictions
# Cauchy
cauchy_oof_train, cauchy_oof_test = oof_prediction(cauchy, G_train_cauchy,
y_train, G_test_cauchy,
n_folds, random_seed)
# Matern 5/2
matern_52_oof_train, matern_52_oof_test = oof_prediction(matern_52, G_train_matern_52,
y_train, G_test_matern_52,
n_folds, random_seed)
# Rational quadratic
rquadr_oof_train, rquadr_oof_test = oof_prediction(rquadr, G_train_rquadr,
y_train, G_test_rquadr,
n_folds, random_seed)
print("Training is finished.")
Explanation: Kernel selection
A kernel is characterised by a parameter we will call $\theta$ and the ridge regression depends on the L2 regularisation $\lambda$. Through cross-validation, we selected the kernels giving the most stable validation loss. They are given below with their associated parameters:
Cauchy:
$$k_C(a,b) = \dfrac{1}{1 + \dfrac{\Vert a-b\Vert_2^2}{\theta^2}}, \quad\theta_C = 16, \quad\lambda_C = 2^{-23} $$
Matérn 5/2:
$$k_M(a,b) = \left(1 + \dfrac{\sqrt{5}\Vert a-b\Vert_2}{\theta} + \dfrac{5\Vert a-b\Vert_2^2}{3\theta^2} \right)e^{-\dfrac{\sqrt{5}\Vert a-b\Vert_2}{\theta}}, \quad\theta_M = 64, \quad\lambda_M = 2^{-31} $$
Rational quadratic:
$$k_r(a,b) = 1 - \dfrac{\Vert a-b\Vert_2^2}{\Vert a-b\Vert_2^2 + \theta}, \quad\theta_r = 512, \quad\lambda_r = 2^{-26}$$
2. Stacking
We will then map the features in the three spaces that describes the data in different ways. Each kernel ridge regression gives a prediction of the labels and combining them might give a better result for three reasons:
Statistical reason: we might not have enough data and even if each model $h_i$ performs well on the training set, the true model $f$ might still be not reached ;
Computational reason: each model $h_i$ only finds a local optima ;
Representational reason: the true model is out of the representation of functions we're considering.
Combining our model might take us a step closer to finding the true model $f$. The ensembling technique we used was out-of-fold stacking.
Out-of-fold prediction
In the first stage, out-of-fold prediction is applied to ensure that each first-layer regressor does not overfit by predicting on data already seen. For each regressor, we iteratively separate the training data in $N$ folds ($N=5$ in our model), and then use N-1 folds to train the model and then predict the target value of the remaining fold. To create the new testing set, the average of the predictions of each fold is taken.
End of explanation
# Building the new data frames using the
# of out-of-fold predictions
kernel_train = pd.DataFrame({'cauchy': cauchy_oof_train,
'matern_52': matern_52_oof_train,
'rquadr': rquadr_oof_train})
kernel_train["y"] = y_train
kernel_test = pd.DataFrame({'cauchy': cauchy_oof_test,
'matern_52': matern_52_oof_test,
'rquadr': rquadr_oof_test})
cols_excl_kernel = ["y"]
cols_kernel = [c for c in kernel_train.columns if c not in cols_excl_kernel]
kernel_train.head()
Explanation: Second stage prediction with SVR
In the second stage, a Support Vector Regression is fed with the different predictions from the kernel ridge regression to predict the target value $y$.
The SVR uses these predictions to compute the optimal weights assigned to each kernel regression and we might hope to find a better optimum to approximate the true regression function $f$.
End of explanation
# Root mean squared error metric
def RMSE(y, y_hat):
out = np.sqrt(mean_squared_error(y.reshape((-1,)), y_hat.reshape((-1,))))
return (out)
def scoring_function(parameters):
print("Training the model with parameters: ")
print(parameters)
# Run several KFold shuffles and take the mean RMSE
average_RMSE = []
nb_run = 10
for m in range(nb_run):
KFold_RMSE = 0.0
n_splits = 5
kf = KFold(n_splits=n_splits, shuffle=True, random_state=(random_seed+m))
nb_fold = 0
for train_index, validation_index in kf.split(kernel_train):
nb_fold += 1
train_fold, validation_fold = kernel_train.loc[train_index], kernel_train.loc[validation_index]
svr = SVR(C=parameters["C"], epsilon=parameters["epsilon"])
svr.fit(train_fold[cols_kernel], train_fold["y"])
y_hat_test = svr.predict(validation_fold[cols_kernel])
RMSE_test = RMSE(y_hat_test, validation_fold["y"].values)
KFold_RMSE += RMSE_test
KFold_RMSE /= n_splits
average_RMSE.append(KFold_RMSE)
average_RMSE = np.array(average_RMSE)
print("Cross-validation score: {0} +/- {1}\n".format(average_RMSE.mean(),
2*average_RMSE.std()))
return {"loss": average_RMSE.mean(), "status": STATUS_OK}
# Grid to pick parameters from.
parameters_grid = {"C": hp.choice("C", np.arange(0.5, 3, 0.5)),
"epsilon": hp.choice("epsilon", np.arange(0.05, 0.25, 0.05))
}
# Record the information about the cross-validation.
trials = Trials()
best = fmin(scoring_function, parameters_grid, algo=tpe.suggest, max_evals=10,
trials=trials)
min(trials.losses())
# Save the best parameters as a csv.
best_parameters = pd.DataFrame({key: [value] for (key, value) in
zip(space_eval(parameters_grid, best).keys(),
space_eval(parameters_grid, best).values())})
# Add the corresponding score.
best_parameters["score"] = min(trials.losses())
best_parameters.to_csv("Parameters/best_parameters_SVR.csv", encoding="utf-8", index=False)
best_parameters
Explanation: Tuning SVR's parameters
End of explanation
best_parameters = pd.read_csv("Parameters/best_parameters_SVR.csv", encoding="utf-8")
best_parameters
svr = SVR(C=best_parameters["C"][0],
epsilon=best_parameters["epsilon"][0])
svr.fit(kernel_train[cols_kernel], y_train)
# Training error
RMSE(svr.predict(kernel_train[cols_kernel]), y_train)
# Prediction
y_hat_test = svr.predict(kernel_test[cols_kernel])
test_pred = test.groupby("id")[["y"]].mean().reset_index()
test_pred["y"] = y_hat_test
test_pred.columns = ["Id", "y"]
# Save as a .csv
test_pred.to_csv("Predictions/Prediction_SVR.csv", index=False)
Explanation: Prediction
End of explanation |
13,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
odm2api demo with Little Bear SQLite sample DB
Largely from https
Step1: Read the database
Step2: Run some basic sample queries
Step3: Read some metadata from the database
Step4: SamplingFeatures tests
Get all of the SamplingFeatures from the database that are Sites
Step6: Now get the SamplingFeature object for a SamplingFeature code
Step7: Back to the rest of the demo
Step8: Foreign Key Example
Drill down and get objects linked by foreign keys
Step9: Example of Retrieving Attributes of a Time Series Result using a ResultID
Step10: Why are ProcessingLevelObj, VariableObj and UnitsObj objects not shown in the above vars() listing!? They are actually available, as demonstrated in much of the code below.
Step11: Example of Retrieving Time Series Result Values, then plotting them | Python Code:
import os
from odm2api.ODMconnection import dbconnection
odm2db_fpth = os.path.join('data', 'ODM2.sqlite')
session_factory = dbconnection.createConnection('sqlite', odm2db_fpth, 2.0)
Explanation: odm2api demo with Little Bear SQLite sample DB
Largely from https://github.com/ODM2/ODM2PythonAPI/blob/master/Examples/Sample.py
- 4/25/2016. Started testing with the new odm2 conda channel, based on the new 0.5.0-alpha odm2api release. See my odm2api_odm2channel env. Ran into problems b/c the SQLite database needed to be updated to have a SamplingFeature.FeatureGeometryWKT field; so I added and populated it manually with SQLite Manager.
- 2/7/2016. Tested successfully with sfgeometry_em_1 branch, with my overhauls. Using odm2api_dev env.
- 2/1 - 1/31. Errors with SamplingFeatures code, with latest odm2api from master (on env odm2api_jan31test). The code also fails the same way with the odm2api env, but it does still run fine with the odm2api_jan21 env! I'm investigating the differences between those two envs.
- 1/22-20,9/2016.
Author: Emilio Mayorga
Create a connection to the ODM2 database
End of explanation
from odm2api.ODM2.services.readService import ReadODM2
read = ReadODM2(session_factory)
Explanation: Read the database
End of explanation
allVars = read.getVariables()
for x in allVars:
print('{}: {}'.format(x.VariableCode, x.VariableNameCV))
Explanation: Run some basic sample queries
End of explanation
allPeople = read.getPeople()
if allPeople:
for x in allPeople:
print('{} {}'.format(x.PersonFirstName, x.PersonLastName))
allaff = read.getAffiliations()
if allaff:
for x in allaff:
print('{}: {}'.format(x.PersonObj.PersonFirstName, x.OrganizationID))
Explanation: Read some metadata from the database: people and affiliation
End of explanation
try:
siteFeatures = read.getSamplingFeatures(type='Site')
numSites = len(siteFeatures)
for x in siteFeatures:
print(': '.format(x.SamplingFeatureCode, x.SamplingFeatureName))
except Exception as e:
print('Unable to demo getSamplingFeatures(type="Site")\n{}'.format(e))
read.getSamplingFeatures()
read.getSamplingFeatures(codes=['USU-LBR-Mendon'])
Explanation: SamplingFeatures tests
Get all of the SamplingFeatures from the database that are Sites
End of explanation
sf_lst = read.getSamplingFeatures(codes=['USU-LBR-Mendon'])
vars(sf_lst[0])
sf = sf_lst[0]
sf
print(type(sf))
print(type(sf.FeatureGeometryWKT), sf.FeatureGeometryWKT)
table =
<!DOCTYPE html>
<html>
<head>
<style>
table {{
width:100%;
}}
table, th, td {{
border: 1px solid black;
border-collapse: collapse;
}}
th, td {{
padding: 5px;
text-align: left;
}}
table#t01 tr:nth-child(odd) {{
background-color: #eee;
}}
table#t01 tr:nth-child(even) {{
background-color:#fff;
}}
</style>
</head>
<body>
<table id="t01">
<tr>
<td>Code</td>
<td>{}</td>
</tr>
<tr>
<td>TypeCV</td>
<td>{}</td>
</tr>
<tr>
<td>Name</td>
<td>{}</td>
</tr>
</table>
</body>
</html>
.format
import folium
lon, lat = sf.Longitude, sf.Latitude
m = folium.Map(location=[lat, lon], zoom_start=16)
icon = folium.Icon(color='orange', icon='info-sign', prefix='glyphicon')
width, height = 310, 130
html = table(sf.SamplingFeatureCode, sf.SamplingFeatureTypeCV, sf.SamplingFeatureName)
iframe = folium.IFrame(html, width=width, height=height)
popup = folium.Popup(iframe)
folium.Marker(location=[lat, lon], icon=icon, popup=popup).add_to(m)
m
Explanation: Now get the SamplingFeature object for a SamplingFeature code
End of explanation
read.getResults()
firstResult = read.getResults()[0]
firstResult.FeatureActionObj.ActionObj
Explanation: Back to the rest of the demo
End of explanation
try:
# Call getResults, but return only the first result.
firstResult = read.getResults()[0]
action_firstResult = firstResult.FeatureActionObj.ActionObj
print('The FeatureAction object for the Result is: {}'.format(firstResult.FeatureActionObj))
print('The Action object for the Result is: {}'.format(action_firstResult))
print(
'\nThe following are some of the attributes for the Action that created the Result: \n' +
'ActionTypeCV: ' + action_firstResult.ActionTypeCV + '\n' +
'ActionDescription: ' + action_firstResult.ActionDescription + '\n' +
'BeginDateTime: ' + str(action_firstResult.BeginDateTime) + '\n' +
'EndDateTime: ' + str(action_firstResult.EndDateTime) + '\n' +
'MethodName: ' + action_firstResult.MethodObj.MethodName + '\n' +
'MethodDescription: ' + action_firstResult.MethodObj.MethodDescription
)
except Exception as e:
print('Unable to demo Foreign Key Example: {}'.format(e))
Explanation: Foreign Key Example
Drill down and get objects linked by foreign keys
End of explanation
tsResult = read.getResults(ids=[1])[0]
type(tsResult), vars(tsResult)
Explanation: Example of Retrieving Attributes of a Time Series Result using a ResultID
End of explanation
try:
tsResult = read.getResults(ids=[1])[0]
# Get the site information by drilling down.
sf_tsResult = tsResult.FeatureActionObj.SamplingFeatureObj
print('Some of the attributes for the TimeSeriesResult retrieved using getResults(ids=[]): \n' +
'ResultTypeCV: ' + tsResult.ResultTypeCV + '\n' +
# Get the ProcessingLevel from the TimeSeriesResult's ProcessingLevel object.
'ProcessingLevel: ' + tsResult.ProcessingLevelObj.Definition + '\n' +
'SampledMedium: ' + tsResult.SampledMediumCV + '\n' +
# Get the variable information from the TimeSeriesResult's Variable object.
'Variable: ' + tsResult.VariableObj.VariableCode + ': ' + tsResult.VariableObj.VariableNameCV + '\n' +
'AggregationStatistic: ' + tsResult.AggregationStatisticCV + '\n' +
# Get the site information by drilling down.
'Elevation_m: ' + str(sf_tsResult.Elevation_m) + '\n' +
'SamplingFeature: ' + sf_tsResult.SamplingFeatureCode + ' - ' +
sf_tsResult.SamplingFeatureName)
except Exception as e:
print('Unable to demo Example of retrieving Attributes of a time Series Result: {}'.format(e))
Explanation: Why are ProcessingLevelObj, VariableObj and UnitsObj objects not shown in the above vars() listing!? They are actually available, as demonstrated in much of the code below.
End of explanation
tsValues = read.getResultValues(resultids=[1]) # Get the values for a particular TimeSeriesResult.
tsValues.set_index('valuedatetime', inplace=True)
tsValues.head() # Return type is a pandas dataframe.
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import dates
fig, ax = plt.subplots(figsize=(11, 2.25))
tsValues['datavalue'].plot(ax=ax)
ax.set_ylabel('{} ({})'.format(
tsResult.VariableObj.VariableNameCV,
tsResult.UnitsObj.UnitsAbbreviation))
ax.set_xlabel('')
ax.xaxis.set_minor_locator(dates.MonthLocator())
ax.xaxis.set_minor_formatter(dates.DateFormatter('%b'))
ax.xaxis.set_major_locator(dates.YearLocator())
ax.xaxis.set_major_formatter(dates.DateFormatter('\n%Y'))
ax.grid(which='major', axis='y')
ax.grid(which='minor', axis='x')
Explanation: Example of Retrieving Time Series Result Values, then plotting them
End of explanation |
13,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
Step1: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Here, just creating some placeholders like normal.
Step6: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper
Step7: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note
Step9: Model Loss
Calculating the loss like before, nothing new here.
Step11: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
Step12: Building the model
Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
Step15: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise | Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(self.train_x))  # shuffle indices for self.train_x (not the module-level dataset)
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
End of explanation
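As a quick sanity check (an illustrative sketch using the scale function and trainset loaded above, not part of the original notebook), the rescaled pixels should fall inside the tanh output range of the generator:
scaled_sample = scale(trainset['X'][:, :, :, :10].astype(np.float32))
print(scaled_sample.min(), scaled_sample.max())  # both values should lie within [-1, 1]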
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
# 4*4*512 = 8192 neurons
x1 = tf.layers.dense(z, 8192)
x1 = tf.reshape(x1, (-1, 4, 4, 512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha * x1, x1)
# 4x4x512 now
# conv transpose layer of 8x8x256
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=training)
x2 = tf.maximum(alpha * x2, x2)
# x2 is now 8x8x256
# conv transpose of 16x16x128
x3 = tf.layers.conv2d_transpose(x2, 128, 16, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=training)
x3 = tf.maximum(alpha * x3, x3)
# x3 is now 16x16x128
# conv transpose of 32x32x3
# Output layer, 32x32x3
logits = tf.layers.conv2d_transpose(x3, 3, 32, strides=2, padding='same')
out = tf.tanh(logits)
return out
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.
End of explanation
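To confirm that the stack above really ends at the right size, here is a minimal shape check (an illustrative sketch, not part of the original notebook):
tf.reset_default_graph()
z_check = tf.placeholder(tf.float32, (None, 100))
g_check = generator(z_check, output_dim=3)
print(g_check.get_shape().as_list())  # expected: [None, 32, 32, 3]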
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
# conv layer of 5x5x32
x1 = tf.layers.conv2d(x, 32, 5, strides=2, padding='same')
# no batch normalization on the first convolutional layer, per the instructions above
x1 = tf.maximum(alpha * x1, x1)
# x1 is now 16x16x32
# conv layer of 3x3x64
x2 = tf.layers.conv2d(x1, 64, 3, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=True)
x2 = tf.maximum(alpha * x2, x2)
# x2 is now 8x8x64
# conv layer of 3x3x128
x3 = tf.layers.conv2d(x2, 128, 3, strides=2, padding='same')
x3 = tf.layers.batch_normalization(x3, training=True)
x3 = tf.maximum(alpha * x3, x3)
# x2 is now 4x4x128
x3 = tf.reshape(x3, (-1, 2048)) # 4x4x128
logits = tf.layers.dense(x3, 1)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
Exercise: Build the convolutional network for the discriminator. The input is a 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.
End of explanation
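A similar quick check (again an illustrative sketch, not from the original notebook) confirms that the discriminator maps a 32x32x3 batch to a single sigmoid output plus the logits that the loss function below relies on:
tf.reset_default_graph()
x_check = tf.placeholder(tf.float32, (None, 32, 32, 3))
d_out_check, d_logits_check = discriminator(x_check)
print(d_out_check.get_shape().as_list(), d_logits_check.get_shape().as_list())  # expected: [None, 1] and [None, 1]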
def model_loss(input_real, input_z, output_dim, alpha=0.2):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param output_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
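To see why that wrapper matters, the snippet below (an illustrative sketch, not part of the original notebook) builds a single batch normalization layer and prints the update ops that tf.control_dependencies forces to run during training:
tf.reset_default_graph()
bn_demo = tf.layers.batch_normalization(tf.placeholder(tf.float32, (None, 4)), training=True)
print(tf.get_collection(tf.GraphKeys.UPDATE_OPS))  # the moving mean/variance assignments for the layer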
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
Explanation: Building the model
Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
Explanation: Here is a function for displaying generated images.
End of explanation
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
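# Note: the values above are one possible choice. As a hedged alternative (taken from the DCGAN
# paper rather than this notebook), learning_rate = 0.0002, beta1 = 0.5 and a leak of alpha = 0.2
# are commonly reported to train more stably.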
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3; this means it is correctly classifying images as fake or real about 50% of the time.
End of explanation |
13,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intermediate Linear Algebra - Eigenvalues & Eigenvectors
Key Equation
Step1: Solving Equation $Ax=\lambda x$
Special Case
Step2: 3 x 3 Matrix
Let us write it in the form
$$ Ax = \lambda x $$
$$ \begin{bmatrix}1 & 1 & 1 \ 3 & 8 & 1 \ 5 & -4 & 3\end{bmatrix}\begin{bmatrix} x \ y \ z\end{bmatrix}= \lambda \begin{bmatrix} x \ y \ z \end{bmatrix} $$ | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (10, 6)
def vector_plot (vector):
X,Y,U,V = zip(*vector)
C = [1,1,2,2]
plt.figure()
ax = plt.gca()
ax.quiver(X,Y,U,V,C, angles='xy',scale_units='xy',scale=1)
ax.set_xlim([-6,6])
ax.set_ylim([-6,6])
plt.axhline(0, color='grey', linewidth=1)
plt.axvline(0, color='grey', linewidth=1)
plt.axes().set_aspect('equal')
plt.draw()
A = np.array([[ 6 , 2],
[ 2 , 6]])
x = np.array([[-1],
[1]])
v = A.dot(x)
# All the vectors start at 0, 0
vAX = np.r_[[0,0],A[:,0]]
vAY = np.r_[[0,0],A[:,1]]
vx = np.r_[[0,0],x[:,0]]
vv = np.r_[[0,0],v[:,0]]
vector_plot([vAX, vAY, vx, vv])
Explanation: Intermediate Linear Algebra - Eigenvalues & Eigenvectors
Key Equation: $Ax = \lambda x ~~ \text{for an} ~~ n \times n ~~ \text{matrix } A$
Transformations
So what really happens when we multiply the $A$ matrix with a vector $x$
Let's say we have a vector, $x$
$$ x = \begin{bmatrix} -1 \ 1 \end{bmatrix} $$
What happens when we multiply by a matrix - $A$
$$ A = \begin{bmatrix} 6 & 2 \ 2 & 6 \end{bmatrix} $$
$$ Ax = \begin{bmatrix} 6 & 2 \ 2 & 6 \end{bmatrix} \begin{bmatrix} -1 \ 1 \end{bmatrix} = \begin{bmatrix} -4 \ 4 \end{bmatrix} $$
$$ Ax = 4Ix $$
$$ Ax = 4x $$
So this particular matrix has just scaled our original vector. It is a scalar transformation. Other matrices can do reflection, rotation and any arbitrary transformation in the same 2d space for n = 2.
Let's see what has happened through code.
End of explanation
A = np.array([[ 3 , 1],
[ 1 , 3]])
eigen_val, eigen_vec = np.linalg.eig(A)
eigen_val
eigen_vec
eigen_vec[:,0]
# All the vectors start at 0, 0
vX1 = np.r_[[0,0],A[:,0]]
vY1 = np.r_[[0,0],A[:,1]]
vE1 = np.r_[[0,0],eigen_vec[:,0]] * 2
vE2 = np.r_[[0,0],eigen_vec[:,1]] * 2
vector_plot([vX1, vY1, vE1, vE2])
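# Numerical check (an illustrative sketch, not in the original notebook): every eigenpair
# returned by np.linalg.eig should satisfy A.dot(v) == lam * v.
for lam, v in zip(eigen_val, eigen_vec.T):
    print(A.dot(v), lam * v)  # the two printed vectors should agree for each eigenpair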
Explanation: Solving Equation $Ax=\lambda x$
Special Case: $Ax = 0$
So far we have been solving the equation $Ax = b$. Let us now look at the special case when $b=0$.
$$ Ax =0 $$
If $A^{-1}$ exists (the matrix is non-singular and invertible), then the solution is trivial
$$ A^{-1}Ax =0 $$
$$ x = 0$$
If $A^{-1}$ does not exist, then there may be infinitely many other solutions $x$. And since $A^{-1}$ is a singular matrix then
$$||A|| = 0 $$
General Case
The second part of linear algebra is solving the equation, for a given $A$ -
$$ Ax = \lambda x$$
Note that both $x$ and $\lambda$ are unknown in this equation. For all solutions of them:
$$ \text{eigenvalues} = \lambda $$
$$ \text{eigenvectors} = x $$
Calculating Eigenvalues
So let us first solve this for $\lambda$ :
$$ Ax = \lambda Ix $$
$$ (A-\lambda I)x = 0 $$
So for non-trivial solution of $x$, $A$ should be singular:
$$ ||A - \lambda I|| = 0 $$
For 2 x 2 Matrix
Let us use the sample matrix $A$:
$$ A = \begin{bmatrix}3 & 1\ 1 & 3\end{bmatrix} $$
So our equation becomes:
$$ \begin{bmatrix}3 & 1\ 1 & 3\end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}\lambda & 0\ 0 & \lambda \end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} $$
$$ \begin{bmatrix}3 - \lambda & 1\ 1 & 3 - \lambda \end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = 0 $$
So for a singular matrix:
$$ \begin{Vmatrix}3 - \lambda & 1\ 1 & 3 - \lambda \end{Vmatrix} = 0 $$
$$ (3 - \lambda)^2 - 1 = 0 $$
$$ \lambda^2 - 6\lambda + 8 = 0 $$
$$ (\lambda - 4)(\lambda - 2) = 0 $$
$$ \lambda_1 = 2, \lambda_2 = 4 $$
$$||A|| = \lambda_{1} \lambda_{2} $$
Calculating Eigenvectors
For $\lambda = 2$,
$$ \begin{bmatrix}3 - \lambda & 1\ 1 & 3 - \lambda \end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}1 & 1\ 1 & 1 \end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = 0 $$
So one simple solution is:
$$ \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}-1 \ 1\end{bmatrix} $$
For $\lambda = 4$,
$$ \begin{bmatrix}3 - \lambda & 1\ 1 & 3 - \lambda \end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}-1 & 1\ 1 & -1 \end{bmatrix} \begin{bmatrix}x \ y\end{bmatrix} = 0 $$
So one simple solution is:
$$ \begin{bmatrix}x \ y\end{bmatrix} = \begin{bmatrix}1 \ 1\end{bmatrix} $$
The eigenvectors are orthogonal to each other in this case (and become orthonormal once normalised to unit length).
Vector Representation (2x2)
A vector representation for this is now:
$$ \begin{bmatrix}3 \ 1\end{bmatrix} x + \begin{bmatrix}1 \ 3\end{bmatrix} y = \begin{bmatrix} \lambda \ 0 \end{bmatrix} x + \begin{bmatrix} 0 \ \lambda \end{bmatrix} y $$
Now we need to draw these vectors and see the result
End of explanation
f = np.matrix([[1,1,1],
[3,8,1],
[5,-4,3]])
np.linalg.eig(f)
Explanation: 3 x 3 Matrix
Let us write it in the form
$$ Ax = \lambda x $$
$$ \begin{bmatrix}1 & 1 & 1 \ 3 & 8 & 1 \ 5 & -4 & 3\end{bmatrix}\begin{bmatrix} x \ y \ z\end{bmatrix}= \lambda \begin{bmatrix} x \ y \ z \end{bmatrix} $$
End of explanation |
13,680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis of Movie Reviews
In this tutorial, we will load a trained model and perform inference on a new movie review.
Setup
As before, we first create a computational backend to tell neon on what device to execute the computation.
Step1: We also define a few parameters, and then load the vocabulary. The vocab is a 1
Step2: Load Model
To load the model, we just pass in the saved model file. neon will automatically generate the layers specified in the model file and load the corresponding weights.
Step3: Inference
We first generate some buffers on both the host (CPU) and the device (GPU) to hold the input data that we would like to pass to the model for inference. Below, the variable be is the backend that we created with gen_backend earlier in the code. Our backend supports numpy-like functions for allocating buffers on the compute device.
Step5: Now we write our new movie review. We've included a sample here, but feel free to write your own and see how well the model responds.
POSITIVE
Step6: Before we send the data to the model, we need to convert the string to a sequence of numbers, with each number representing a word, using the vocab that we loaded earlier in the code. If a word is not in our vocab, we use a special out-of-vocab number.
Step7: The text data is now converted to a list of integers
Step8: We truncate the input to sentence_length=128 words. If the text is less than 128 words, we pad with zeros. The text is then loaded into the numpy array named input_numpy.
Step9: Experimentation
To make it easy for you to experiment with the model inference, below we wrap all the text above into a single function that you can call.
Step11: Now you can easily enter your own review and get the result. Here we included a more neutral review below | Python Code:
from neon.backends import gen_backend
be = gen_backend(backend='gpu', batch_size=1)
print be
Explanation: Sentiment Analysis of Movie Reviews
In this tutorial, we will load a trained model and perform inference on a new movie review.
Setup
As before, we first create a computational backend to tell neon on what device to execute the computation.
End of explanation
import pickle as pkl
sentence_length = 128
vocab_size = 20000
# we have some special codes
pad_char = 0 # padding character
start = 1 # marker for start of review
oov = 2 # when the word is out of the vocab
index_from = 3 # index of first word in vocab
# load the vocab
vocab, rev_vocab = pkl.load(open('data/imdb.vocab', 'rb'))
Explanation: We also define a few parameters, and then load the vocabulary. The vocab is a 1:1 mapping of words to numbers. The file imdb.vocab can be downloaded from https://s3-us-west-1.amazonaws.com/nervana-course/imdb.vocab and placed in the data directory.
End of explanation
from neon.models import Model
model = Model('imdb_lstm.pkl')
# we initialize the model, passing in the size of the input data.
model.initialize(dataset=(sentence_length, 1))
Explanation: Load Model
To load the model, we just pass in the saved model file. neon will automatically generate the layers specified in the model file and load the corresponding weights.
End of explanation
import numpy as np
input_device = be.zeros((sentence_length, 1), dtype=np.int32) # `be` is the backend that we created earlier in the code.
input_numpy = np.zeros((sentence_length, 1), dtype=np.int32)
Explanation: Inference
We first generate some buffers on both the host (CPU) and the device (GPU) to hold the input data that we would like to pass to the model for inference. Below, the variable be is the backend that we created with gen_backend earlier in the code. Our backend supports numpy-like functions for allocating buffers on the compute device.
End of explanation
line = Beautiful attracts excellent idea, but ruined with a bad selection of the actors. The main character is
a loser and his woman friend and his friend upset viewers. Apart from the first episode all the other become
more boring and boring. First, it considers it illogical behavior. No one normal would not behave the way the
main character behaves. It all represents a typical Halmark way to endear viewers to the reduced amount of
intelligence. Does such a scenario, or the casting director and destroy this question is on Halmark
producers. Cat is the main character is wonderful. The main character behaves according to
his friend selfish.
Explanation: Now we write our new movie review. We've included a sample here, but feel free to write your own and see how well the model responds.
POSITIVE:
"The pace is steady and constant, the characters full and engaging, the relationships and interactions natural showing that you do not need floods of tears to show emotion, screams to show fear, shouting to show dispute or violence to show anger. Naturally Joyce's short story lends the film a ready made structure as perfect as a polished diamond, but the small changes Huston makes such as the inclusion of the poem fit in neatly. It is truly a masterpiece of tact, subtlety and overwhelming beauty."
NEGATIVE:
"Beautiful attracts excellent idea, but ruined with a bad selection of the actors. The main character is a loser and his woman friend and his friend upset viewers. Apart from the first episode all the other become more boring and boring. First, it considers it illogical behavior. No one normal would not behave the way the main character behaves. It all represents a typical Halmark way to endear viewers to the reduced amount of intelligence. Does such a scenario, or the casting director and destroy this question is on Halmark producers. Cat is the main character is wonderful. The main character behaves according to his friend selfish."
NEUTRAL:
"The characters voices were very good. I was only really bothered by Kanga. The music, however, was twice as loud in parts than the dialog, and incongruous to the film. As for the story, it was a bit preachy and militant in tone. Overall, I was disappointed, but I would go again just to see the same excitement on my child's face. I liked Lumpy's laugh..."
End of explanation
from neon.data.text_preprocessing import clean_string
tokens = clean_string(line).strip().split()
sent = [len(vocab) + 1 if t not in vocab else vocab[t] for t in tokens]
sent = [start] + [w + index_from for w in sent]
sent = [oov if w >= vocab_size else w for w in sent]
Explanation: Before we send the data to the model, we need to convert the string to a sequence of numbers, with each number representing a word, using the vocab that we loaded earlier in the code. If a word is not in our vocab, we use a special out-of-vocab number.
End of explanation
print sent
Explanation: The text data is now converted to a list of integers:
End of explanation
trunc = sent[-sentence_length:] # take the last sentence_length words
input_numpy[:] = 0 # fill with zeros
input_numpy[-len(trunc):, 0] = trunc # place the input into the numpy array
print input_numpy.T
input_device.set(input_numpy) # copy the numpy array to device
y_pred = model.fprop(input_device, inference=True) # run the forward pass through the model
print("Predicted sentiment: {}".format(y_pred.get()[1])) # print the estimated sentiment
Explanation: We truncate the input to sentence_length=128 words. If the text is less than 128 words, we pad with zeros. The text is then loaded into the numpy array named input_numpy.
End of explanation
def sentiment(line, model):  # score a raw review string with the loaded model
input_device = be.zeros((sentence_length, 1), dtype=np.int32)
input_numpy = np.zeros((sentence_length, 1), dtype=np.int32)
tokens = clean_string(line).strip().split()
sent = [len(vocab) + 1 if t not in vocab else vocab[t] for t in tokens]
sent = [start] + [w + index_from for w in sent]
sent = [oov if w >= vocab_size else w for w in sent]
trunc = sent[-sentence_length:] # take the last sentence_length words
input_numpy[:] = 0 # fill with zeros
input_numpy[-len(trunc):, 0] = trunc # place the input into the numpy array
input_device.set(input_numpy) # copy the numpy array to device
y_pred = model.fprop(input_device, inference=True) # run the forward pass through the model
return y_pred.get()[1]
Explanation: Experimentation
To make it easy for you to experiment with the model inference, below we wrap all the text above into a single function that you can call.
End of explanation
line = The characters voices were very good. I was only really bothered by Kanga. The music, however, was twice
as loud in parts than the dialog, and incongruous to the film. As for the story, it was a bit preachy and
militant in tone. Overall, I was disappointed, but I would go again just to see the same excitement on my
child's face. I liked Lumpy's laugh...
result = sentiment(line, model)
print("Sentiment: {}".format(result))
Explanation: Now you can easily enter your own review and get the result. Here we included a more neutral review below:
End of explanation |
13,681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculate Coverage
You've defined an AOI, you've specified the image type you are interested and the search query. Great! But what is the coverage of your AOI given your search query? Wouldn't you like to know before you start downloading images?
This notebook will allow you to answer that question quickly and painlessly.
Coverage calculation is performed in the UTM projected coordinate system. The geojson features are defined in the WGS84 geographic coordinate system, which is not a 2D projection.
UTM preserves shape and minimizes distortion (wikipedia)
Step1: Define AOI
Define the AOI as a geojson polygon. This can be done at geojson.io. If you use geojson.io, only copy the single aoi feature, not the entire feature collection.
Step2: Build Request
Build the Planet API Filter request.
Customize this code for your own purposes
Step3: Check AOI and Determine Coverage Grid Dimensions
We convert the AOI to UTM and ensure that it is large enough to include at least a few grid cells 9m x 9m (approximately 3x PS Orthotile resolution). Then we determine the appropriate coverage grid dimensions from the AOI.
There are a lot of UTM zones, and the UTM zone we project to depends on the location of the AOI. Once this zone is determined, we create a function that can be used to project any shape. We will use that function to project the scene footprints to the same UTM zone once we get them.
Step4: Search Planet API
The client is how we interact with the planet api. It is created with the user-specific api key, which is pulled from $PL_API_KEY environment variable.
Unless you are expecting over 500 images (in which case, why are you concerned about coverage?), this code doesn't need to be altered.
Step5: Calculate Coverage
First query the planet api for the items that match the request defined above, then calculate the overlap between each item and the aoi. Finally, convert each overlap to a grid using rasterio.rasterize, accumulate coverage over the overlap grids, and display the coverage grid.
Step6: Demo | Python Code:
# Notebook dependencies
from __future__ import print_function
import datetime
import copy
from functools import partial
import os
from IPython.display import display, Image
import matplotlib
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from planet import api
from planet.api import filters
import pyproj
import rasterio
from rasterio import features as rfeatures
from shapely import geometry as sgeom
import shapely.ops
%matplotlib inline
Explanation: Calculate Coverage
You've defined an AOI, you've specified the image type you are interested and the search query. Great! But what is the coverage of your AOI given your search query? Wouldn't you like to know before you start downloading images?
This notebook will allow you to answer that question quickly and painlessly.
Coverage calculation is performed in the UTM projected coordinate system. The geojson features are defined in the WGS84 geographic coordinate system, which is not a 2D projection.
UTM preserves shape and minimizes distortion (wikipedia)
End of explanation
aoi = {u'geometry': {u'type': u'Polygon', u'coordinates': [[[-121.3113248348236, 38.28911976564886], [-121.3113248348236, 38.34622533958], [-121.2344205379486, 38.34622533958], [-121.2344205379486, 38.28911976564886], [-121.3113248348236, 38.28911976564886]]]}, u'type': u'Feature', u'properties': {u'style': {u'opacity': 0.5, u'fillOpacity': 0.2, u'noClip': False, u'weight': 4, u'color': u'blue', u'lineCap': None, u'dashArray': None, u'smoothFactor': 1, u'stroke': True, u'fillColor': None, u'clickable': True, u'lineJoin': None, u'fill': True}}}
# this notebook uses rasterio Shapes for processing, so lets convert that geojson to a shape
aoi_shape = sgeom.shape(aoi['geometry'])
Explanation: Define AOI
Define the AOI as a geojson polygon. This can be done at geojson.io. If you use geojson.io, only copy the single aoi feature, not the entire feature collection.
End of explanation
def build_request(aoi_shape):
old = datetime.datetime(year=2016,month=6,day=1)
new = datetime.datetime(year=2016,month=10,day=1)
query = filters.and_filter(
filters.geom_filter(sgeom.mapping(aoi_shape)),
filters.range_filter('cloud_cover', lt=5),
filters.date_range('acquired', gt=old),
filters.date_range('acquired', lt=new)
)
item_types = ['PSOrthoTile']
return filters.build_search_request(query, item_types)
request = build_request(aoi_shape)
print(request)
Explanation: Build Request
Build the Planet API Filter request.
Customize this code for your own purposes
End of explanation
# Utility functions: projecting a feature to the appropriate UTM zone
def get_utm_projection_fcn(shape):
# define projection
# from shapely [docs](http://toblerity.org/shapely/manual.html#shapely.ops.transform)
proj_fcn = partial(
pyproj.transform,
pyproj.Proj(init='epsg:4326'), #wgs84
_get_utm_projection(shape))
return proj_fcn
def _get_utm_zone(shape):
'''geom: geojson geometry'''
centroid = shape.centroid
lon = centroid.x
lat = centroid.y
if lat > 84 or lat < -80:
raise Exception('UTM Zones only valid within [-80, 84] latitude')
# this is adapted from
# https://www.e-education.psu.edu/natureofgeoinfo/book/export/html/1696
zone = int((lon + 180) / 6 + 1)
hemisphere = 'north' if lat > 0 else 'south'
return (zone, hemisphere)
def _get_utm_projection(shape):
zone, hemisphere = _get_utm_zone(shape)
proj_str = "+proj=utm +zone={zone}, +{hemi} +ellps=WGS84 +datum=WGS84 +units=m +no_defs".format(
zone=zone, hemi=hemisphere)
return pyproj.Proj(proj_str)
proj_fcn = get_utm_projection_fcn(aoi_shape)
aoi_shape_utm = shapely.ops.transform(proj_fcn, aoi_shape)
print(aoi_shape_utm)
def get_coverage_dimensions(aoi_shape_utm):
'''Checks that aoi is big enough and calculates the dimensions for coverage grid.'''
minx, miny, maxx, maxy = aoi_shape_utm.bounds
width = maxx - minx
height = maxy - miny
min_cell_size = 9 # in meters, approx 3x ground sampling distance
min_number_of_cells = 3
max_number_of_cells = 3000
min_dim = min_cell_size * min_number_of_cells
if height < min_dim:
raise Exception('AOI height too small, should be {}m.'.format(min_dim))
if width < min_dim:
raise Exception('AOI width too small, should be {}m.'.format(min_dim))
def _dim(length):
return min(int(length/min_cell_size), max_number_of_cells)
return [_dim(l) for l in (height, width)]
dimensions = get_coverage_dimensions(aoi_shape_utm)
print(dimensions)
Explanation: Check AOI and Determine Coverage Grid Dimensions
We convert the AOI to UTM and ensure that it is large enough to include at least a few grid cells 9m x 9m (approximately 3x PS Orthotile resolution). Then we determine the appropriate coverage grid dimensions from the AOI.
There are a lot of UTM zones, and the UTM zone we project to depends on the location of the AOI. Once this zone is determined, we create a function that can be used to project any shape. We will use that function to project the scene footprints to the same UTM zone once we get them.
End of explanation
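As a quick illustration of the zone selection described above (a small sketch using the helper defined earlier, not part of the original notebook), the sample AOI resolves to UTM zone 10 in the northern hemisphere:
print(_get_utm_zone(aoi_shape))  # expected: (10, 'north') for this AOI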
def get_api_key():
return os.environ['PL_API_KEY']
# quick check that key is defined
assert get_api_key(), "PL_API_KEY not defined."
def create_client():
return api.ClientV1(api_key=get_api_key())
def search_pl_api(request, limit=500):
client = create_client()
result = client.quick_search(request)
# note that this returns a generator
return result.items_iter(limit=limit)
Explanation: Search Planet API
The client is how we interact with the planet api. It is created with the user-specific api key, which is pulled from $PL_API_KEY environment variable.
Unless you are expecting over 500 images (in which case, why are you concerned about coverage?), this code doesn't need to be altered.
End of explanation
def get_overlap_shapes_utm(items, aoi_shape):
'''Determine overlap between item footprint and AOI in UTM.'''
proj_fcn = get_utm_projection_fcn(aoi_shape)
aoi_shape_utm = shapely.ops.transform(proj_fcn, aoi_shape)
def _calculate_overlap(item):
footprint_shape = sgeom.shape(item['geometry'])
footprint_shape_utm = shapely.ops.transform(proj_fcn, footprint_shape)
return aoi_shape_utm.intersection(footprint_shape_utm)
for i in items:
yield _calculate_overlap(i)
items = search_pl_api(request)
# cache the overlaps as a list so we don't have to refetch items
overlaps = list(get_overlap_shapes_utm(items, aoi_shape))
print(len(overlaps))
# what do overlaps look like?
# lets just look at the first overlap to avoid a long output cell
display(overlaps[0])
def calculate_coverage(overlaps, dimensions, bounds):
# get dimensions of coverage raster
mminx, mminy, mmaxx, mmaxy = bounds
y_count, x_count = dimensions
# determine pixel width and height for transform
width = (mmaxx - mminx) / x_count
height = (mminy - mmaxy) / y_count # should be negative
# Affine(a, b, c, d, e, f) where:
# a = width of a pixel
# b = row rotation (typically zero)
# c = x-coordinate of the upper-left corner of the upper-left pixel
# d = column rotation (typically zero)
# e = height of a pixel (typically negative)
# f = y-coordinate of the of the upper-left corner of the upper-left pixel
# ref: http://www.perrygeo.com/python-affine-transforms.html
transform = rasterio.Affine(width, 0, mminx, 0, height, mmaxy)
coverage = np.zeros(dimensions, dtype=np.uint16)
for overlap in overlaps:
if not overlap.is_empty:
# rasterize overlap vector, transforming to coverage raster
# pixels inside overlap have a value of 1, others have a value of 0
overlap_raster = rfeatures.rasterize(
[sgeom.mapping(overlap)],
fill=0,
default_value=1,
out_shape=dimensions,
transform=transform)
# add overlap raster to coverage raster
coverage += overlap_raster
return coverage
# what is a low-resolution look at the coverage grid?
display(calculate_coverage(overlaps, (6,3), aoi_shape_utm.bounds))
def plot_coverage(coverage):
fig, ax = plt.subplots()
cax = ax.imshow(coverage, interpolation='nearest', cmap=cm.viridis)
ax.set_title('Coverage\n(median: {})'.format(int(np.median(coverage))))
ax.axis('off')
ticks_min = coverage.min()
ticks_max = coverage.max()
cbar = fig.colorbar(cax,ticks=[ticks_min, ticks_max])
plot_coverage(calculate_coverage(overlaps, dimensions, aoi_shape_utm.bounds))
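# A small follow-on sketch (not in the original notebook): summarise the grid as the fraction of
# AOI cells that are covered by at least one scene.
coverage_grid = calculate_coverage(overlaps, dimensions, aoi_shape_utm.bounds)
print('Fraction of AOI covered at least once: {:.2%}'.format((coverage_grid > 0).mean()))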
Explanation: Calculate Coverage
First query the planet api for the items that match the request defined above, then calculate the overlap between each item and the aoi. Finally, convert each overlap to a grid using rasterio.rasterize, accumulate coverage over the overlap grids, and display the coverage grid.
End of explanation
demo_aoi = aoi # use the same aoi that was used before
demo_aoi_shape = sgeom.shape(demo_aoi['geometry'])
proj_fcn = get_utm_projection_fcn(demo_aoi_shape)
demo_aoi_shape_utm = shapely.ops.transform(proj_fcn, demo_aoi_shape)
demo_dimensions = get_coverage_dimensions(demo_aoi_shape_utm)
# Parameterize our search request by start/stop dates for this comparison
def build_request_by_dates(aoi_shape, old, new):
query = filters.and_filter(
filters.geom_filter(sgeom.mapping(aoi_shape)),
filters.range_filter('cloud_cover', lt=5),
filters.date_range('acquired', gt=old),
filters.date_range('acquired', lt=new)
)
item_types = ['PSOrthoTile']
return filters.build_search_request(query, item_types)
request_2016 = build_request_by_dates(demo_aoi_shape,
datetime.datetime(year=2016,month=6,day=1),
datetime.datetime(year=2016,month=8,day=1))
items = search_pl_api(request_2016)
overlaps = list(get_overlap_shapes_utm(items, demo_aoi_shape))
plot_coverage(calculate_coverage(overlaps, demo_dimensions, demo_aoi_shape_utm.bounds))
request_2017 = build_request_by_dates(demo_aoi_shape,
datetime.datetime(year=2017,month=6,day=1),
datetime.datetime(year=2017,month=8,day=1))
items = search_pl_api(request_2017)
overlaps = list(get_overlap_shapes_utm(items, demo_aoi_shape))
plot_coverage(calculate_coverage(overlaps, demo_dimensions, demo_aoi_shape_utm.bounds))
Explanation: Demo: Comparing Coverage
We will compare coverage of PS OrthoTiles June and July between 2016 and 2017 for the same aoi.
End of explanation |
13,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Domestic Load Research Programme Load Profile Uncertainty Analysis
This notebook requires access to a directory with hourly load profile data. The data files must be saved in /data/profiles/hourly.
Step1: Exploring Profiles
Step2: Exploring missing values in customer load data
Step3: Aggregating load profile data | Python Code:
#load support functions
import observations.obs_processing as obs
import features.feature_ts as ts
import features.feature_socios as socios
#initiate offline plotting for plotly
import plotly.offline as offline
import cufflinks as cf
offline.init_notebook_mode()
#cf.set_config_file(offline=True, world_readable=False, theme='ggplot')
#cf.go_offline()
Explanation: Domestic Load Research Programme Load Profile Uncertainty Analysis
This notebook requires access to a directory with hourly load profile data. The data files must be saved in /data/profiles/hourly.
End of explanation
a94 = obs.loadProfiles(1994,'A','H')
a94.head()
df = a94.pivot_table(columns='ProfileID',index='Datefield',values='Unitsread')
df.iloc[:10,:10]
fig = df.iplot(kind='scatter', asFigure=True)
offline.iplot(fig)
Explanation: Exploring Profiles
End of explanation
obs.nanAnalysis(2001, 'A', 'H')
obs.nanAnalysis(2001, 'V', 'H', threshold = 0.9)
Explanation: Exploring missing values in customer load data
End of explanation
socios.recorderLocations(2000)
ts.aggTs(2012, 'A', 'M', locstring='VLK')[:20]
Explanation: Aggregating load profile data
End of explanation |
13,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Insertion Sort
The function sort is specified via two equations
Step1: The auxiliary function insert is specified as follows | Python Code:
def sort(L):
if L == []:
return []
x, *R = L
return insert(x, sort(R))
Explanation: Insertion Sort
The function sort is specified via two equations:
$\mathtt{sort}([]) = []$
$\mathtt{sort}\bigl([x] + R\bigr) =
\mathtt{insert}\bigl(x, \mathtt{sort}(R)\bigr)$
This is most easily implemented in a recursive fashion.
End of explanation
def insert(x, L):
if L == []:
return [x]
y, *R = L
if x <= y:
return [x, y] + R
else:
return [y] + insert(x, R)
insert(5, [1, 3, 4, 7, 9])
sort([7, 8, 11, 12, 2, 5, 3, 7, 9])
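# A quick property check (an illustrative sketch, not part of the original text): for random
# input the recursive sort should agree with Python's built-in sorted().
import random
L = [random.randrange(100) for _ in range(20)]
assert sort(L) == sorted(L)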
Explanation: The auxiliary function insert is specified as follows:
$\mathtt{insert}(x,[]) = [x]$
$x \preceq y \rightarrow \mathtt{insert}\bigl(x, [y] + R\bigr) = [x,y] + R$
$\neg x \preceq y \rightarrow
\mathtt{insert}\bigl(x, [y] + R\bigr) = [y] + \mathtt{insert}(x,R)$
Again, a recursive implementation is straightforward.
End of explanation |
13,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probabilistic Programming in Python using PyMC
Authors
Step1: Here is what the simulated data look like. We use the pylab module from the plotting library matplotlib.
Step2: Model Specification
Specifiying this model in PyMC3 is straightforward because the syntax is as close to the statistical notation. For the most part, each line of Python code corresponds to a line in the model notation above.
First, we import the components we will need from PyMC.
Step3: Now we build our model, which we will present in full first, then explain each part line-by-line.
Step4: The first line,
python
basic_model = Model()
creates a new Model object which is a container for the model random variables.
Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement
Step5: Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship
Step6: By default, this uses Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm to find the maximum of the log-posterior but also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell's method to find the MAP.
Step7: It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme. This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect. If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together.
Most techniques for finding the MAP estimate also only find a local optimium (which is often good enough), but can fail badly for multimodal posteriors if the different modes are meaningfully different.
Sampling methods
Though finding the MAP is a fast and easy way of obtaining estimates of the unknown model parameters, it is limited because there is no associated estimate of uncertainty produced with the MAP estimates. Instead, a simulation-based approach such as Markov chain Monte Carlo (MCMC) can be used to obtain a Markov chain of values that, given the satisfaction of certain conditions, are indistinguishable from samples from the posterior distribution.
To conduct MCMC sampling to generate posterior samples in PyMC3, we specify a step method object that corresponds to a particular MCMC algorithm, such as Metropolis, Slice sampling, or the No-U-Turn Sampler (NUTS). PyMC3's step_methods submodule contains the following samplers
Step8: The sample function returns a trace object that can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. We can see the last 5 values for the alpha variable as follows
Step9: Posterior analysis
PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot.
Step10: The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients.
In addition, summary provides a text-based output of common posterior statistics
Step11: Case study 1
Step12: Model Specification
As with the linear regession example, specifying the model in PyMC3 mirrors its statistical specification. This model employs several new distributions
Step13: Notice that we transform the log volatility process s into the volatility process by exp(-2*s). Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does.
Also note that we have declared the Model name sp500_model in the first occurrence of the context manager, rather than splitting it into two lines, as we did for the first example.
Fitting
Before we draw samples from the posterior, it is prudent to find a decent starting valuwa by finding a point of relatively high probability. For this model, the full maximum a posteriori (MAP) point over all variables is degenerate and has infinite density. But, if we fix log_sigma and nu it is no longer degenerate, so we find the MAP with respect only to the volatility process s keeping log_sigma and nu constant at their default values (remember that we set testval=.1 for sigma). We use the Limited-memory BFGS (L-BFGS) optimizer, which is provided by the scipy.optimize package, as it is more efficient for high dimensional functions and we have 400 stochastic random variables (mostly from s).
To do the sampling, we do a short initial run to put us in a volume of high probability, then start again at the new starting point. trace[-1] gives us the last point in the sampling trace. NUTS will recalculate the scaling parameters based on the new point, and in this case it leads to faster sampling due to better scaling.
Step14: We can check our samples by looking at the traceplot for nu and log_sigma.
Step15: Finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph. Each is rendered partially transparent (via the alpha argument in Matplotlib's plot function) so the regions where many paths overlap are shaded more darkly.
Step16: Case study 2
Step17: One approach for dealing with excess zeros is to use a mixture model. The mixture model contains two components
Step18: Notice that since the latent occupancy indicators are discrete, we cannot use a gradient-based MCMC step method like HMC or NUTS for this variable. Instead, we will sample using a BinaryMetropolis sampler that proposes only binary values at each iteration for z; for the continuous-valued parameters, theta and p we will use a standard Metropolis sampler.
We sample with both samplers at once by passing them to sample in a list. Each new sample is generated by first applying step1 then step2.
Step19: The resulting posteriors for the unknown parameters suggest an occupancy rate in the neighborhood of 0.3 to 0.4, and an expected count (conditional on occupancy) of just over 2.
Step20: Arbitrary deterministics
Due to its reliance on Theano, PyMC3 provides many mathematical functions and operators for transforming random variables into new random variables. However, the library of functions in Theano is not exhaustive, therefore Theano and PyMC3 provide functionality for creating arbitrary Theano functions in pure Python, and including these functions in PyMC models. This is supported with the as_op function decorator.
Theano needs to know the types of the inputs and outputs of a function, which are specified for as_op by itypes for inputs and otypes for outputs. The Theano documentation includes an overview of the available types.
Step21: An important drawback of this approach is that it is not possible for theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. However, it is possible to add a gradient if we inherit from theano.Op instead of using as_op. The PyMC example set includes a more elaborate example of the usage of as_op.
Arbitrary distributions
Similarly, the library of statistical distributions in PyMC3 is not exhaustive, but PyMC allows for the creation of user-defined functions for an arbitrary probability distribution. For simple statistical distributions, the DensityDist function takes as an argument any function that calculates a log-probability $log(p(x))$. This function may employ other random variables in its calculation. Here is an example inspired by a blog post by Jake Vanderplas on which priors to use for a linear regression (Vanderplas, 2014).
```python
import theano.tensor as T
from pymc3 import DensityDist
with Model() as model
Step22: Generalized Linear Models
Generalized Linear Models (GLMs) are a class of flexible models that are widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. Because these models are so common, PyMC3 offers a glm submodule that allows flexible creation of various GLMs with an intuitive R-like syntax that is implemented via the patsy module.
The glm submodule requires data to be included as a pandas DataFrame. Hence, for our linear regression example
Step23: The model can then be very concisely specified in one line of code.
Step24: The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object.
Step25: Backends
PyMC3 has support for different ways to store samples during and after sampling, called backends, including in-memory (default), text file, and SQLite. These can be found in pymc.backends
Step26: The stored trace can then later be loaded using the load command | Python Code:
import numpy as np
# Initialize random number generator
np.random.seed(123)
# True parameter values
alpha, sigma = 1, 1
beta = [1, 2.5]
# Size of dataset
size = 100
# Predictor variable
X1 = np.linspace(0, 1, size)
X2 = np.linspace(0,.2, size)
# Simulate outcome variable
Y = alpha + beta[0]*X1 + beta[1]*X2 + np.random.randn(size)*sigma
Explanation: Probabilistic Programming in Python using PyMC
Authors: John Salvatier, Thomas V. Wiecki, Christopher Fonnesbeck
Introduction
Probabilistic Programming (PP) allows flexible specification of statistical Bayesian models in code. PyMC3 is a new, open-source PP framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. It features next-generation Markov chain Monte Carlo (MCMC) sampling algorithms such as the No-U-Turn Sampler (NUTS; Hoffman, 2014), a self-tuning variant of Hamiltonian Monte Carlo (HMC; Duane, 1987). This class of samplers works well on high-dimensional and complex posterior distributions and allows many complex models to be fit without specialized knowledge about fitting algorithms. HMC and NUTS take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo, which means you usually don't need to have specialized knowledge about how the algorithms work. PyMC3, Stan (Stan Development Team, 2014), and the LaplacesDemon package for R are currently the only PP packages to offer HMC.
Probabilistic programming in Python confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via C, C++, Fortran or Cython. These features make it relatively straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by Bayesian analysis.
While most of PyMC3's user-facing features are written in pure Python, it leverages Theano (Bergstra et al., 2010) to transparently transcode models to C and compile it to machine code, thereby boosting performance. Theano is a library that allows expressions to be defined using generalized vector data structures called tensors, which are tightly integrated with the popular NumPy ndarray data structure, and similarly allows for broadcasting and advanced indexing, just as NumPy arrays do. Theano also automatically optimizes the likelihood's computational graph for speed and provides simple GPU integration.
Here, we present a primer on the use of PyMC3 for solving general Bayesian statistical inference and prediction problems. We will first see the basics of how to use PyMC3, motivated by a simple example: installation, data creation, model definition, model fitting and posterior analysis. Then we will cover two case studies and use them to show how to define and fit more sophisticated models. Finally we will show how to extend PyMC3 and discuss other useful features: the Generalized Linear Models subpackage, custom distributions, custom transformations and alternative storage backends.
Installation
Running PyMC3 requires a working Python interpreter, either version 2.7 (or more recent) or 3.4 (or more recent); we recommend that new users install version 3.4. A complete Python installation for Mac OSX, Linux and Windows can most easily be obtained by downloading and installing the free Anaconda Python Distribution by ContinuumIO.
PyMC3 can also be installed manually using pip (https://pip.pypa.io/en/latest/installing.html):
pip install git+https://github.com/pymc-devs/pymc3
PyMC3 depends on several third-party Python packages which will be automatically installed when installing via pip. The four required dependencies are: Theano, NumPy, SciPy, and Matplotlib. To take full advantage of PyMC3, the optional dependencies Pandas and Patsy should also be installed. These are not automatically installed, but can be installed by:
pip install patsy pandas
The source code for PyMC3 is hosted on GitHub at https://github.com/pymc-devs/pymc3 and is distributed under the liberal Apache License 2.0. On the GitHub site, users may also report bugs and other issues, as well as contribute code to the project, which we actively encourage.
A Motivating Example: Linear Regression
To introduce model definition, fitting and posterior analysis, we first consider a simple Bayesian linear regression model with normal priors for the parameters. We are interested in predicting outcomes $Y$ as normally-distributed observations with an expected value $\mu$ that is a linear function of two predictor variables, $X_1$ and $X_2$.
$$\begin{aligned}
Y &\sim \mathcal{N}(\mu, \sigma^2) \
\mu &= \alpha + \beta_1 X_1 + \beta_2 X_2
\end{aligned}$$
where $\alpha$ is the intercept, $\beta_i$ is the coefficient for covariate $X_i$, and $\sigma$ represents the observation error. Since we are constructing a Bayesian model, the unknown variables in the model must be assigned a prior distribution. Our choices will be zero-mean normal priors with a variance of 100 for both regression coefficients (which corresponds to relatively diffuse information regarding the true parameter values), and $\sigma$ is modeled as the absolute value of a Normal distribution (a so-called HalfNormal).
$$\begin{aligned}
\alpha &\sim \mathcal{N}(0, 100) \
\beta_i &\sim \mathcal{N}(0, 100) \
\sigma &\sim \lvert\mathcal{N}(0, 1){\rvert}
\end{aligned}$$
Generating data
We can simulate some artificial data from this model using only NumPy's random module, and then use PyMC3 to try to recover the corresponding parameters. We are intentionally generating the data to closely correspond to the PyMC3 model structure.
End of explanation
%matplotlib inline
import pylab as pl
fig, axes = pl.subplots(1, 2, sharex=True, figsize=(10,4))
axes[0].scatter(X1, Y)
axes[1].scatter(X2, Y)
axes[0].set_ylabel('Y'); axes[0].set_xlabel('X1'); axes[1].set_xlabel('X2');
Explanation: Here is what the simulated data look like. We use the pylab module from the plotting library matplotlib.
End of explanation
from pymc3 import Model, Normal, HalfNormal
Explanation: Model Specification
Specifying this model in PyMC3 is straightforward because its syntax is very close to the statistical notation. For the most part, each line of Python code corresponds to a line in the model notation above.
First, we import the components we will need from PyMC.
End of explanation
basic_model = Model()
with basic_model:
# Priors for unknown model parameters
alpha = Normal('alpha', mu=0, sd=10)
beta = Normal('beta', mu=0, sd=10, shape=2)
sigma = HalfNormal('sigma', sd=1)
# Expected value of outcome
mu = alpha + beta[0]*X1 + beta[1]*X2
# Likelihood (sampling distribution) of observations
Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
Explanation: Now we build our model, which we will present in full first, then explain each part line-by-line.
End of explanation
help(Normal) #try help(Model), help(Uniform) or help(basic_model)
Explanation: The first line,
python
basic_model = Model()
creates a new Model object which is a container for the model random variables.
Following instantiation of the model, the subsequent specification of the model components is performed inside a with statement:
python
with basic_model:
This creates a context manager, with our basic_model as the context, that includes all statements until the indented block ends. This means all PyMC3 objects introduced in the indented code block below the with statement are added to the model behind the scenes. Absent this context manager idiom, we would be forced to manually associate each of the variables with basic_model right after we create them. If you try to create a new random variable without a with model: statement, it will raise an error since there is no obvious model for the variable to be added to.
The first three statements in the context manager:
python
alpha = Normal('alpha', mu=0, sd=10)
beta = Normal('beta', mu=0, sd=10, shape=2)
sigma = HalfNormal('sigma', sd=1)
create stochastic random variables: Normal prior distributions with a mean of 0 and standard deviation of 10 for the regression coefficients, and a half-normal distribution for the standard deviation of the observations, $\sigma$. These are stochastic because their values are partly determined by their parents, which for priors are simple constants, and partly random (or stochastic).
We call the Normal constructor to create a random variable to use as a normal prior. The first argument is always the name of the random variable, which should almost always match the name of the Python variable being assigned to, since it is sometimes used to retrieve the variable from the model for summarizing output. The remaining required arguments for a stochastic object are the parameters, in this case mu, the mean, and sd, the standard deviation, to which we assign the hyperparameter values of the model. In general, a distribution's parameters are values that determine the location, shape or scale of the random variable, depending on the parameterization of the distribution. Most commonly used distributions, such as Beta, Exponential, Categorical, Gamma, Binomial and many others, are available in PyMC3.
The beta variable has an additional shape argument to denote it as a vector-valued parameter of size 2. The shape argument is available for all distributions and specifies the length or shape of the random variable, but is optional for scalar variables, since it defaults to a value of one. It can be an integer, to specify an array, or a tuple, to specify a multidimensional array (e.g. shape=(5,7) makes a random variable that takes on 5-by-7 matrix values).
Detailed notes about distributions, sampling methods and other PyMC3 functions are available via the help function.
End of explanation
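As a hedged aside (this small demo is not part of the original example, and the variable names are purely illustrative), the shape argument described above also scales to matrix-valued variables:
# Hedged illustration of the shape argument in a separate toy model
with Model() as shape_demo:
    coefs = Normal('coefs', mu=0, sd=10, shape=(5, 7))  # a 5x7 array of Normal random variables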
from pymc3 import find_MAP
map_estimate = find_MAP(model=basic_model)
print(map_estimate)
Explanation: Having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship:
python
mu = alpha + beta * X
This creates a deterministic random variable, which implies that its value is completely determined by its parents' values. That is, there is no uncertainty beyond that which is inherent in the parents' values. Here, mu is just the sum of the intercept alpha and the product of the slope beta and the predictor variable, whatever their values may be. PyMC3 random variables and data can be arbitrarily added, subtracted, divided, multiplied together and indexed-into to create new random variables. This allows for great model expressivity. Many common mathematical functions like sum, sin, exp and linear algebra functions like dot (for inner product) and inv (for inverse) are also provided.
The final line of the model, defines Y_obs, the sampling distribution of the outcomes in the dataset.
python
Y_obs = Normal('Y_obs', mu=mu, sd=sigma, observed=Y)
This is a special case of a stochastic variable that we call an observed stochastic, and represents the data likelihood of the model. It is identical to a standard stochastic, except that its observed argument, which passes the data to the variable, indicates that the values for this variable were observed, and should not be changed by any fitting algorithm applied to the model. The data can be passed in the form of either a numpy.ndarray or pandas.DataFrame object.
Notice that, unlike for the priors of the model, the parameters for the normal distribution of Y_obs are not fixed values, but rather are the deterministic object mu and the stochastic sigma. This creates parent-child relationships between the likelihood and these two variables.
Model fitting
Having completely specified our model, the next step is to obtain posterior estimates for the unknown variables in the model. Ideally, we could calculate the posterior analytically, but for most non-trivial models, this is not feasible. We will consider two approaches, whose appropriateness depends on the structure of the model and the goals of the analysis: finding the maximum a posteriori (MAP) point using optimization methods, and computing summaries based on samples drawn from the posterior distribution using Markov Chain Monte Carlo (MCMC) sampling methods.
Maximum a posteriori methods
The maximum a posteriori (MAP) estimate for a model is the mode of the posterior distribution and is generally found using numerical optimization methods. This is often fast and easy to do, but only gives a point estimate for the parameters and can be biased if the mode isn't representative of the distribution. PyMC3 provides this functionality with the find_MAP function.
Below we find the MAP for our original model. The MAP is returned as a parameter point, which is always represented by a Python dictionary of variable names to NumPy arrays of parameter values.
End of explanation
from scipy import optimize
map_estimate = find_MAP(model=basic_model, fmin=optimize.fmin_powell)
print(map_estimate)
Explanation: By default, this uses the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm to find the maximum of the log-posterior but also allows selection of other optimization algorithms from the scipy.optimize module. For example, below we use Powell's method to find the MAP.
End of explanation
from pymc3 import NUTS, sample
with basic_model:
# obtain starting values via MAP
start = find_MAP(fmin=optimize.fmin_powell)
# instantiate sampler
step = NUTS(scaling=start)
# draw 500 posterior samples
trace = sample(500, step, start=start)
Explanation: It is important to note that the MAP estimate is not always reasonable, especially if the mode is at an extreme. This can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. This will often occur in hierarchical models with the variance parameter for the random effect. If the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together.
Most techniques for finding the MAP estimate also only find a local optimum (which is often good enough), but can fail badly for multimodal posteriors if the different modes are meaningfully different.
Sampling methods
Though finding the MAP is a fast and easy way of obtaining estimates of the unknown model parameters, it is limited because there is no associated estimate of uncertainty produced with the MAP estimates. Instead, a simulation-based approach such as Markov chain Monte Carlo (MCMC) can be used to obtain a Markov chain of values that, given the satisfaction of certain conditions, are indistinguishable from samples from the posterior distribution.
To conduct MCMC sampling to generate posterior samples in PyMC3, we specify a step method object that corresponds to a particular MCMC algorithm, such as Metropolis, Slice sampling, or the No-U-Turn Sampler (NUTS). PyMC3's step_methods submodule contains the following samplers: NUTS, Metropolis, Slice, HamiltonianMC, and BinaryMetropolis.
Gradient-based sampling methods
PyMC3 has the standard sampling algorithms like adaptive Metropolis-Hastings and adaptive slice sampling, but PyMC3's most capable step method is the No-U-Turn Sampler. NUTS is especially useful on models that have many continuous parameters, a situation where other MCMC algorithms work very slowly. It takes advantage of information about where regions of higher probability are, based on the gradient of the log posterior-density. This helps these samplers achieve dramatically faster convergence on large problems than traditional sampling methods achieve. PyMC3 relies on Theano to analytically compute model gradients via automatic differentiation of the computational graph of the posterior density. NUTS also has several self-tuning strategies for adaptively setting the tunable parameters of Hamiltonian Monte Carlo. For random variables that are undifferentiable (namely, discrete variables) NUTS and HMC cannot be used, but they may still be used on the differentiable variables in a model that contains undifferentiable variables.
Both NUTS and HMC require a scaling matrix parameter, which is analogous to the variance parameter for the jump proposal distribution in Metropolis-Hastings, although HMC and NUTS use it somewhat differently. The matrix gives the rough shape of the distribution so that NUTS does not make jumps that are too large in some directions and too small in other directions. It is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. This is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions. Poor scaling parameters will slow down NUTS significantly, sometimes almost stopping it completely. A reasonable starting point for sampling can also be important for efficient sampling, but not as often.
Fortunately NUTS can often make good guesses for the scaling parameters. If you pass a point in parameter space (as a dictionary of variable names to parameter values, the same format as returned by find_MAP) to HMC or NUTS, they will look at the local curvature of the log posterior-density (the diagonal of the Hessian matrix) at that point to make a guess for a good scaling vector, which often results in a good value. The MAP estimate is often a good point to use to initiate sampling. It is also possible to supply your own vector or scaling matrix to HMC/NUTS, though this is a more advanced use. If you wish to modify a Hessian at a specific point to use as your scaling matrix or vector, you can use find_hessian or find_hessian_diag.
For our linear regression example in basic_model, we will use NUTS to sample 500 draws from the posterior using the MAP as the starting point and scaling point. This must also be performed inside the context of the model.
End of explanation
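As a hedged sketch of the manual-scaling option mentioned above (find_hessian is named in the text, but its exact import path and the NUTS arguments may vary between PyMC3 versions):
from pymc3 import find_hessian
with basic_model:
    start = find_MAP()
    scaling = find_hessian(start)           # Hessian of the log-posterior at the MAP
    step = NUTS(scaling=scaling)            # supply our own scaling matrix
    trace = sample(500, step, start=start)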
trace['alpha'][-5:]
Explanation: The sample function returns a trace object that can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. The first dimension of the array is the sampling index and the later dimensions match the shape of the variable. We can see the last 5 values for the alpha variable as follows
End of explanation
from pymc3 import traceplot
traceplot(trace);
Explanation: Posterior analysis
PyMC3 provides plotting and summarization functions for inspecting the sampling output. A simple posterior plot can be created using traceplot.
End of explanation
from pymc3 import summary
summary(trace)
Explanation: The left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the Markov chain plotted in sequential order. The beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients.
In addition, summary provides a text-based output of common posterior statistics:
End of explanation
n = 400
returns = np.genfromtxt("data/SP500.csv")[-n:]
pl.plot(returns);
Explanation: Case study 1: Stochastic volatility
We present a case study of stochastic volatility, i.e. time-varying stock market volatility, to illustrate PyMC3's use in addressing a more realistic problem. The distribution of market returns is highly non-normal, which makes sampling the volatilities significantly more difficult. This example has 400+ parameters, so common sampling algorithms like Metropolis-Hastings would get bogged down, generating highly autocorrelated samples. Instead, we use NUTS, which is dramatically more efficient.
The Model
Asset prices have time-varying volatility (variance of day over day returns). In some periods, returns are highly variable, while in others they are very stable. Stochastic volatility models address this with a latent volatility variable, which changes over time. The following model is similar to the one described in the NUTS paper (Hoffman 2014, p. 21).
$$\begin{aligned}
\sigma &\sim exp(50) \
\nu &\sim exp(.1) \
s_i &\sim \mathcal{N}(s_{i-1}, \sigma^{-2}) \
log(y_i) &\sim t(\nu, 0, exp(-2 s_i))
\end{aligned}$$
Here, $y$ is the daily return series which is modeled with a Student-t distribution with an unknown degrees of freedom parameter, and a scale parameter determined by a latent process $s$. The individual $s_i$ are the individual daily log volatilities in the latent log volatility process.
The Data
Our data consist of the last 400 daily returns of the S&P 500.
End of explanation
from pymc3 import Exponential, T, logtransform, exp, Deterministic
from pymc3.distributions.timeseries import GaussianRandomWalk
with Model() as sp500_model:
nu = Exponential('nu', 1./10, testval=.1)
sigma, log_sigma = sp500_model.TransformedVar('sigma', Exponential.dist(1./.02, testval=.1),
logtransform)
s = GaussianRandomWalk('s', sigma**-2, shape=n)
volatility_process = Deterministic('volatility_process', exp(-2*s))
r = T('r', nu, lam=volatility_process, observed=returns)
Explanation: Model Specification
As with the linear regression example, specifying the model in PyMC3 mirrors its statistical specification. This model employs several new distributions: the Exponential distribution for the $ \nu $ and $\sigma$ priors, the Student-t (T) distribution for the distribution of returns, and the GaussianRandomWalk for the prior for the latent volatilities.
It is easier to sample the scale of the log volatility process innovations, $\sigma$, on a log scale, so we create it using the model's TransformedVar method and use the appropriate transformation, logtransform, as an argument. TransformedVar creates one variable in the transformed space and one in the normal space, whereby the one in the transformed space (here $\text{log}(\sigma) $) is the one over which sampling will occur, and the one in the normal space is used throughout the rest of the model. The required arguments for TransformedVar are a variable name, a distribution and a transformation to use.
Although, unlike model specification in PyMC2, we do not typically provide starting points for variables at the model specification stage, we can also provide an initial value for any distribution (called a "test value") using the testval argument. This overrides the default test value for the distribution (usually the mean, median or mode of the distribution), and is most often useful if some values are illegal and we want to ensure we select a legal one. The test values for the distributions are also used as a starting point for sampling and optimization by default, though this is easily overridden.
The vector of latent volatilities s is given a prior distribution by GaussianRandomWalk. As its name suggests GaussianRandomWalk is a vector valued distribution where the values of the vector form a random normal walk of length n, as specified by the shape argument. The scale of the innovations of the random walk, sigma, is specified in terms of the precision of the normally distributed innovations and can be a scalar or vector.
End of explanation
import scipy
with sp500_model:
start = find_MAP(vars=[s], fmin=scipy.optimize.fmin_l_bfgs_b)
step = NUTS(scaling=start)
trace = sample(50, step, progressbar=False)
# Start next run at the last sampled position.
step = NUTS(scaling=trace[-1], gamma=.25)
trace = sample(400, step, start=trace[-1])
Explanation: Notice that we transform the log volatility process s into the volatility process by exp(-2*s). Here, exp is a Theano function, rather than the corresponding function in NumPy; Theano provides a large subset of the mathematical functions that NumPy does.
Also note that we have declared the Model name sp500_model in the first occurrence of the context manager, rather than splitting it into two lines, as we did for the first example.
Fitting
Before we draw samples from the posterior, it is prudent to find a decent starting value by finding a point of relatively high probability. For this model, the full maximum a posteriori (MAP) point over all variables is degenerate and has infinite density. But, if we fix log_sigma and nu it is no longer degenerate, so we find the MAP with respect only to the volatility process s, keeping log_sigma and nu constant at their default values (remember that we set testval=.1 for sigma). We use the Limited-memory BFGS (L-BFGS) optimizer, which is provided by the scipy.optimize package, as it is more efficient for high dimensional functions and we have 400 stochastic random variables (mostly from s).
To do the sampling, we do a short initial run to put us in a volume of high probability, then start again at the new starting point. trace[-1] gives us the last point in the sampling trace. NUTS will recalculate the scaling parameters based on the new point, and in this case it leads to faster sampling due to better scaling.
End of explanation
#figsize(12,6)
traceplot(trace, [nu, log_sigma]);
Explanation: We can check our samples by looking at the traceplot for nu and log_sigma.
End of explanation
pl.title(str(volatility_process));
pl.plot(trace[volatility_process][::10].T,'b', alpha=.03);
pl.xlabel('time');
pl.ylabel('log volatility');
Explanation: Finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph. Each is rendered partially transparent (via the alpha argument in Matplotlib's plot function) so the regions where many paths overlap are shaded more darkly.
End of explanation
y = np.array([0, 2, 1, 0, 4, 2, 0, 0, 4, 0, 0, 0, 0, 0, 3, 0, 0, 6, 0, 0, 0, 2, 1,
2, 0, 0, 0, 1, 0, 0, 0, 4, 2, 0, 0, 0, 1, 0, 2, 4, 0, 0, 1, 0, 0, 0,
0, 0, 2, 0, 2, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 0, 0,
0, 0, 3, 0, 2, 0, 1, 2, 2, 2, 2, 3, 0, 0, 0, 0, 1, 0, 3, 1, 0, 0, 0,
0, 0, 2, 0, 0, 1, 0, 0])
pl.hist(y, bins=range(7));
Explanation: Case study 2: Occupancy estimation
Ecologists often use survey data to make inferences regarding the abundance and distribution of plants and animals. Such data are often zero-inflated, whereby there are more zeros observed than you would expect if the data were distributed according to some common distribution. This is sometimes due to habitat heterogeneity, which causes areas of low quality to be unoccupied by a particular species. However, some sites may be unoccupied simply due to chance.
Here is an example of such data; each element in the array (n=100) represents a count of a particular species among a set of sites. The data are clearly zero-inflated:
End of explanation
from pymc3 import Beta, Bernoulli, ZeroInflatedPoisson, Uniform, Poisson
with Model() as zip_model:
# Estimated occupancy
p = Beta('p', 1, 1)
# Latent variable for occupancy
z = Bernoulli('z', p, shape=y.shape)
# Estimated mean count
theta = Uniform('theta', 0, 100)
# Poisson likelihood
yd = ZeroInflatedPoisson('y', theta, z, observed=y)
Explanation: One approach for dealing with excess zeros is to use a mixture model. The mixture model contains two components: one which models the count data without inflated zeros (here, an abundance model), and another that accounts for the occurrence of excess zeros (here, a habitat suitability model). In this model, the abundance component is conditional on the habitat being suitable. Suitability is a binary variable, which indicates suitability ($z=1$) with some probability $p$ and unsuitability ($z=0$) with probability $1-p$. If it is a suitable habitat then the abundance is modeled according to a Poisson distribution with mean and variance $\theta$, whereas unsuitable patches always have zero abundance.
$$\begin{aligned}
p &\sim Beta(1,1) \
\theta &\sim Unif(0,100) \
z_i &\sim \text{Bernoulli}(p) \
(y_i|z_i=1) &\sim \text{Poisson}(\theta) \
(y_i|z_i=0) &= 0
\end{aligned}$$
PyMC3 includes a ZeroInflatedPoisson distribution class among its standard distributions, which takes a conditional mean parameter as well as an array of indicators for the excess zeros. Since we do not know which zeros are excess a priori, this array is modeled as a latent variable using a Bernoulli distribution, with a hyperparameter representing the occupancy rate.
End of explanation
from pymc3 import Metropolis, BinaryMetropolis, sample
with zip_model:
start = {'p': 0.5, 'z': (y > 0), 'theta': 5, 'yd_missing': np.array([1,1])}
step1 = Metropolis([theta, p])
step2 = BinaryMetropolis([z])
trace = sample(10000, [step1, step2], start)
Explanation: Notice that since the latent occupancy indicators are discrete, we cannot use a gradient-based MCMC step method like HMC or NUTS for this variable. Instead, we will sample using a BinaryMetropolis sampler that proposes only binary values at each iteration for z; for the continuous-valued parameters, theta and p we will use a standard Metropolis sampler.
We sample with both samplers at once by passing them to sample in a list. Each new sample is generated by first applying step1 then step2.
End of explanation
traceplot(trace[5000:], vars=['p', 'theta']);
Explanation: The resulting posteriors for the unknown parameters suggest an occupancy rate in the neighborhood of 0.3 to 0.4, and an expected count (conditional on occupancy) of just over 2.
End of explanation
import theano.tensor as T
from theano.compile.ops import as_op
@as_op(itypes=[T.lscalar], otypes=[T.lscalar])
def crazy_modulo3(value):
if value > 0:
return value % 3
else :
return (-value + 1) % 3
with Model() as model_deterministic:
a = Poisson('a', 1)
b = crazy_modulo3(a)
Explanation: Arbitrary deterministics
Due to its reliance on Theano, PyMC3 provides many mathematical functions and operators for transforming random variables into new random variables. However, the library of functions in Theano is not exhaustive, therefore Theano and PyMC3 provide functionality for creating arbitrary Theano functions in pure Python, and including these functions in PyMC models. This is supported with the as_op function decorator.
Theano needs to know the types of the inputs and outputs of a function, which are specified for as_op by itypes for inputs and otypes for outputs. The Theano documentation includes an overview of the available types.
End of explanation
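As a hedged illustration of other itypes/otypes choices (this extra operator is not part of the original notebook):
# Hedged sketch: a vector-in, vector-out deterministic wrapped with as_op
@as_op(itypes=[T.dvector], otypes=[T.dvector])
def clip_negative(values):
    # elementwise clip; must return a NumPy array matching the declared otypes
    return np.maximum(values, 0.0)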
from pymc3.distributions import Continuous
class Beta(Continuous):
def __init__(self, mu, *args, **kwargs):
super(Beta, self).__init__(*args, **kwargs)
self.mu = mu
self.mode = mu
def logp(self, value):
mu = self.mu
return beta_logp(value - mu)
@as_op(itypes=[T.dscalar], otypes=[T.dscalar])
def beta_logp(value):
return -1.5 * np.log(1 + (value)**2)
with Model() as model:
beta = Beta('slope', mu=0, testval=0)
Explanation: An important drawback of this approach is that it is not possible for theano to inspect these functions in order to compute the gradient required for the Hamiltonian-based samplers. Therefore, it is not possible to use the HMC or NUTS samplers for a model that uses such an operator. However, it is possible to add a gradient if we inherit from theano.Op instead of using as_op. The PyMC example set includes a more elaborate example of the usage of as_op.
Arbitrary distributions
Similarly, the library of statistical distributions in PyMC3 is not exhaustive, but PyMC allows for the creation of user-defined functions for an arbitrary probability distribution. For simple statistical distributions, the DensityDist function takes as an argument any function that calculates a log-probability $log(p(x))$. This function may employ other random variables in its calculation. Here is an example inspired by a blog post by Jake Vanderplas on which priors to use for a linear regression (Vanderplas, 2014).
```python
import theano.tensor as T
from pymc3 import DensityDist
with Model() as model:
alpha = Uniform('intercept', -100, 100)
# Create custom densities
beta = DensityDist('beta', lambda value: -1.5 * T.log(1 + value**2), testval=0)
eps = DensityDist('eps', lambda value: -T.log(T.abs_(value)), testval=1)
# Create likelihood
like = Normal('y_est', mu=alpha + beta * X, sd=eps, observed=Y)
```
For more complex distributions, one can create a subclass of Continuous or Discrete and provide the custom logp function, as required. This is how the built-in distributions in PyMC are specified. As an example, fields like psychology and astrophysics have complex likelihood functions for a particular process that may require numerical approximation. In these cases, it is impossible to write the function in terms of predefined theano operators and we must use a custom theano operator using as_op or inheriting from theano.Op.
Implementing the beta variable above as a Continuous subclass is shown below, along with a sub-function using the as_op decorator, though this is not strictly necessary.
End of explanation
# Convert X and Y to a pandas DataFrame
import pandas
df = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y})
Explanation: Generalized Linear Models
Generalized Linear Models (GLMs) are a class of flexible models that are widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. Because these models are so common, PyMC3 offers a glm submodule that allows flexible creation of various GLMs with an intuitive R-like syntax that is implemented via the patsy module.
The glm submodule requires data to be included as a pandas DataFrame. Hence, for our linear regression example:
End of explanation
from pymc3.glm import glm
with Model() as model_glm:
glm('y ~ x1 + x2', df)
Explanation: The model can then be very concisely specified in one line of code.
End of explanation
from pymc3.glm.families import Binomial
df_logistic = pandas.DataFrame({'x1': X1, 'x2': X2, 'y': Y > 0})
with Model() as model_glm_logistic:
glm('y ~ x1 + x2', df_logistic, family=Binomial())
Explanation: The error distribution, if not specified via the family argument, is assumed to be normal. In the case of logistic regression, this can be modified by passing in a Binomial family object.
End of explanation
from pymc3.backends import SQLite
with model_glm_logistic:
backend = SQLite('trace.sqlite')
trace = sample(5000, Metropolis(), trace=backend)
summary(trace, vars=['x1', 'x2'])
Explanation: Backends
PyMC3 has support for different ways to store samples during and after sampling, called backends, including in-memory (default), text file, and SQLite. These can be found in pymc3.backends:
By default, an in-memory ndarray is used but if the samples would get too large to be held in memory we could use the sqlite backend:
End of explanation
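The text-file backend mentioned above follows the same pattern; a minimal hedged sketch, assuming a Text class is exported from pymc3.backends (version-dependent) and that the 'trace_txt' output directory is writable:
from pymc3.backends import Text
with model_glm_logistic:
    text_backend = Text('trace_txt')                         # writes CSV files into ./trace_txt/
    trace_txt = sample(500, Metropolis(), trace=text_backend)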
from pymc3.backends.sqlite import load
with basic_model:
trace_loaded = load('trace.sqlite')
trace_loaded
Explanation: The stored trace can then later be loaded using the load command:
End of explanation |
13,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
The Series Data Structure
Step1: Querying a Series
Step2: The DataFrame Data Structure
Step3: Dataframe Indexing and Loading
Step4: Querying a DataFrame
Step5: Indexing Dataframes
Step6: Missing values | Python Code:
import pandas as pd
pd.Series?
animals = ['Tiger', 'Bear', 'Moose']
pd.Series(animals)
numbers = [1, 2, 3]
pd.Series(numbers)
animals = ['Tiger', 'Bear', None]
df = pd.Series(animals)
df['number_column'] = -99999
df
numbers = [1, 2, None]
pd.Series(numbers)
import numpy as np
np.nan == None
np.nan == np.nan
np.isnan(np.nan)
sports = {'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'}
s = pd.Series(sports)
s
s.index
s = pd.Series(['Tiger', 'Bear', 'Moose'], index=['India', 'America', 'Canada'])
s
sports = {'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'}
s = pd.Series(sports, index=['Golf', 'Sumo', 'Hockey'])
s
Explanation: You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
The Series Data Structure
End of explanation
sports = {'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'}
s = pd.Series(sports)
s
s.iloc[3]
s.loc['Golf']
s[3]
s['Golf']
sports = {99: 'Bhutan',
100: 'Scotland',
101: 'Japan',
102: 'South Korea'}
s = pd.Series(sports)
s[0] #This won't call s.iloc[0] as one might expect, it generates an error instead
s = pd.Series([100.00, 120.00, 101.00, 3.00])
s
total = 0
for item in s:
total+=item
print(total)
import numpy as np
total = np.sum(s)
print(total)
#this creates a big series of random numbers
s = pd.Series(np.random.randint(0,1000,10000))
s.head()
len(s)
%%timeit -n 100
summary = 0
for item in s:
summary+=item
%%timeit -n 100
summary = np.sum(s)
s+=2 #adds two to each item in s using broadcasting
s.head()
for label, value in s.iteritems():
s.set_value(label, value+2)
s.head()
%%timeit -n 10
s = pd.Series(np.random.randint(0,1000,10000))
for label, value in s.iteritems():
s.loc[label]= value+2
%%timeit -n 10
s = pd.Series(np.random.randint(0,1000,10000))
s+=2
s = pd.Series([1, 2, 3])
s.loc['Animal'] = 'Bears'
s
original_sports = pd.Series({'Archery': 'Bhutan',
'Golf': 'Scotland',
'Sumo': 'Japan',
'Taekwondo': 'South Korea'})
cricket_loving_countries = pd.Series(['Australia',
'Barbados',
'Pakistan',
'England'],
index=['Cricket',
'Cricket',
'Cricket',
'Cricket'])
all_countries = original_sports.append(cricket_loving_countries)
original_sports
cricket_loving_countries
all_countries
all_countries.loc['Cricket']
Explanation: Querying a Series
End of explanation
import pandas as pd
purchase_1 = pd.Series({'Name': 'Chris',
'Item Purchased': 'Dog Food',
'Cost': 22.50})
purchase_2 = pd.Series({'Name': 'Kevyn',
'Item Purchased': 'Kitty Litter',
'Cost': 2.50})
purchase_3 = pd.Series({'Name': 'Vinod',
'Item Purchased': 'Bird Seed',
'Cost': 5.00})
df = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2'])
df.head()
df.loc['Store 2']
type(df.loc['Store 2'])
df.loc['Store 1']
df.loc['Store 1', 'Cost']
df.T
df.T.loc['Cost']
df['Cost']
df.loc['Store 1']['Cost']
df.loc[:,['Name', 'Cost']]
df[['Name', 'Cost']]
df['Name']
df.drop('Store 1')
df
copy_df = df.copy()
copy_df = copy_df.drop('Store 1')
copy_df
copy_df.drop?
del copy_df['Name']
copy_df
df['Location'] = None
df
Explanation: The DataFrame Data Structure
End of explanation
costs = df['Cost']
costs
costs+=2
costs
df
!cat olympics.csv
df = pd.read_csv('olympics.csv')
df.head()
df = pd.read_csv('olympics.csv', index_col = 0, skiprows=1)
df.head()
df.columns
for col in df.columns:
if col[:2]=='01':
df.rename(columns={col:'Gold' + col[4:]}, inplace=True)
if col[:2]=='02':
df.rename(columns={col:'Silver' + col[4:]}, inplace=True)
if col[:2]=='03':
df.rename(columns={col:'Bronze' + col[4:]}, inplace=True)
if col[:1]=='№':
df.rename(columns={col:'#' + col[1:]}, inplace=True)
df.head()
Explanation: Dataframe Indexing and Loading
End of explanation
df['Gold'] > 0
# boolean mask
only_gold = df.where(df['Gold'] > 0)
only_gold.head()
only_gold['Gold'].sum()
df['Gold'].count()
only_gold = only_gold.dropna()
only_gold.head()
only_gold = df[df['Gold'] > 0]
only_gold.head()
len(df[(df['Gold'] > 0) | (df['Gold.1'] > 0)])
df[(df['Gold.1'] > 0) & (df['Gold'] == 0)]
Explanation: Querying a DataFrame
End of explanation
df.head()
# if changing index, the index column will be deleted, so create a new column for the old index (country)
df['country'] = df.index
df = df.set_index('Gold')
df.head()
df = df.reset_index()
df.head()
df = pd.read_csv('census.csv')
df.head()
df['SUMLEV'].unique()
df=df[df['SUMLEV'] == 50]
df.head()
columns_to_keep = ['STNAME',
'CTYNAME',
'BIRTHS2010',
'BIRTHS2011',
'BIRTHS2012',
'BIRTHS2013',
'BIRTHS2014',
'BIRTHS2015',
'POPESTIMATE2010',
'POPESTIMATE2011',
'POPESTIMATE2012',
'POPESTIMATE2013',
'POPESTIMATE2014',
'POPESTIMATE2015']
df = df[columns_to_keep]
df.head()
df = df.set_index(['STNAME', 'CTYNAME'])
df.head()
df.loc['Michigan', 'Washtenaw County']
df.loc[ [('Michigan', 'Washtenaw County'),
('Michigan', 'Wayne County')] ]
Explanation: Indexing Dataframes
End of explanation
df = pd.read_csv('log.csv')
df
df.fillna?
df = df.set_index('time')
df = df.sort_index()
df
df = df.reset_index()
df = df.set_index(['time', 'user'])
df
df = df.fillna(method='ffill')
df.head()
Explanation: Missing values
End of explanation |
13,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Will explore aggregation framework for some analysis and then explore how we could use it for data cleaning
Example of Aggregation Framework
Let's find out who tweeted the most
- group tweets by user
- count each user's tweets
- sort into descending order
- select user at top
Step1: Aggregation Operators
$project - shape documents e.g. select
$match - filtering
$skip - skip at start
$limit - limit after some
$unwind - for every field of the array field on which it is used it will create an instance of document containing the values of the field. This can be used for grouping
Match operator
Who has the highest followers to friend ratio?
Step2: For $match we use the same syntax that we use for read operations
Project operator
include fields from the original document
insert computed fields
rename fields
create fields that hold sub documents
Unwind operator
need to use array values somehow
Let's try and find who included the most user mentions
Step3: group operators
$sum
$first
$last
$max
$min
$avg
array operators
- $push
- $addToSet | Python Code:
import pprint
def get_client():
from pymongo import MongoClient
return MongoClient('mongodb://localhost:27017/')
def get_collection():
return get_client().examples.twitter
collection = get_collection()
def aggregate_and_show(collection, query, limit = True):
_query = query[:]
if limit:
_query.append({"$limit": 5})
result = collection.aggregate(_query)
pprint.pprint(list(r for r in result))
query = [
{"$group": {"_id": "$user.screen_name",
"count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
aggregate_and_show(collection, query)
Explanation: Introduction
Will explore aggregation framework for some analysis and then explore how we could use it for data cleaning
Example of Aggregation Framework
Let's find out who tweeted the most
- group tweets by user
- count each user's tweets
- sort into descending order
- select user at top
End of explanation
query = [
{"$match": {"user.friends_count": {"$gt": 0},
"user.followers_count": {"$gt": 0}}},
{"$project": {"ratio": {"$divide": ["$user.followers_count",
"$user.friends_count"]},
"screen_name": "$user.screen_name"}},
{"$sort": {"ratio": -1}}
]
aggregate_and_show(collection, query)
Explanation: Aggregation Operators
$project - shape documents e.g. select
$match - filtering
$skip - skip at start
$limit - limit after some
$unwind - for every field of the array field on which it is used it will create an instance of document containing the values of the field. This can be used for grouping
Match operator
Who has the highest followers to friend ratio?
End of explanation
query = [
{"$unwind": "$entities.user_mentions"},
{"$group": {"_id": "$user.screen_name",
"count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
aggregate_and_show(collection, query)
Explanation: For $match we use the same syntax that we use for read operations
Project operator
include fields from the original document
insert computed fields
rename fields
create fields that hold sub documents
Unwind operator
need to use array values somehow
Let's try and find who included the most user mentions
End of explanation
#get unique hashtags by user
query = [
{"$unwind": "$entities.hashtags"},
{"$group": {"_id": "$user.screen_name",
"unique_hashtags": {
"$addToSet": "$entities.hashtags.text"
}}},
{"$sort": {"_id": -1}}
]
aggregate_and_show(collection, query)
# find number of unique user mentions
query = [
{"$unwind": "$entities.user_mentions"},
{"$group": {
"_id": "$user.screen_name",
"mset": {
"$addToSet": "$entities.user_mentions.screen_name"
}
}},
{"$unwind": "$mset"},
{"$group": {"_id": "$_id", "count": {"$sum": 1}}},
{"$sort": {"count": -1}}
]
aggregate_and_show(collection, query)
Explanation: group operators
$sum
$first
$last
$max
$min
$avg
array operators
- $push
- $addToSet
End of explanation |
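A hedged sketch combining the $avg and $push operators listed above; it assumes each tweet document carries top-level retweet_count and text fields, which may not hold for every dump:
query = [
    {"$group": {"_id": "$user.screen_name",
                "avg_retweets": {"$avg": "$retweet_count"},
                "tweets": {"$push": "$text"}}},
    {"$sort": {"avg_retweets": -1}}
]
aggregate_and_show(collection, query)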
13,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extracting Structure from Scientific Abstracts
using a LSTM neural network
Paul Willot
This project was made for the ICADL 2015 conference.
In this notebook we will go through all steps required to build a LSTM neural network to classify sentences inside a scientific paper abstract.
Summary
Step1: First, let's gather some data. We use the PubMed database of medical paper.
Specificaly, we will focus on structured abstracts. There is approximately 3 million avalaible, and we will focus on a reduced portion of this (500.000) but feel free to use a bigger corpus.
The easiest way to try this is to use the toy_corpus.txt and tokenizer.pickle included in the project repo.
To work on real dataset, for convenience I prepared the following files. Use the one appropriate for your needs, for example you can download the training and testing datas and jump to the next notebook.
Download the full corpus (~500.000 structured abstracts, 500 MB compressed)
Step2: Download a toy corpus (224 structured abstracts, 200 KB compressed)
Note
Step3: Download a lemmatized corpus (preprocessed, 350 MB compressed)
Step4: Download training and testing datas for the LSTM (preprocessed, vectorized and splitted, 100 MB compressed)
Step5: Some imports
Step6: <a id='extract'></a>
Extract and parse the dataset
Separate each documents, isolate the abstracts
Step7: Our data currently look like this
Step8: Cleaning, dumping the abstracts with incorrect number of labels
Step9: <a id='pre-process'></a>
Pre-process
Replacing numbers with ##NB.
Step10: For correct sentence splitting, we train a tokenizer using NLTK Punkt Sentence Tokenizer. This tokenizer use an unsupervised algorithm to learn how to split sentences on a corpus.
Step11: Our data look now like this
Step12: Lemmatization
It may be a long process on huge dataset, but using spacy make it currently 50 times faster than a slimple use of the NLTK tools.
It get a huge speedup with paralellisation (tryed on 80 cores). Specify nb_core=X if needed.
Step13: Let's save that
Step14: To directly load a lemmatized corpus
Step15: <a id='label analysis'></a>
Label analysis
Does not affect the corpus, we simply do this get some insights.
Step16: <a id='choosing labels'></a>
Choosing labels
Does affect the corpus
We can restrict our data to work only on abstracts having labels maching a specific pattern...
Step17: ... Or we can keep a more noisy dataset and reduce it to a set of labels
Step18: <a id='create train'></a>
Creating train and test data
Let's format the datas for the classifier
Reorder the labels for better readability
Step19: Vectorize the sentences.
Step20: Now let's save all this | Python Code:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
%load_ext watermark
# for reproducibility
%watermark -a 'Paul Willot' -mvp numpy,scipy,spacy
Explanation: Extracting Structure from Scientific Abstracts
using a LSTM neural network
Paul Willot
This project was made for the ICADL 2015 conference.
In this notebook we will go through all steps required to build a LSTM neural network to classify sentences inside a scientific paper abstract.
Summary:
* Extract dataset
* Pre-process
* Label analysis
* Choosing labels
* Create train and test set
End of explanation
!wget https://www.dropbox.com/s/lhqe3bls0mkbq57/pubmed_result_548899.txt.zip -P ./data/
!unzip -o ./data/pubmed_result_548899.txt.zip -d ./data/
Explanation: First, let's gather some data. We use the PubMed database of medical papers.
Specifically, we will focus on structured abstracts. There are approximately 3 million available, and we will work on a reduced portion of these (500,000), but feel free to use a bigger corpus.
The easiest way to try this is to use the toy_corpus.txt and tokenizer.pickle included in the project repo.
To work on a real dataset, I prepared the following files for convenience. Use the one appropriate for your needs; for example, you can download the training and testing data and jump to the next notebook.
Download the full corpus (~500.000 structured abstracts, 500 MB compressed)
End of explanation
#!wget https://www.dropbox.com/s/ujo1l8duu31js34/toy_corpus.txt.zip -P ./data/
#!unzip -o ./TMP/toy_corpus.txt.zip -d ./data/
Explanation: Download a toy corpus (224 structured abstracts, 200 KB compressed)
Note: this file is already included in the project GitHub repository.
End of explanation
!wget https://www.dropbox.com/s/lmv88n1vpmp6c19/corpus_lemmatized.pickle.zip -P ./data/
!unzip -o ./data/corpus_lemmatized.pickle.zip -d ./data/
Explanation: Download a lemmatized corpus (preprocessed, 350 MB compressed)
End of explanation
!wget https://www.dropbox.com/s/0o7i0ejv4aqf6gs/training_4_BacObjMetCon.pickle.zip -P ./data/
!unzip -o ./data/training_4_BacObjMetCon.pickle.zip -d ./data/
Explanation: Download training and testing data for the LSTM (preprocessed, vectorized and split, 100 MB compressed)
End of explanation
from __future__ import absolute_import
from __future__ import print_function
# import local libraries
import tools
import prepare
import lemmatize
import analyze
import preprocess
Explanation: Some imports
End of explanation
data = prepare.extract_txt('data/toy_corpus.txt')
Explanation: <a id='extract'></a>
Extract and parse the dataset
Separate each documents, isolate the abstracts
End of explanation
print("%s\n[...]"%data[0][:800])
abstracts = prepare.get_abstracts(data)
Explanation: Our data currently look like this:
End of explanation
def remove_err(datas,errs):
err=sorted([item for subitem in errs for item in subitem],reverse=True)
for e in err:
for d in datas:
del d[e]
remove_err([abstracts],prepare.get_errors(abstracts))
print("Working on %d documents."%len(abstracts))
Explanation: Cleaning, dumping the abstracts with incorrect number of labels
End of explanation
abstracts = prepare.filter_numbers(abstracts)
Explanation: <a id='pre-process'></a>
Pre-process
Replacing numbers with ##NB.
End of explanation
tokenizer = prepare.create_sentence_tokenizer(abstracts)
# For a more general parser, use the one provided in NLTK:
#import nltk.data
#tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
abstracts_labeled = prepare.ex_all_labels(abstracts,tokenizer)
Explanation: For correct sentence splitting, we train a tokenizer using the NLTK Punkt Sentence Tokenizer. This tokenizer uses an unsupervised algorithm to learn how to split sentences on a corpus.
End of explanation
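A rough, hedged sketch of what such Punkt training looks like with NLTK directly (prepare.create_sentence_tokenizer may differ internally; sample_text below is made up):
from nltk.tokenize.punkt import PunktTrainer, PunktSentenceTokenizer
sample_text = "BACKGROUND: BP was ##NB mmHg vs. baseline. METHODS: Pts. were followed for ##NB yr."
trainer = PunktTrainer()
trainer.INCLUDE_ALL_COLLOCS = True                 # also learn collocations and abbreviations
trainer.train(sample_text)
punkt = PunktSentenceTokenizer(trainer.get_params())
print(punkt.tokenize(sample_text))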
abstracts_labeled[0][0]
Explanation: Our data look now like this:
End of explanation
lemmatized = lemmatize.lemm(abstracts_labeled)
lemmatized[0]
Explanation: Lemmatization
It may be a long process on a huge dataset, but using spaCy makes it currently about 50 times faster than a simple use of the NLTK tools.
It gets a huge speedup with parallelisation (tried on 80 cores). Specify nb_core=X if needed.
End of explanation
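For reference, a minimal hedged sketch of the spaCy lemmatization idea wrapped by lemmatize.lemm (the model-loading call depends on the spaCy version installed):
import spacy
nlp = spacy.load('en')                               # older versions: from spacy.en import English
doc = nlp(u"The patients were randomized into two groups")
print([token.lemma_ for token in doc])               # lemmas, e.g. 'patient', 'be', 'randomize'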
tools.dump_pickle(lemmatized,"data/fast_lemmatized.pickle")
Explanation: Let's save that
End of explanation
lemmatized = tools.load_pickle("data/corpus_lemmatized.pickle")
Explanation: To directly load a lemmatized corpus
End of explanation
dic = analyze.create_dic_simple(lemmatized)
print("Number of labels :",len(dic.keys()))
analyze.show_keys(dic,threshold=10)
primary_keyword=['AIM','BACKGROUND','INTRODUCTION','METHOD','RESULT','CONCLUSION','OBJECTIVE','DESIGN','FINDING','OUTCOME','PURPOSE']
analyze.regroup_keys(dic,primary_keyword)
analyze.show_keys(dic,threshold=10)
keys_to_replace = [['INTRODUCTION','CONTEXT','PURPOSE'],
['AIM','SETTING'],
['FINDING','OUTCOME','DISCUSSION']]
replace_with = ['BACKGROUND',
'METHOD',
'CONCLUSION']
analyze.replace_keys(dic,keys_to_replace,replace_with)
analyze.show_keys(dic,threshold=10)
Explanation: <a id='label analysis'></a>
Label analysis
Does not affect the corpus; we simply do this to get some insights.
End of explanation
pattern = [
['BACKGROUND','BACKGROUNDS'],
['METHOD','METHODS'],
['RESULT','RESULTS'],
['CONCLUSION','CONCLUSIONS'],
]
sub_perfect = analyze.get_exactly(lemmatized,pattern=pattern,no_truncate=True)
sub_perfect = analyze.get_exactly(lemmatized,pattern=pattern,no_truncate=False)
print("%d abstracts labeled and ready for the next part"%len(sub_perfect))
Explanation: <a id='choosing labels'></a>
Choosing labels
Does affect the corpus
We can restrict our data to work only on abstracts having labels matching a specific pattern...
End of explanation
dic = preprocess.create_dic(lemmatized,100)
# We can re-use the variables defined in the analysis section
#primary_keyword=['AIM','BACKGROUND','METHOD','RESULT','CONCLUSION','OBJECTIVE','DESIGN','FINDINGS','OUTCOME','PURPOSE']
analyze.regroup_keys(dic,primary_keyword)
#keys_to_replace = [['INTRODUCTION','BACKGROUND','AIM','PURPOSE','CONTEXT'],
# ['CONCLUSION']]
#replace_with = ['OBJECTIVE',
# 'RESULT']
analyze.replace_keys(dic,keys_to_replace,replace_with)
# We can restrict our analysis to the main labels
dic = {key:dic[key] for key in ['BACKGROUND','RESULT','METHOD','CONCLUSION']}
analyze.show_keys(dic,threshold=10)
print("Sentences per label :",["%s %d"%(s,len(dic[s][1])) for s in dic.keys()])
Explanation: ... Or we can keep a more noisy dataset and reduce it to a set of labels
End of explanation
classes_names = ['BACKGROUND', 'METHOD', 'RESULT','CONCLUSION']
dic.keys()
# train/test split
split = 0.8
# truncate the number of abstracts to consider for each label,
# -1 to set to the maximum while keeping the number of sentences per labels equal
raw_x_train, raw_y_train, raw_x_test, raw_y_test = preprocess.split_data(dic,classes_names,
split_train_test=split,
truncate=-1)
Explanation: <a id='create train'></a>
Creating train and test data
Let's format the data for the classifier
Reorder the labels for better readability
End of explanation
X_train, y_train, X_test, y_test, feature_names, max_features, vectorizer = preprocess.vectorize_data(raw_x_train,
raw_y_train,
raw_x_test,
raw_y_test)
print("Number of features : %d"%(max_features))
Explanation: Vectorize the sentences.
End of explanation
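Under the hood this is a bag-of-words transform; a hedged scikit-learn sketch of the same idea (preprocess.vectorize_data may use different options internally):
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer()
X_demo = vec.fit_transform(["the patient be treat", "the result be significant"])
print(vec.get_feature_names())                       # the learned vocabulary / feature names
print(X_demo.toarray())                              # one bag-of-words row per sentence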
tools.dump_pickle([X_train, y_train, X_test, y_test, feature_names, max_features, classes_names, vectorizer],
"data/unpadded_4_BacObjMetCon.pickle")
Explanation: Now let's save all this
End of explanation |
13,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Image Classification using tf.keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: TODO
Step3: Data Loading
In order to build our image classifier, we can begin by downloading the flowers dataset. We first need to download the archive version of the dataset and after the download we are storing it to "/tmp/" directory.
After downloading the dataset, we need to extract its contents.
Step4: The dataset we downloaded contains images of 5 types of flowers
Step5: Also, the dataset we have downloaded has following directory structure. n
<pre style="font-size
Step6: For convenience, let us set up the path for the training and validation sets
Step7: Data Augmentation
Overfitting generally occurs when we have small number of training examples. One way to fix this problem is to augment our dataset so that it has sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples, by augmenting the samples via a number of random transformations that yield believable-looking images. The goal is that at training time, your model will never see the exact same picture twice. This helps expose the model to more aspects of the data and generalize better.
In tf.keras we can implement this using the same ImageDataGenerator class we used before. We can simply pass different transformations we would want to our dataset as a form of arguments and it will take care of applying it to the dataset during our training process.
Experiment with Various Image Transformations
In this section you will get some practice doing some basic image transformations. Before we begin making transformations let's define our batch_size and our image size. Remember that the input to our CNN are images of the same size. We therefore have to resize the images in our dataset to the same size.
TODO
Step8: TODO
Step9: Let's take 1 sample image from our training examples and repeat it 5 times so that the augmentation can be applied to the same image 5 times over randomly, to see the augmentation in action.
Step10: TODO
Step11: Let's take 1 sample image from our training examples and repeat it 5 times so that the augmentation can be applied to the same image 5 times over randomly, to see the augmentation in action.
Step12: TODO
Step13: Let's take 1 sample image from our training examples and repeat it 5 times so that the augmentation can be applied to the same image 5 times over randomly, to see the augmentation in action.
Step14: TODO
Step15: Let's visualize how a single image would look like 5 different times, when we pass these augmentations randomly to our dataset.
Step16: TODO
Step17: TODO
Step18: TODO
Step19: TODO
Step20: TODO | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import os
import numpy as np
import glob
import shutil
import matplotlib.pyplot as plt
Explanation: Image Classification using tf.keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l05c04_exercise_flowers_with_data_augmentation_solution.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l05c04_exercise_flowers_with_data_augmentation_solution.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this Colab you will classify images of flowers. You will build an image classifier using tf.keras.Sequential model and load data using tf.keras.preprocessing.image.ImageDataGenerator.
Importing Packages
Let's start by importing required packages. os package is used to read files and directory structure, numpy is used to convert python list to numpy array and to perform required matrix operations and matplotlib.pyplot is used to plot the graph and display images in our training and validation data.
End of explanation
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
Explanation: TODO: Import TensorFlow and Keras Layers
In the cell below, import Tensorflow and the Keras layers and models you will use to build your CNN. Also, import the ImageDataGenerator from Keras so that you can perform image augmentation.
End of explanation
_URL = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
zip_file = tf.keras.utils.get_file(origin=_URL,
fname="flower_photos.tgz",
extract=True)
base_dir = os.path.join(os.path.dirname(zip_file), 'flower_photos')
Explanation: Data Loading
In order to build our image classifier, we can begin by downloading the flowers dataset. We first need to download the archive version of the dataset; after the download we store it in the "/tmp/" directory.
After downloading the dataset, we need to extract its contents.
End of explanation
classes = ['roses', 'daisy', 'dandelion', 'sunflowers', 'tulips']
Explanation: The dataset we downloaded contains images of 5 types of flowers:
Rose
Daisy
Dandelion
Sunflowers
Tulips
So, let's create the labels for these 5 classes:
End of explanation
for cl in classes:
img_path = os.path.join(base_dir, cl)
images = glob.glob(img_path + '/*.jpg')
print("{}: {} Images".format(cl, len(images)))
num_train = int(round(len(images)*0.8))
train, val = images[:num_train], images[num_train:]
for t in train:
if not os.path.exists(os.path.join(base_dir, 'train', cl)):
os.makedirs(os.path.join(base_dir, 'train', cl))
shutil.move(t, os.path.join(base_dir, 'train', cl))
for v in val:
if not os.path.exists(os.path.join(base_dir, 'val', cl)):
os.makedirs(os.path.join(base_dir, 'val', cl))
shutil.move(v, os.path.join(base_dir, 'val', cl))
round(len(images)*0.8)
Explanation: Also, the dataset we have downloaded has the following directory structure.
<pre style="font-size: 10.0pt; font-family: Arial; line-height: 2; letter-spacing: 1.0pt;" >
<b>flower_photos</b>
|__ <b>daisy</b>
|__ <b>dandelion</b>
|__ <b>roses</b>
|__ <b>sunflowers</b>
|__ <b>tulips</b>
</pre>
As you can see there are no folders containing training and validation data. Therefore, we will have to create our own training and validation set. Let's write some code that will do this.
The code below creates a train and a val folder each containing 5 folders (one for each type of flower). It then moves the images from the original folders to these new folders such that 80% of the images go to the training set and 20% of the images go into the validation set. In the end our directory will have the following structure:
<pre style="font-size: 10.0pt; font-family: Arial; line-height: 2; letter-spacing: 1.0pt;" >
<b>flower_photos</b>
|__ <b>daisy</b>
|__ <b>dandelion</b>
|__ <b>roses</b>
|__ <b>sunflowers</b>
|__ <b>tulips</b>
|__ <b>train</b>
|______ <b>daisy</b>: [1.jpg, 2.jpg, 3.jpg ....]
|______ <b>dandelion</b>: [1.jpg, 2.jpg, 3.jpg ....]
|______ <b>roses</b>: [1.jpg, 2.jpg, 3.jpg ....]
|______ <b>sunflowers</b>: [1.jpg, 2.jpg, 3.jpg ....]
|______ <b>tulips</b>: [1.jpg, 2.jpg, 3.jpg ....]
|__ <b>val</b>
|______ <b>daisy</b>: [507.jpg, 508.jpg, 509.jpg ....]
|______ <b>dandelion</b>: [719.jpg, 720.jpg, 721.jpg ....]
|______ <b>roses</b>: [514.jpg, 515.jpg, 516.jpg ....]
|______ <b>sunflowers</b>: [560.jpg, 561.jpg, 562.jpg .....]
|______ <b>tulips</b>: [640.jpg, 641.jpg, 642.jpg ....]
</pre>
Since we don't delete the original folders, they will still be in our flower_photos directory, but they will be empty. The code below also prints the total number of flower images we have for each type of flower.
End of explanation
train_dir = os.path.join(base_dir, 'train')
val_dir = os.path.join(base_dir, 'val')
Explanation: For convenience, let us set up the path for the training and validation sets
End of explanation
batch_size = 100
IMG_SHAPE = 150
Explanation: Data Augmentation
Overfitting generally occurs when we have a small number of training examples. One way to fix this problem is to augment our dataset so that it has a sufficient number of training examples. Data augmentation takes the approach of generating more training data from existing training samples, by augmenting the samples via a number of random transformations that yield believable-looking images. The goal is that at training time, your model will never see the exact same picture twice. This helps expose the model to more aspects of the data and helps it generalize better.
In tf.keras we can implement this using the same ImageDataGenerator class we used before. We can simply pass the different transformations we want as arguments and it will take care of applying them to the dataset during the training process.
Experiment with Various Image Transformations
In this section you will get some practice doing some basic image transformations. Before we begin making transformations, let's define our batch_size and our image size. Remember that the inputs to our CNN must all be images of the same size. We therefore have to resize the images in our dataset to the same size.
TODO: Set Batch and Image Size
In the cell below, create a batch_size of 100 images and set a value to IMG_SHAPE such that our training data consists of images with width of 150 pixels and height of 150 pixels.
End of explanation
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE)
)
Explanation: TODO: Apply Random Horizontal Flip
In the cell below, use ImageDataGenerator to create a transformation that rescales the images by 255 and then applies a random horizontal flip. Then use the .flow_from_directory method to apply the above transformation to the images in our training set. Make sure you indicate the batch size, the path to the directory of the training images, the target size for the images, and to shuffle the images.
End of explanation
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip( images_arr, axes):
ax.imshow(img)
plt.tight_layout()
plt.show()
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
Explanation: Let's take 1 sample image from our training examples and repeat it 5 times so that the augmentation is applied to the same image 5 times, each with a different random transformation, to see the augmentation in action.
End of explanation
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE))
Explanation: TODO: Apply Random Rotation
In the cell below, use ImageDataGenerator to create a transformation that rescales the images by 255 and then applies a random 45 degree rotation. Then use the .flow_from_directory method to apply the above transformation to the images in our training set. Make sure you indicate the batch size, the path to the directory of the training images, the target size for the images, and to shuffle the images.
End of explanation
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
Explanation: Let's take 1 sample image from our training examples and repeat it 5 times so that the augmentation is applied to the same image 5 times, each with a different random transformation, to see the augmentation in action.
End of explanation
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE)
)
Explanation: TODO: Apply Random Zoom
In the cell below, use ImageDataGenerator to create a transformation that rescales the images by 255 and then applies a random zoom of up to 50%. Then use the .flow_from_directory method to apply the above transformation to the images in our training set. Make sure you indicate the batch size, the path to the directory of the training images, the target size for the images, and to shuffle the images.
End of explanation
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
Explanation: Let's take 1 sample image from our training examples and repeat it 5 times so that the augmentation is applied to the same image 5 times, each with a different random transformation, to see the augmentation in action.
End of explanation
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=45,
width_shift_range=.15,
height_shift_range=.15,
horizontal_flip=True,
zoom_range=0.5
)
train_data_gen = image_gen_train.flow_from_directory(
batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE),
class_mode='sparse'
)
Explanation: TODO: Put It All Together
In the cell below, use ImageDataGenerator to create a transformation that rescales the images by 255 and that applies:
random 45 degree rotation
random zoom of up to 50%
random horizontal flip
width shift of 0.15
height shift of 0.15
Then use the .flow_from_directory method to apply the above transformation to the images in our training set. Make sure you indicate the batch size, the path to the directory of the training images, the target size for the images, to shuffle the images, and to set the class mode to sparse.
End of explanation
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
Explanation: Let's visualize how a single image would look 5 different times when we pass these augmentations randomly to our dataset.
End of explanation
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=batch_size,
directory=val_dir,
target_size=(IMG_SHAPE, IMG_SHAPE),
class_mode='sparse')
Explanation: TODO: Create a Data Generator for the Validation Set
Generally, we only apply data augmentation to our training examples. So, in the cell below, use ImageDataGenerator to create a transformation that only rescales the images by 255. Then use the .flow_from_directory method to apply the above transformation to the images in our validation set. Make sure you indicate the batch size, the path to the directory of the validation images, the target size for the images, and to set the class mode to sparse. Remember that it is not necessary to shuffle the images in the validation set.
End of explanation
model = Sequential()
model.add(Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_SHAPE,IMG_SHAPE, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, 3, padding='same', activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(5))
Explanation: TODO: Create the CNN
In the cell below, create a convolutional neural network that consists of 3 convolution blocks. Each convolutional block contains a Conv2D layer followed by a max pool layer. The first convolutional block should have 16 filters, the second one should have 32 filters, and the third one should have 64 filters. All convolutional filters should be 3 x 3. All max pool layers should have a pool_size of (2, 2).
After the 3 convolutional blocks you should have a flatten layer followed by a fully connected layer with 512 units. The CNN should output class probabilities based on 5 classes which is done by the softmax activation function. All other layers should use a relu activation function. You should also add Dropout layers with a probability of 20%, where appropriate.
End of explanation
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: TODO: Compile the Model
In the cell below, compile your model using the ADAM optimizer and the sparse categorical cross entropy loss function. We would also like to look at training and validation accuracy on each epoch as we train our network, so make sure you also pass the metrics argument.
End of explanation
epochs = 80
history = model.fit_generator(
train_data_gen,
steps_per_epoch=int(np.ceil(train_data_gen.n / float(batch_size))),
epochs=epochs,
validation_data=val_data_gen,
validation_steps=int(np.ceil(val_data_gen.n / float(batch_size)))
)
Explanation: TODO: Train the Model
In the cell below, train your model using the fit_generator function instead of the usual fit function. We have to use the fit_generator function because we are using the ImageDataGenerator class to generate batches of training and validation data for our model. Train the model for 80 epochs and make sure you use the proper parameters in the fit_generator function.
End of explanation
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
Explanation: TODO: Plot Training and Validation Graphs.
In the cell below, plot the training and validation accuracy/loss graphs.
End of explanation |
13,689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../../images/qiskit-heading.gif" alt="Note
Step1: Next, we create a Python dictionary to specify the problem we want to solve. There are defaults for many additional values that are not shown here for simplicity. Indeed we take advantage of the sensible defaults that the Qiskit Chemistry stack provides to help us here. Please notice that the Qiskit Aqua Chemistry GUI allows for automatic extraction of the Python dictionary reflecting the current configuration. Once the Python dictionary has been extracted, it can be pasted into a Python program or a Jupyter Notebook and, if necessary, edited.
The first entry names a chemistry driver. This example uses HDF5 and the next line configures the driver for an hdf5 file that contains data from a prior computation for an H2 molecule with basis set sto-3g. The operator line would default but I have added it here to show it and to say that this is where the problem is converted into a quantum qubit form. We then have a VQE algorithm, using the COBYLA optimizer with a UCCSD variational form and initial state of HartreeFock. VQE is the Variational Quantum Eigensolver and, as its name suggests, uses a variational method to find the minimum eigenvalue of the problem, which in this case is the ground state energy of the molecule.
[Optional] Setup token to run the experiment on a real device
If you would like to run the experiment on a real device, you need to set up your account first.
Note
Step2: We can now create an AquaChemistry object and call run on it, passing in the problem dictionary, to get a result. This may take a short time and it will use a local quantum simulator to carry out the quantum computation that the VQE algorithm uses.
Step3: The run method returns a result dictionary. Some notable fields include 'energy' which is the computed ground state energy. We can print it.
Step4: There is also a 'printable' field containing a complete ready to print readable result | Python Code:
from qiskit_aqua_chemistry import AquaChemistry
Explanation: <img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Qiskit Aqua: Chemistry basic how to
The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.
Contributors
Richard Chen<sup>[1]</sup>, Antonio Mezzacapo<sup>[1]</sup>, Marco Pistoia<sup>[1]</sup>, Stephen Wood<sup>[1]</sup>
Affiliation
<sup>[1]</sup>IBMQ
Introduction
This notebook demonstrates how to use Qiskit Aqua Chemistry to compute the ground state energy of a Hydrogen (H2) molecule using VQE and UCCSD.
This notebook has been written to use the HDF5 chemistry driver. This driver uses molecular data that has been saved from a prior computation so that this notebook can be run with no additional driver installation requirements. See the HDF5 chemistry driver readme for more detail.
First we import AquaChemistry, which is the object that will carry out the computation for us
End of explanation
from qiskit import IBMQ
IBMQ.load_accounts()
# Input dictionary to configure Qiskit AQUA Chemistry for the chemistry problem.
aqua_chemistry_dict = {
'driver': {'name': 'HDF5'},
'HDF5': {'hdf5_input': 'H2/0.7_sto-3g.hdf5'},
'operator': {'name': 'hamiltonian'},
'algorithm': {'name': 'VQE'},
'optimizer': {'name': 'COBYLA'},
'variational_form': {'name': 'UCCSD'},
'initial_state': {'name': 'HartreeFock'},
'backend': {'name': 'statevector_simulator'}
}
Explanation: Next, we create a Python dictionary to specify the problem we want to solve. There are defaults for many additional values that are not shown here for simplicity. Indeed we take advantage of the sensible defaults that the Qiskit Chemistry stack provides to help us here. Please notice that the Qiskit Aqua Chemistry GUI allows for automatic extraction of the Python dictionary reflecting the current configuration. Once the Python dictionary has been extracted, it can be pasted into a Python program or a Jupyter Notebook and, if necessary, edited.
The first entry names a chemistry driver. This example uses HDF5 and the next line configures the driver for an hdf5 file that contains data from a prior computation for an H2 molecule with basis set sto-3g. The operator line would default but I have added it here to show it and to say that this is where the problem is converted into a quantum qubit form. We then have a VQE algorithm, using the COBYLA optimizer with a UCCSD variational form and initial state of HartreeFock. VQE is the Variational Quantum Eigensolver and, as its name suggests, uses a variational method to find the minimum eigenvalue of the problem, which in this case is the ground state energy of the molecule.
[Optional] Setup token to run the experiment on a real device
If you would like to run the experiment on a real device, you need to set up your account first.
Note: If you have not stored your token yet, use IBMQ.save_accounts() to store it first.
End of explanation
solver = AquaChemistry()
result = solver.run(aqua_chemistry_dict)
Explanation: We can now create an AquaChemistry object and call run on it, passing in the problem dictionary, to get a result. This may take a short time and it will use a local quantum simulator to carry out the quantum computation that the VQE algorithm uses.
End of explanation
print('Ground state energy: {}'.format(result['energy']))
Explanation: The run method returns a result dictionary. Some notable fields include 'energy' which is the computed ground state energy. We can print it.
End of explanation
for line in result['printable']:
print(line)
Explanation: There is also a 'printable' field containing a complete ready to print readable result
End of explanation |
13,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Further testing of the sklearn pull request
Step1: Test of the warnings
Testing
Area calculation tests
Testing in the context of modelling
Step2: Test of the warnings
Step3: Testing
The measure_area helper function lets you easily visualize arbitrary examples to try to find problems with the auc implementation. The caption tells you the area calculated using the same interpolation as in the figure.
Step4: Area calculation tests
Step5: Testing in the context of modelling
Now we want to make sure that this interpolation strategy makes sense in the context of precision, recall, and average precision (see the main blog post notebook for details). The old sklearn-default 'linear' interpolation is on the left, and on the right is the new 'stepwise' interpolation.
Step6: Constant score
Step7: Random score
Step8: Random score with more data
Step9: Rounded versions of random score
Step10: Pretty good classifier
Step11: Lots more data with a good classifier
Step12: The same data but this time we've rounded the scores | Python Code:
__author__ = 'Nick Dingwall'
Explanation: Further testing of the sklearn pull request
End of explanation
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
import numpy as np
from sklearn.metrics.base import _average_binary_score
from sklearn.metrics import precision_recall_curve
import warnings
import roam_average_precision as rap
matplotlib.style.use('../../src/roam.mplstyle')
y_true = [1,1,0,0,0,0,0,0,0]
y_const = 0.5 * np.ones(len(y_true))
y_rand = np.random.random(len(y_true))
Explanation: Test of the warnings
Testing
Area calculation tests
Testing in the context of modelling
End of explanation
rap.average_precision_score(y_true, y_const)
rap.average_precision_score(y_true, y_const, interpolation='step')
rap.average_precision_score(y_true, y_rand)
rap.average_precision_score(y_true, y_rand, interpolation='step')
Explanation: Test of the warnings
End of explanation
def measure_area(x, y, **kwargs):
fig, ax = plt.subplots(1,1, figsize=(7,4))
plot_area(ax, x, y, **kwargs)
plt.show()
def plot_area(ax, x, y,
reorder=False,
interpolation=None,
interpolation_direction='right'):
direction = 1
if reorder:
# reorder the data points according to the x axis and using y to
# break ties
order = np.lexsort((y, x))
x, y = x[order], y[order]
else:
dx = np.diff(x)
if np.any(dx < 0):
if np.all(dx <= 0):
direction = -1
else:
raise ValueError("Reordering is not turned on, and "
"the x array is not increasing: %s" % x)
ax.scatter(x, y, marker='o', linewidths=0, s=25)
if interpolation == 'linear':
x_long = x
y_long = y
elif interpolation == 'step':
if direction == -1:
x, y = list(reversed(x)), list(reversed(y))
if interpolation_direction == 'right':
y_long = [v for v in y for _ in (0, 1)][1:]
x_long = [v for v in x for _ in (0, 1)][:-1]
elif interpolation_direction == 'left':
y_long = [v for v in y for _ in (0, 1)][:-1]
x_long = [v for v in x for _ in (0, 1)][1:]
else:
raise ValueError
if max(x) < 1.1:
ax.set_xticks(np.arange(-1, max(x)+1, 0.1))
else:
ax.set_xticks(np.arange(-1, max(x)+1, 1.0))
if max(y) < 1.1:
ax.set_yticks(np.arange(-1, max(y)+1, 0.1))
else:
ax.set_yticks(np.arange(-1, max(y)+1, 1.0))
ax.plot(x_long, y_long)
area = rap.auc(x, y, interpolation=interpolation,
interpolation_direction=interpolation_direction)
ax.fill_between(x_long, 0, y_long, alpha=0.2,
label='Area = {:5.4f}'.format(area))
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.2),
ncol=3, fancybox=True, shadow=True)
Explanation: Testing
The measure_area helper function lets you easily visualize arbitrary examples to try to find problems with the auc implementation. The caption tells you the area calculated using the same interpolation as in the figure.
End of explanation
measure_area([1,2,3,4], [5,3,1,7],
interpolation='step',
interpolation_direction='right')
measure_area([1,2,3,4], [5,3,1,7],
interpolation='step',
interpolation_direction='left')
measure_area([1,5,6,10], [2, 3.5, 4, 5],
interpolation='step',
interpolation_direction='left')
measure_area([1,5,6,10], [2, 3.5, 4, 5],
interpolation='step',
interpolation_direction='right')
measure_area([1,5,6,10], [2, 3.5, 4, 5],
interpolation='linear')
measure_area([1,2,3,4], [5,3,1,4],
interpolation='linear')
Explanation: Area calculation tests
End of explanation
def compare_interpolations_from_scores(y_true, y_score):
p, r, _ = precision_recall_curve(y_true, y_score)
compare_interpolations(r, p)
def compare_interpolations(x, y, **kwargs):
fig, ax = plt.subplots(1, 2, figsize=(15,4))
plot_area(ax[0], x, y,
interpolation='linear',
**kwargs)
plot_area(ax[1], x, y,
interpolation='step',
interpolation_direction='right',
**kwargs)
plt.show()
Explanation: Testing in the context of modelling
Now we want to make sure that this interpolation strategy makes sense in the context of precision, recall, and average precision (see the main blog post notebook for details). The old sklearn-default 'linear' interpolation is on the left, and on the right is the new 'stepwise' interpolation.
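As a quick sanity check on the numbers in the captions, the step-wise ('right') area is just each precision value weighted by the drop in recall between consecutive points. A minimal numpy version of that, assuming the usual precision_recall_curve output where recall is non-increasing and ends at 0, might look like:
```
import numpy as np
from sklearn.metrics import precision_recall_curve

def step_average_precision(y_true, y_score):
    p, r, _ = precision_recall_curve(y_true, y_score)
    # recall decreases left to right, so -diff gives the positive step widths
    return -np.sum(np.diff(r) * np.array(p)[:-1])
```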
End of explanation
y_true = [1,1,1,0,0,0,0,0,0,0]
y_const = 0.5 * np.ones(len(y_true))
compare_interpolations_from_scores(y_true, y_const)
Explanation: Constant score
End of explanation
y_rand = np.random.random(len(y_true))
compare_interpolations_from_scores(y_true, y_rand)
Explanation: Random score
End of explanation
y_true = [1 for _ in range(10)] + [0 for _ in range(50)]
y_rand = np.random.random(len(y_true))
compare_interpolations_from_scores(y_true, y_rand)
Explanation: Random score with more data
End of explanation
y_round = np.round(y_rand, 1)
compare_interpolations_from_scores(y_true, y_round)
Explanation: Rounded versions of random score
End of explanation
y_close = [np.random.normal(loc=i, scale=len(y_true)/3) for i in range(len(y_true), 0, -1)]
compare_interpolations_from_scores(y_true, y_close)
Explanation: Pretty good classifier
End of explanation
y_true = [1 for _ in range(1000)] + [0 for _ in range(10000)]
y_close = [np.random.normal(loc=i, scale=len(y_true)/8)
for i in range(len(y_true), 0, -1)]
compare_interpolations_from_scores(y_true, y_close)
Explanation: Lots more data with a good classifier
End of explanation
y_close_round = np.round(np.array(y_close) / max(y_close), 1)
compare_interpolations_from_scores(y_true, y_close_round)
Explanation: The same data but this time we've rounded the scores
End of explanation |
13,691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Converting automata to strings
Use to_str() to output a string representing the automaton in different formats.
Step1: Saving automata to files
Use save() to save the automaton into a file.
Step2: Reading automata from files
Use spot.automata() to read multiple automata from a file, and spot.automaton() to read only one.
Step3: The --ABORT-- feature of the HOA format allows discarding the automaton being read and starting over.
Step5: Reading automata from strings
Instead of passing a filename, you can also pass the contents of a file. spot.automata() and spot.automaton() look for the absence of newline to decide if this is a filename or a string containing some actual automaton text.
Step6: Reading automata output from processes
If an argument of spot.automata ends with |, then it is interpreted as a shell command that outputs one automaton or more.
Step7: A single automaton can be read using spot.automaton(), with the same convention. | Python Code:
import spot  # needed for spot.translate below; presumably imported in an earlier cell of the original notebook

a = spot.translate('a U b')
for fmt in ('hoa', 'spin', 'dot', 'lbtt'):
print(a.to_str(fmt))
Explanation: Converting automata to strings
Use to_str() to output a string representing the automaton in different formats.
End of explanation
a.save('example.aut').save('example.aut', format='lbtt', append=True)
!cat example.aut
Explanation: Saving automata to files
Use save() to save the automaton into a file.
End of explanation
for a in spot.automata('example.aut'):
display(a)
Explanation: Reading automata from files
Use spot.automata() to read multiple automata from a file, and spot.automaton() to read only one.
End of explanation
%%file example.aut
HOA: v1
States: 2
Start: 1
AP: 2 "a" "b"
acc-name: Buchi
Acceptance: 1 Inf(0)
--BODY--
State: 0 {0}
[t] 0
--ABORT-- /* the previous automaton should be ignored */
HOA: v1
States: 2
Start: 1
AP: 2 "a" "b"
Acceptance: 1 Inf(0)
--BODY--
State: 0 {0}
[t] 0
State: 1
[1] 0
[0&!1] 1
--END--
for a in spot.automata('example.aut'):
display(a)
Explanation: The --ABORT-- feature of the HOA format allows discarding the automaton being read and starting over.
End of explanation
for a in spot.automata("""
HOA: v1
States: 2
Start: 1
name: "Hello world"
AP: 2 "a" "b"
Acceptance: 1 Inf(0)
--BODY--
State: 0 {0}
[t] 0
State: 1
[1] 0
[0&!1] 1
--END--
HOA: v1
States: 1
Start: 0
name: "Hello world 2"
AP: 2 "a" "b"
Acceptance: 2 Inf(0)&Inf(1)
--BODY--
State: 0 {0}
[t] 0 {1}
[0&!1] 0
--END--
"""):
display(a)
Explanation: Reading automata from strings
Instead of passing a filename, you can also pass the contents of a file. spot.automata() and spot.automaton() look for the absence of newline to decide if this is a filename or a string containing some actual automaton text.
End of explanation
for a in spot.automata('ltl2tgba -s "a U b"; ltl2tgba --lbtt "b"|', 'ltl2tgba -H "GFa" "a & GFb"|'):
display(a)
Explanation: Reading automata output from processes
If an argument of spot.automata ends with |, then it is interpreted as a shell command that outputs one automaton or more.
End of explanation
spot.automaton('ltl2tgba -s6 "a U b"|')
!rm example.aut
Explanation: A single automaton can be read using spot.automaton(), with the same convention.
End of explanation |
13,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pedestrian and Face Detection on Simple Azure
Pedestrian and Face Detection uses OpenCV to identify people standing in a picture or a video, and the NIST use case in this document is built with Apache Spark and Mesos clusters on multiple compute nodes.
Simple Azure supports deploying software stacks for the NIST Pedestrian and Face Detection use case on top of Azure compute resources with the templates.
Original | Pedestrian Detected
Step1: IP Addresses of Compute Nodes
Step2: Load Ansible API with IPs
Step3: Download Ansible Playbooks from Github
The Ansible scripts for Pedestrian and Face Detection are here
Step4: Install Software Stacks to Targeted VMs | Python Code:
from simpleazure import SimpleAzure
saz = SimpleAzure()
Explanation: Pedestrian and Face Detection on Simple Azure
Pedestrian and Face Detection uses OpenCV to identify people standing in a picture or a video, and the NIST use case in this document is built with Apache Spark and Mesos clusters on multiple compute nodes.
Simple Azure supports deploying software stacks for the NIST Pedestrian and Face Detection use case on top of Azure compute resources with the templates.
[Figure: Original | Pedestrian Detected]
[Figure: Original | Pedestrian and Face Detected]
Introduction
Human (pedestrian) detection and face detection have been studied during the last several years and models for them have improved along with Histograms of Oriented Gradients (HOG) for Human Detection [1]. OpenCV is a Computer Vision library including the SVM classifier and the HOG object detector for pedestrian detection, and the INRIA Person Dataset [2] is one of the popular samples for both training and testing purposes. In this document, we deploy Apache Spark on Mesos clusters to train and apply detection models from OpenCV using the Python API.
Ansible Automation Tool
Ansible is a Python tool to install/configure/manage software on multiple machines with YAML files where system descriptions are defined. There are reasons why we use Ansible:
Expandable: Leverages Python (default) but modules can be written in any language
Agentless: no setup required on managed node
Security: Allows deployment from user space; uses ssh for authentication
Flexibility: only requires ssh access to privileged user
Transparency: YAML Based script files express the steps of installing and configuring software
Modularity: Single Ansible Role (should) contain all required commands and variables to deploy software package independently
Sharing and portability: roles are available from source (github, bitbucket, gitlab, etc) or the Ansible Galaxy portal
INRIA Person Dataset
This dataset contains positive and negative images for training and test purposes with annotation files for upright persons in each image. 288 positive test images, 453 negative test images, 614 positive training images and 1218 negative training images are included along with normalized 64x128 pixel formats. The 970MB dataset is available to download [3].
HOG with SVM model
Histogram of Oriented Gradients (HOG) and a Support Vector Machine (SVM) are used as the object detector and classifier, and OpenCV's built-in Python bindings provide these models for human detection.
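As a rough, hypothetical sketch of what that detection step can look like with OpenCV's bundled people detector and a Haar face cascade (the image path and cascade filename below are placeholders, not files referenced by this notebook):
```
import cv2

# HOG + the default people-detection SVM shipped with OpenCV
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')  # assumed path

image = cv2.imread('INRIAPerson/Test/pos/person_example.png')  # placeholder image from the dataset
pedestrians, _ = hog.detectMultiScale(image, winStride=(8, 8), padding=(16, 16), scale=1.05)
faces = face_cascade.detectMultiScale(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY), 1.1, 5)
for (x, y, w, h) in list(pedestrians) + list(faces):
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```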
Deployment by Ansible
When it comes to deploying applications and building clusters for batch-processing large datasets, Ansible scripts play a big role, handling installation and configuration on the available machines. Ansible provides abstractions via Playbook Roles and reusability via include statements. We define application X in Ansible role X, for example, and use include statements to combine it with other applications, e.g. Y or Z. Five Ansible roles are used in this use case to build clusters for Human and Face Detection with the INRIA dataset. The main Ansible playbook runs the Ansible roles in order, which looks like:
```
include: sched/00-mesos.yml
include: proc/01-spark.yml
include: apps/02-opencv.yml
include: data/03-inria-dataset.yml
Include: anlys/04-human-face-detection.yml
```
Directory names e.g. sched, proc, data, or anlys indicate BDSS layers like:
- sched: scheduler layer
- proc: data processing layer
- apps: application layer
- data: dataset layer
- anlys: analytics layer
and two digits in the filename indicate an order of roles to be run.
It is assumed that virtual machines are created by virtual-cluster-libs, the command line tool to start VM instances. For example on OpenStack, vcl boot -p openstack -P $USER- command starts a set of virtual machine instances with a cluster definition file .cluster.py. The number of machines and groups for clusters e.g. namenodes and datanodes are specified in the file and Ansible inventory file, a list of target machines with groups, is generated once machines are ready to use. Ansible roles run to install applications on virtual clusters.
Mesos role is installed first with Ansible inventory groups for masters and slaves in which mesos-master runs on the masters group and mesos-slave runs on the slaves group. Apache Zookeeper is included in the mesos role so that mesos slaves find an elected mesos leader from the zookeeper. Spark, as a data processing layer, provides two options for distributed job processing, batch job processing via a cluster mode and real-time processing via a client mode. The Mesos dispatcher runs on a masters group to accept a batch job submission and Spark interactive shell, which is the client mode, provides real-time processing on any node in the cluster. Either way, Spark is installed after a scheduler layer i.e. mesos to identify a master host for a job submission. Installation of OpenCV, INRIA Person Dataset and Human and Face Detection Python applications are followed.
Software Stacks
The following software are expected in the stacks according to the github:
mesos cluster (master, worker)
spark (with dispatcher for mesos cluster mode)
openCV
zookeeper
INRIA Person Dataset
Detection Analytics in Python
[1] Dalal, Navneet, and Bill Triggs. "Histograms of oriented gradients for human detection." 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). Vol. 1. IEEE, 2005. [pdf]
[2] http://pascal.inrialpes.fr/data/human/
[3] ftp://ftp.inrialpes.fr/pub/lear/douze/data/INRIAPerson.tar
[4] https://docs.python.org/2/library/configparser.html
Simple Azure with Ansible
Simple Azure supports Ansible: it can import and run Ansible scripts against target machines, i.e. Azure virtual machines. In the previous tutorial, we learned how to deploy 3 VMs from the 101-vm-sshkey template, and we are going to use those three virtual machines in this example.
Server groups (inventory)
We may separate the compute nodes into two groups, masters and workers, so that Mesos masters and ZooKeeper quorums manage job requests and leader election while the workers run the actual tasks. Ansible needs these group definitions in its inventory so that each piece of software is installed on the proper part of the cluster.
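For illustration, a minimal inventory along those lines could be written out from the VM IP list by hand; the masters/workers split and the group names below are assumptions and must match whatever groups the playbook actually expects:
```
def write_inventory(ips, path='inventory.txt', n_masters=1):
    # first n_masters IPs become masters, the rest workers
    masters, workers = list(ips)[:n_masters], list(ips)[n_masters:]
    lines = ['[masters]'] + masters + ['', '[workers]'] + workers
    with open(path, 'w') as f:
        f.write('\n'.join(lines) + '\n')
```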
Quick Instructions (under development)
Load SimpleAzure
End of explanation
ips = saz.arm.view_info()
Explanation: IP Addresses of Compute Nodes
End of explanation
from simpleazure.ansible_api import AnsibleAPI
ansible_client = AnsibleAPI(ips)
Explanation: Load Ansible API with IPs
End of explanation
from simpleazure.github_cli import GithubCLI
git_client = GithubCLI()
git_client.set_repo('https://github.com/futuresystems/pedestrian-and-face-detection')
git_client.clone()
Explanation: Download Ansible Playbooks from Github
The Ansible scripts for Pedestrian and Face Detection are here: https://github.com/futuresystems/pedestrian-and-face-detection.
We clone the repository using Github command line tools.
End of explanation
ansible_client.playbook(git_client.path + "/site.yml")
ansible_client.run()
Explanation: Install Software Stacks to Targeted VMs
End of explanation |
13,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Truly an 11x Developer solution to the ~~world's~~ universe's premier code interview question!
What is "pynads"
All joking aside, I've been hacking together a collection of Haskell-esque tools for Python over the last few weeks. It started as a "Well, I'll learn Haskell better this way." and has become...well, honestly a tool for me learning Haskell still.
Check it out, I think it's nifty.
A Quick Tour of Pynads
pynads strives to be a pythonic form of Haskell. Which is buzzword for I did some stuff. There's
Step1: Functors
Functors are data types that can be mapped over with the fmap method. But calling methods isn't very Haskell like, so there's an operator that does the same thing
Step2: Of course, you could just use FmapPerson.fmap but that's not very Haskellic. pynads also exports a funcs name space that contains functional shortcuts to some of these (though f % functor doesn't get much shorter). In this case, there's also
Step3: Every class in the pynads.concrete namespace is a functor except for Mempty (we'll get there!).
Applicatives
Applicative Functors are functors that hold functions and can then be applied to other Applicatives/Functors. Every class (except for Mempty) in pynads.concrete is also an Applicative!
Step4: Yeah, like Haskell there's an infix operator. Except of <*> I just dropped the angle brackets because Python doesn't let you define custom operators (easily). It also combines nicely with % because they have the same precedence level!
Step5: BOOM! Mind blown! Whaaaaaat. Applicative uses the abstract method apply to determine how * operates. Just inherit from Applicative, define fmap and apply and boom, you're on your way. Well, you also need the unit method -- which is a class method for all pynad types, but that's not a requirement -- which knows how to put a value in a minimal context.
But wait, what if you have a curried function and you stuffed it into a Just and now you don't want to write out just_f * just_v1 * just_v2 .... Sure, you could think "Well, what if I used reduce(operator.mul, *justs)" But I thought of that already.
Step6: If you're mind isn't blown yet, it's because I haven't revealed...
MOOOOOOOOONAAAAAAAADS!!!!!
Monads get a bad rap because they get all sorts of overblown explainations. You want to know a what a monad is? It's another way to compute things. It's a glorified container with a special method. You have a value in a monad, a function that takes a regular value and returns a monad and you bind them together. That's it. Literally all there is to it.
Step7: The Maybe monad (which consists of the Just and Nothing data types) is basically a glorified if expression. That's it! The bind operation will detect if you have a failure in your computation and short circuit it. It's essentially an abstraction over this
Step8: Notice how that didn't cause a nasty AttributeError, because None doesn't have attributes? That's all Maybe lets you do (this behavior is mimicked in its fmap and apply
Step9: Before you start thinking, "Well, monads are just glorified if expressions" becuase that's missing the point, Monads are the abstraction abstraction. They represent a way to compute something by abstracting away how it's computated.
There's the Writer monad above which is a value in a monadic context but it also keeps some extra side-output as well. Instead of us trying to track this extra information ourselves, Writer says, "Hey, I'll handle it for you!" It just wants a function that accepts a value and returns a Writer. But here's the really cool thing, I didn't have to use a dictionary. It could have a list, or a string or an integer, or a custom class! I hear, "But how!" Through the power of...
Monoids!
So monoids are pretty cool. They're a set of something, a "zero" and a binary operator that's transative. Lots of things form monoids. Numbers are monoids! There's two ways to make numbers monoids
Step10: pynads.Monoid overrides + to be a shortcut to mappend. That's all well and good, but why other than have a unified way of combining values?! Because we get a way to reduce a list of monoids into a single value for free!
Step11: Monoid.mconcat actually delegates to the mappend method and essentially looks like reduce(cls.mappend, monoids). That's it. That's all there is. But you can define your own mconcat to get performace bonuses if you need to.
Step12: pynads.List and pynads.Map take advantage of this to create only one intermediate object rather than a bunch. pynads will also let you treat the builtin types as monoids as well through pynads.funcs.monoid namespace which has four functions we're interested in
Step13: The monoid namespace is just a nice interface to the nasty plumbing that lives in pynads.utils.monoidal. It's pretty gross and actually probably pretty fragile, but it works! IT WORKS!!!
Mempty
So here's the thing that lets Writer do its little trick by accept any monoid as a potential log. I can't know ahead of time what you're going to use to keep track of stuff with Writer -- I've drank plenty of spice water, but I've yet to develop prescient abilities. And rather than making a bunch of subclasses specialized to handle a dictionary and a list and a string and a blah blah blah and forcing you to make your own for WriterLogWithMyFirstMonoid I decided to create a mempty monoid -- Mempty. It's not an original idea. Really, it's kind of a dumb object.
It's a singleton, so that's a strike against it (two singletons in my library, my god!). It doesn't actually do anything. It just sits around, taking up space until a real monoid comes along and scares it off. It's mempty value is actually itself! It's mappend just returns whatever its mappended with. And it's mconcat filters out any Mempty values before trying to mconcat the remaining values (you get a Mempty if mconcat an iter of Mempties). There's even an __iter__ method that yields from an empty tuple! What's going on!
In Haskell, mempty can be used a place holder and Haskell knows to do the right thing already. However, we have to teach Python how to use a placeholder and silently step it out of the way when a real value comes along. I suspect that this is similar, maybe, to how Haskell handles it, but I've not dug at all. | Python Code:
from pynads import Container
class Person(Container):
__slots__ = ('name', 'age')
def __init__(self, name, age):
self.name = name
self.age = age
def _get_val(self):
return {'name': self.name, 'age': self.age}
def __repr__(self):
return "Person(name={!s}, age={!s})".format(self.name, self.age)
print(Person(name="Alec", age=26).v)
Explanation: Truly an 11x Developer solution to the ~~world's~~ universe's premier code interview question!
What is "pynads"
All joking aside, I've been hacking together a collection of Haskell-esque tools for Python over the last few weeks. It started as a "Well, I'll learn Haskell better this way." and has become...well, honestly a tool for me learning Haskell still.
Check it out, I think it's nifty.
A Quick Tour of Pynads
pynads strives to be a pythonic form of Haskell. Which is buzzword for I did some stuff. There's:
Functors
Applicatives
Monads
Monoids
Helpers
All of the base classes are implemented as Abstract Base Classes which makes inheritance easy. Well, that's a lie, the root object of pynads is a concrete class that just serves as an endpoint for __new__ and __init__.
Container
pynads.Container is the root object of every pynads class. It serves as a final endpoint for __new__ and __init__ as well as providing a consistent name for accessing the values held by objects in pynads. Some would say that it's a silly idea, but it works! Every class in pynads is also slotted for memory reasons since it's built around the idea of not manipulating a container but creating a new one.
The only important thing to know about Container is that it defines v as a property which actually delagates to the _get_val method. Meaning that's all that needs to be overriden to get multiple values out of a container.
For most subclasses of Container, the provided __init__ is fine, but it's a-ok to override it as well as the only setup that happens is setting a single attribute _v.
End of explanation
from pynads import Functor
class FmapPerson(Person, Functor):
__slots__ = ()
def fmap(self, f):
return self.__class__(f(self.name), self.age)
print(str.upper % FmapPerson(name="Alec", age=26))
Explanation: Functors
Functors are data types that can be mapped over with the fmap method. But calling methods isn't very Haskell like, so there's an operator that does the same thing: %.
End of explanation
from pynads import funcs
print(funcs.fmap(str.upper, FmapPerson(name="Alec", age=26)))
Explanation: Of course, you could just use FmapPerson.fmap but that's not very Haskellic. pynads also exports a funcs name space that contains functional shortcuts to some of these (though f % functor doesn't get much shorter). In this case, there's also:
End of explanation
from pynads import Just
print(Just(lambda x: x+2) * Just(2))
Explanation: Every class in the pynads.concrete namespace is a functor except for Mempty (we'll get there!).
Applicatives
Applicative Functors are functors that hold functions and can then be applied to other Applicatives/Functors. Every class (except for Mempty) in pynads.concrete is also an Applicative!
End of explanation
print((lambda x: lambda y: x+y) % Just(4) * Just(6))
Explanation: Yeah, like Haskell there's an infix operator. Instead of <*>, I just dropped the angle brackets because Python doesn't let you define custom operators (easily). It also combines nicely with % because they have the same precedence level!
End of explanation
add_three_together = lambda x: lambda y: lambda z: x+y+z
print(funcs.multiapply(Just(add_three_together), *[Just(x) for x in range(1,4)]))
Explanation: BOOM! Mind blown! Whaaaaaat. Applicative uses the abstract method apply to determine how * operates. Just inherit from Applicative, define fmap and apply and boom, you're on your way. Well, you also need the unit method -- which is a class method for all pynad types, but that's not a requirement -- which knows how to put a value in a minimal context.
But wait, what if you have a curried function and you stuffed it into a Just and now you don't want to write out just_f * just_v1 * just_v2 .... Sure, you could think "Well, what if I used reduce(operator.mul, *justs)" But I thought of that already.
End of explanation
from pynads import Nothing
inc_if_odd_else_nothing = lambda x: Just(x+1) if not x&1 else Nothing
print(Just(2) >> inc_if_odd_else_nothing)
Explanation: If your mind isn't blown yet, it's because I haven't revealed...
MOOOOOOOOONAAAAAAAADS!!!!!
Monads get a bad rap because they get all sorts of overblown explanations. You want to know what a monad is? It's another way to compute things. It's a glorified container with a special method. You have a value in a monad, a function that takes a regular value and returns a monad and you bind them together. That's it. Literally all there is to it.
End of explanation
def safe_func(x):
if x is None:
return None
else:
return x+1
print(safe_func(1), safe_func(None))
Explanation: The Maybe monad (which consists of the Just and Nothing data types) is basically a glorified if expression. That's it! The bind operation will detect if you have a failure in your computation and short circuit it. It's essentially an abstraction over this:
End of explanation
from itertools import repeat
from pynads.funcs import multibind  # assuming multibind is exposed alongside multiapply in pynads.funcs

print(multibind(Just(1), *repeat(lambda x: Just(x+1), 5)))
Explanation: Notice how that didn't cause a nasty AttributeError, because None doesn't have attributes? That's all Maybe lets you do (this behavior is mimicked in its fmap and apply: fmap(f, Nothing) and apply(ap_f, Nothing) both return you a Nothing). Nothing is extraspecialsauce because it's a singleton. It's basically a monadic None. Actually, it is a monadic None because it represents...well, Nothing.
If you've got more binds than editor columns, then there's something for you as well!
End of explanation
from pynads import Monoid
# also inherits from Container
# so we get all the Container goodness for free
class MTuple(Monoid):
mempty = ()
def __init__(self, *vs):
super(MTuple, self).__init__(vs)
def mappend(self, other):
# type checking optional
if not isinstance(other, MTuple):
raise TypeError("Can only mappend MTuple with MTuple")
return MTuple(*(self.v + other.v))
def __repr__(self):
return "MTuple{!r}".format(self.v)
print(MTuple(4,5) + MTuple(6,7))
Explanation: Before you start thinking, "Well, monads are just glorified if expressions" because that's missing the point, Monads are the abstraction abstraction. They represent a way to compute something by abstracting away how it's computed.
There's the Writer monad above which is a value in a monadic context but it also keeps some extra side-output as well. Instead of us trying to track this extra information ourselves, Writer says, "Hey, I'll handle it for you!" It just wants a function that accepts a value and returns a Writer. But here's the really cool thing, I didn't have to use a dictionary. It could have a list, or a string or an integer, or a custom class! I hear, "But how!" Through the power of...
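The Writer example itself isn't reproduced in this notebook, so here's a rough toy sketch of the idea (this is not pynads' actual Writer API): a Writer-like value pairs a result with accumulated side-output, and bind merges the logs for you.
```
class MiniWriter(object):
    # toy illustration only: a value plus a dict of side-output
    def __init__(self, v, log=None):
        self.v = v
        self.log = log or {}

    def bind(self, f):
        # f takes a plain value and returns another MiniWriter
        w = f(self.v)
        merged = dict(self.log)
        merged.update(w.log)
        return MiniWriter(w.v, merged)

print(MiniWriter(2).bind(lambda x: MiniWriter(x + 1, {'incremented': True})).log)
```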
Monoids!
So monoids are pretty cool. They're a set of something, a "zero" and a binary operator that's associative. Lots of things form monoids. Numbers are monoids! There's two ways to make numbers monoids: with 0 and +, with 1 and *. However, pynads is lazy and only defines the first...sorry, hard choices were made.
Wait, "a zero value" but 1 != 0. That's missing the point, a zero value is a value that doesn't change the input when combined with the binary operator. x * 1 == x.
But Python's "primitive" types all form monoids!
list is a monoid. That's the zero value right there and it's binary operator would be list.extend if it was actually a binop.
dict is a monoid with {} and dict.update
bool is a monoid with False and |\or
set and frozenset are also monoids with their empty instances and |
str is a monoid with '' and + (boooo using + to combine strings! but whatever)
Tuple, Complex and Float are also monoids in exactly the ways you expect. There's a catch though: When combining tuples, you actually get a list back. I'll probably rectify this at a later point, but it's just something to live with right now.
pynads also defines ~~two~~ three monoids of its own: List (the monadic form of tuple), Map (the applicative form of dict) and Mempty (which we're getting to!).
Making your own monoid is easy, and you've probably done it before (just not in this fashion). Just inherit from pynads.Monoid create a mempty attribute (it won't let you without it through __new__ hackery) and the mappend method for combining two instances of your monoid. Let's assume we want a real tuple Monoid. We'd do it like this:
End of explanation
print(MTuple.mconcat(MTuple(1), MTuple(2), MTuple(3)))
Explanation: pynads.Monoid overrides + to be a shortcut to mappend. That's all well and good, but why other than have a unified way of combining values?! Because we get a way to reduce a list of monoids into a single value for free!
End of explanation
from itertools import chain
class CMTuple(MTuple):
def __iter__(self):
return iter(self.v)
@classmethod
def mconcat(cls, *MTuples):
return CMTuple(*chain.from_iterable(MTuples))
print(CMTuple.mconcat(CMTuple(1), CMTuple(2), CMTuple(3)))
Explanation: Monoid.mconcat actually delegates to the mappend method and essentially looks like reduce(cls.mappend, monoids). That's it. That's all there is. But you can define your own mconcat to get performance bonuses if you need to.
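For reference, a default mconcat along the lines described here could be nothing more than a fold over mappend; this sketch just mirrors that description (reusing the MTuple class defined earlier), it is not the actual pynads source:
```
from functools import reduce

def default_mconcat(cls, *monoids):
    # the "for free" behavior: fold mappend over all the monoids
    return reduce(cls.mappend, monoids)

print(default_mconcat(MTuple, MTuple(1), MTuple(2), MTuple(3)))
```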
End of explanation
from pynads.funcs import monoid
print(monoid.mempty(list()))
print(monoid.mappend({'a':10}, {'b': 7}))
print(monoid.mconcat("hello", " ", "world"))
print(monoid.is_monoid(set()))
Explanation: pynads.List and pynads.Map take advantage of this to create only one intermediate object rather than a bunch. pynads will also let you treat the builtin types as monoids as well through pynads.funcs.monoid namespace which has four functions we're interested in: mempty, mappend, mconcat and is_monoid. mempty returns the "zero" value for a type, mappend knows how to combine types, mconcat knows how to combine an iter of types into a single one and is_monoid knows if something is monoidal or not (generally, it doesn't declare decimal.Decimal to be a monoid but this is because I didn't want to add a special case -- special cases beget special cases).
This is done through introspection of types and abstract base classes (to make the type introspection ~~more acceptable~~ less painful).
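A rough idea of what that type-dispatching plumbing might look like for a few builtins (a simplified sketch, not the real pynads.utils.monoidal):
```
def mappend_builtins(a, b):
    # dispatch on type; bool must be checked before falling through to +
    if isinstance(a, dict):
        merged = dict(a)
        merged.update(b)
        return merged
    if isinstance(a, (set, frozenset)):
        return a | b
    if isinstance(a, bool):
        return a or b
    return a + b  # list, str, numbers all combine with +

print(mappend_builtins({'a': 1}, {'b': 2}), mappend_builtins('foo', 'bar'))
```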
End of explanation
from pynads import Mempty
print(monoid.mempty(Mempty))
print(Mempty + 4)
print(monoid.mconcat(Mempty, {4}, {5}, Mempty))
Explanation: The monoid namespace is just a nice interface to the nasty plumbing that lives in pynads.utils.monoidal. It's pretty gross and actually probably pretty fragile, but it works! IT WORKS!!!
Mempty
So here's the thing that lets Writer do its little trick by accepting any monoid as a potential log. I can't know ahead of time what you're going to use to keep track of stuff with Writer -- I've drunk plenty of spice water, but I've yet to develop prescient abilities. And rather than making a bunch of subclasses specialized to handle a dictionary and a list and a string and a blah blah blah and forcing you to make your own for WriterLogWithMyFirstMonoid I decided to create a mempty monoid -- Mempty. It's not an original idea. Really, it's kind of a dumb object.
It's a singleton, so that's a strike against it (two singletons in my library, my god!). It doesn't actually do anything. It just sits around, taking up space until a real monoid comes along and scares it off. Its mempty value is actually itself! Its mappend just returns whatever it's mappended with. And its mconcat filters out any Mempty values before trying to mconcat the remaining values (you get a Mempty if you mconcat an iter of Mempties). There's even an __iter__ method that yields from an empty tuple! What's going on!
In Haskell, mempty can be used as a placeholder and Haskell knows to do the right thing already. However, we have to teach Python how to use a placeholder and silently step it out of the way when a real value comes along. I suspect that this is similar, maybe, to how Haskell handles it, but I've not dug at all.
End of explanation |
13,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Programación de Computadores
Sebastián Flores
What content will we learn?
1- Course rules
2- Characteristics of Python
3- Tips for learning
Why will we learn this content?
Knowing the rules -> optimize resources and anticipate difficulties.
Characteristics of Python -> why Python?
Tips for learning -> optimize resources.
1- Course Rules
Assessments
Final grade
Website
Methodology conflict
Assessments
Mandatory
3 individual exams.
3 team assignments.
5 activities.
Optional
1 make-up exam
Step1: Example 2
What does the following code do?
Step2: Example 3
What does the following file do? | Python Code:
a, b = 2, 3
while b < 300:
print b,
a, b = b, a+b
Explanation: <header class="w3-container w3-teal">
<img src="images/utfsm.png" alt="" align="left"/>
<img src="images/inf.png" alt="" align="right"/>
</header>
<br/><br/><br/><br/><br/>
IWI131
Programación de Computadores
Sebastián Flores
What content will we cover?
1- Course rules
2- Features of Python
3- Tips for learning
Why will we cover this content?
Knowing the rules -> optimize resources and anticipate difficulties.
Features of Python -> why Python?
Tips for learning -> optimize resources.
1- Course Rules
Assessments
Final grade
Course website
Methodology conflict
Assessments
Mandatory
3 individual exams.
3 team assignments.
5 activities.
Optional
1 make-up exam: replaces the worst exam grade
Attendance at help sessions: replaces the worst team assignment
Final grade:
Compute:
$$ PP = 60\% PC + 20\% PT + 20\% PAE $$
If $PC ≥ 55$ and $PP ≥ 55$:
$$ NF = PP$$
Otherwise:
$$ NF = \min(PC,PP) $$
Course website
Official course information:
http://progra.usm.cl (material, exercises, resources, assignment submission, etc.)
Other channels:
http://twitter.com/progra_usm and http://facebook.com/ (announcements, questions, etc.)
In addition, for this section:
https://github.com/sebastiandres/iwi131 (additional material)
Methodology conflict
The exams ($60\%$ of the final grade) are individual and on paper.
Exams require the following skills:
reading
analysis
modeling
programming
About me
Mathematical Engineering degree - UTFSM, Chile (2000).
Engineering degree and MSc in Mechanics - Ecole Polytechnique, France (2005).
MSc in Computational and Applied Mathematics - Stanford, USA (2010).
Esval, Peugeot-Citroen, Lexity, CMM-UCh, Thinkful.
Projects in solid and fluid mechanics, mining, chemistry and seismology.
Currently
Real-world classes: IWI131 and MAT281
Online classes: Data Science @ Thinkful
Software for tsunami propagation
My view of education
Everyone can learn, given effort.
Listening < Seeing < Reproducing < Modifying < Creating < Innovating.
My view of programming
Python is easy, useful and fun.
Programming is like riding a bicycle or playing the piano.
Engineers who cannot program will be at a disadvantage when they graduate.
2- About Python
What is Python?
Why Python?
<img src="images/python.jpg" alt="" align="right"/>
What is Python?
A high-level language: you can program without knowing the hardware.
The Swiss Army knife of programming languages.
2 versions:
2.7: Used in this course
3.5: The "consistent" version, still being adopted.
Why Python?
Widely used in engineering
Easy to read, maintain and write
A large number of libraries
A high level of abstraction
Direct execution
Free
About this pilot section
Responsibility: 50% instructor, 50% students.
Mutable: feedback is essential.
Practical: Python for life, not for exams.
Interactive: class participation is NOT optional.
Example 1
What does the following file do?
End of explanation
anexos = {'Cesar':4001,
'Sebastian': 4002}
anexos['Claudio'] = 4003
print anexos
del anexos['Claudio']
anexos['Patricio'] = 4004
print anexos
if "Sebastian" in anexos:
print anexos["Sebastian"]
if "sebastian" in anexos:
print anexos["sebastian"]
print anexos["Luis"]
Explanation: Example 2
What does the following code do?
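As a quick aside (not part of the original slides), dict.get avoids the KeyError that the last line raises when a key is missing:
anexos = {'Cesar': 4001, 'Sebastian': 4002}
print(anexos.get('Luis', 'no anexo'))   # prints the default instead of raising KeyError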
End of explanation
import urllib2
def download_file(download_url):
response = urllib2.urlopen(download_url)
file = open("document.pdf", 'wb')
file.write(response.read())
file.close()
print("Completed")
download_file("http://progra.usm.cl/Archivos/certamenes/Libro_prograRB.pdf")
Explanation: Example 3
What does the following file do?
End of explanation |
13,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code block cell to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(predictions, outcomes[:5])
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
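For intuition, that proportion can be checked directly with numpy (a quick illustration using hypothetical outcomes for the first five passengers):
import numpy as np
truth = np.array([0, 1, 1, 1, 0])        # hypothetical true outcomes
pred = np.ones(5, dtype=int)             # predict "survived" for everyone
print((truth == pred).mean() * 100)      # 60.0 -- three of the five match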
End of explanation
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were told to make a prediction about any passenger aboard the RMS Titanic who we did not know anything about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers as a whole did not survive the ship sinking.
The function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Sex')
Explanation: Answer: Replace this text with the prediction accuracy you found above.
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
    """Model with one feature:
    - Predict a passenger survived if they are female."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
pass
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
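If you want a nudge, one possible body for the loop looks like this (a sketch, not the official solution):
for _, passenger in data.iterrows():
    if passenger['Sex'] == 'female':
        predictions.append(1)
    else:
        predictions.append(0)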
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: Replace this text with the prediction accuracy you found above.
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. Consider, for example, all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
    """Model with two features:
    - Predict a passenger survived if they are female.
    - Predict a passenger survived if they are male and younger than 10."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
pass
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
Explanation: Answer: Replace this text with the prediction accuracy you found above.
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
pass
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
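For illustration only, people often end up with splits along these lines (hypothetical conditions; check them against survival_stats before relying on them):
for _, passenger in data.iterrows():
    if passenger['Sex'] == 'female':
        # many 3rd-class adult females did not survive
        predictions.append(0 if passenger['Pclass'] == 3 and passenger['Age'] > 20 else 1)
    elif passenger['Age'] < 10:
        predictions.append(1)
    else:
        predictions.append(0)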
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation |
13,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyGSLIB
QQ and PP plots
Step1: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
Step2: QQ-Plot | Python Code:
#general imports
import pygslib
Explanation: PyGSLIB
QQ and PP plots
End of explanation
#get the data in gslib format into a pandas Dataframe
cluster= pygslib.gslib.read_gslib_file('../data/cluster.dat')
true= pygslib.gslib.read_gslib_file('../data/true.dat')
true['Declustering Weight'] = 1
Explanation: Getting the data ready for work
If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.
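A quick sanity check of what was loaded can help here (illustrative; the column names come from the GSLIB file headers):
print(cluster.shape)
print(cluster[['Primary', 'Declustering Weight']].describe())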
End of explanation
npoints = len(cluster['Primary'])
true['Declustering Weight'] = 1
#using declustering weight
parameters_qpplt = {
# gslib parameters for qq-pp calculation
'qqorpp': 0, # integer (Optional, default 0, Q-Q plot). Q-Q plot (qqorpp=0); P-P plot (qqorpp=1)
#'npts' : None, # integer (Optional, default min length of va1 and va2). Number of points to use on the Q-Q or P-P plot (should not exceed the smallest number of data in data1 / data2
'va1' : cluster['Primary'], # rank-1 array('d') with bounds (nd). Variable 1
'wt1' : cluster['Declustering Weight'], # rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight for variable 1.
'va2' : true['Primary'], # rank-1 array('d') with bounds (nd). Variable 2
'wt2' : true['Declustering Weight'], # rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight for variable 2.
# visual parameters for figure (if a new figure is created)
#'figure' : None, # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure.
#'title' : None, # string (Optional, "QQ plot" or "PP plot"). Figure title
#'xlabel' : 'Z1', # string (Optional, default "Z1" or "P1"). X axis label
#'ylabel' : 'Z2', # string (Optional, default "Z2" or "P2"). Y axis label
#'xlog' : True, # boolean (Optional, default True). If true plot X axis in log sale.
#'ylog' : True, # boolean (Optional, default True). If true plot Y axis in log sale.
# visual parameter for the probplt
#'style' : None, # string with valid bokeh chart type
'color' : 'black', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a) (Optional, default "navy")
'legend': 'Declustered', # string (Optional, default "NA").
#'alpha' : None, # float [0-1] (Optional, default 0.5). Transparency of the fill colour
#'lwidth': None, # float (Optional, default 1). Line width
# leyend
'legendloc': None} # float (Optional, default 'bottom_right'). Any of top_left, top_center, top_right, center_right, bottom_right, bottom_center, bottom_left, center_left
# Calculate the non declustered qq plot
results, fig = pygslib.plothtml.qpplt(parameters_qpplt)
# Calculate declustered qqplot
# a) get array of ones as weights
cluster['naive']= cluster['Declustering Weight'].values*0 +1
# update parameter dic
parameters_qpplt['wt1'] = cluster['naive']
parameters_qpplt['color'] = 'blue'
parameters_qpplt['legend']='Clustered'
results, fig = pygslib.plothtml.qpplt(parameters_qpplt)
# show the plot
pygslib.plothtml.show(fig)
Explanation: QQ-Plot
End of explanation |
13,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detecting outliers in time series
Step1: TL;DR
Step2: STL Decomposition
Seasonal and Trend decomposition using Loess (STL) was introduced by Cleveland et al. (1990). This method performs an additive decomposition of a time series into trend, seasonal, and remainder series via an iterative process
Step3: In the call to make_time_series, we specify a date range, a unit for the time step (here, h for hours), and a list giving each function along with its noise rate and its outlier percentage.
Step4: The function np_to_df is a helper function for plotting and passing data between Python and R.
Step5: Here's a look at all the series we're working with
Step6: Series A
Here's a visualization of Series A with red dots marking the points identified as outliers. Series A only has a little noise, and that makes it easier for us to find outliers because they stand out more — almost every point above 1 and below –1 is considered an outlier in this series. More importantly, we're also able to find outliers that are solidly in the middle of the range we expect. Each of the red circles between –1 and 1 is an outlier because it breaks the seasonal pattern determined when using STL, even though it wouldn't raise any flags if taken out of its sequential context.
<img src="fig/A_outlier.png" width=600px>
Series B
Series B is a little more complicated than Series A because it has a downward trend on top of seasonality. However, STL has no problem identifying the overall pattern. The story is similar to Series A because the noise term still isn't too strong, but we do start to see a few points that could have been considered outliers but were missed because of the increase in noise; the Potential outlier labeled in the plot is one such point.
<img src="fig/B_outlier.png" width=600px>
Series C
For Series C, as with Series B, the upward trend doesn't cause any problems. However, unlike Series A and B, the increase in noise in C causes far fewer outliers to be found; only the largest, most obvious points are identified. Taking even a quick look shows some suspect points like the ones labeled Extreme outlier(s). These seem like they should be spotted as outliers, but the noise in the series makes the method more conservative.
<img src="fig/C_outlier.png" width=600>
Benchmarks
At the top of this post, we said our implementation performed outlier detection "very quickly" — now we'll quantify that.
To benchmark our work we compare outlierDetection to a similar package called AnomalyDetection, released by Twitter. This is an R package using similar (but not identical) techniques as our own, where the biggest differences are that AnomalyDetection has more features and outlierDetection does most of the hard work in C++ (through Rcpp).
We'll generate a new dataset using make_time_series, dropping from hours to minutes for our time step, and extending the time range.
Step7: This creates a large time series of ~86K observations
Step8: Here's the run with outlierDetection
Step9: Here's the run with AnomalyDetection | Python Code:
__author__ = "Ben Bernstein"
Explanation: Detecting outliers in time series
End of explanation
from roam_outliers import *
Explanation: TL;DR: We describe a method of finding outliers in time series data by combining two distinct techniques, STL decomposition and sequential Grubbs' tests. In the end, we arrive at a method that is flexible and identifies points of interest very quickly. Additionally, we offer a new function, comparable to scikit-learn's dataset.make_*, for generating time series datasets with varying functional forms, noise, and number of outliers.
Introduction
STL Decomposition
Grubbs' test
Sequential Grubbs' tests
Data
Detection
Benchmarks
Conclusion
Introduction
Outliers are often nuisances that we remove to avoid skew in estimates and help ensure well-behaved models. However, in business and many other contexts, outliers convey unique information, so simply identifying them can yield powerful insights and offer guidance on which parts of the data need more delving into.
In general, there are two parts to outlier detection. The first is identifying a pattern in the data, and the second is locating the points that don't fit that pattern. In traditional cases, finding a pattern usually means inferring distributions that fit the data best, and identifying outliers means finding the points that have a low probability of belonging to any of the distributions.
This framework also makes sense for time series, but instead of estimating distributions to find patterns, we need to use methods that respect the sequence structure of the data. In this post, we'll describe a method of outlier detection that combines two steps: (1) using STL decomposition to define a time series pattern, and (2) applying sequential Grubbs' tests to spot points that don't fit that pattern.
The code supporting this post is available as roam_outliers. That code creates a Python bridge to our R package outlierDetection, which is also available. Our hope is that this facilitates further exploration of our proposed method.
End of explanation
A = lambda x: np.sin(x / (24 / (2*np.pi)))
B = lambda x: 10*np.cos(x / (24 / (2*np.pi))) - np.power(x, 0.5)
C = lambda x: 100*np.sin(x / (24 / (2*np.pi))) + np.power(x, 0.75)
Explanation: STL Decomposition
Seasonal and Trend decomposition using Loess (STL) was introduced by Cleveland et al. (1990). This method performs an additive decomposition of a time series into trend, seasonal, and remainder series via an iterative process: (1) an inner loop to determine the seasonal and trend estimates and (2) an outer loop to update weights and discount points with outsized impacts on the seasonal and trend terms.
<img src="fig/stl_pic.png" width=600px>
Often we are most interested in the first two terms, but for outlier detection we are concerned only with the remainder. In other words, we want to strip any part of the time series that can be explained by regular patterns and save the left over bit to investigate further with statistical tests. One such test is the Grubbs' test.
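As a rough illustration of that first step, here is a hedged sketch using statsmodels' STL (a different implementation than the one used in this post) to pull out the remainder that the tests operate on:
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
idx = pd.date_range('2016-01-01', periods=240, freq='H')
series = pd.Series(np.sin(np.arange(240) / (24 / (2 * np.pi))), index=idx)
remainder = STL(series, period=24).fit().resid   # the piece we test for outliers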
Grubbs' test
The Grubbs' test (Grubbs 1950) is a statistical test to detect one outlier in a single sequence that is approximately normally distributed. We use this test to do the second part of outlier detection. That is, once we've determined which part of the series is a pattern and which is noise, the Grubbs' test helps us determine which points are outliers.
The test follows the usual steps where we calculate a formal statistic and then compare it to a critical value. The formal statistic is calculated by finding the point ($Y_{i}$) in the series farthest away from the mean ($\bar{Y}$) and adjusting by the standard deviation ($s$) (source):
$$
G = \frac{\max_{i=1,\ldots,N} |Y_{i}-\bar{Y}|}{s}
$$
Calculating this statistic is straightforward — the remaining question is what we will compare it to. A gut instinct might be to compare this to the normal distribution because one of our assumptions is that we have "approximately normal" data. However, we have to be careful. Since we calculate $s$ using all points in the series, including the outliers we expect to find, the normality assumption is unlikely to hold.
Thankfully, Grubbs showed us how to calculate the critical values we need. We reject the null hypothesis that there are no outliers with the following expression (source):
$$
G >
\frac{N-1}{\sqrt{N}}
\sqrt{
\frac{t^{2}_{\alpha/(2N), N-2}}
{N-2 + t^{2}_{\alpha/(2N), N-2}}
}
$$
where $N$ is the number of points and $t$ is the t-distribution with an $\alpha/(2N)$ significance level and $N-2$ degrees of freedom.
Sequential Grubbs' tests
The Sequential Grubbs' test works exactly as the name suggests. We perform Grubbs' tests repeatedly up to a predetermined number, which is specified by the maximum percent of outliers allowed. In each iteration, we remove the last $Y_{i}$ from the series and test the new farthest-away-from-the-mean point by recalculating all relevant values — meaning we decrement $N$ and update $\bar{Y}$, $s$, and $t$.
The key point to stress is that we have to do this sequentially because, if we can reject the null hypothesis for any point, then every previously checked point is also considered an outlier.
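A compact sketch of the whole procedure, assuming an approximately normal remainder and a maximum outlier fraction (this mirrors, but is not, the package's implementation):
import numpy as np
from scipy import stats
def sequential_grubbs(x, alpha=0.05, max_frac=0.1):
    x = np.asarray(x, dtype=float)
    results = []                                            # (value, rejected?) per tested point
    for _ in range(int(max_frac * len(x))):
        n = len(x)
        if n < 3:
            break
        i = np.argmax(np.abs(x - x.mean()))                 # farthest point from the mean
        g = np.abs(x[i] - x.mean()) / x.std(ddof=1)         # Grubbs' statistic
        t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2   # squared t critical value
        crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
        results.append((x[i], g > crit))
        x = np.delete(x, i)                                 # drop it and re-test the rest
    rejected = [j for j, (_, r) in enumerate(results) if r]
    # every point checked before (and including) the last rejection counts as an outlier
    return [v for v, _ in results[: rejected[-1] + 1]] if rejected else []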
Data
Now that we have the process described, let's create some data to explore. Instead of tying ourselves to a single dataset, we'll use a make_time_series function that works a lot like the dataset.make_* functions in scikit-learn. It allows us to specify a few parameters:
The start, end, and time_step
A functional form for the series
A noise parameter, so the series isn't too smooth
The outliers percentage (of course we're testing for this so let's build it into our data!)
We want to highlight STL with seasonal data, so we'll make a few seasonal series where our time step is in hours and our season is a day:
Series A: Stationary sine curve with little noise and 10% outliers.
Series B: Downward trending cosine curve with average noise and 10% outliers.
Series C: Upward trending sine curve with a lot of noise and 10% outliers.
The following functions define the core pattern for each of these series:
End of explanation
dt, ys = make_time_series(
start_dt='2016-01-01T00:00:00',
end_dt='2016-01-10T00:00:00',
time_step="h",
functions=[(A, 0.1, 0.10),
(B, 1, 0.10),
(C, 50, 0.10)],
random_state=0)
Explanation: In the call to make_time_series, we specify a date range, a unit for the time step (here, h for hours), and a list giving each function along with its noise rate and its outlier percentage.
End of explanation
wide_df = np_to_df(dt, ys, cols=['A','B','C'])
Explanation: The function np_to_df is a helper function for plotting and passing data between Python and R.
End of explanation
wide_outlier_df = find_outliers_for_examples(wide_df)
Explanation: Here's a look at all the series we're working with:
<img src="fig/all_series.png" width=600>
Detection
Now let's find outliers.
End of explanation
dt, ys = make_time_series(
start_dt='2016-01-01T00:00:00',
end_dt='2016-03-01T00:00:00', time_step="m",
functions=[(lambda x: np.sin(x / (1440 / (2*np.pi))), 0.1, 0.10)],
random_state=5)
Explanation: Series A
Here's a visualization of Series A with red dots marking the points identified as outliers. Series A only has a little noise, and that makes it easier for us to find outliers because they stand out more — almost every point above 1 and below –1 is considered an outlier in this series. More importantly, we're also able to find outliers that are solidly in the middle of the range we expect. Each of the red circles between –1 and 1 is an outlier because it breaks the seasonal pattern determined when using STL, even though it wouldn't raise any flags if taken out of its sequential context.
<img src="fig/A_outlier.png" width=600px>
Series B
Series B is a little more complicated than Series A because it has a downward trend on top of seasonality. However, STL has no problem identifying the overall pattern. The story is similar to Series A because the noise term still isn't too strong, but we do start to see a few points that could have been considered outliers but were missed because of the increase in noise; the Potential outlier labeled in the plot is one such point.
<img src="fig/B_outlier.png" width=600px>
Series C
For Series C, as with Series B, the upward trend doesn't cause any problems. However, unlike Series A and B, the increase in noise in C causes far fewer outliers to be found; only the largest, most obvious points are identified. Taking even a quick look shows some suspect points like the ones labeled Extreme outlier(s). These seem like they should be spotted as outliers, but the noise in the series makes the method more conservative.
<img src="fig/C_outlier.png" width=600>
Benchmarks
At the top of this post, we said our implementation performed outlier detection "very quickly" — now we'll quantify that.
To benchmark our work we compare outlierDetection to a similar package called AnomalyDetection, released by Twitter. This is an R package using similar (but not identical) techniques as our own, where the biggest differences are that AnomalyDetection has more features and outlierDetection does most of the hard work in C++ (through Rcpp).
We'll generate a new dataset using make_time_series, dropping from hours to minutes for our time step, and extending the time range.
End of explanation
wide_df = np_to_df(dt, ys, ['A'])
print("(rows, cols): {}".format(wide_df.shape))
Explanation: This creates a large time series of ~86K observations:
End of explanation
%%timeit
find_outliers_for_benchmarks(wide_df)
Explanation: Here's the run with outlierDetection:
End of explanation
%%timeit
find_anomalies_for_benchmarks(wide_df)
Explanation: Here's the run with AnomalyDetection:
End of explanation |
13,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Theano, Lasagne
and why they matter
got no lasagne?
Install the bleeding edge version from here
Step1: theano teaser
Doing the very same thing
Step2: How does it work?
if you're currently in classroom, chances are i am explaining this text wall right now
* 1 You define inputs of your future function;
* 2 You write a recipe for some transformation of inputs;
* 3 You compile it;
* You have just got a function!
* The gobbledegooky version
Step3: Compiling
So far we were using "symbolic" variables and transformations
Defining the recipe for computation, but not computing anything
To use the recipe, one should compile it
Step4: Debugging
Compilation can take a while for big functions
To avoid waiting, one can evaluate transformations without compiling
Without compilation, the code runs slower, so consider reducing input size
Step5: When debugging, one would generally want to reduce the computation complexity. For example, if you are about to feed a neural network with a 1000-sample batch, consider taking the first 2.
If you really want to debug graph of high computation complexity, you could just as well compile it (e.g. with optimizer='fast_compile')
Do It Yourself
[2 points max]
Step6: Shared variables
The inputs and transformations only exist when function is called
Shared variables always stay in memory like global variables
Shared variables can be included into a symbolic graph
They can be set and evaluated using special methods
but they can't change value arbitrarily during symbolic graph computation
we'll cover that later;
Hint
Step7: Your turn
Step8: T.grad - why theano matters
Theano can compute derivatives and gradients automatically
Derivatives are computed symbolically, not numerically
Limitations
Step9: Why that rocks
Step10: Almost done - Updates
updates are a way of changing shared variables after a function call.
technically it's a dictionary {shared_variable
Step11: Logistic regression example
[ 4 points max]
Implement the regular logistic regression training algorithm
Tips
Step12: my1stNN
[basic part 4 points max]
Your ultimate task for this week is to build your first neural network [almost] from scratch and pure theano.
This time you will solve the same digit recognition problem, but at a larger scale
* images are now 28x28
* 10 different digits
* 50k samples
Note that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) NN should already give you an edge over logistic regression.
[bonus score]
If you've already beaten logistic regression with a two-layer net, but enthusiasm still ain't gone, you can try improving the test accuracy even further! The milestones would be 95%/97.5%/98.5% accuracy on test set.
SPOILER!
At the end of the notebook you will find a few tips and frequently made mistakes. If you feel enough might to shoot yourself in the foot without external assistance, we encourage you to do so, but if you encounter any unsurpassable issues, please do look there before mailing us. | Python Code:
import numpy as np
def sum_squares(N):
return <student.Implement_me()>
%%time
sum_squares(10**8)
Explanation: Theano, Lasagne
and why they matter
got no lasagne?
Install the bleeding edge version from here: http://lasagne.readthedocs.org/en/latest/user/installation.html
Warming up
Implement a function that computes the sum of squares of numbers from 0 to N
Use numpy or python
An array of numbers 0 to N - numpy.arange(N)
End of explanation
import theano
import theano.tensor as T
#I gonna be function parameter
N = T.scalar("a dimension",dtype='int32')
#i am a recipe on how to produce sum of squares of arange of N given N
result = (T.arange(N)**2).sum()
#Compiling the recipe of computing "result" given N
sum_function = theano.function(inputs = [N],outputs=result)
%%time
sum_function(10**8)
Explanation: theano teaser
Doing the very same thing
End of explanation
#Inputs
example_input_integer = T.scalar("scalar input",dtype='float32')
example_input_tensor = T.tensor4("four dimensional tensor input") #dtype = theano.config.floatX by default
#don't be afraid, we won't need the tensor
input_vector = T.vector("", dtype='int32') # vector of integers
#Transformations
#transformation: elementwise multiplication
double_the_vector = input_vector*2
#elementwise cosine
elementwise_cosine = T.cos(input_vector)
#difference between squared vector and vector itself
vector_squares = input_vector**2 - input_vector
#Practice time:
#create two vectors of size float32
my_vector = student.init_float32_vector()
my_vector2 = student.init_one_more_such_vector()
#Write a transformation(recipe):
#(vec1)*(vec2) / (sin(vec1) +1)
my_transformation = student.implementwhatwaswrittenabove()
print my_transformation
#it's okay it aint a number
Explanation: How does it work?
if you're currently in classroom, chances are i am explaining this text wall right now
* 1 You define inputs of your future function;
* 2 You write a recipe for some transformation of inputs;
* 3 You compile it;
* You have just got a function!
* The gobbledegooky version: you define a function as symbolic computation graph.
There are two main kinds of entities: "Inputs" and "Transformations"
Both can be numbers, vectors, matrices, tensors, etc.
Both can be integers, floats of booleans (uint8) of various size.
An input is a placeholder for function parameters.
N from example above
Transformations are the recipes for computing something given inputs and transformation
(T.arange(N)^2).sum() are 3 sequential transformations of N
Doubles all functions of numpy vector syntax
You can almost always go with replacing "np.function" with "T.function" aka "theano.tensor.function"
np.mean -> T.mean
np.arange -> T.arange
np.cumsum -> T.cumsum
and so on.
builtin operations also work that way
np.arange(10).mean() -> T.arange(10).mean()
Once upon a blue moon the functions have different names or locations (e.g. T.extra_ops)
Ask us or google it
Still confused? We gonna fix that.
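A tiny illustration of that swap (sketch; results stay symbolic until compiled or eval'd):
v = T.vector("v", dtype='float64')
symbolic_mean = T.mean(v)                          # same spelling as np.mean
symbolic_cumsum = T.cumsum(v)                      # same spelling as np.cumsum
print(symbolic_mean.eval({v: [1.0, 2.0, 3.0]}))    # 2.0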
End of explanation
inputs = [<two vectors that my_transformation depends on>]
outputs = [<What do we compute (can be a list of several transformation)>]
# The next lines compile a function that takes two vectors and computes your transformation
my_function = theano.function(
inputs,outputs,
allow_input_downcast=True #automatic type casting for input parameters (e.g. float64 -> float32)
)
#using function with, lists:
print "using python lists:"
print my_function([1,2,3],[4,5,6])
print
#Or using numpy arrays:
#btw, that 'float' dtype is cast to the second parameter dtype, which is float32
print "using numpy arrays:"
print my_function(np.arange(10),
np.linspace(5,6,10,dtype='float'))
Explanation: Compiling
So far we were using "symbolic" variables and transformations
Defining the recipe for computation, but not computing anything
To use the recipe, one should compile it
End of explanation
#a dictionary of inputs
my_function_inputs = {
my_vector:[1,2,3],
my_vector2:[4,5,6]
}
# evaluate my_transformation
# has to match with compiled function output
print my_transformation.eval(my_function_inputs)
# can compute transformations on the fly
print "add 2 vectors", (my_vector + my_vector2).eval(my_function_inputs)
#!WARNING! if your transformation only depends on some inputs,
#do not provide the rest of them
print "vector's shape:", my_vector.shape.eval({
my_vector:[1,2,3]
})
Explanation: Debugging
Compilation can take a while for big functions
To avoid waiting, one can evaluate transformations without compiling
Without compilation, the code runs slower, so consider reducing input size
End of explanation
# Quest #1 - implement a function that computes a mean squared error of two input vectors
# Your function has to take 2 vectors and return a single number
<student.define_inputs_and_transformations()>
compute_mse =<student.compile_function()>
# Tests
from sklearn.metrics import mean_squared_error
for n in [1,5,10,10**3]:
elems = [np.arange(n),np.arange(n,0,-1), np.zeros(n),
np.ones(n),np.random.random(n),np.random.randint(100,size=n)]
for el in elems:
for el_2 in elems:
true_mse = np.array(mean_squared_error(el,el_2))
my_mse = compute_mse(el,el_2)
if not np.allclose(true_mse,my_mse):
print 'Wrong result:'
print 'mse(%s,%s)'%(el,el_2)
print "should be: %f, but your function returned %f"%(true_mse,my_mse)
raise ValueError,"Что-то не так"
print "All tests passed"
Explanation: When debugging, one would generally want to reduce the computation complexity. For example, if you are about to feed a neural network with a 1000-sample batch, consider taking the first 2.
If you really want to debug graph of high computation complexity, you could just as well compile it (e.g. with optimizer='fast_compile')
Do It Yourself
[2 points max]
End of explanation
#creating shared variable
shared_vector_1 = theano.shared(np.ones(10,dtype='float64'))
#evaluating shared variable (outside symbolic graph)
print "initial value",shared_vector_1.get_value()
# within symbolic graph you use them just as any other input or transformation, no "get value" needed
#setting new value
shared_vector_1.set_value( np.arange(5) )
#getting that new value
print "new value", shared_vector_1.get_value()
#Note that the vector changed shape
#This is entirely allowed... unless your graph is hard-wired to work with some fixed shape
Explanation: Shared variables
The inputs and transformations only exist when function is called
Shared variables always stay in memory like global variables
Shared variables can be included into a symbolic graph
They can be set and evaluated using special methods
but they can't change value arbitrarily during symbolic graph computation
we'll cover that later;
Hint: such variables are a perfect place to store network parameters
e.g. weights or some metadata
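For instance, the weights of a small layer could live in a shared variable (sketch):
W = theano.shared(np.zeros((784, 10)), name='W')
print(W.get_value().shape)                      # (784, 10)
W.set_value(np.random.randn(784, 10) * 0.01)    # e.g. re-initialize in place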
End of explanation
# Write a recipe (transformation) that computes an elementwise transformation of shared_vector and input_scalar
#Compile as a function of input_scalar
input_scalar = T.scalar('coefficient',dtype='float32')
scalar_times_shared = <student.write_recipe()>
shared_times_n = <student.compile_function()>
print "shared:", shared_vector_1.get_value()
print "shared_times_n(5)",shared_times_n(5)
print "shared_times_n(-0.5)",shared_times_n(-0.5)
#Changing value of vector 1 (output should change)
shared_vector_1.set_value([-1,0,1])
print "shared:", shared_vector_1.get_value()
print "shared_times_n(5)",shared_times_n(5)
print "shared_times_n(-0.5)",shared_times_n(-0.5)
Explanation: Your turn
End of explanation
my_scalar = T.scalar(name='input',dtype='float64')
scalar_squared = T.sum(my_scalar**2)
#a derivative of v_squared by my_vector
derivative = T.grad(scalar_squared,my_scalar)
fun = theano.function([my_scalar],scalar_squared)
grad = theano.function([my_scalar],derivative)
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-3,3)
x_squared = map(fun,x)
x_squared_der = map(grad,x)
plt.plot(x, x_squared,label="x^2")
plt.plot(x, x_squared_der, label="derivative")
plt.legend()
Explanation: T.grad - why theano matters
Theano can compute derivatives and gradients automatically
Derivatives are computed symbolically, not numerically
Limitations:
* You can only compute a gradient of a scalar transformation over one or several scalar or vector (or tensor) transformations or inputs.
* A transformation has to have float32 or float64 dtype throughout the whole computation graph
* derivative over an integer has no mathematical sense
End of explanation
my_vector = T.vector('float64')
#Compute the gradient of the next weird function over my_scalar and my_vector
#warning! Trying to understand the meaning of that function may result in permanent brain damage
weird_psychotic_function = ((my_vector+my_scalar)**(1+T.var(my_vector)) +1./T.arcsinh(my_scalar)).mean()/(my_scalar**2 +1) + 0.01*T.sin(2*my_scalar**1.5)*(T.sum(my_vector)* my_scalar**2)*T.exp((my_scalar-4)**2)/(1+T.exp((my_scalar-4)**2))*(1.-(T.exp(-(my_scalar-4)**2))/(1+T.exp(-(my_scalar-4)**2)))**2
der_by_scalar,der_by_vector = <student.compute_grad_over_scalar_and_vector()>
compute_weird_function = theano.function([my_scalar,my_vector],weird_psychotic_function)
compute_der_by_scalar = theano.function([my_scalar,my_vector],der_by_scalar)
#Plotting your derivative
vector_0 = [1,2,3]
scalar_space = np.linspace(0,7)
y = [compute_weird_function(x,vector_0) for x in scalar_space]
plt.plot(scalar_space,y,label='function')
y_der_by_scalar = [compute_der_by_scalar(x,vector_0) for x in scalar_space]
plt.plot(scalar_space,y_der_by_scalar,label='derivative')
plt.grid();plt.legend()
Explanation: Why that rocks
End of explanation
# Multiply shared vector by a number and save the product back into shared vector
inputs = [input_scalar]
outputs = [scalar_times_shared] #return vector times scalar
my_updates = {
    shared_vector_1:scalar_times_shared #and write this same result back into shared_vector_1
}
compute_and_save = theano.function(inputs, outputs, updates=my_updates)
shared_vector_1.set_value(np.arange(5))
#initial shared_vector_1
print "initial shared value:" ,shared_vector_1.get_value()
# evaluating the function (shared_vector_1 will be changed)
print "compute_and_save(2) returns",compute_and_save(2)
#evaluate new shared_vector_1
print "new shared value:" ,shared_vector_1.get_value()
Explanation: Almost done - Updates
updates are a way of changing shared variables after a function call.
technically it's a dictionary {shared_variable : a recipe for new value} which has to be provided when the function is compiled
That's how it works:
End of explanation
from sklearn.datasets import load_digits
mnist = load_digits(2)
X,y = mnist.data, mnist.target
print "y [shape - %s]:"%(str(y.shape)),y[:10]
print "X [shape - %s]:"%(str(X.shape))
print X[:3]
print y[:10]
# inputs and shareds
shared_weights = <student.code_me()>
input_X = <student.code_me()>
input_y = <student.code_me()>
predicted_y = <predicted probabilities for input_X>
loss = <logistic loss (scalar, mean over sample)>
grad = <gradient of loss over model weights>
updates = {
shared_weights: <new weights after gradient step>
}
train_function = <compile function that takes X and y, returns log loss and updates weights>
predict_function = <compile function that takes X and computes probabilities of y>
from sklearn.cross_validation import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y)
from sklearn.metrics import roc_auc_score
for i in range(5):
loss_i = train_function(X_train,y_train)
print "loss at iter %i:%.4f"%(i,loss_i)
print "train auc:",roc_auc_score(y_train,predict_function(X_train))
print "test auc:",roc_auc_score(y_test,predict_function(X_test))
print "resulting weights:"
plt.imshow(shared_weights.get_value().reshape(8,-1))
plt.colorbar()
Explanation: Logistic regression example
[ 4 points max]
Implement the regular logistic regression training algorithm
Tips:
* Weights fit in as a shared variable
* X and y are potential inputs
* Compile 2 functions:
* train_function(X,y) - returns error and computes weights' new values (through updates)
* predict_fun(X) - just computes probabilities ("y") given data
We shall train on a two-class MNIST dataset
* please note that target y are {0,1} and not {-1,1} as in some formulae
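For reference, the <...> placeholders above could be filled in roughly like this (a sketch under the {0,1} target convention, not the only correct answer):
shared_weights = theano.shared(np.zeros(X.shape[1], dtype='float64'))
input_X = T.matrix('X')
input_y = T.vector('y')
predicted_y = T.nnet.sigmoid(T.dot(input_X, shared_weights))
loss = -T.mean(input_y * T.log(predicted_y) + (1 - input_y) * T.log(1 - predicted_y))
grad = T.grad(loss, shared_weights)
updates = {shared_weights: shared_weights - 0.01 * grad}
train_function = theano.function([input_X, input_y], loss, updates=updates,
                                 allow_input_downcast=True)
predict_function = theano.function([input_X], predicted_y, allow_input_downcast=True)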
End of explanation
from mnist import load_dataset
#[down]loading the original MNIST dataset.
#Please note that you should only train your NN on _train sample,
# _val can be used to evaluate out-of-sample error, compare models or perform early-stopping
# _test should be hidden under a rock until final evaluation... But we both know it is near impossible to catch you evaluating on it.
X_train,y_train,X_val,y_val,X_test,y_test = load_dataset()
print X_train.shape,y_train.shape
plt.imshow(X_train[0,0])
<here you could just as well create computation graph>
<this may or may not be a good place to evaluating loss and updates>
<here one could compile all the required functions>
<this may be a perfect cell to write a training&evaluation loop in>
<predict & evaluate on test here, right? No cheating pls.>
Explanation: my1stNN
[basic part 4 points max]
Your ultimate task for this week is to build your first neural network [almost] from scratch and pure theano.
This time you will solve the same digit recognition problem, but at a larger scale
* images are now 28x28
* 10 different digits
* 50k samples
Note that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) NN should already give you an edge over logistic regression.
[bonus score]
If you've already beaten logistic regression with a two-layer net, but enthusiasm still ain't gone, you can try improving the test accuracy even further! The milestones would be 95%/97.5%/98.5% accuracy on test set.
SPOILER!
At the end of the notebook you will find a few tips and frequently made mistakes. If you feel enough might to shoot yourself in the foot without external assistance, we encourage you to do so, but if you encounter any unsurpassable issues, please do look there before mailing us.
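If you want a starting skeleton, a minimal two-layer sketch could look like this (sizes, nonlinearity and learning rate are assumptions to tune; inputs are assumed to be the (batch, 1, 28, 28) arrays returned by load_dataset):
X_sym = T.tensor4('X')
y_sym = T.ivector('y')
n_in, n_hidden, n_out = 28 * 28, 100, 10
W1 = theano.shared(np.random.randn(n_in, n_hidden) * 0.01)
b1 = theano.shared(np.zeros(n_hidden))
W2 = theano.shared(np.random.randn(n_hidden, n_out) * 0.01)
b2 = theano.shared(np.zeros(n_out))
hidden = T.tanh(T.dot(X_sym.flatten(2), W1) + b1)
probs = T.nnet.softmax(T.dot(hidden, W2) + b2)
loss = -T.mean(T.log(probs)[T.arange(y_sym.shape[0]), y_sym])
params = [W1, b1, W2, b2]
updates = [(p, p - 0.1 * g) for p, g in zip(params, T.grad(loss, params))]
train_nn = theano.function([X_sym, y_sym], loss, updates=updates, allow_input_downcast=True)
predict_nn = theano.function([X_sym], T.argmax(probs, axis=1), allow_input_downcast=True)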
End of explanation |
13,699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: Examples of streaming and non streaming inference with TF/TFlite
Imports
Step4: Load wav file
Step5: Prepare batched model
Step6: Run inference with TF
TF Run non streaming inference
Step7: TF Run streaming inference with internal state
Step8: TF Run streaming inference with external state
Step9: Run inference with TFlite
Run non streaming inference with TFLite
Step10: Run streaming inference with TFLite
Step11: Run evaluation on all testing data | Python Code:
!git clone https://github.com/google-research/google-research.git
import sys
import os
import tarfile
import urllib
import zipfile
sys.path.append('./google-research')
Explanation: Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
End of explanation
# TF streaming
from kws_streaming.models import models
from kws_streaming.models import utils
from kws_streaming.models import model_utils
from kws_streaming.layers.modes import Modes
import tensorflow as tf
import numpy as np
import tensorflow.compat.v1 as tf1
import logging
from kws_streaming.models import model_flags
from kws_streaming.models import model_params
from kws_streaming.train import inference
from kws_streaming.train import test
from kws_streaming.data import input_data
from kws_streaming.data import input_data_utils as du
tf1.disable_eager_execution()
config = tf1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf1.Session(config=config)
# general imports
import matplotlib.pyplot as plt
import os
import json
import numpy as np
import scipy as scipy
import scipy.io.wavfile as wav
import scipy.signal
tf.__version__
tf1.reset_default_graph()
sess = tf1.Session()
tf1.keras.backend.set_session(sess)
tf1.keras.backend.set_learning_phase(0)
Explanation: Examples of streaming and non streaming inference with TF/TFlite
Imports
End of explanation
def waveread_as_pcm16(filename):
  """Read in audio data from a wav file. Return d, sr."""
samplerate, wave_data = wav.read(filename)
# Read in wav file.
return wave_data, samplerate
def wavread_as_float(filename, target_sample_rate=16000):
  """Read in audio data from a wav file. Return d, sr."""
wave_data, samplerate = waveread_as_pcm16(filename)
desired_length = int(
round(float(len(wave_data)) / samplerate * target_sample_rate))
wave_data = scipy.signal.resample(wave_data, desired_length)
# Normalize short ints to floats in range [-1..1).
data = np.array(wave_data, np.float32) / 32768.0
return data, target_sample_rate
# set PATH to data sets (for example to speech commands V2):
# it can be downloaded from
# https://storage.googleapis.com/download.tensorflow.org/data/speech_commands_v0.02.tar.gz
# if you run 00_check-data.ipynb then data2 should be located in the current folder
current_dir = os.getcwd()
DATA_PATH = os.path.join(current_dir, "data2/")
# Set path to wav file for testing.
wav_file = os.path.join(DATA_PATH, "left/012187a4_nohash_0.wav")
# read audio file
wav_data, samplerate = wavread_as_float(wav_file)
assert samplerate == 16000
plt.plot(wav_data)
Explanation: Load wav file
End of explanation
# This notebook is configured to work with 'ds_tc_resnet' and 'svdf'.
MODEL_NAME = 'ds_tc_resnet'
# MODEL_NAME = 'svdf'
MODELS_PATH = os.path.join(current_dir, "models")
MODEL_PATH = os.path.join(MODELS_PATH, MODEL_NAME + "/")
MODEL_PATH
train_dir = os.path.join(MODELS_PATH, MODEL_NAME)
# below is another way of reading flags - through json
with tf.compat.v1.gfile.Open(os.path.join(train_dir, 'flags.json'), 'r') as fd:
flags_json = json.load(fd)
class DictStruct(object):
def __init__(self, **entries):
self.__dict__.update(entries)
flags = DictStruct(**flags_json)
flags.data_dir = DATA_PATH
# get total stride of the model
total_stride = 1
if MODEL_NAME == 'ds_tc_resnet':
# it can be automated by scanning layers of the model, but for now just use parameters of specific model
pools = model_utils.parse(flags.ds_pool)
strides = model_utils.parse(flags.ds_stride)
time_stride = [1]
for pool in pools:
if pool > 1:
time_stride.append(pool)
for stride in strides:
if stride > 1:
time_stride.append(stride)
total_stride = np.prod(time_stride)
# override input data shape for the streaming model with stride/pool
flags.data_stride = total_stride
flags.data_shape = (total_stride * flags.window_stride_samples,)
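# Hypothetical illustration of the stride math above: if ds_pool parsed to
# [1, 2, 1] and ds_stride parsed to [1, 1, 2], then time_stride = [1, 2, 2],
# total_stride = np.prod([1, 2, 2]) = 4, and each streaming chunk covers
# 4 * window_stride_samples audio samples.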
# prepare mapping of index to word
audio_processor = input_data.AudioProcessor(flags)
index_to_label = {}
# labels used for training
for word in audio_processor.word_to_index.keys():
if audio_processor.word_to_index[word] == du.SILENCE_INDEX:
index_to_label[audio_processor.word_to_index[word]] = du.SILENCE_LABEL
elif audio_processor.word_to_index[word] == du.UNKNOWN_WORD_INDEX:
index_to_label[audio_processor.word_to_index[word]] = du.UNKNOWN_WORD_LABEL
else:
index_to_label[audio_processor.word_to_index[word]] = word
# training labels
index_to_label
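# The _silence_ and _unknown_ classes occupy the reserved SILENCE_INDEX and
# UNKNOWN_WORD_INDEX slots; the remaining indices map to the trained keywords.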
# pad input audio with zeros, so that audio len = flags.desired_samples
padded_wav = np.pad(wav_data, (0, flags.desired_samples-len(wav_data)), 'constant')
input_data = np.expand_dims(padded_wav, 0)
input_data.shape
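# Note: input_data now refers to this padded numpy array of shape
# (1, flags.desired_samples); it shadows the input_data module imported above,
# which has already been used to build audio_processor.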
# create the model from the flags' parameters
model_non_stream_batch = models.MODELS[flags.model_name](flags)
# load model's weights
weights_name = 'best_weights'
model_non_stream_batch.load_weights(os.path.join(train_dir, weights_name))
tf.keras.utils.plot_model(
model_non_stream_batch,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
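# This batched model is the source graph that is converted below into
# non-streaming and streaming inference models, for both TF and TFLite.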
Explanation: Prepare batched model
End of explanation
# convert model to inference mode with batch one
inference_batch_size = 1
tf.keras.backend.set_learning_phase(0)
flags.batch_size = inference_batch_size # set batch size
model_non_stream = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.NON_STREAM_INFERENCE)
#model_non_stream.summary()
tf.keras.utils.plot_model(
model_non_stream,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
predictions = model_non_stream.predict(input_data)
predicted_labels = np.argmax(predictions, axis=1)
predicted_labels
index_to_label[predicted_labels[0]]
Explanation: Run inference with TF
TF Run non-streaming inference
End of explanation
# convert model to streaming mode
flags.batch_size = inference_batch_size # set batch size
model_stream = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.STREAM_INTERNAL_STATE_INFERENCE)
#model_stream.summary()
tf.keras.utils.plot_model(
model_stream,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
stream_output_prediction = inference.run_stream_inference_classification(flags, model_stream, input_data)
stream_output_arg = np.argmax(stream_output_prediction)
stream_output_arg
index_to_label[stream_output_arg]
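# Rough sketch (not the exact library implementation) of what
# inference.run_stream_inference_classification does above: the audio is fed
# chunk by chunk and the streaming layers keep their states internally, e.g.
# start = 0
# while start + flags.data_shape[0] <= input_data.shape[1]:
#   chunk = input_data[:, start:start + flags.data_shape[0]]
#   stream_output_prediction = model_stream.predict(chunk)
#   start += flags.data_shape[0]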
Explanation: TF Run streaming inference with internal state
End of explanation
# convert model to streaming mode
flags.batch_size = inference_batch_size # set batch size
model_stream_external = utils.to_streaming_inference(model_non_stream_batch, flags, Modes.STREAM_EXTERNAL_STATE_INFERENCE)
#model_stream.summary()
tf.keras.utils.plot_model(
model_stream_external,
show_shapes=True,
show_layer_names=True,
expand_nested=True)
inputs = []
for s in range(len(model_stream_external.inputs)):
inputs.append(np.zeros(model_stream_external.inputs[s].shape, dtype=np.float32))
window_stride = flags.data_shape[0]
start = 0
end = window_stride
while end <= input_data.shape[1]:
# get new frame from stream of data
stream_update = input_data[:, start:end]
# update indexes of streamed updates
start = end
end = start + window_stride
# set input audio data (by default input data at index 0)
inputs[0] = stream_update
# run inference
outputs = model_stream_external.predict(inputs)
  # get the output states and set them back as input states,
  # which will be fed in the next inference cycle
for s in range(1, len(model_stream_external.inputs)):
inputs[s] = outputs[s]
stream_output_arg = np.argmax(outputs[0])
stream_output_arg
index_to_label[stream_output_arg]
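# External-state streaming exposes every layer state as an extra model
# input/output instead of keeping it inside the layers; this explicit form is
# what the TFLite conversion below relies on.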
Explanation: TF Run streaming inference with external state
End of explanation
tflite_non_streaming_model = utils.model_to_tflite(sess, model_non_stream_batch, flags, Modes.NON_STREAM_INFERENCE)
tflite_non_stream_fname = 'tflite_non_stream.tflite'
with open(os.path.join(MODEL_PATH, tflite_non_stream_fname), 'wb') as fd:
fd.write(tflite_non_streaming_model)
interpreter = tf.lite.Interpreter(model_content=tflite_non_streaming_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# set input audio data (by default input data at index 0)
interpreter.set_tensor(input_details[0]['index'], input_data.astype(np.float32))
# run inference
interpreter.invoke()
# get output: classification
out_tflite = interpreter.get_tensor(output_details[0]['index'])
out_tflite_argmax = np.argmax(out_tflite)
out_tflite_argmax
index_to_label[out_tflite_argmax]
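# The non-streaming TFLite model consumes the whole padded clip in a single
# invoke; the streaming TFLite model below consumes it chunk by chunk with
# explicit state tensors.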
Explanation: Run inference with TFLite
Run non-streaming inference with TFLite
End of explanation
tflite_streaming_model = utils.model_to_tflite(sess, model_non_stream_batch, flags, Modes.STREAM_EXTERNAL_STATE_INFERENCE)
tflite_stream_fname = 'tflite_stream.tflite'
with open(os.path.join(MODEL_PATH, tflite_stream_fname), 'wb') as fd:
fd.write(tflite_streaming_model)
interpreter = tf.lite.Interpreter(model_content=tflite_streaming_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_states = []
for s in range(len(input_details)):
input_states.append(np.zeros(input_details[s]['shape'], dtype=np.float32))
out_tflite = inference.run_stream_inference_classification_tflite(flags, interpreter, input_data, input_states)
out_tflite_argmax = np.argmax(out_tflite[0])
index_to_label[out_tflite_argmax]
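# Rough sketch (not the exact library implementation) of what
# inference.run_stream_inference_classification_tflite does: for each audio
# chunk it sets input tensor 0 to the chunk, sets the remaining input tensors
# to the current states, invokes the interpreter, reads the classification
# from output 0 and feeds the state outputs back as the next input states.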
Explanation: Run streaming inference with TFLite
End of explanation
test.tflite_non_stream_model_accuracy(
flags,
MODEL_PATH,
tflite_model_name=tflite_non_stream_fname,
accuracy_name='tflite_non_stream_model_accuracy.txt')
test.tflite_stream_state_external_model_accuracy(
flags,
MODEL_PATH,
tflite_model_name=tflite_stream_fname,
accuracy_name='tflite_stream_state_external_model_accuracy.txt',
reset_state=True)
test.tflite_stream_state_external_model_accuracy(
flags,
MODEL_PATH,
tflite_model_name=tflite_stream_fname,
accuracy_name='tflite_stream_state_external_model_accuracy.txt',
reset_state=False)
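# reset_state controls whether the streaming states are zeroed before each
# test clip (True) or carried over from the previous clip (False); the latter
# is closer to continuous streaming and can change the reported accuracy.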
Explanation: Run evaluation on all testing data
End of explanation |