Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k) |
---|---|---|
7,100 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quality Measures
Step1: <a id='pain'></a>
Pain
Step2: <a id='dyspnea'></a>
Dyspnea
Step3: <a id='constipation'></a>
Constipation Screening
Step4: <a id='opiod'></a>
Opioid bowel regimen
Cannot find this variable in the data dictionary - can you help point me in the right direction?
<a id='ad'></a>
Advanced Directives
Step5: <a id='code'></a>
Code status
Step6: <a id='hcp'></a>
Health care proxy noted
Step7: <a id='papp'></a>
Provider-Assessed Palliative Performance | Python Code:
import pandas as pd
import pickle
import numpy as np
import matplotlib.pyplot as plt
from textwrap import wrap
#from matplotlib import rcParams
#rcParams.update({'figure.autolayout': True})
%matplotlib inline
dd = pickle.load(open("./python_scripts/02_data_dictionary_dict.p", "rb" ))
voi = ['ESASPain','ESASShortnessOfBreath','ESASConstipation','AdDirectives','Resuscitation','HealthcareProxy','PPSScore']
def draw_bars(temp):
testers = pickle.load(open("./python_scripts/04_" + temp + ".p", "rb" ))
title = dd.get(temp).get('display_text')
info = pd.Series(dd.get(temp).get('codes_fmt'),index=dd.get(temp).get('codes')).to_frame()
theStuff = pd.merge(info,testers,left_index=True,right_index=True,how='left')
# assemble the bar heights: one bar per response code, plus a 'missing' bar
y = theStuff[temp].tolist()
y.append(5066-sum(y))  # whatever is left of the 5066 total records (the apparent cohort size) is counted as missing
x = np.arange(len(y))
xl = theStuff[0].tolist()
xl.append('missing')
fig = plt.figure(figsize=(16, 6))
plt.title(temp + ": " + "\n".join(wrap(title,60)),fontsize=8)
ax = fig.add_subplot(111)
ax.bar(x,y,0.5, align='center')
ax.set_xticks(x)
ax.set_xticklabels(xl,rotation=45,fontsize=8)
Explanation: Quality Measures: Variable Exploration
<img src="images/qualitymeasure.jpeg" alt="Smiley face" height="125" width="300" align="left">
Barplots: Key Quality Measures Overall CMMI
For the following key measures, a barplot illustrating frequencies in response categories are provided:
Pain
Dyspnea
Constipation Screening
Opioid bowel regimen
Advanced Directives
Code status
Health care proxy noted
Provider-Assessed Palliative Performance
End of explanation
draw_bars(voi[0])
Explanation: <a id='pain'></a>
Pain
End of explanation
draw_bars(voi[1])
Explanation: <a id='dyspnea'></a>
Dyspnea
End of explanation
draw_bars(voi[2])
Explanation: <a id='constipation'></a>
Constipation Screening
End of explanation
draw_bars(voi[3])
Explanation: <a id='opiod'></a>
Opioid bowel regimen
Cannot find this variable in the data dictionary - can you help point me in the right direction?
<a id='ad'></a>
Advanced Directives
End of explanation
draw_bars(voi[4])
Explanation: <a id='code'></a>
Code status
End of explanation
draw_bars(voi[5])
Explanation: <a id='hcp'></a>
Health care proxy noted
End of explanation
draw_bars(voi[6])
Explanation: <a id='papp'></a>
Provider-Assessed Palliative Performance
End of explanation |
7,101 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real =
inputs_z =
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
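A minimal sketch of one way to fill in the placeholders above (a possible solution, not the only one; it assumes the TensorFlow 1.x placeholder API used throughout this notebook):
def model_inputs(real_dim, z_dim):
    # placeholder for real images fed to the discriminator
    inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
    # placeholder for the latent vector z fed to the generator
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    return inputs_real, inputs_z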
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope # finish this
# Hidden layer
h1 =
# Leaky ReLU
h1 =
# Logits and tanh output
logits =
out =
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one ourselves. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
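One possible completion of the generator skeleton above, shown as a hedged sketch (the layer choices follow the hints in the text; other architectures are equally valid):
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('generator', reuse=reuse):
        h1 = tf.layers.dense(z, n_units, activation=None)   # hidden layer
        h1 = tf.maximum(alpha * h1, h1)                      # leaky ReLU
        logits = tf.layers.dense(h1, out_dim, activation=None)
        out = tf.tanh(logits)                                # tanh output in [-1, 1]
        return out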
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope # finish this
# Hidden layer
h1 =
# Leaky ReLU
h1 =
logits =
out =
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
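A matching sketch for the discriminator skeleton above (again just one reasonable completion, mirroring the generator but ending in a single sigmoid unit):
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
    with tf.variable_scope('discriminator', reuse=reuse):
        h1 = tf.layers.dense(x, n_units, activation=None)   # hidden layer
        h1 = tf.maximum(alpha * h1, h1)                      # leaky ReLU
        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)
        return out, logits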
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z =
# Generator network here
g_model =
# g_model is the generator output
# Discriminator network here
d_model_real, d_logits_real =
d_model_fake, d_logits_fake =
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
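A sketch of how the pieces could be wired together, assuming the model_inputs, generator and discriminator sketches above and the hyperparameters defined earlier:
input_real, input_z = model_inputs(input_size, z_size)
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)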
# Calculate losses
d_loss_real =
d_loss_fake =
d_loss =
g_loss =
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
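One way to write the losses described above, as a sketch (label smoothing is applied only to the real-image labels, as suggested in the text):
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))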
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars =
g_vars =
d_vars =
d_train_opt =
g_train_opt =
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
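A sketch of the variable split and the two optimizers described above (it assumes the generator/discriminator variable scopes from the earlier sketches and the learning_rate defined in the cell above):
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)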
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
7,102 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deputado Histogramado
expressao.xyz/deputado/
How to process the sessions of the Portuguese Parliament
Contents
Assembling the dataset
Counting the most common words
Making histograms
Geographic representations
Simplifying the dataset and exporting it to expressa.xyz/deputado/
What happened in the more than 4,000 debate sessions of the Portuguese Parliament held since 1976?
In this notebook we will try to visualize what happened in the simplest way possible - by counting words and making plots.
To obtain the texts of all the sessions we will use demo.cratica.org, where we can easily access every parliamentary session from 1976 to 2015. Then, with a bit of Python, pandas and matplotlib, we will analyse what went on.
To run these notebooks you will need to download them and open them with Jupyter Notebooks (the Anaconda distribution makes it easy to install all the necessary tools - https
Step1: In part 2 we already learned that 'Orçamento de/do Estado' (State Budget) was not used before 1984, and that decree-laws were discussed more before 1983.
But honestly we did not find anything very interesting. Let's speed up the process and look at more words
Step2: As we had seen before, the year 2000 was a very active year for Paulo Portas. It seems his contributions come in waves.
Step3: There has always been a crisis, but 2010 was a super-crisis.
Step4: The debates about abortion seem to be well localized, in 1982, 1984, 1997/8 and 2005.
Step5: It went out of fashion.
Step6: And what if we want to accumulate several words in the same histogram?
Step7: The European Union was founded around '93 and the EEC was integrated into it (according to Wikipedia), so the plot makes sense.
Let's create a function that combines the two plots, to let us compare their evolution
Step8: Nice - one basically replaces the other.
Step9: Again, one replaces the other.
Step10: OK, this looks like a mystery. The troika was talked about much more in 1989 than in 2011. Let's investigate by searching for and showing the sentences where the words appear.
We want to know what was said when 'Troika' was mentioned in parliament. Let's try to find and print the sentences containing the >70 occurrences of troika from 1989 and the 25 from 2011.
Step11: As we can see in the last sentence, the truth is that in parliament the term 'Troica' is used more than 'Troika'! In the media, 'Troika' is used a lot.
And for anyone who does not know what perestroika was | Python Code:
%matplotlib inline
import pylab
import matplotlib
import pandas
import numpy
dateparse = lambda x: pandas.datetime.strptime(x, '%Y-%m-%d')
sessoes = pandas.read_csv('sessoes_democratica_org.csv',index_col=0,parse_dates=['data'], date_parser=dateparse)
Explanation: Deputado Histogramado
expressao.xyz/deputado/
How to process the sessions of the Portuguese Parliament
Contents
Assembling the dataset
Counting the most common words
Making histograms
Geographic representations
Simplifying the dataset and exporting it to expressa.xyz/deputado/
What happened in the more than 4,000 debate sessions of the Portuguese Parliament held since 1976?
In this notebook we will try to visualize what happened in the simplest way possible - by counting words and making plots.
To obtain the texts of all the sessions we will use demo.cratica.org, where we can easily access every parliamentary session from 1976 to 2015. Then, with a bit of Python, pandas and matplotlib, we will analyse what went on.
To run these notebooks you will need to download them and open them with Jupyter Notebooks (the Anaconda distribution makes it easy to install all the necessary tools - https://www.continuum.io/downloads)
Part 3 - Making Histograms
Code to load the data from the previous notebook:
End of explanation
# returns the number of occurrences of the word (palavra) in the text (texto)
def conta_palavra(texto,palavra):
return texto.count(palavra)
# returns a vector with one item per session: True if the session's year equals i, False otherwise
def selecciona_ano(data,i):
return data.map(lambda d: d.year == i)
# plots the histogram of the number of occurrences of 'palavra' per year
def histograma_palavra(palavra):
# create a table column containing the word counts for each session
dados = sessoes['sessao'].map(lambda texto: conta_palavra(texto,palavra.lower()))
ocorrencias_por_ano = numpy.zeros(2016-1976)
for i in range(0,2016-1976):
# group the counts by year
ocorrencias_por_ano[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])
f = pylab.figure(figsize=(10,6))
ax = pylab.bar(range(1976,2016),ocorrencias_por_ano)
pylab.xlabel('Ano')
pylab.ylabel('Ocorrencias de '+str(palavra))
import time
start = time.time()
histograma_palavra('Paulo Portas') # we already saw that Paulo and Portas were abnormally frequent in 2000; let's see if there are more events like that
print(str(time.time()-start)+' s') # measures how long the call 'histograma_palavra('Paulo Portas')' takes to run, for our reference
Explanation: In part 2 we already learned that 'Orçamento de/do Estado' (State Budget) was not used before 1984, and that decree-laws were discussed more before 1983.
But honestly we did not find anything very interesting. Let's speed up the process and look at more words:
End of explanation
histograma_palavra('Crise')
Explanation: As we had seen before, the year 2000 was a very active year for Paulo Portas. It seems his contributions come in waves.
End of explanation
histograma_palavra('aborto')
Explanation: There has always been a crisis, but 2010 was a super-crisis.
End of explanation
histograma_palavra('Euro')
histograma_palavra('Europa')
histograma_palavra('geringonça')
histograma_palavra('corrupção')
histograma_palavra('calúnia')
Explanation: The debates about abortion seem to be well localized, in 1982, 1984, 1997/8 and 2005.
End of explanation
histograma_palavra('iraque')
histograma_palavra('china')
histograma_palavra('alemanha')
histograma_palavra('brasil')
histograma_palavra('internet')
histograma_palavra('telemóvel')
histograma_palavra('redes sociais')
histograma_palavra('sócrates')
histograma_palavra('droga')
histograma_palavra('aeroporto')
histograma_palavra('hospital')
histograma_palavra('médicos')
Explanation: It went out of fashion.
End of explanation
def conta_palavras(texto,palavras):
l = [texto.count(palavra.lower()) for palavra in palavras]
return sum(l)
def selecciona_ano(data,i):
return data.map(lambda d: d.year == i)
def histograma_palavras(palavras):
dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavras))
ocorrencias_por_ano = numpy.zeros(2016-1976)
for i in range(0,2016-1976):
ocorrencias_por_ano[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])
f = pylab.figure(figsize=(10,6))
ax = pylab.bar(range(1976,2016),ocorrencias_por_ano)
pylab.xlabel('Ano')
pylab.ylabel('Ocorrencias de '+str(palavras))
histograma_palavras(['escudos','contos','escudo'])
histograma_palavras(['muito bem','aplausos','fantastico','excelente','grandioso'])
histograma_palavras([' ecu ',' ecu.'])
histograma_palavra('União Europeia')
histograma_palavras(['CEE','Comunidade Económica Europeia'])
Explanation: And what if we want to accumulate several words in the same histogram?
End of explanation
def conta_palavras(texto,palavras):
l = [texto.count(palavra) for palavra in palavras]
return sum(l)
def selecciona_ano(data,i):
return data.map(lambda d: d.year == i)
# computes the data for the 2 histograms and plots them in the same figure
def grafico_palavras_vs_palavras(palavras1, palavras2):
palavras1 = [p.lower() for p in palavras1]
palavras2 = [p.lower() for p in palavras2]
dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavras1))
ocorrencias_por_ano1 = numpy.zeros(2016-1976)
for i in range(0,2016-1976):
ocorrencias_por_ano1[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])
dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavras2))
ocorrencias_por_ano2 = numpy.zeros(2016-1976)
for i in range(0,2016-1976):
ocorrencias_por_ano2[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])
anos = range(1976,2016)
f = pylab.figure(figsize=(10,6))
p1 = pylab.bar(anos, ocorrencias_por_ano1)
p2 = pylab.bar(anos, ocorrencias_por_ano2,bottom=ocorrencias_por_ano1)
pylab.legend([palavras1[0], palavras2[0]])
pylab.xlabel('Ano')
pylab.ylabel('Ocorrencias totais')
grafico_palavras_vs_palavras(['CEE','Comunidade Económica Europeia'],['União Europeia'])
Explanation: The European Union was founded around '93 and the EEC was integrated into it (according to Wikipedia), so the plot makes sense.
Let's create a function that combines the two plots, to let us compare their evolution:
End of explanation
grafico_palavras_vs_palavras(['contos','escudo'],['euro.','euro ','euros'])
Explanation: Nice - one basically replaces the other.
End of explanation
histograma_palavra('Troika')
Explanation: Again, one replaces the other.
End of explanation
sessoes_1989 = sessoes[selecciona_ano(sessoes['data'],1989)]
sessoes_2011 = sessoes[selecciona_ano(sessoes['data'],2011)]
def divide_em_frases(texto):
return texto.replace('!','.').replace('?','.').split('.')
def acumula_lista_de_lista(l):
return [j for x in l for j in x ]
def selecciona_frases_com_palavra(sessoes, palavra):
frases_ = sessoes['sessao'].map(divide_em_frases)
frases = acumula_lista_de_lista(frases_)
return list(filter(lambda frase: frase.find(palavra) != -1, frases))
frases_com_troika1989 = selecciona_frases_com_palavra(sessoes_1989, 'troika')
print('Frases com troika em 1989: ' + str(len(frases_com_troika1989)))
frases_com_troika2011 = selecciona_frases_com_palavra(sessoes_2011, 'troika')
print('Frases com troika em 2011: ' + str(len(frases_com_troika2011)))
from IPython.display import Markdown, display
# print_markdown lets us write in bold or as a heading
def print_markdown(string):
display(Markdown(string))
def imprime_frases(lista_de_frases, palavra_negrito):
for i in range(len(lista_de_frases)):
string = lista_de_frases[i].replace(palavra_negrito,'**' + palavra_negrito + '**')
#print_markdown(str(i+1) + ':' + string)
print(str(i+1) + ':' + string)
# in Jupyter Notebooks 4.3.1 the output cannot be saved as markdown, it has to be plain text
# if you are running the notebook rather than reading it on GitHub, you can uncomment the previous line to see the formatted text
#print_markdown('1989:\n====')
print('1989:\n====')
imprime_frases(frases_com_troika1989[1:73:5],'troika')
#print_markdown('2011:\n====')
print('2011:\n====')
imprime_frases(frases_com_troika2011[1:20:2],'troika')
Explanation: OK, this looks like a mystery. The troika was talked about much more in 1989 than in 2011. Let's investigate by searching for and showing the sentences where the words appear.
We want to know what was said when 'Troika' was mentioned in parliament. Let's try to find and print the sentences containing the >70 occurrences of troika from 1989 and the 25 from 2011.
End of explanation
def conta_palavras(texto,palavras):
l = [texto.count(palavra) for palavra in palavras]
return sum(l)
def selecciona_ano(data,i):
return data.map(lambda d: d.year == i)
# computes the data for the 2 histograms and plots them in the same figure
def grafico_palavras_vs_palavras(palavras1, palavras2):
palavras1 = [p.lower() for p in palavras1]
palavras2 = [p.lower() for p in palavras2]
dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavras1))
ocorrencias_por_ano1 = numpy.zeros(2016-1976)
for i in range(0,2016-1976):
ocorrencias_por_ano1[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])
dados = sessoes['sessao'].map(lambda texto: conta_palavras(texto,palavras2))
ocorrencias_por_ano2 = numpy.zeros(2016-1976)
for i in range(0,2016-1976):
ocorrencias_por_ano2[i] = numpy.sum(dados[selecciona_ano(sessoes['data'],i+1976)])
anos = range(1976,2016)
f = pylab.figure(figsize=(10,6))
p1 = pylab.bar(anos, ocorrencias_por_ano1)
p2 = pylab.bar(anos, ocorrencias_por_ano2,bottom=ocorrencias_por_ano1)
pylab.legend([palavras1[0], palavras2[0]])
pylab.xlabel('Ano')
pylab.ylabel('Ocorrencias totais')
grafico_palavras_vs_palavras(['troica'],['troika'])
Explanation: As we can see in the last sentence, the truth is that in parliament the term 'Troica' is used more than 'Troika'! In the media, 'Troika' is used a lot.
And for anyone who does not know what perestroika was: https://pt.wikipedia.org/wiki/Perestroika
OK, now it makes sense:
End of explanation |
7,103 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 8 – Fit a Hubble Diagram
The SN Ia Science in short
The Type Ia Supernova event is the thermonuclear runaway of a white dwarf. This bright event is extremely stable and the maximum of luminosity of any explosion reaches roughly the same maximum magnitude M0. Two additional parameters
Step1: First Steps access the Data
Imports for the cosmological analysis (and also the convenient table module)
Step2: Read the data
Step26: The Chi2 Method
an introduction to the "property" and "setter" decorators
decorators are kind-of functions of function. Check e.g., http | Python Code:
import warnings
# No annoying warnings
warnings.filterwarnings('ignore')
# Because we always need that
# plot within the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as mpl
Explanation: Exercise 8 – Fit a Hubble Diagram
The SN Ia Science in short
The Type Ia Supernova event is the thermonuclear runaway of a white dwarf. This bright event is extremely stable, and the maximum luminosity of any explosion reaches roughly the same maximum magnitude M0. Two additional parameters, the speed of evolution of the explosion x1 (also called stretch) and the color of the event, enable us to further standardize the SN maximum of luminosity. Thanks to the stability of the absolute SN magnitude, the observed SN magnitude is a direct indication of the event's distance. Combined with the redshift measurement of the Supernova (the redshift traces the overall expansion of the Universe since the SN event), we can measure the history of the expansion of the Universe. The magnitude vs. redshift plot is called a Hubble Diagram, and the Hubble diagram of the SNe Ia enabled the discovery of the accelerated expansion of the Universe in 1999, and thereby of the existence of dark energy (Nobel Prize 2011).
Data you will use to measure the density of Dark Energy
The latest compilation of Type Ia Supernovae data has ~1000 SN Ia distance measurements. The data are registered in
notebooks/data/SNeIaCosmo/jla_lcparams.txt
The Model
In this example we will use the most straightforward analysis. The stretch and color corrected magnitude of the SNe Ia is:
$$
mb_{corr} = mb - (x1 \times \alpha - color \times \beta)
$$
The expected magnitude of the SNe Ia is:
$$
mb_{expected} = \mu \left(z, \Omega \right) + M_0
$$
where $\mu\left(z, \Omega \right)$ is the distance modulus (distance in log scale) for a given cosmology. To have access to $\mu$ use astropy.
In the flat Lambda CDM model, the only free cosmological parameter you can fit is Omega_m, the density of matter today, knowing that, in that case, the density of dark energy is 1-Omega_m
Astropy
The website with all the information is here http://www.astropy.org/
If you installed Python using Anaconda, you should have astropy. Otherwise sudo pip install astropy should work.
To get the cosmology for a Hubble constant of 70 (use that) and for a given density of matter (Omega_m, which will be one of your free parameters):
python
from astropy import cosmology
cosmo = cosmology.FlatLambdaCDM(70, Omega_m)
To get the distance modulus
python
from astropy import cosmology
mu = cosmology.FlatLambdaCDM(70, Omega_m).distmod(z).value
Your Job: find the density of Dark energy
Create a Chi2 fitting class that makes it possible to derive the density of dark energy. You should find Omega_m ~ 0.3, i.e. a universe composed of ~70% dark energy and ~30% matter.
Plot a Hubble diagram showing the corrected SN magnitude (mb_corr) as a function of the redshift (zcmb). Overplot the model.
Remark: alpha, beta and M0 have to be fitted together with the cosmology; we call these 'nuisance parameters'.
Remark 2: ignore errors on the redshift (z), but take into account errors on mb and on the x1 and color parameters. For this example ignore the covariance terms.
Correction
End of explanation
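Before looking at the full class below, here is a minimal sketch of the chi2 described above. It assumes the jla_lcparams columns used later (zcmb, mb, dmb, x1, dx1, color, dcolor) and a parameter vector p = [alpha, beta, Omega_m, M0]; it propagates the mb, x1 and color uncertainties and ignores covariances, as the exercise asks:
from astropy import cosmology
import numpy as np
def hubble_chi2(p, z, mb, dmb, x1, dx1, color, dcolor):
    alpha, beta, omega_m, M0 = p
    mb_corr = mb - (x1 * alpha + color * beta)            # standardized magnitude
    var = dmb**2 + (alpha * dx1)**2 + (beta * dcolor)**2  # propagated variance, no covariance terms
    mu = cosmology.FlatLambdaCDM(70, omega_m).distmod(z).value
    return np.sum((mb_corr - (mu + M0))**2 / var)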
from astropy import table, cosmology
Explanation: First Steps access the Data
Imports for the cosmological analysis (and also the convenient table module)
End of explanation
data = table.Table.read("data/SNeIaCosmo/jla_lcparams.txt", format="ascii")
data.colnames
Explanation: Read the data
End of explanation
# copy-paste of a class built before
class Chi2Fit( object ):
def __init__(self, jla_data=None):
init the class
Parameters:
-----------
jla_data: [astropy.table]
Astropy Table containing the Supernovae properties
(zcmb, mb, x1, color etc.)
Return
------
Void
if jla_data is not None:
self.set_data(jla_data)
# ------------------- #
# Setter #
# ------------------- #
def set_data(self, datatable):
Set the data of the class. This must be an astropy table
# use this tricks to forbid the user to overwrite self.data...
self._data = datatable
def setup(self, parameters):
Set the parameters of the class:
- alpha
- beta
- Om
- M0
self._parameters = parameters
# ------------------- #
# GETTER #
# ------------------- #
def get_mbcorr(self, parameters):
corrected sn magnitude with its associated variance
return self.mb - (self.x1*parameters[0] + self.color*parameters[1]),\
self.mb_err**2 + (self.x1_err*parameters[0])**2 + (self.color_err*parameters[1])**2  # propagate the x1 and color uncertainties
def get_mbexp(self, parameters, z=None):
corrected sn magnitude with its associated error
zused = z if z is not None else self.zcmb
return cosmology.FlatLambdaCDM(70, parameters[2]).distmod(zused ).value + parameters[3]
def fit(self,guess):
fit the model to the data
The methods uses scipy.optimize.minize to fit the model
to the data. The fit output is saved as self.fitout, the
best fit parameters being self.fitout["x"]
Parameters
----------
guess: [array]
initial guess for the minimizer. It's size must correspond
to the amount of free parameters of the model.
Return
------
Void (create self.fitout)
from scipy.optimize import minimize
bounds = [[None,None],[None,None],[0,None],[None,None]]
self.fitout = minimize(self.chi2, guess,bounds=bounds)
print self.fitout
self._bestfitparameters = self.fitout["x"]
def chi2(self,parameters):
The chi2 of the model with the given `parameters`
in comparison to the object's data
Return
------
float (the chi2)
mbcorr, mbcorr_var = self.get_mbcorr(parameters)
mb_exp = self.get_mbexp(parameters)
chi2 = ((mbcorr-mb_exp)**2)/(mbcorr_var)
return np.sum(chi2)
def plot(self, parameters):
Vizualize the data and the model for the given
parameters
Return
------
Void
fig = mpl.figure()
ax = fig.add_subplot(1,1,1)
self.setup(parameters)
ax.errorbar(self.zcmb,self.setted_mbcorr,
yerr=self.setted_mbcorr_err, ls="None",marker='o', color="b",
ecolor="0.7")
x = np.linspace(0.001,1.4,10000)
#print self.get_cosmo_distmod(parameters,x)
ax.plot(x,self.get_mbexp(self._parameters,x),'-r', scalex=False,scaley=False)
fig.show()
# ================== #
# Properties #
# ================== #
@property
def data(self):
Data table containing the data of the instance
return self._data
@data.setter
def data(self, newdata):
Set the data
# add all the relevant tests
print "You setted new data"
self._data = newdata
@property
def npoints(self):
number of data points
return len(self.data)
# ----------
# - Parameters
@property
def parameters(self):
Current parameters of the fit
if not self.has_parameters():
raise ValueError("No Parameters defined. See the self.setup() method")
return self._parameters
def has_parameters(self):
return "_parameters" in dir(self)
# -- Current Param prop
@property
def _alpha(self):
return self._parameters[0]
@property
def _beta(self):
return self._parameters[1]
@property
def _omegam(self):
return self._parameters[2]
@property
def _M0(self):
return self._parameters[3]
# -------
# -- Param derived properties
@property
def setted_mbcorr(self):
corrected hubble residuals
return self.get_mbcorr(self.parameters)[0]
@property
def setted_mbcorr_err(self):
corrected hubble residuals
return np.sqrt(self.get_mbcorr(self.parameters)[1])
@property
def setted_mu(self):
distance modulus for the given cosmology
return cosmology.FlatLambdaCDM(70, _omegam).distmod(self.zcmb).value
@property
def setted_M0(self):
absolute SN magnitude for the setted parameters
return self._M0
@property
def setted_mbexp(self):
return self.setted_mu + self._M0
# -------
# -- Data derived properties
@property
def mb(self):
observed magnitude (in the b-band) of the Supernovae
return self.data["mb"]
@property
def mb_err(self):
observed magnitude (in the b-band) of the Supernovae
return self.data["dmb"]
@property
def x1(self):
Lightcurve stretch
return self.data["x1"]
@property
def x1_err(self):
errors on the Lightcurve stretch
return self.data["dx1"]
@property
def color(self):
Lightcurve color
return self.data["color"]
@property
def color_err(self):
errors on the Lightcurve color
return self.data["dcolor"]
@property
def zcmb(self):
cosmological redshift of the Supenovae
return self.data["zcmb"]
c = Chi2Fit(data)
c.setup([ -0.18980149, 3.65435315, 0.32575054, -19.06810566])
c.setted_mbcorr
c.fit([0.13,3,0.2,-20])
c.plot(c._bestfitparameters)
c.setted_mbcorr
Explanation: The Chi2 Method
an introduction to the "property" and "setter" decorators
decorators are, roughly, functions that take a function (or method) and return a modified version of it. Check e.g., http://thecodeship.com/patterns/guide-to-python-function-decorators/
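A tiny, self-contained illustration of the pattern (a hypothetical Circle class, not part of the exercise):
class Circle(object):
    def __init__(self, radius):
        self._radius = radius
    @property
    def radius(self):
        # read access: circle.radius
        return self._radius
    @radius.setter
    def radius(self, value):
        # write access with validation: circle.radius = 2
        if value < 0:
            raise ValueError("radius must be positive")
        self._radius = value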
End of explanation |
7,104 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The network directory in this share (which is still uploading, btw) contains a pickle (data.pkl) and the code used to generate it (network.py). The lfdr_pcor object in the pickle has the partial correlations after pruning, but pcor has all of them (the full network). network.txt has a text version of the network (after pruning) that can be sucked into Cytoscape.
The network data was calculated from mapping to genome bins
Step1: Make a 10k row version of the file for development.
Step2: I am only using 3.7% of Waffle's memory at the beginning
Step3: Use summary_counts, not summary_rpkm for gene names.
jmatsen@waffle | Python Code:
! ls -lh ../waffle_network_dir/*.tsv
! wc -l ../waffle_network_dir/network.py.tsv
! head -n 5 ../waffle_network_dir/network.py.tsv | csvlook -t
! ls -lh ../waffle_network_dir/network.py.tsv
Explanation: The network directory in this share (which is still uploading, btw) contains a pickle (data.pkl) and the code used to generate it (network.py). The lfdr_pcor object in the pickle has the partial correlations after pruning, but pcor has all of them (the full network). network.txt has a text version of the network (after pruning) that can be sucked into Cytoscape.
The network data was calculated from mapping to genome bins:
Full graph:
End of explanation
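For reference, a hedged sketch of how the data.pkl mentioned in the note could be inspected; the exact structure of that pickle is not shown in this notebook, so only a generic load-and-look is assumed here:
import pickle
with open('../waffle_network_dir/data.pkl', 'rb') as f:  # path assumed from the note above
    pkl_data = pickle.load(f)
print(type(pkl_data))
print(dir(pkl_data))  # look for the lfdr_pcor (pruned) and pcor (full) objects described in the note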
network = pd.read_csv('../waffle_network_dir/network.py.tsv', skiprows=1,
#skipfooter = 49995001 - 1*10**4,
#skipfooter = 1000, # can't have skipfooter with dtype. :(
sep='\t', names = ['source', 'target', 'pcor'],
dtype = {'source':str, 'target':str, 'pcor':float})
network.shape
network.head()
def label_associations(row):
if row['pcor'] > 0:
val = 'positive'
elif row['pcor'] < 0:
val = 'negative'
elif row['pcor'] == 0:
val = 'drop me'
return val
network['association'] = network.apply(label_associations, axis=1)
network['association'].unique()
print("shape before dropping rows with pcor == 0: {}".format(network.shape))
network = network[network['association'] != 'drop me']
print("shape after dropping rows with pcor == 0: {}".format(network.shape))
network.head(3)
Explanation: Make a 10k row version of the file for development.
End of explanation
! top -o %MEM | head
network['target_organism'] = network['target'].str.extract('([A-z0-9]+)_[0-9]+')
network['target_gene'] = network['target'].str.extract('[A-z0-9]+_([0-9]+)')
network['source_organism'] = network['source'].str.extract('([A-z0-9]+)_[0-9]+')
network['source_gene'] = network['source'].str.extract('[A-z0-9]+_([0-9]+)')
network.head()
network = network.rename(columns=lambda x: re.sub('source$', 'source_locus_tag', x))
network = network.rename(columns=lambda x: re.sub('target$', 'target_locus_tag', x))
network.head(2)
network['target_organism'].unique()
len(network['target_organism'].unique())
network['cross_species'] = network['source_organism'] != network['target_organism']
network.cross_species.describe()
network.cross_species.plot.hist()
network.pcor.plot.hist()
fig, ax = plt.subplots(1, 1, figsize=(4, 3))
plt.hist(network.pcor)
plt.yscale('log', nonposy='clip')
plt.xlabel('partial correlation value')
plt.ylabel('# edges')
plt.tight_layout()
plt.savefig('161209_hist_of_pcor_values.pdf')
plt.savefig('161209_hist_of_pcor_values.png', dpi=600)
fig, ax = plt.subplots(1, 1, figsize=(5, 2.5))
plt.hist(network.pcor, 50)
plt.yscale('log', nonposy='clip')
plt.xlabel('partial correlation value')
plt.ylabel('# edges')
plt.tight_layout()
plt.savefig('161209_hist_of_pcor_values_50_bins.pdf')
plt.savefig('161209_hist_of_pcor_values_50_bins.png', dpi=600)
locus_to_organism = pd.read_csv('/dacb/meta4_bins/data/genome_bins.locus_to_organism.tsv', sep='\t',
names=['locus', 'organism'])
locus_to_organism.head()
# Found a problem:
# Expected exactly 2 organsm names, but we have 3
# {'Methylobacter-123 (UID203) ', 'Methylobacter-123 (UID203)', 'Methylotenera mobilis-49 (UID203)'}
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html
# strips both left and right whitespace :)
locus_to_organism['organism'] = locus_to_organism['organism'].str.strip()
locus_to_organism['organism ID'] = locus_to_organism['locus'].str.extract('([A-z]+[0-9]+)_[0-9]+')
source_organism_names = locus_to_organism[['organism ID', 'organism']].drop_duplicates()
target_organism_names = locus_to_organism[['organism ID', 'organism']].drop_duplicates()
source_organism_names = source_organism_names.rename(
columns={'organism ID':'source_organism', 'organism':'source_organism_name'})
target_organism_names = target_organism_names.rename(
columns={'organism ID':'target_organism', 'organism':'target_organism_name'})
source_organism_names
merged = pd.merge(network, source_organism_names)
len(merged.source_organism_name.unique())
merged.head(2)
merged = pd.merge(merged, target_organism_names)
print(merged.shape)
print(network.shape)
merged.head()
len(merged.target_organism_name.unique())
print(merged.shape)
print(network.shape)
merged.tail(3)
Explanation: I am only using 3.7% of Waffle's memory at the beginning :)
End of explanation
genes = pd.read_csv('/dacb/meta4_bins/analysis/assemble_summaries/summary_counts.xls',
sep = '\t', usecols=[1, 2])
genes.tail(3)
genes.tail()
genes[genes['locus_tag'] == 'Ga0081607_11219']
merged.head(2)
source_genes = genes[['locus_tag', 'product']].rename(
columns={'locus_tag':'source_locus_tag', 'product':'source_gene_product'})
target_genes = genes[['locus_tag', 'product']].rename(
columns={'locus_tag':'target_locus_tag', 'product':'target_gene_product'})
source_genes.head(2)
network.shape
merged.shape
merged = pd.merge(merged, source_genes)
merged.shape
merged = pd.merge(merged, target_genes)
merged.shape
merged.head(2)
merged.head(3)
merged['sort'] = merged.pcor.abs()
merged = merged.sort(columns='sort', ascending=False).drop('sort', axis=1)
merged['pcor'].describe()
merged.head(2)
filename = '50M_network'
! ls ../data
dirname = '../data/50M_network/'
if not os.path.exists(dirname):
print("make dir {}".format(dirname))
os.mkdir(dirname)
else:
print("dir {} already exists.".format(dirname))
path = dirname + filename + '.tsv'
print('save to : {}'.format(path))
merged.to_csv(path, sep='\t', index=False)
# The CSV isn't a good idea because of the gene names.
#merged.to_csv(dirname + filename + '.csv')
merged.head(100).to_csv(dirname + filename + '--100' + '.tsv', sep='\t', index=False)
os.listdir(dirname)
merged.shape
Explanation: Use summary_counts, not summary_rpkm for gene names.
jmatsen@waffle:/dacb/meta4_bins/analysis/assemble_summaries$ ag Ga0081607_11219 summary_rpkm.xls | head -n 10
jmatsen@waffle:/dacb/meta4_bins/analysis/assemble_summaries$ ag Ga0081607_11219 summary_counts.xls | head -n 10
2015:Methylobacter-123 (UID203) Ga0081607_11219 hypothetical protein 243652 6660 160 285587 448 89 94 4893 13
66994 47733 163 301 3 146 1851 26 53125 249288 21 14249 28 12 42296 23538 2778 1918 2061 217 173983 164307 398 450 1170 10410 30 344 2224 2164 1452 810 338 656 70 222 3475 1143 2672 1313 1246 930 54 23 9942 9603 2381 8196 29 49 23721 7808 33195 17291 5825 6609 36 83 40661 28629 17949 12227 15478 15054 125 1010 10214 66875 40225 944 11993 9572 56 9375
End of explanation |
7,105 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example Usage
Step1: pandas-learn has an identical module structure to scikit-learn, so you already know where to find all the models you already use
Step2: You can use pandas to manipulate your data with ease
Step3: (Just in case you are wondering, Mrs. Nasser was apparently 14, so was in fact a child despite being married!).
Step4: pandas-learn modules inherit directly from scikit-learn models. They have basically the same interface
Step5: When you fit to pandas data, it saves the feature and target names automatically | Python Code:
import pandas as pd
%matplotlib inline
Explanation: Example Usage: Titanic Dataset
An example of training a model on the titanic dataset.
The name of the package is pandas-learn, which mixes pandas into scikit-learn. Therefore, you should always use pandas to handle your data if you are using the package(!):
End of explanation
from pdlearn.ensemble import RandomForestClassifier
Explanation: pandas-learn has an identical module structure to scikit-learn, so you already know where to find all the models you already use:
End of explanation
data = pd.read_csv('titanic-train.csv') \
.append(pd.read_csv('titanic-test.csv')) \
.set_index('name')
data['sex'] = data.sex == 'male'
data['child'] = data.age.fillna(20) < 15
X = data[['sex', 'p_class', 'child']].astype(int)
y = data['survived']
train = y.notnull()
X.head(10)
Explanation: You can use pandas to manipulate your data with ease:
End of explanation
y.head(10)
Explanation: (Just in case you are wondering, Mrs. Nasser was apparently 14, so was in fact a child despite being married!).
End of explanation
rf = RandomForestClassifier(n_estimators=500, criterion='gini')
Explanation: pandas-learn modules inherit directly from scikit-learn models. They have basically the same interface:
End of explanation
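For comparison, a hedged sketch of the same model with plain scikit-learn, which uses an identical constructor; the pandas-learn wrapper demonstrated in this notebook differs mainly in returning pandas objects and remembering column names:
from sklearn.ensemble import RandomForestClassifier as SkRandomForest
sk_rf = SkRandomForest(n_estimators=500, criterion='gini')
sk_rf.fit(X[train].values, y[train].astype(int).values)  # plain numpy arrays in, numpy arrays out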
rf.fit(X[train], y[train]);
print('Feature names: ', rf.feature_names_)
print('Target names: ', rf.target_names_)
rf.predict(X[~train]).head(10)
rf.predict_proba(X[~train]).head(10)
rf.feature_importances_.plot.bar()
Explanation: When you fit to pandas data, it saves the feature and target names automatically:
End of explanation |
7,106 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Weight clustering in Keras example
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Train a tf.keras model for MNIST without clustering
Step3: Evaluate the baseline model and save it for later use
Step4: Fine-tune the pre-trained model with clustering
Apply the cluster_weights() API to the whole pre-trained model and demonstrate how much it shrinks the model after zipping while keeping adequate accuracy. See the per-layer examples in the comprehensive guide for how best to balance accuracy and compression rate for your use case.
Define the model and apply the clustering API
Before you pass the model to the clustering API, make sure it is trained and shows acceptable accuracy.
Step5: Fine-tune the model and evaluate the accuracy against the baseline
Fine-tune the model with clustering for 1 epoch.
Step6: For this example, there is minimal loss in test accuracy after clustering, compared to the baseline.
Step7: Create 6x smaller models from clustering
Both <code>strip_clustering</code> and applying a standard compression algorithm (e.g. gzip) are necessary to see the compression benefits of clustering.
First, create a compressible model for TensorFlow. Here, strip_clustering removes all the variables (e.g. tf.Variable for storing the cluster centroids and indices) that clustering only needs during training, which would otherwise add to the model size during inference.
Step8: Then, create a compressible model for TFLite. You can convert the clustered model to a format that is runnable on your targeted backend. TensorFlow Lite is an example you can use to deploy to mobile devices.
Step9: Define a helper function to actually compress the models via gzip and measure the zipped size.
Step10: Compare and see that the models are 6x smaller from clustering.
Step11: Create an 8x smaller TFLite model from combining weight clustering and post-training quantization
You can apply post-training quantization to the clustered model for additional benefits.
Step12: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
Step13: Evaluate the clustered and quantized model, and then see that the accuracy from TensorFlow persists to the TFLite backend. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tensorflow as tf
from tensorflow import keras
import numpy as np
import tempfile
import zipfile
import os
Explanation: Keras での重みクラスタリングの例
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/clustering/clustering_example"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.orgで表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/model_optimization/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colabで実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/model_optimization/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/model_optimization/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
概要
TensorFlow Model Optimization ツールキットの一部である重みクラスタリングのエンドツーエンドの例へようこそ。
その他のページ
重みクラスタリングの紹介、およびクラスタリングを使用すべきかどうかの判定(サポート情報も含む)については、概要ページをご覧ください。
ユースケースに合った API を素早く特定するには(16 個のクラスタでモデルを完全クラスタ化するケースを超える内容)、総合ガイドをご覧ください。
内容
チュートリアルでは、次について説明しています。
MNIST データセットの tf.keras モデルを最初からトレーニングする
重みクラスタリング API を適用してモデルを微調整し、精度を確認する
クラスタリングによって 6 倍小さな TF および TFLite モデルを作成する
重みクラスタリングとポストトレーニング量子化を組み合わせて、8 倍小さな TFLite モデルを作成する
TF から TFLite への精度の永続性を確認する
セットアップ
この Jupyter ノートブックは、ローカルの virtualenv または Colab で実行できます。依存関係のセットアップに関する詳細は、インストールガイドをご覧ください。
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
validation_split=0.1,
epochs=10
)
Explanation: クラスタを使用せずに、MNIST の tf.keras モデルをトレーニングする
End of explanation
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
Explanation: ベースラインモデルを評価して後で使用できるように保存する
End of explanation
import tensorflow_model_optimization as tfmot
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
clustering_params = {
'number_of_clusters': 16,
'cluster_centroids_init': CentroidInitialization.LINEAR
}
# Cluster a whole model
clustered_model = cluster_weights(model, **clustering_params)
# Use smaller learning rate for fine-tuning clustered model
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
clustered_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=['accuracy'])
clustered_model.summary()
Explanation: クラスタを使ってトレーニング済みのモデルを微調整する
cluster_weights() API をトレーニング済みのモデル全体に適用し、十分な精度を維持しながら zip 適用後のモデル縮小の効果を実演します。ユースケースに応じた精度と圧縮率のバランスについては、総合ガイドのレイヤー別の例をご覧ください。
モデルを定義してクラスタリング API を適用する
クラスタリング API にモデルを渡す前に、必ずトレーニングを実行し、許容できる精度が備わっていることを確認してください。
End of explanation
# Fine-tune model
clustered_model.fit(
train_images,
train_labels,
batch_size=500,
epochs=1,
validation_split=0.1)
Explanation: モデルを微調整し、ベースラインに対する精度を評価する
1 エポック、クラスタでモデルを微調整します。
End of explanation
_, clustered_model_accuracy = clustered_model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Clustered test accuracy:', clustered_model_accuracy)
Explanation: この例では、ベースラインと比較し、クラスタリング後のテスト精度に最小限の損失があります。
End of explanation
final_model = tfmot.clustering.keras.strip_clustering(clustered_model)
_, clustered_keras_file = tempfile.mkstemp('.h5')
print('Saving clustered model to: ', clustered_keras_file)
tf.keras.models.save_model(final_model, clustered_keras_file,
include_optimizer=False)
Explanation: クラスタリングによって 6 倍小さなモデルを作成する
<code>strip_clustering</code> と標準圧縮アルゴリズム(gzip など)の適用は、クラスタリングの圧縮のメリットを確認する上で必要です。
まず、TensorFlow の圧縮可能なモデルを作成します。ここで、strip_clustering は、クラスタリングがトレーニング中にのみ必要とするすべての変数(クラスタの重心とインデックスを格納する tf.Variable など)を除去します。そうしない場合、推論中にモデルサイズが増加してしまいます。
End of explanation
clustered_tflite_file = '/tmp/clustered_mnist.tflite'
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
tflite_clustered_model = converter.convert()
with open(clustered_tflite_file, 'wb') as f:
f.write(tflite_clustered_model)
print('Saved clustered TFLite model to:', clustered_tflite_file)
Explanation: 次に、TFLite の圧縮可能なモデルを作成します。クラスタモデルをターゲットバックエンドで実行可能な形式に変換できます。TensorFlow Lite は、モバイルデバイスにデプロイするために使用できる例です。
End of explanation
def get_gzipped_model_size(file):
# It returns the size of the gzipped model in bytes.
import os
import zipfile
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)
Explanation: 実際に gzip でモデルを圧縮し、zip 圧縮されたサイズを測定するヘルパー関数を定義します。
End of explanation
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered Keras model: %.2f bytes" % (get_gzipped_model_size(clustered_keras_file)))
print("Size of gzipped clustered TFlite model: %.2f bytes" % (get_gzipped_model_size(clustered_tflite_file)))
Explanation: 比較して、モデルがクラスタリングによって 6 倍小さくなっていることを確認します。
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
_, quantized_and_clustered_tflite_file = tempfile.mkstemp('.tflite')
with open(quantized_and_clustered_tflite_file, 'wb') as f:
f.write(tflite_quant_model)
print('Saved quantized and clustered TFLite model to:', quantized_and_clustered_tflite_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_clustered_tflite_file)))
Explanation: 重みクラスタリングとポストトレーニング量子化を組み合わせて、8 倍小さな TFLite モデルを作成する
さらにメリットを得るために、ポストトレーニング量子化をクラスタモデルに適用できます。
End of explanation
def eval_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print('Evaluated on {n} results so far.'.format(n=i))
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
Explanation: TF から TFLite への精度の永続性を確認する
テストデータセットで TFLite モデルを評価するヘルパー関数を定義します。
End of explanation
interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)
interpreter.allocate_tensors()
test_accuracy = eval_model(interpreter)
print('Clustered and quantized TFLite test_accuracy:', test_accuracy)
print('Clustered TF test accuracy:', clustered_model_accuracy)
Explanation: クラスタ化および量子化されたモデルを評価し、TensorFlow の精度が TFLite バックエンドに持続することを確認します。
End of explanation |
7,107 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gravity Brightening/Darkening (gravb_bol)
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Relevant Parameters
The 'gravb_bol' parameter corresponds to the β coefficient for gravity darkening corrections.
Step3: If you have a logger enabled, PHOEBE will print a warning if the value of gravb_bol is outside the "suggested" ranges. Note that this is strictly a warning, and will never turn into an error at b.run_compute().
You can also manually call b.run_checks(). The first returned item tells whether the system has passed checks | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Gravity Brightening/Darkening (gravb_bol)
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
print(b['gravb_bol'])
print(b['gravb_bol@primary'])
Explanation: Relevant Parameters
The 'gravb_bol' parameter corresponds to the β coefficient for gravity darkening corrections.
End of explanation
print(b.run_checks())
b['teff@primary'] = 8500
b['gravb_bol@primary'] = 0.8
print(b.run_checks())
b['teff@primary'] = 7000
b['gravb_bol@primary'] = 0.2
print(b.run_checks())
b['teff@primary'] = 6000
b['gravb_bol@primary'] = 1.0
print(b.run_checks())
Explanation: If you have a logger enabled, PHOEBE will print a warning if the value of gravb_bol is outside the "suggested" ranges. Note that this is strictly a warning, and will never turn into an error at b.run_compute().
You can also manually call b.run_checks(). The first returned item tells whether the system has passed checks: True means it has, False means it has failed, and None means the tests pass but with a warning. The second argument tells the first warning/error message raised by the checks.
The checks use the following "suggested" values:
* teff 8000+: gravb_bol >= 0.9 (suggest 1.0)
* teff 6600-8000: gravb_bol 0.32-1.0
* teff 6600-: grav_bol < 0.9 (suggest 0.32)
End of explanation |
7,108 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Object-Oriented Programming
We've talked about how everyting in Python is an object. In addition, we've come to use many objects. However, we have not created any objects. In this lecture, we will discuss object-oriented programming, and what we can achieve with it. Here is what we will cover
Step1: If you do not already know, the word "instantiation" means to create a version of an object. Here is how we would instantiate a bike.
Step2: Now, my_bike is an object reference to "Bike". This means that the variable doesn't actually hold the object in memory, but simply points to it.
Attribute
To give objects attributes (i.e. a bike's wheel size, speed, weight), you need to create the
__init__(attributes...)
function. This function is called whenever an object is instantiated. It's what assigns the values you want on an object. Let's see its use below.
Step3: What just happened? We created the init method in Bike, and provided it four parameters
Step4: The instantiation checks out. Here's what happened
Step5: Methods
We've discussed functions numerous amounts of times, and methods are essentially the same thing but written inside of functions. We can use attribute values now, instead.
Step6: Special Methods
There are a few methods in Python that you can alter in your class referred to as "special methods". These methods allow you to control certain behaviors that happen behind the scenes in your program. Things like printing your object, or how it reacts to operators, etc can be controlled via these methods. Let's look at a few. | Python Code:
# Creating a class called Bike
class Bike:
pass
Explanation: Object-Oriented Programming
We've talked about how everyting in Python is an object. In addition, we've come to use many objects. However, we have not created any objects. In this lecture, we will discuss object-oriented programming, and what we can achieve with it. Here is what we will cover:
1. Classes
> Attributes
2. Methods
3. Inheritance
Let's take a look at the building block of creating an object - a class!
Class
The fundamental building block of an object, is the class. A class defines all of the specifications of an object, from its attributes, methods, and more. The declaration of a class begins with the "class" keyword.
class Name(superclass):
// etc
We'll talk later about what superclass means. However, this is the opening statement for a class. Here's another example:
End of explanation
# An 'instance' of a bike
my_bike = Bike()
type(my_bike)
Explanation: If you do not already know, the word "instantiation" means to create a version of an object. Here is how we would instantiate a bike.
End of explanation
class Bike:
def __init__(self, speed, wheel, weight):
self.speed = speed
self.wheel = wheel
self.weight = weight
Explanation: Now, my_bike is an object reference to "Bike". This means that the variable doesn't actually hold the object in memory, but simply points to it.
Attribute
To give objects attributes (i.e. a bike's wheel size, speed, weight), you need to create the
__init__(attributes...)
function. This function is called whenever an object is instantiated. It's what assigns the values you want on an object. Let's see its use below.
End of explanation
# Instantiating a Bike Object
woo = Bike(2, 4, 5)
Explanation: What just happened? We created the init method in Bike, and provided it four parameters: self, speed, wheel, and weight. In the body of the method, we assigned self.attr to the attribute. First, let's discuss the self. The word self is actually not a keyword, but a type of requirement from Python. You see, all Python methods must have a reference to the object itself. You can use any name you like for that reference, but everyone uses the word "self" because it is simply convention.
The attributes in this class are speed, wheel, and weight. In the method body, we set the referenced object's attribute value to... well... itself (or, in other words, whatever was sent in). Let's try an instantiation below.
End of explanation
woo.speed
woo.wheel
woo.weight
Explanation: The instantiation checks out. Here's what happened:
self.speed = 2
self.wheel = 4
self.weight = 5
This allows us to use dot notation to access the properties. How do we get the wheel size of the bike? We use the following notation:
object.attr
It's that simple.
End of explanation
class Bike:
# __init__() function
def __init__(self, speed, wheel, weight):
self.speed = speed
self.wheel = wheel
self.weight = weight
# A method calculates the max weight of a person on the bike
def max_weight(self, rider_weight):
max_weight = rider_weight * self.weight
return max_weight
# Another method
def some_method(self):
pass
woo = Bike(2, 4, 5)
woo.max_weight(30)
Explanation: Methods
We've discussed functions numerous amounts of times, and methods are essentially the same thing but written inside of functions. We can use attribute values now, instead.
End of explanation
class Bike():
def __init__(self, speed, wheel, weight):
self.speed = speed
self.wheel = wheel
self.weight = weight
def __str__(self):
return "Bike Speed: {} Wheel Size: {} Weight: {}".format(self.speed, self.wheel, self.weight)
woo = Bike(3, 4, 5)
print(woo)
Explanation: Special Methods
There are a few methods in Python that you can alter in your class referred to as "special methods". These methods allow you to control certain behaviors that happen behind the scenes in your program. Things like printing your object, or how it reacts to operators, etc can be controlled via these methods. Let's look at a few.
End of explanation |
7,109 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Como usar com o Pandas
Os catálogos de dados abertos podem ser consultados facilmente com a ferramenta
Pandas, com ou sem Jupyter Notebook.
Esse tutorial inspirado na
demonstração
do Open Knowledge Labs.
Pacotes necessários
Frictionless Data
Além do Pandas, será necessário instalar alguns pacotes para trabalhar com Frictionless Data. Para isso,
execute
Step1: Lendo o pacote de dados
É possível ler o pacote de dados diretamente a partir da URL
Step2: Um pacote de dados pode conter uma quantidade de recursos. Pense em um
recurso como uma tabela em um banco de dados. Cada um é um arquivo CSV.
No contexto do armazenamento dos dados, esses recursos também são chamados de buckets (numa tradução livre, "baldes").
Step3: Que são também Dataframes do Pandas
Step4: Por isso, funcionam todas as operações que podem ser feitas com um DataFrames
do Pandas
Step5: Como, por exemplo, mostrar o início da tabela.
Step6: Ou ver quantos portais existem por tipo de solução, ou por UF, ou por poder,
etc.
Por tipo de solução
Step7: Por poder da república
Step8: Por esfera
Step9: Por unidade federativa | Python Code:
import pandas as pd
# Para trabalhar com Frictionless Data – frictionlessdata.io
from tableschema import Storage
from datapackage import Package
# Para visualização
import plotly_express as px
import plotly as py, plotly.graph_objects as go
Explanation: Como usar com o Pandas
Os catálogos de dados abertos podem ser consultados facilmente com a ferramenta
Pandas, com ou sem Jupyter Notebook.
Esse tutorial inspirado na
demonstração
do Open Knowledge Labs.
Pacotes necessários
Frictionless Data
Além do Pandas, será necessário instalar alguns pacotes para trabalhar com Frictionless Data. Para isso,
execute:
pip install datapackage tableschema-pandas
Para maiores informações sobre como usar esses pacotes, veja o exemplo contido
no repositório do
tableschema-pandas.
Plotly e Plotly Express
Para visualizar os dados, utilizaremos Plotly e Plotly Express.
Entretanto, sinta-se à vontade para usar a biblioteca de visualização de sua
preferência.
Para instalar:
pip install plotly plotly_express
Versões
Para este tutorial, estamos usando a versão 1.10.0 do datapackage, a
versão 1.1.0 do tableschema-pandas. Quanto às bibliotecas de
visualização, usamos Plotly versão 4.1.0 e Plotly Express versão 0.4.1.
Para saber a sua versão, após ter instalado, use os comandos
pip freeze | grep datapackage
pip freeze | grep tableschema
pip freeze | grep plotly
Se desejar instalar essas versões exatas, é possível executar o comando
pip install -r requirements.txt
pois esse arquivo já contém as versões afixadas.
End of explanation
# Gravar no Pandas
url = 'https://github.com/dadosgovbr/catalogos-dados-brasil/raw/master/datapackage.json'
storage = Storage.connect('pandas')
package = Package(url)
package.save(storage=storage)
Explanation: Lendo o pacote de dados
É possível ler o pacote de dados diretamente a partir da URL:
End of explanation
storage.buckets
Explanation: Um pacote de dados pode conter uma quantidade de recursos. Pense em um
recurso como uma tabela em um banco de dados. Cada um é um arquivo CSV.
No contexto do armazenamento dos dados, esses recursos também são chamados de buckets (numa tradução livre, "baldes").
End of explanation
type(storage['catalogos'])
Explanation: Que são também Dataframes do Pandas:
End of explanation
storage['solucao']
Explanation: Por isso, funcionam todas as operações que podem ser feitas com um DataFrames
do Pandas:
End of explanation
storage['catalogos'].head()
Explanation: Como, por exemplo, mostrar o início da tabela.
End of explanation
tipo_solucao = storage['catalogos'].groupby('Solução').count()['URL'].rename('quantidade')
tipo_solucao
px.bar(
pd.DataFrame(tipo_solucao).reset_index(),
x = 'Solução',
y = 'quantidade',
color = 'Solução',
color_discrete_sequence = py.colors.qualitative.Set2
)
Explanation: Ou ver quantos portais existem por tipo de solução, ou por UF, ou por poder,
etc.
Por tipo de solução
End of explanation
poder = storage['catalogos'].groupby('Poder').count()['URL'].rename('quantidade')
poder
go.Figure(
data=go.Pie(
labels=poder.index,
values=poder.values,
hole=.4
)
).show()
Explanation: Por poder da república
End of explanation
esfera = storage['catalogos'].groupby('Esfera').count()['URL'].rename('quantidade')
esfera
go.Figure(
data=go.Pie(
labels=esfera.index,
values=esfera.values,
hole=.4
)
).show()
Explanation: Por esfera
End of explanation
uf = storage['catalogos'].groupby('UF').count()['URL'].rename('quantidade')
uf
px.bar(
pd.DataFrame(uf).reset_index(),
x = 'UF',
y = 'quantidade',
color = 'UF',
color_discrete_sequence = py.colors.qualitative.Set3
)
Explanation: Por unidade federativa
End of explanation |
7,110 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
build a neural network to predict the magnitude of an Earthquake given the date, time, Latitude, and Longitude as features. This is the dataset. Optimize at least 1 hyperparameter using Random Search. See this example for more information.
You can use any library you like, bonus points are given if you do this using only numpy.
Step1: We're using date, time, Latitude, and Longitude to predict the magnitude.
Step2: y is the target for the prediction
Step3: We need to convert the input data into something better suited for prediction. The date and time are strings which doesn't work at all, and latitude and longitude could be normalized.
But first, check to see if the input data has any missing values
Step4: There is a value in each of the rows, so moving ahead, first we change the date string into a pandas datetime
Step5: Now to normalize the data
Step6: now to start the prediction
First, splitting x into training and testing sets | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
df = pd.read_csv("data/earthquake-database.csv")
print(df.shape)
df.head()
Explanation: build a neural network to predict the magnitude of an Earthquake given the date, time, Latitude, and Longitude as features. This is the dataset. Optimize at least 1 hyperparameter using Random Search. See this example for more information.
You can use any library you like, bonus points are given if you do this using only numpy.
End of explanation
#prediction_cols = ["Date", "Time", "Latitude", "Longitude"]
# ignoring time for now
prediction_cols = ["Date", "Latitude", "Longitude"]
x = df[prediction_cols]
x.head()
Explanation: We're using date, time, Latitude, and Longitude to predict the magnitude.
End of explanation
y = df["Magnitude"]
y.head()
Explanation: y is the target for the prediction:
End of explanation
x.info()
Explanation: We need to convert the input data into something better suited for prediction. The date and time are strings which doesn't work at all, and latitude and longitude could be normalized.
But first, check to see if the input data has any missing values:
End of explanation
x.loc[:,'Date'] = x.loc[:,'Date'].apply(pd.to_datetime)
x.head()
x.info()
x['Date'].items()
Explanation: There is a value in each of the rows, so moving ahead, first we change the date string into a pandas datetime
End of explanation
# normalize the target y
y = (y - y.min()) / (y.max() - y.min())
Explanation: Now to normalize the data
End of explanation
x_train = x[:20000]
y_train = y[:20000]
y_test = x[20000:]
y_test = y[20000:]
len(x_train), len(y_train), len(y_test), len(y_test)
input_features = 3
output_features = 1
data_length = len(x_train)
weights = np.random.random([input_features, data_length])
weights.shape
# testing how to loop through the data
t = x[:10]
for a,b in t.iterrows():
print(b[0], '|', b[1], '|', b[2])
Explanation: now to start the prediction
First, splitting x into training and testing sets:
End of explanation |
7,111 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to the TensorFlow computation graph
When I started with deep learning, one of the concepts that took me quite a while to wrap my head around was the use of a computation graph within code. Furthermore, that the code in tensorflow is usally packed in objects does not really help grapsing this concept. That is I wanted to give a simple explanation of the TensorFlow computation graph, what it is and how to use it. The explanation focuses on the link between the intuition, the math and the code.
Structure of notebook
Intuition of a computation graph
Why is it useful for neural networks
Feeding values
Visualize them.
Todo
example
picture
graph
feed_dict
Resources
https
Step1: Intuition behind a computation graph
A computation graph is a visual way of representing calculations.
A graph consists of nodes and edges.
The nodes represent operations, the edges represent variables.
Example | Python Code:
import tensorflow as tf
assert tf.__version__=="1.2.0" # we want that version
Explanation: Introduction to the TensorFlow computation graph
When I started with deep learning, one of the concepts that took me quite a while to wrap my head around was the use of a computation graph within code. Furthermore, that the code in tensorflow is usally packed in objects does not really help grapsing this concept. That is I wanted to give a simple explanation of the TensorFlow computation graph, what it is and how to use it. The explanation focuses on the link between the intuition, the math and the code.
Structure of notebook
Intuition of a computation graph
Why is it useful for neural networks
Feeding values
Visualize them.
Todo
example
picture
graph
feed_dict
Resources
https://www.tensorflow.org/get_started/get_started
End of explanation
# define calculation graph
a = tf.Variable(1.0, dtype=tf.float32, name="a", trainable=True) #
b = tf.Variable(2.0, dtype=tf.float32, name="b", trainable=False) # scalar
c = tf.placeholder(shape=(), dtype=tf.float32, name="c") # scalar
r_1 = tf.multiply(a, b, name="a_times_b")
r_2 = tf.add(r_1, c, name="r_1_plus_c")
# run operations on graph
session = tf.Session()
variable_assignment = {
a:1.0, # assign variable a of calulation graph to 1
b:2.0, # assign variable b of calulation graph to 2.0
}
r_2_result = session.run(
fetches=[# which operations of the calculation graph to fetch
r_2
],
feed_dict=variable_assignment # which variables to use for this assigmnent
)
print(r_2_result)
Explanation: Intuition behind a computation graph
A computation graph is a visual way of representing calculations.
A graph consists of nodes and edges.
The nodes represent operations, the edges represent variables.
Example:
Math calculation is: $c=a*b$
Three variables, $a,b,c$
One operation $*$
Visualization:
In tensorflow
Two steps:
* need to define the graph
* need to execute operations on the graph
Explain difference between
* Trainable variables
* Non-Trainable Variables
* Placeholders
* Constants
End of explanation |
7,112 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Masking and padding with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Introduction
Masking is a way to tell sequence-processing layers that certain timesteps
in an input are missing, and thus should be skipped when processing the data.
Padding is a special form of masking where the masked steps are at the start or
the end of a sequence. Padding comes from the need to encode sequence data into
contiguous batches
Step3: Masking
Now that all samples have a uniform length, the model must be informed that some part
of the data is actually padding and should be ignored. That mechanism is masking.
There are three ways to introduce input masks in Keras models
Step4: As you can see from the printed result, the mask is a 2D boolean tensor with shape
(batch_size, sequence_length), where each individual False entry indicates that
the corresponding timestep should be ignored during processing.
Mask propagation in the Functional API and Sequential API
When using the Functional API or the Sequential API, a mask generated by an Embedding
or Masking layer will be propagated through the network for any layer that is
capable of using them (for example, RNN layers). Keras will automatically fetch the
mask corresponding to an input and pass it to any layer that knows how to use it.
For instance, in the following Sequential model, the LSTM layer will automatically
receive a mask, which means it will ignore padded values
Step5: This is also the case for the following Functional API model
Step6: Passing mask tensors directly to layers
Layers that can handle masks (such as the LSTM layer) have a mask argument in their
__call__ method.
Meanwhile, layers that produce a mask (e.g. Embedding) expose a compute_mask(input,
previous_mask) method which you can call.
Thus, you can pass the output of the compute_mask() method of a mask-producing layer
to the __call__ method of a mask-consuming layer, like this
Step8: Supporting masking in your custom layers
Sometimes, you may need to write layers that generate a mask (like Embedding), or
layers that need to modify the current mask.
For instance, any layer that produces a tensor with a different time dimension than its
input, such as a Concatenate layer that concatenates on the time dimension, will
need to modify the current mask so that downstream layers will be able to properly
take masked timesteps into account.
To do this, your layer should implement the layer.compute_mask() method, which
produces a new mask given the input and the current mask.
Here is an example of a TemporalSplit layer that needs to modify the current mask.
Step9: Here is another example of a CustomEmbedding layer that is capable of generating a
mask from input values
Step10: Opting-in to mask propagation on compatible layers
Most layers don't modify the time dimension, so don't need to modify the current mask.
However, they may still want to be able to propagate the current mask, unchanged,
to the next layer. This is an opt-in behavior. By default, a custom layer will
destroy the current mask (since the framework has no way to tell whether propagating
the mask is safe to do).
If you have a custom layer that does not modify the time dimension, and if you want it
to be able to propagate the current input mask, you should set self.supports_masking
= True in the layer constructor. In this case, the default behavior of
compute_mask() is to just pass the current mask through.
Here's an example of a layer that is whitelisted for mask propagation
Step11: You can now use this custom layer in-between a mask-generating layer (like Embedding)
and a mask-consuming layer (like LSTM), and it will pass the mask along so that it
reaches the mask-consuming layer.
Step12: Writing layers that need mask information
Some layers are mask consumers | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: Masking and padding with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/keras/masking_and_padding"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/snapshot-keras/site/en/guide/keras/masking_and_padding.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/keras-team/keras-io/blob/master/guides/understanding_masking_and_padding.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/keras/masking_and_padding.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Setup
End of explanation
raw_inputs = [
[711, 632, 71],
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
# By default, this will pad using 0s; it is configurable via the
# "value" parameter.
# Note that you could "pre" padding (at the beginning) or
# "post" padding (at the end).
# We recommend using "post" padding when working with RNN layers
# (in order to be able to use the
# CuDNN implementation of the layers).
padded_inputs = tf.keras.preprocessing.sequence.pad_sequences(
raw_inputs, padding="post"
)
print(padded_inputs)
Explanation: Introduction
Masking is a way to tell sequence-processing layers that certain timesteps
in an input are missing, and thus should be skipped when processing the data.
Padding is a special form of masking where the masked steps are at the start or
the end of a sequence. Padding comes from the need to encode sequence data into
contiguous batches: in order to make all sequences in a batch fit a given standard
length, it is necessary to pad or truncate some sequences.
Let's take a close look.
Padding sequence data
When processing sequence data, it is very common for individual samples to have
different lengths. Consider the following example (text tokenized as words):
[
["Hello", "world", "!"],
["How", "are", "you", "doing", "today"],
["The", "weather", "will", "be", "nice", "tomorrow"],
]
After vocabulary lookup, the data might be vectorized as integers, e.g.:
[
[71, 1331, 4231]
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
The data is a nested list where individual samples have length 3, 5, and 6,
respectively. Since the input data for a deep learning model must be a single tensor
(of shape e.g. (batch_size, 6, vocab_size) in this case), samples that are shorter
than the longest item need to be padded with some placeholder value (alternatively,
one might also truncate long samples before padding short samples).
Keras provides a utility function to truncate and pad Python lists to a common length:
tf.keras.preprocessing.sequence.pad_sequences.
End of explanation
embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
masked_output = embedding(padded_inputs)
print(masked_output._keras_mask)
masking_layer = layers.Masking()
# Simulate the embedding lookup by expanding the 2D input to 3D,
# with embedding dimension of 10.
unmasked_embedding = tf.cast(
tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32
)
masked_embedding = masking_layer(unmasked_embedding)
print(masked_embedding._keras_mask)
Explanation: Masking
Now that all samples have a uniform length, the model must be informed that some part
of the data is actually padding and should be ignored. That mechanism is masking.
There are three ways to introduce input masks in Keras models:
Add a keras.layers.Masking layer.
Configure a keras.layers.Embedding layer with mask_zero=True.
Pass a mask argument manually when calling layers that support this argument (e.g.
RNN layers).
Mask-generating layers: Embedding and Masking
Under the hood, these layers will create a mask tensor (2D tensor with shape (batch,
sequence_length)), and attach it to the tensor output returned by the Masking or
Embedding layer.
End of explanation
model = keras.Sequential(
[layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True), layers.LSTM(32),]
)
Explanation: As you can see from the printed result, the mask is a 2D boolean tensor with shape
(batch_size, sequence_length), where each individual False entry indicates that
the corresponding timestep should be ignored during processing.
Mask propagation in the Functional API and Sequential API
When using the Functional API or the Sequential API, a mask generated by an Embedding
or Masking layer will be propagated through the network for any layer that is
capable of using them (for example, RNN layers). Keras will automatically fetch the
mask corresponding to an input and pass it to any layer that knows how to use it.
For instance, in the following Sequential model, the LSTM layer will automatically
receive a mask, which means it will ignore padded values:
End of explanation
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
outputs = layers.LSTM(32)(x)
model = keras.Model(inputs, outputs)
Explanation: This is also the case for the following Functional API model:
End of explanation
class MyLayer(layers.Layer):
def __init__(self, **kwargs):
super(MyLayer, self).__init__(**kwargs)
self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
self.lstm = layers.LSTM(32)
def call(self, inputs):
x = self.embedding(inputs)
# Note that you could also prepare a `mask` tensor manually.
# It only needs to be a boolean tensor
# with the right shape, i.e. (batch_size, timesteps).
mask = self.embedding.compute_mask(inputs)
output = self.lstm(x, mask=mask) # The layer will ignore the masked values
return output
layer = MyLayer()
x = np.random.random((32, 10)) * 100
x = x.astype("int32")
layer(x)
Explanation: Passing mask tensors directly to layers
Layers that can handle masks (such as the LSTM layer) have a mask argument in their
__call__ method.
Meanwhile, layers that produce a mask (e.g. Embedding) expose a compute_mask(input,
previous_mask) method which you can call.
Thus, you can pass the output of the compute_mask() method of a mask-producing layer
to the __call__ method of a mask-consuming layer, like this:
End of explanation
class TemporalSplit(keras.layers.Layer):
Split the input tensor into 2 tensors along the time dimension.
def call(self, inputs):
# Expect the input to be 3D and mask to be 2D, split the input tensor into 2
# subtensors along the time axis (axis 1).
return tf.split(inputs, 2, axis=1)
def compute_mask(self, inputs, mask=None):
# Also split the mask into 2 if it presents.
if mask is None:
return None
return tf.split(mask, 2, axis=1)
first_half, second_half = TemporalSplit()(masked_embedding)
print(first_half._keras_mask)
print(second_half._keras_mask)
Explanation: Supporting masking in your custom layers
Sometimes, you may need to write layers that generate a mask (like Embedding), or
layers that need to modify the current mask.
For instance, any layer that produces a tensor with a different time dimension than its
input, such as a Concatenate layer that concatenates on the time dimension, will
need to modify the current mask so that downstream layers will be able to properly
take masked timesteps into account.
To do this, your layer should implement the layer.compute_mask() method, which
produces a new mask given the input and the current mask.
Here is an example of a TemporalSplit layer that needs to modify the current mask.
End of explanation
class CustomEmbedding(keras.layers.Layer):
def __init__(self, input_dim, output_dim, mask_zero=False, **kwargs):
super(CustomEmbedding, self).__init__(**kwargs)
self.input_dim = input_dim
self.output_dim = output_dim
self.mask_zero = mask_zero
def build(self, input_shape):
self.embeddings = self.add_weight(
shape=(self.input_dim, self.output_dim),
initializer="random_normal",
dtype="float32",
)
def call(self, inputs):
return tf.nn.embedding_lookup(self.embeddings, inputs)
def compute_mask(self, inputs, mask=None):
if not self.mask_zero:
return None
return tf.not_equal(inputs, 0)
layer = CustomEmbedding(10, 32, mask_zero=True)
x = np.random.random((3, 10)) * 9
x = x.astype("int32")
y = layer(x)
mask = layer.compute_mask(x)
print(mask)
Explanation: Here is another example of a CustomEmbedding layer that is capable of generating a
mask from input values:
End of explanation
class MyActivation(keras.layers.Layer):
def __init__(self, **kwargs):
super(MyActivation, self).__init__(**kwargs)
# Signal that the layer is safe for mask propagation
self.supports_masking = True
def call(self, inputs):
return tf.nn.relu(inputs)
Explanation: Opting-in to mask propagation on compatible layers
Most layers don't modify the time dimension, so don't need to modify the current mask.
However, they may still want to be able to propagate the current mask, unchanged,
to the next layer. This is an opt-in behavior. By default, a custom layer will
destroy the current mask (since the framework has no way to tell whether propagating
the mask is safe to do).
If you have a custom layer that does not modify the time dimension, and if you want it
to be able to propagate the current input mask, you should set self.supports_masking
= True in the layer constructor. In this case, the default behavior of
compute_mask() is to just pass the current mask through.
Here's an example of a layer that is whitelisted for mask propagation:
End of explanation
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
x = MyActivation()(x) # Will pass the mask along
print("Mask found:", x._keras_mask)
outputs = layers.LSTM(32)(x) # Will receive the mask
model = keras.Model(inputs, outputs)
Explanation: You can now use this custom layer in-between a mask-generating layer (like Embedding)
and a mask-consuming layer (like LSTM), and it will pass the mask along so that it
reaches the mask-consuming layer.
End of explanation
class TemporalSoftmax(keras.layers.Layer):
def call(self, inputs, mask=None):
broadcast_float_mask = tf.expand_dims(tf.cast(mask, "float32"), -1)
inputs_exp = tf.exp(inputs) * broadcast_float_mask
inputs_sum = tf.reduce_sum(
inputs_exp * broadcast_float_mask, axis=-1, keepdims=True
)
return inputs_exp / inputs_sum
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=10, output_dim=32, mask_zero=True)(inputs)
x = layers.Dense(1)(x)
outputs = TemporalSoftmax()(x)
model = keras.Model(inputs, outputs)
y = model(np.random.randint(0, 10, size=(32, 100)), np.random.random((32, 100, 1)))
Explanation: Writing layers that need mask information
Some layers are mask consumers: they accept a mask argument in call and use it to
determine whether to skip certain time steps.
To write such a layer, you can simply add a mask=None argument in your call
signature. The mask associated with the inputs will be passed to your layer whenever
it is available.
Here's a simple example below: a layer that computes a softmax over the time dimension
(axis 1) of an input sequence, while discarding masked timesteps.
End of explanation |
7,113 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is the notebook for the python pandas dataframe course
The idea of this notebook is to show the power of working with pandas dataframes
Motivation
We usually work with tabular data
We should not handle them with bash commands like
Step1: Series definition
Series is a one-dimensional labeled array capable of holding any data type
The axis labels are collectively referred to as the index
This is the basic idea of how to create a Series dataframe
Step2: Create a Series array from a numpy array
If data is an ndarray, index must be the same length as data.
If no index is passed, one will be created having values [0, ..., len(data) - 1]
Not including index
Step3: Including index
Step4: From scalar value
If data is a scalar value, an index must be provided
The value will be repeated to match the length of index
Step5: Create a Series array from a python dictionary
Step6: DataFrame definition
DataFrame is a 2-dimensional labeled data structure with columns of potentially different types (see also Panel - 3-dimensional array).
You can think of it like a spreadsheet or SQL table, or a dict of Series objects.
It is generally the most commonly used pandas object.
Like Series, DataFrame accepts many different kinds of input
Step7: From dict of Series or dicts
Step8: From dict of ndarrays / lists
The ndarrays must all be the same length.
If an index is passed, it must clearly also be the same length as the arrays.
If no index is passed, the result will be range(n), where n is the array length.
Step9: From structured or record array
The ndarrays must all be the same length. If an index is passed, it must clearly also be the same length as the arrays.
If no index is passed, the result will be range(n), where n is the array length.
Step10: From a list of dicts
Step11: <a id=exercise1></a>
Exercise 1
Step12: <a id=io></a>
I/O
Reading from different sources into a DataFrame
Most of the times any study starts with an input file containing some data rather than having a python list or dictionary.
Here we present three different data sources and how to read them
Step13: CSV.BZ2 (less storage, slower when reading because of decompression)
Step14: Reading the full catalog at once (if the file is not very large)
Step15: DataFrame.describe
Step16: FITS file
Step17: FITS file created using the same query as the CSV file
Step19: - From Database
Step20: Write to csv file
Step21: Advanced example
Step22: Opening file with the with method
Creating a file object using read_csv method
Looping by chunks using enumerate in order to also have the chunk number
Step23: DataFrame plot method (just for curiosity!)
Step24: <a id=selecting></a>
SELECTING AND SLICING
The idea of this section is to show how to slice and get and set subsets of pandas objects
The basics of indexing are as follows
Step25: Select a column
Step26: Select a row by index
Step27: Select a row by integer location
Step28: Slice rows
Step29: Select rows by boolean vector
Step30: Recap
Step31: DO NOT LOOP THE PANDAS DATAFRAME IN GENERAL!
Step32: <a id=exercise2></a>
Exercise 2
Step33: <a id=merging></a>
Merge, join, and concatenate
https
Step34: Using append method
Step35: Note
Step36: This is also a valid argument to DataFrame.append
Step37: Mixing dimensions
Step38: <a id=exercise3></a>
Exercise 3
Step39: Merge method
Step40: We are going to unset the index and rename the columns in order to use the "on" argument
Step41: Now we have everything ready to make the JOINs
Step42: <a id=functions></a>
More functions
Looping a dataframe (iterrows)
Step43: FITS files
fitsio
And working by chunks
Step44: .values DataFrame attribute
Some scipy functions do not allow to use pandas dataframe as arguments and therefore it is useful to use the values atribute, which is the numpy representation of NDFrame
The dtype will be a lower-common-denominator dtype (implicit upcasting); that is to say if the dtypes (even of numeric types) are mixed, the one that accommodates all will be chosen. Use this with care if you are not dealing with the blocks.
View vs. Copy
https
Step45: Necessary to "modify" the file in order to convert it into a standard csv file, e.g. | Python Code:
# Import libraries
import pandas as pd
import numpy as np
Explanation: This is the notebook for the python pandas dataframe course
The idea of this notebook is to show the power of working with pandas dataframes
Motivation
We usually work with tabular data
We should not handle them with bash commands like: for, split, grep, awk, etc...
And pandas is a very nice tool to handle this kind of data.
Welcome to Pandas!
Definition of pandas:
Python package providing fast, flexible, and expressive data structures designed to make working with “relational” or “labeled” data both easy and intuitive.
It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python.
Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language.
More information about pandas: http://pandas.pydata.org/pandas-docs/stable/
Contents of the course:
Know your data:
Dimensionality: Series or DataFrame
Index
Some examples
Exercise 1: Selecting pandas structure
I/O:
Reading: CSV, FITS, SQL
Writing: CSV
Advanced example: Reading and writing CSV files by chunks
Selecting and slicing:
loc. & iloc.
Advanced example: Estimate a galaxy property for a subset of galaxies using boolean conditions
Exercise 2: Estimate another galaxy property
Merge, join, and concatenate:
Exercise 3: Generate a random catalog using the concat method
Example: Merging dataframes using the merge method
More functions:
Loop a dataframe (itertuples and iterrows)
Sort
Sample
Reshape: pivot, stack, unstack, etc.
Caveats and technicalities:
Floating point limitations
.values
FITS chunks
View or copy
Wrong input example
Some useful information
Ten minutes to pandas:
https://pandas.pydata.org/pandas-docs/stable/10min.html
Pandas cookbook:
https://pandas.pydata.org/pandas-docs/stable/cookbook.html
Nice pandas course:
https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python#gs.=B6Dr74
Multidimensional dataframes, xarray:
http://xarray.pydata.org/en/stable/
Tips & Tricks
https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/
<a id=know></a>
Know your data
Very important to (perfectly) know your data: structure, data type, index, relation, etc. (see Pau's talk for a much better explanation ;)
Dimensionality:
- 1-D: Series; e.g.
- Solar planets: [Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune]
- Set of astronomical objects and when they were observed:
[[NGC1952, 2012-05-01],
[NGC224, 2013-01-23],
[NGC5194, 2014-02-13]]
- 2-D: DataFrame; e.g. (more business oriented):
- 3 months of sales information for 3 fictitious companies:
sales = [{'account': 'Jones LLC', 'Jan': 150, 'Feb': 200, 'Mar': 140},
{'account': 'Alpha Co', 'Jan': 200, 'Feb': 210, 'Mar': 215},
{'account': 'Blue Inc', 'Jan': 50, 'Feb': 90, 'Mar': 95 }]
Index
It is the value (~key) we use as a reference for each element. (Note: It does not have to be unique)
Most datasets contain at least one index
End of explanation
solar_planets = ['Mercury','Venus','Earth','Mars','Jupiter','Saturn','Uranus','Neptune']
splanets = pd.Series(solar_planets)
# Tips and tricks
# To access the Docstring for quick reference on syntax use ? before:
#?pd.Series()
splanets
splanets.index
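# Illustrative sketch (an addition, not from the original notebook): the second
# 1-D example from "Know your data" -- astronomical objects and when they were
# observed -- also fits naturally in a Series, with the object names as index.
observations = pd.Series(['2012-05-01', '2013-01-23', '2014-02-13'],
                         index=['NGC1952', 'NGC224', 'NGC5194'],
                         name='observed')
observations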
Explanation: Series definition
Series is a one-dimensional labeled array capable of holding any data type
The axis labels are collectively referred to as the index
This is the basic idea of how to create a Series object:
s = pd.Series(data, index=index)
where data can be:
- list
- ndarray
- python dictionary
- scalar
and index is a list of axis labels
Create a Series array from a list
If no index is passed, one will be created having values [0, ..., len(data) - 1]
End of explanation
s1 = pd.Series(np.random.randn(5))
s1
s1.index
Explanation: Create a Series array from a numpy array
If data is an ndarray, index must be the same length as data.
If no index is passed, one will be created having values [0, ..., len(data) - 1]
Not including index:
End of explanation
s2 = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
s2
s2.index
Explanation: Including index
End of explanation
s3 = pd.Series(5., index=['a', 'b', 'c', 'd', 'e'])
s3
s3.index
Explanation: From scalar value
If data is a scalar value, an index must be provided
The value will be repeated to match the length of index
End of explanation
d = {'a' : 0., 'b' : 1., 'c' : 2.}
sd = pd.Series(d)
sd
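# Extra illustration (an addition, not from the original notebook): if an index
# is passed together with a dict, the values are reordered to match it and any
# missing label is filled with NaN.
pd.Series(d, index=['b', 'c', 'd', 'a'])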
Explanation: Create a Series array from a python dictionary
End of explanation
sales = [{'account': 'Jones LLC', 'Jan': 150, 'Feb': 200, 'Mar': 140},
{'account': 'Alpha Co', 'Jan': 200, 'Feb': 210, 'Mar': 215},
{'account': 'Blue Inc', 'Jan': 50, 'Feb': 90, 'Mar': 95 }]
df = pd.DataFrame(sales)
df
df.info()
df.index
df = df.set_index('account')
df
Explanation: DataFrame definition
DataFrame is a 2-dimensional labeled data structure with columns of potentially different types (see also Panel - 3-dimensional array).
You can think of it like a spreadsheet or SQL table, or a dict of Series objects.
It is generally the most commonly used pandas object.
Like Series, DataFrame accepts many different kinds of input:
Dict of 1D ndarrays, lists, dicts, or Series
2-D numpy.ndarray
Structured or record ndarray
A Series
Another DataFrame
From a list of dictionaries
End of explanation
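# Quick sketch (an addition, not from the original notebook) of the last two
# input kinds listed above: a single Series and an existing DataFrame can also
# be passed directly to the DataFrame constructor.
pd.DataFrame(pd.Series([1., 2., 3.], name='one'))
pd.DataFrame(df)  # a new DataFrame built from the 'sales' DataFrame above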
d = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
df
df.info()
pd.DataFrame(d, index=['d', 'b', 'a'])
df.index
df.columns
Explanation: From dict of Series or dicts
End of explanation
d = {'one' : [1., 2., 3., 4.], 'two' : [4., 3., 2., 1.]}
pd.DataFrame(d)
pd.DataFrame(d, index=['a', 'b', 'c', 'd'])
Explanation: From dict of ndarrays / lists
The ndarrays must all be the same length.
If an index is passed, it must clearly also be the same length as the arrays.
If no index is passed, the result will be range(n), where n is the array length.
End of explanation
data = np.random.random_sample((5, 5))
data
df = pd.DataFrame(data)
df
# Add index
df = pd.DataFrame(data,index = ['a','b','c','d','e'])
df
# Add column names
df = pd.DataFrame(data, index = ['a','b','c','d','e'], columns = ['ra', 'dec','z_phot','z_true','imag'])
df
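# A genuine structured (record) array -- a sketch added for completeness, since
# the lines above actually use a plain 2-D ndarray. The dtype field names become
# the column names.
rec = np.array([(1, 4.5, -55.6), (2, 23.5, 23.6), (3, 22.5, -0.3)],
               dtype=[('id', 'i4'), ('ra', 'f8'), ('dec', 'f8')])
pd.DataFrame(rec)
pd.DataFrame(rec, index=rec['id'])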
Explanation: From structured or record array
A DataFrame can also be built from a plain 2-D ndarray (as in the cells above, optionally passing an explicit index and column names) or from a structured/record ndarray, whose dtype field names become the column names.
If an index is passed, it must be the same length as the array; if no index is passed, the result will be range(n), where n is the array length.
End of explanation
data2 = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}]
pd.DataFrame(data2)
pd.DataFrame(data2, index=['first', 'second'])
pd.DataFrame(data2, columns=['a', 'b'])
Explanation: From a list of dicts
End of explanation
#Few galaxies with some properties: id, ra, dec, magi
galaxies = [
{'id' : 1, 'ra' : 4.5, 'dec' : -55.6, 'magi' : 21.3},
{'id' : 3, 'ra' : 23.5, 'dec' : 23.6, 'magi' : 23.3},
{'id' : 25, 'ra' : 22.5, 'dec' : -0.3, 'magi' : 20.8},
{'id' : 17, 'ra' : 33.5, 'dec' : 15.6, 'magi' : 24.3}
]
# %load -r 1-19 solutions/06_01_pandas.py
Explanation: <a id=exercise1></a>
Exercise 1: Selecting pandas structure
Given a few galaxies with some properties ['id', 'ra', 'dec', 'magi'], choose which pandas structure to use and its index:
End of explanation
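One possible sketch of a solution (the reference solution is loaded from solutions/06_01_pandas.py and is not shown here): since each galaxy has several named properties, a DataFrame indexed by the galaxy id is a natural choice.
galaxies_df = pd.DataFrame(galaxies).set_index('id')
galaxies_df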
filename = '../resources/galaxy_sample.csv'
!head -30 ../resources/galaxy_sample.csv
Explanation: <a id=io></a>
I/O
Reading from different sources into a DataFrame
Most of the time a study starts from an input file containing the data rather than from a Python list or dictionary.
Here we present three different data sources and how to read them: two file formats (CSV and FITS) and a database connection.
Advanced: the amount of data to handle keeps growing (Big Data era) and files can be huge. This is why we strongly recommend always programming by chunks (sometimes it is mandatory, even though it is not always straightforward to implement).
- From a CSV (Comma Separated Value) file:
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
Reading the full catalog at once (if the file is not very large)
CSV file created using the following query (1341.csv.bz2):
SELECT unique_gal_id, ra_gal, dec_gal, z_cgal, z_cgal_v, lmhalo, (mr_gal - 0.8 * (atan(1.5 * z_cgal)- 0.1489)) AS abs_mag, gr_gal AS color, (des_asahi_full_i_true - 0.8 * (atan(1.5 * z_cgal)- 0.1489)) AS app_mag FROM micecatv2_0_view TABLESAMPLE (BUCKET 1 OUT OF 512)
End of explanation
filename_bz2 = '../resources/galaxy_sample.csv.bz2'
!head ../resources/galaxy_sample.csv.bz2
Explanation: CSV.BZ2 (less storage, slower when reading because of decompression)
End of explanation
# Field index name (known a priori from the header or the file description)
unique_gal_id_field = 'unique_gal_id'
galaxy_sample = pd.read_csv(filename, sep=',', index_col = unique_gal_id_field, comment='#', na_values = '\\N')
galaxy_sample.head()
galaxy_sample.tail()
Explanation: Reading the full catalog at once (if the file is not very large)
End of explanation
galaxy_sample.describe()
galaxy_sample.info()
galaxy_sample_bz2 = pd.read_csv(filename_bz2, sep=',', index_col = unique_gal_id_field, comment='#', na_values = r'\N')
galaxy_sample_bz2.head()
Explanation: DataFrame.describe:
Generates descriptive statistics that summarize the central tendency, dispersion and shape of a dataset’s distribution, excluding NaN values.
End of explanation
from astropy.table import Table
Explanation: FITS file:
Pandas does not read FITS files directly, so some conversion is necessary.
We have found two different approaches:
the Table method from astropy (or pyfits)
fitsio (see the "Caveats and technicalities" section below)
Neither makes it easy to read the file by chunks (see also the "Caveats and technicalities" section below)
Note: we strongly recommend using CSV.BZ2!
Using astropy (or pyfits)
This method does not support "by chunks" and therefore you have to read it all at once
End of explanation
filename = '../resources/galaxy_sample.fits'
#?Table.read()
data = Table.read(filename)
type(data)
df = data.to_pandas()
df.head()
df = df.set_index('unique_gal_id')
df.head()
df.shape
df.values.dtype
df.info()
Explanation: FITS file created using the same query as the CSV file:
End of explanation
# For PostgreSQL access
from sqlalchemy.engine import create_engine
# Text wrapping
import textwrap
# Database configuration parameters
#db_url = '{scheme}://{user}:{password}@{host}/{database}'
db_url = 'sqlite:///../resources/pandas.sqlite'
sql_sample = textwrap.dedent("""\
    SELECT *
    FROM micecatv1
    WHERE ABS(ra_mag-ra) > 0.05
    """)
index_col = 'id'
# Create database connection
engine = create_engine(db_url)
df = pd.read_sql(sql_sample, engine,index_col = 'id')
df.head()
Explanation: - From Database:
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql.html
End of explanation
outfile = '../resources/micecatv1_sample1.csv'
with open(outfile, 'w') as f_out:
df.to_csv(f_out,
columns = ['ra', 'dec','ra_mag','dec_mag'],
index=True,
header=True
)
Explanation: Write to csv file:
End of explanation
filename = '../resources/galaxy_sample.csv'
outfile = '../resources/galaxy_sample_some_columns.csv'
# chunk size
gal_chunk = 100000
# Field index name (known a priori from the header or the file description)
unique_gal_id_field = 'unique_gal_id'
Explanation: Advanced example: Reading and writing by chunks
End of explanation
with open(filename, 'r') as galaxy_fd, open (outfile, 'w') as f_out:
galaxy_sample_reader = pd.read_csv(
galaxy_fd,
sep=',',
index_col = unique_gal_id_field,
comment='#',
na_values = '\\N',
chunksize=gal_chunk
)
for chunk, block in enumerate(galaxy_sample_reader):
print(chunk)
# Write the header only for the first chunk, so it is not repeated once per chunk
block.to_csv(f_out,
columns = ['ra_gal','dec_gal','z_cgal_v'],
index=True,
header= chunk==0,
mode='a'
)
block.head()
block.tail(3)
Explanation: Open the file with the with statement
Create a reader object with the read_csv method and its chunksize argument
Loop over the chunks with enumerate in order to also have the chunk number
End of explanation
# DataFrame plot method
%matplotlib inline
import matplotlib.pyplot as plt
block['lmhalo'].plot.hist(bins=100, logy = True)
plt.show()
Explanation: DataFrame plot method (just for curiosity!)
End of explanation
# Same dataframe as before
filename='../resources/galaxy_sample.csv.bz2'
galaxy_sample = pd.read_csv(filename, sep=',', index_col = unique_gal_id_field, comment='#', na_values = r'\N')
galaxy_sample.head()
Explanation: <a id=selecting></a>
SELECTING AND SLICING
The idea of this section is to show how to slice and get and set subsets of pandas objects
The basics of indexing are as follows:
| Operation | Syntax | Result |
|--------------------------------|------------------|---------------|
| Select column | df[column label] | Series |
| Select row by index | df.loc[index] | Series |
| Select row by integer location | df.iloc[pos] | Series |
| Slice rows | df[5:10] | DataFrame |
| Select rows by boolean vector | df[bool_vec] | DataFrame |
End of explanation
galaxy_sample['ra_gal'].head()
type(galaxy_sample['dec_gal'])
galaxy_sample[['ra_gal','dec_gal','lmhalo']].head()
Explanation: Select a column
End of explanation
galaxy_sample.loc[28581888]
type(galaxy_sample.loc[28581888])
Explanation: Select a row by index
End of explanation
galaxy_sample.iloc[0]
type(galaxy_sample.iloc[0])
Explanation: Select a row by integer location
End of explanation
galaxy_sample.iloc[3:7]
galaxy_sample[3:7]
type(galaxy_sample.iloc[3:7])
Explanation: Slice rows
End of explanation
# Boolean vector
(galaxy_sample['ra_gal'] < 45).tail()
type(galaxy_sample['ra_gal'] < 45)
galaxy_sample[galaxy_sample['ra_gal'] < 45].head()
# redshift shell
galaxy_sample[(galaxy_sample.z_cgal <= 0.2) | (galaxy_sample.z_cgal >= 1.0)].head()
galaxy_sample[(galaxy_sample.z_cgal <= 1.0) & (galaxy_sample.index.isin([5670656,13615360,3231232]))]
galaxy_sample[(galaxy_sample['ra_gal'] < 1.) & (galaxy_sample['dec_gal'] < 1.)][['ra_gal','dec_gal']].head()
Explanation: Select rows by boolean vector:
The operators are: | for or, & for and, and ~ for not. These must be grouped by using parentheses.
End of explanation
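The same kind of selection can often be written more compactly with the query method; a short sketch using the columns above (output not shown):
galaxy_sample.query('z_cgal <= 0.2 or z_cgal >= 1.0').head()
galaxy_sample.query('ra_gal < 1.0 and dec_gal < 1.0')[['ra_gal', 'dec_gal']].head()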
galaxy_sample.tail(10)
# Splitting the galaxies
# Boolean mask
has_disk_mask = (galaxy_sample['color']-0.29+0.03*galaxy_sample['abs_mag'] < 0)
has_disk_mask.tail(10)
print (len(has_disk_mask))
print (type(has_disk_mask))
# Counting how many spirals
n_spiral = has_disk_mask.sum()
# Counting how many ellipticals
n_elliptical = (~has_disk_mask).sum()   # note: ~has_disk_mask.sum() would bitwise-negate the count itself
galaxy_sample[has_disk_mask].count()
galaxy_sample[has_disk_mask]['hubble_type'] = 'Spiral'
# It did not add any column! It was working in a view!
galaxy_sample.tail(10)
# This is the proper way of doing it if one wants to add another column
galaxy_sample.loc[has_disk_mask, 'hubble_type'] = 'Spiral'
galaxy_sample.loc[~has_disk_mask, 'hubble_type'] = 'Elliptical'
galaxy_sample.tail(10)
# We can use the numpy where method to do the same:
galaxy_sample['color_type'] = np.where(has_disk_mask, 'Blue', 'Red')
galaxy_sample.tail(10)
# The proper way would be to use a boolean field
galaxy_sample['has_disk'] = has_disk_mask
galaxy_sample.tail(10)
galaxy_sample.loc[~has_disk_mask, 'disk_length'] = 0.
galaxy_sample.loc[has_disk_mask, 'disk_length'] = np.fabs(
np.random.normal(
0., scale=0.15, size=n_spiral
)
)
Explanation: Recap:
loc works on labels in the index.
iloc works on the positions in the index (so it only takes integers).
Advanced example: estimate the size of the disk (disk_length) for a set of galaxies
In this exercise we are going to use some of the previous examples.
Also we are going to introduce how to add a column and some other concepts
We split the galaxies into two different populations, Ellipticals and Spirals, depending on their color and absolute magnitude:
if color - 0.29 + 0.03 * abs_mag < 0 then Spiral
otherwise Elliptical
How many galaxies are elliptical and how many are spirals?
Elliptical galaxies do not have any disk (and therefore disk_length = 0).
The disk_length for spiral galaxies follows a normal distribution with mean = 0 and sigma = 0.15 (in arcsec). In addition, the minimum disk_length for a spiral galaxy is 1.e-3.
End of explanation
galaxy_sample.tail(10)
# Minimum value for disk_length for spirals
dl_min = 1.e-4;
disk_too_small_mask = has_disk_mask & (galaxy_sample['disk_length'] < dl_min)
disk_too_small_mask.sum()
galaxy_sample.loc[disk_too_small_mask, 'disk_length'].head()
galaxy_sample.loc[disk_too_small_mask, 'disk_length'] = dl_min
galaxy_sample.loc[disk_too_small_mask, 'disk_length'].head()
galaxy_sample.tail(10)
Explanation: DO NOT LOOP THE PANDAS DATAFRAME IN GENERAL!
End of explanation
# %load -r 20-102 solutions/06_01_pandas.py
Explanation: <a id=exercise2></a>
Exercise 2: Estimate another galaxy property
What is the mean value and the standard deviation of the disk_length for spiral galaxies (Tip: use the .mean() and .std() methods)
Estimate the bulge_length for elliptical galaxies. The bulge_length depends on the absolute magnitude in the following way:
bulge_length = exp(-1.145 - 0.269 * (abs_mag - 23.))
How many galaxies have bulge_length > 1.0?
In our model the maximum bulge_length for an elliptical galaxy is 0.5 arcsec.
What are the mean value and the standard deviation of the bulge_length for elliptical galaxies? And for ellipticals with absolute magnitude brighter than -20?
End of explanation
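A possible solution sketch (not the reference solution from solutions/06_01_pandas.py; it reuses has_disk_mask and galaxy_sample from above):
spiral_dl = galaxy_sample.loc[has_disk_mask, 'disk_length']
print(spiral_dl.mean(), spiral_dl.std())
# bulge_length from the absolute magnitude, for ellipticals only
galaxy_sample['bulge_length'] = 0.
galaxy_sample.loc[~has_disk_mask, 'bulge_length'] = np.exp(
    -1.145 - 0.269 * (galaxy_sample.loc[~has_disk_mask, 'abs_mag'] - 23.))
print((galaxy_sample['bulge_length'] > 1.0).sum())
# cap the bulge_length of ellipticals at 0.5 arcsec
galaxy_sample.loc[galaxy_sample['bulge_length'] > 0.5, 'bulge_length'] = 0.5
ell = galaxy_sample.loc[~has_disk_mask, 'bulge_length']
bright = galaxy_sample.loc[~has_disk_mask & (galaxy_sample['abs_mag'] < -20.), 'bulge_length']
print(ell.mean(), ell.std(), bright.mean(), bright.std())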
df1 = pd.DataFrame(
{'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3]
)
df2 = pd.DataFrame(
{'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7]
)
df3 = pd.DataFrame(
{'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11]
)
frames = [df1, df2, df3]
result = pd.concat(frames)
result
# Multiindex
result = pd.concat(frames, keys=['x', 'y','z'])
result
result.index
result.loc['y']
df4 = pd.DataFrame(
{'B': ['B2', 'B3', 'B6', 'B7'],
'D': ['D2', 'D3', 'D6', 'D7'],
'F': ['F2', 'F3', 'F6', 'F7']},
index=[2, 3, 6, 7]
)
df4
df1
result = pd.concat([df1, df4])
result
result = pd.concat([df1, df4], axis=1)
result
result = pd.concat([df1, df4], axis=1, join='inner')
result
Explanation: <a id=merging></a>
Merge, join, and concatenate
https://pandas.pydata.org/pandas-docs/stable/merging.html
pandas provides various facilities for easily combining together Series, DataFrame, and Panel objects with various kinds of set logic for the indexes and relational algebra functionality in the case of join / merge-type operations.
concat method:
pd.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False,
keys=None, levels=None, names=None, verify_integrity=False,
copy=True)
End of explanation
df1
df2
result = df1.append(df2)
result
df1
df4
result = df1.append(df4)
result
result = pd.concat([df1,df4])
result
Explanation: Using append method:
End of explanation
result = pd.concat([df1,df4], ignore_index=True)
result
Explanation: Note: Unlike list.append method, which appends to the original list and returns nothing, append here does not modify df1 and returns its copy with df2 appended.
End of explanation
result = df1.append(df4, ignore_index = True)
result
Explanation: This is also a valid argument to DataFrame.append:
End of explanation
df1
s1 = pd.Series(['X0', 'X1', 'X2', 'X3'], name='X')
s1
result = pd.concat([s1,df1])
result
result = pd.concat([df1,s1], axis = 1)
result
s2 = pd.Series(['_0', '_1', '_2', '_3'])
result = pd.concat([df1,s2,s2,s2], axis = 1)
result
Explanation: Mixing dimensions
End of explanation
data = [
# halo_id, gal_id, ra, dec, z, abs_mag'
[1, 1, 21.5, 30.1, 0.21, -21.2],
[1, 2, 21.6, 29.0, 0.21, -18.3],
[1, 3, 21.4, 30.0, 0.21, -18.5],
[2, 1, 45.0, 45.0, 0.42, -20.4],
[3, 1, 25.0, 33.1, 0.61, -21.2],
[3, 2, 25.1, 33.2, 0.61, -20.3]
]
# %load -r 103-145 solutions/06_01_pandas.py
Explanation: <a id=exercise3></a>
Exercise 3: Generate a random catalog using concat method
In this exercise we will use the concat method and show a basic example of multiIndex.
Given a subset of a few galaxies with the following properties ['halo_id', 'gal_id' ,'ra', 'dec', 'z', 'abs_mag'], create a random catalog with 50 times more galaxies than the subset keeping the properties of the galaxies but placing them randomly in the first octant of the sky.
The index of each galaxy is given by the tuple [halo_id, gal_id]
End of explanation
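A possible sketch, assuming "randomly in the first octant of the sky" means drawing uniform ra and dec in [0, 90] degrees (the reference solution in solutions/06_01_pandas.py may differ):
cols = ['halo_id', 'gal_id', 'ra', 'dec', 'z', 'abs_mag']
subset = pd.DataFrame(data, columns=cols).set_index(['halo_id', 'gal_id'])
realizations = []
for i in range(50):
    r = subset.copy()
    r['ra'] = np.random.uniform(0., 90., len(r))
    r['dec'] = np.random.uniform(0., 90., len(r))
    realizations.append(r)
random_catalog = pd.concat(realizations, keys=range(50), names=['realization'])
random_catalog.head()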
star_filename = '../resources/df_star.ssv'
spectra_filename = '../resources/df_spectra.ssv'
starid_specid_filename = '../resources/df_starid_specid.ssv'
df_spectra = pd.read_csv(spectra_filename, index_col=['spec_id', 'band'], sep = ' ')
df_spectra.head(41)
df_starid_specid = pd.read_csv(starid_specid_filename, sep=' ')
df_starid_specid.head(5)
# Given that the file is somehow corrupted we open it without defining any index
df_star = pd.read_csv(star_filename, sep=' ')
df_star.head(10)
df_star[(df_star['sdss_star_id'] == 1237653665258930303) & (df_star['filter'] == 'NB455')]
# Drop duplicates:
df_star.drop_duplicates(subset = ['sdss_star_id', 'filter'], inplace= True)
df_star[(df_star['sdss_star_id'] == 1237653665258930303) & (df_star['filter'] == 'NB455')]
df_starid_specid.head(5)
Explanation: Merge method: Database-style DataFrame joining/merging:
pandas has full-featured, high performance in-memory join operations idiomatically very similar to relational databases like SQL. These methods perform significantly better (in some cases well over an order of magnitude better) than other open source implementations (like base::merge.data.frame in R). The reason for this is careful algorithmic design and internal layout of the data in DataFrame
See the cookbook for some advanced strategies
Users who are familiar with SQL but new to pandas might be interested in a comparison with SQL
pd.merge(left, right, how='inner', on=None, left_on=None, right_on=None,
left_index=False, right_index=False, sort=True,
suffixes=('_x', '_y'), copy=True, indicator=False)
Example: Merging dataframes using the merge method (thanks Nadia!)
Goal: build a dataframe merging 2 different dataframes with complementary information, through the relation given by a third dataframe.
df_star contains information on star magnitudes per sdss_star_id and per filter:
['sdss_star_id', 'filter', 'expected_mag', 'expected_mag_err']
Note: the file is "somehow" corrupted and entries are duplicated several times
Unique entries are characterized by sdss_star_id and filter
df_spectra contains information of star flux per band (== filter) and per spec_id (!= sdss_star_id):
['spec_id', 'band', 'flux', 'flux_err']
Unique entries are characterized by spec_id and band
df_starid_specid allows making the correspondence between sdss_star_id (== objID) and spec_id (== specObjID):
['objID', 'specObjID']
Unique entries are characterized by objID
End of explanation
df_spectra.reset_index(inplace = True)
df_spectra.head()
df_spectra.rename(columns={'band': 'filter'}, inplace = True)
df_spectra.head()
df_starid_specid.rename(columns={'objID':'sdss_star_id', 'specObjID':'spec_id'}, inplace = True)
df_starid_specid.head()
Explanation: We are going to reset the index and rename the columns in order to use the "on" argument:
End of explanation
df_star_merged = pd.merge(df_star, df_starid_specid, on='sdss_star_id')
df_star_merged.head()
df_star_merged = pd.merge(df_star_merged, df_spectra, on=['spec_id','filter'])
df_star_merged.head(40)
df_star_merged.set_index(['sdss_star_id', 'filter'], inplace = True)
df_star_merged.head()
# Each element has been observed in how many bands?
count_bands = df_star_merged.groupby(level=0)['flux'].count()
count_bands.head(20)
df_star_merged.groupby(level=1)['flux_err'].mean().head(10)
Explanation: Now we have everything ready to make the JOINs
End of explanation
# e.g.: the decimal value 0.1 cannot be represented exactly as a base 2 fraction
(0.1 + 0.2) == 0.3
(0.1 + 0.2) - 0.3
Explanation: <a id=functions></a>
More functions
Looping a dataframe (iterrows):
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iterrows.html
sort method:
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html
sample method:
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sample.html
Reshape dataframes (pivot, stack, unstack):
http://nikgrozev.com/2015/07/01/reshaping-in-pandas-pivot-pivot-table-stack-and-unstack-explained-with-pictures/
Data cleaning:
check for missing values (isnull): https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html
drop missing values (dropna): https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html
fill the missing values with other values (fillna): https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html
replace values with different values (replace): https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html
Some general ideas to take home:
Do not loop a dataframe!
Try to work by chunks; create functions that work with chunks
Work with standard formats and "already implemented" functions
<a id=caveats></a>
Caveats and technicalities
Floating point limitations:
Be careful with exact comparisons!
End of explanation
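A quick illustration of a few of the functions listed above, applied to the galaxy_sample dataframe used earlier (a sketch, output not shown):
galaxy_sample.sort_values('app_mag').head()
galaxy_sample.sample(5)
galaxy_sample.isnull().sum()
galaxy_sample.dropna().shape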
import fitsio
filename = '../resources/galaxy_sample.fits'
fits=fitsio.FITS(filename)
data = fits[1]
# Number of rows
data.get_nrows()
# chunk size
gal_chunk = 300000
# e.g.to create the ranges!
import math
niter = int(math.ceil(data.get_nrows() / float(gal_chunk)))
for i in range(niter):
s = i*gal_chunk
f = min((i+1)*gal_chunk, data.get_nrows())
chunk = data[s:f]
print (i)
print (type(chunk))
print (chunk.dtype)
df_chunk = pd.DataFrame(chunk)
print (type(df_chunk))
print (df_chunk.dtypes)
df_chunk = df_chunk.set_index('unique_gal_id')
print (df_chunk.head())
Explanation: FITS files
fitsio
And working by chunks
End of explanation
bad_filename = '../resources/steps.flagship.dat'
df_bad = pd.read_csv(bad_filename)
df_bad.head()
df_bad = pd.read_csv(bad_filename, sep = ' ')
Explanation: .values DataFrame attribute
Some scipy functions do not accept a pandas DataFrame as an argument, and therefore it is useful to use the values attribute, which is the NumPy representation of the underlying data
The dtype will be a lower-common-denominator dtype (implicit upcasting); that is to say if the dtypes (even of numeric types) are mixed, the one that accommodates all will be chosen. Use this with care if you are not dealing with the blocks.
View vs. Copy
https://pandas.pydata.org/pandas-docs/stable/indexing.html#returning-a-view-versus-a-copy
Wrong input example:
.dat
Look at the file using e.g. head bash command
Note that there is more than one space between fields, and if you do tail filename you will see a different number of spaces
End of explanation
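The values attribute itself is not demonstrated in the cells above; a minimal illustration on the galaxy sample (a sketch):
arr = galaxy_sample[['ra_gal', 'dec_gal']].values
type(arr), arr.dtype, arr.shape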
filename = '../resources/steps.flagship.ssv'
columns = ['step_num', 'r_min', 'r_max', 'r_med', 'a_med', 'z_med']
df = pd.read_csv(filename, sep = ' ', header = None, names = columns, index_col = 'step_num')
df.head()
Explanation: Necessary to "modify" the file in order to convert it into a standard csv file, e.g.:
cat steps.flagship.dat | tr -s " " | sed 's/^ *//g' > steps.flagship.ssv
End of explanation |
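Alternatively, pandas can usually parse such whitespace-separated files directly by passing a regular-expression separator, so the shell pre-processing step can often be skipped (a sketch reusing bad_filename and columns from above):
df_alt = pd.read_csv(bad_filename, sep=r'\s+', header=None,
                     names=columns, index_col='step_num')
df_alt.head()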
7,114 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing various MNE solutions
This example shows example fixed- and free-orientation source localizations
produced by MNE, dSPM, sLORETA, and eLORETA.
Step1: Fixed orientation
First let's create a fixed-orientation inverse, with the default weighting.
Step2: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
Step3: Next let's use the default noise normalization, dSPM
Step4: And sLORETA
Step5: And finally eLORETA
Step6: Free orientation
Now let's not constrain the orientation of the dipoles at all by creating
a free-orientation inverse.
Step7: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
Step8: Next let's use the default noise normalization, dSPM
Step9: sLORETA
Step10: And finally eLORETA | Python Code:
# Author: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
Explanation: Computing various MNE solutions
This example shows example fixed- and free-orientation source localizations
produced by MNE, dSPM, sLORETA, and eLORETA.
End of explanation
inv = make_inverse_operator(evoked.info, fwd, cov, loose=0., depth=0.8,
verbose=True)
Explanation: Fixed orientation
First let's create a fixed-orientation inverse, with the default weighting.
End of explanation
snr = 3.0
lambda2 = 1.0 / snr ** 2
kwargs = dict(initial_time=0.08, hemi='both', subjects_dir=subjects_dir,
size=(600, 600))
stc = abs(apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True))
brain = stc.plot(figure=1, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
Explanation: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
End of explanation
stc = abs(apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True))
brain = stc.plot(figure=2, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
Explanation: Next let's use the default noise normalization, dSPM:
End of explanation
stc = abs(apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True))
brain = stc.plot(figure=3, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
Explanation: And sLORETA:
End of explanation
stc = abs(apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True))
brain = stc.plot(figure=4, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
Explanation: And finally eLORETA:
End of explanation
inv = make_inverse_operator(evoked.info, fwd, cov, loose=1., depth=0.8,
verbose=True)
Explanation: Free orientation
Now let's not constrain the orientation of the dipoles at all by creating
a free-orientation inverse.
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True)
brain = stc.plot(figure=5, **kwargs)
brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14)
Explanation: Let's look at the current estimates using MNE. We'll take the absolute
value of the source estimates to simplify the visualization.
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True)
brain = stc.plot(figure=6, **kwargs)
brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14)
Explanation: Next let's use the default noise normalization, dSPM:
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True)
brain = stc.plot(figure=7, **kwargs)
brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14)
Explanation: sLORETA:
End of explanation
stc = apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True)
brain = stc.plot(figure=8, **kwargs)
brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14)
Explanation: And finally eLORETA:
End of explanation |
7,115 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started in scikit-learn with the famous iris dataset
From the video series
Step1: Machine learning on the iris dataset
Framed as a supervised learning problem
Step2: Machine learning terminology
Each row is an observation (also known as
Step3: Each value we are predicting is the response (also known as | Python Code:
from IPython.display import IFrame
IFrame('http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data', width=300, height=200)
Explanation: Getting started in scikit-learn with the famous iris dataset
From the video series: Introduction to machine learning with scikit-learn
Agenda
What is the famous iris dataset, and how does it relate to machine learning?
How do we load the iris dataset into scikit-learn?
How do we describe a dataset using machine learning terminology?
What are scikit-learn's four key requirements for working with data?
Introducing the iris dataset
50 samples of 3 different species of iris (150 samples total)
Measurements: sepal length, sepal width, petal length, petal width
End of explanation
# import load_iris function from datasets module
from sklearn.datasets import load_iris
# save "bunch" object containing iris dataset and its attributes
iris = load_iris()
type(iris)
# print the iris data
print(iris.data)
Explanation: Machine learning on the iris dataset
Framed as a supervised learning problem: Predict the species of an iris using the measurements
Famous dataset for machine learning because prediction is easy
Learn more about the iris dataset: UCI Machine Learning Repository
Loading the iris dataset into scikit-learn
End of explanation
# print the names of the four features
print(iris.feature_names)
# print integers representing the species of each observation
print(iris.target)
# print the encoding scheme for species: 0 = setosa, 1 = versicolor, 2 = virginica
print(iris.target_names)
Explanation: Machine learning terminology
Each row is an observation (also known as: sample, example, instance, record)
Each column is a feature (also known as: predictor, attribute, independent variable, input, regressor, covariate)
End of explanation
# check the types of the features and response
print(type(iris.data))
print(type(iris.target))
# check the shape of the features (first dimension = number of observations, second dimensions = number of features)
print(iris.data.shape)
# check the shape of the response (single dimension matching the number of observations)
print(iris.target.shape)
# store feature matrix in "X"
X = iris.data
# store response vector in "y"
y = iris.target
Explanation: Each value we are predicting is the response (also known as: target, outcome, label, dependent variable)
Classification is supervised learning in which the response is categorical
Regression is supervised learning in which the response is ordered and continuous
Requirements for working with data in scikit-learn
Features and response are separate objects
Features and response should be numeric
Features and response should be NumPy arrays
Features and response should have specific shapes
End of explanation |
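As a natural next step (a sketch, not part of this notebook's code), the X and y objects prepared above can be passed directly to a scikit-learn estimator:
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)
print(knn.predict([[3, 5, 4, 2]]))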
7,116 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 3
Imports
Step2: Geometric Brownian motion
Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.
Step3: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
Step4: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
Step5: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
Step6: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation
Step7: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
Explanation: Numpy Exercise 3
Imports
End of explanation
def brownian(maxt, n):
    """Return one realization of a Brownian (Wiener) process with n steps and a max time of maxt."""
t = np.linspace(0.0,maxt,n)
h = t[1]-t[0]
Z = np.random.normal(0.0,1.0,n-1)
dW = np.sqrt(h)*Z
W = np.zeros(n)
W[1:] = dW.cumsum()
return t, W
Explanation: Geometric Brownian motion
Here is a function that produces standard Brownian motion using NumPy. This is also known as a Wiener Process.
End of explanation
# YOUR CODE HERE
t, W = brownian(1.0, 1000)
assert isinstance(t, np.ndarray)
assert isinstance(W, np.ndarray)
assert t.dtype==np.dtype(float)
assert W.dtype==np.dtype(float)
assert len(t)==len(W)==1000
Explanation: Call the brownian function to simulate a Wiener process with 1000 steps and max time of 1.0. Save the results as two arrays t and W.
End of explanation
# YOUR CODE HERE
plt.plot(t,W)
plt.xlabel("W")
plt.ylabel("t")
assert True # this is for grading
Explanation: Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
End of explanation
# YOUR CODE HERE
dW = np.diff(W)
print(dW)
print(np.mean(dW))
print(np.std(dW))
assert len(dW)==len(W)-1
assert dW.dtype==np.dtype(float)
Explanation: Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
End of explanation
def geo_brownian(t, W, X0, mu, sigma):
    X = X0*np.exp((mu - 0.5*sigma**2)*t + sigma*W)  # drift term (mu - sigma**2/2) matches the stated equation
return X
assert True # leave this for grading
Explanation: Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation:
$$
X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))}
$$
Use Numpy ufuncs and no loops in your function.
End of explanation
# YOUR CODE HERE
plt.plot(t,geo_brownian(t, W, 1.0, 0.5, 0.3))
plt.xlabel("X")
plt.ylabel("t")
assert True # leave this for grading
Explanation: Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above.
Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.
End of explanation |
7,117 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hydropy-package
Step1: We have a Dataframe with river discharge at different locations in the Maarkebeek basin (Belgium)
Step2: Data downloaded from http
Step3: Converting the dataframe to a hydropy time series datatype, provides extra functionalities
Step4: Select the summer of 2009
Step5: Select only the recession periods of the discharges (catchment drying) in June 2011
Step6: Peak values above the 90th percentile for the station LS06_347 in July 2010
Step7: Select storms and make plot
Step8: Season averages (Pandas!)
Step9: Goals
Use the power of Python Pandas
Provide domain specific (hydrological) functionalities
Provide intuitive interface for hydrological time series (main focus on flow series)
Combine different earlier written loose functionalities in package
Independent, but useful in global Phd-developed framework
Step10: Forwarding Pandas functionalities
Step11: Easy period selection
Step12: For the seasons some options are available
Step13: 'Hydrological' selections | Python Code:
#Loading the hydropy package
import hydropy as hp
Explanation: Hydropy-package
End of explanation
HTML('<iframe src=http://biomath.ugent.be/~stvhoey/maarkebeek_data/ width=700 height=350></iframe>')
Explanation: We have a Dataframe with river discharge at different locations in the Maarkebeek basin (Belgium):
End of explanation
flowdata.head()
flowdata.tail()
print(len(flowdata), 'records', 'from', flowdata.index[0], 'till', flowdata.index[-1])
Explanation: Data downloaded from http://www.waterinfo.be/, made available by the Flemish Environmental Agency (VMM).
End of explanation
myflowserie = hp.HydroAnalysis(flowdata)
Explanation: Converting the dataframe to a hydropy time series datatype provides extra functionalities:
End of explanation
myflowserie.get_year('2009').get_season('summer').plot(figsize=(12,6))
Explanation: Select the summer of 2009:
End of explanation
myflowserie.get_year('2011').get_month("Jun").get_recess().plot(figsize=(12,6))
Explanation: Select only the recession periods of the discharges (catchment drying) in June 2011:
End of explanation
fig, ax = plt.subplots(figsize=(13, 6))
myflowserie['LS06_347'].get_year('2010').get_month("Jul").get_highpeaks(150, above_percentile=0.9).plot(style='o', ax=ax)
myflowserie['LS06_347'].get_year('2010').get_month("Jul").plot(ax=ax)
Explanation: Peak values above the 90th percentile for the station LS06_347 in July 2010:
End of explanation
raindata.columns
storms = myflowserie.derive_storms(raindata['P05_019'], 'LS06_347',
number_of_storms=3, drywindow=50,
makeplot=True)
storms = myflowserie.derive_storms(raindata['P06_014'], 'LS06_347',
number_of_storms=3, drywindow=96,
makeplot=True)
Explanation: Select storms and make plot
End of explanation
myflowserie.data.groupby('season').mean()
Explanation: Season averages (Pandas!)
End of explanation
import hydropy as hp
flowdata = pd.read_pickle("./data/FlowData")
raindata = pd.read_pickle("./data/RainData")
myflowserie = hp.HydroAnalysis(flowdata)
Explanation: Goals
Use the power of Python Pandas
Provide domain specific (hydrological) functionalities
Provide intuitive interface for hydrological time series (main focus on flow series)
Combine different earlier written loose functionalities in package
Independent, but useful in global Phd-developed framework: enables the user to quickly look at different properties of model behaviour
Where?
Code : https://github.com/stijnvanhoey/hydropy --> Fork and contribute
Website : https://stijnvanhoey.github.io/hydropy/
How to start?
Fork the github repo
Get the code on your computer
git clone https://github.com/yourname/hydropy
Run the python setup script (install as development package):
python setup.py develop
4. Improve implementation, add functionalities,...
Make a new branch
Make improvements on this branch
Push the branch to your fork and open a pull request
Functionalities extended
End of explanation
# Data inspection
myflowserie.summary() #head(), tail(),
# Resampling frequencies
temp1 = myflowserie.frequency_resample('7D', 'mean') # 7 day means
temp1.head()
temp2 = myflowserie.frequency_resample("M", "max") # Monthly maxima
temp2.head()
temp3 = myflowserie.frequency_resample("A", 'sum') # Yearly sums
temp3.head(6)
#slicing of the dataframes
myflowserie['L06_347']['2009'].plot()
Explanation: Forwarding Pandas functionalities
End of explanation
# get_month, get_year, get_season, get_date_range
myflowserie.get_date_range("01/01/2010","03/05/2010").plot(figsize=(13, 6))
# or combine different statements:
myflowserie.get_year('2010').get_month(6).plot(figsize=(13, 6))
Explanation: Easy period selection
End of explanation
myflowserie.current_season_dates()
myflowserie.info_season_dates('north', 'astro')
Explanation: For the seasons some options are available: Meteorologic (first of the month) or astrologic (21st of the month)
End of explanation
# Peaks (high or low)
myflowserie['LS06_348'].get_year('2012').get_highpeaks(60, above_percentile=0.8).data.dropna().head()
# Recessions and climbing periods get_recess, get_climbing
myflowserie.get_year("2012").get_month("april").get_climbing().plot(figsize=(13, 6))
# above/below certain percentile values
myflowserie["LS06_348"].get_above_percentile(0.6).get_year('2011').get_season('summer').plot()
Explanation: 'Hydrological' selections
End of explanation |
7,118 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this exercise, you will use your new knowledge to propose a solution to a real-world scenario. To succeed, you will need to import data into Python, answer questions using the data, and generate scatter plots to understand patterns in the data.
Scenario
You work for a major candy producer, and your goal is to write a report that your company can use to guide the design of its next product. Soon after starting your research, you stumble across this very interesting dataset containing results from a fun survey to crowdsource favorite candies.
Setup
Run the next cell to import and configure the Python libraries that you need to complete the exercise.
Step1: The questions below will give you feedback on your work. Run the following cell to set up our feedback system.
Step2: Step 1
Step3: Step 2
Step4: The dataset contains 83 rows, where each corresponds to a different candy bar. There are 13 columns
Step5: Step 3
Step6: Part B
Does the scatter plot show a strong correlation between the two variables? If so, are candies with more sugar relatively more or less popular with the survey respondents?
Step7: Step 4
Step8: Part B
According to the plot above, is there a slight correlation between 'winpercent' and 'sugarpercent'? What does this tell you about the candy that people tend to prefer?
Step9: Step 5
Step10: Can you see any interesting patterns in the scatter plot? We'll investigate this plot further by adding regression lines in the next step!
Step 6
Step11: Part B
Using the regression lines, what conclusions can you draw about the effects of chocolate and price on candy popularity?
Step12: Step 7
Step13: Part B
You decide to dedicate a section of your report to the fact that chocolate candies tend to be more popular than candies without chocolate. Which plot is more appropriate to tell this story | Python Code:
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
Explanation: In this exercise, you will use your new knowledge to propose a solution to a real-world scenario. To succeed, you will need to import data into Python, answer questions using the data, and generate scatter plots to understand patterns in the data.
Scenario
You work for a major candy producer, and your goal is to write a report that your company can use to guide the design of its next product. Soon after starting your research, you stumble across this very interesting dataset containing results from a fun survey to crowdsource favorite candies.
Setup
Run the next cell to import and configure the Python libraries that you need to complete the exercise.
End of explanation
# Set up code checking
import os
if not os.path.exists("../input/candy.csv"):
os.symlink("../input/data-for-datavis/candy.csv", "../input/candy.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex4 import *
print("Setup Complete")
Explanation: The questions below will give you feedback on your work. Run the following cell to set up our feedback system.
End of explanation
# Path of the file to read
candy_filepath = "../input/candy.csv"
# Fill in the line below to read the file into a variable candy_data
candy_data = ____
# Run the line below with no changes to check that you've loaded the data correctly
step_1.check()
#%%RM_IF(PROD)%%
candy_data = pd.read_csv(candy_filepath, index_col="id")
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution()
Explanation: Step 1: Load the Data
Read the candy data file into candy_data. Use the "id" column to label the rows.
End of explanation
# Print the first five rows of the data
____ # Your code here
Explanation: Step 2: Review the data
Use a Python command to print the first five rows of the data.
End of explanation
# Fill in the line below: Which candy was more popular with survey respondents:
# '3 Musketeers' or 'Almond Joy'? (Please enclose your answer in single quotes.)
more_popular = ____
# Fill in the line below: Which candy has higher sugar content: 'Air Heads'
# or 'Baby Ruth'? (Please enclose your answer in single quotes.)
more_sugar = ____
# Check your answers
step_2.check()
#%%RM_IF(PROD)%%
more_popular = '3 Musketeers'
more_sugar = 'Air Heads'
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
Explanation: The dataset contains 83 rows, where each corresponds to a different candy bar. There are 13 columns:
- 'competitorname' contains the name of the candy bar.
- the next 9 columns (from 'chocolate' to 'pluribus') describe the candy. For instance, rows with chocolate candies have "Yes" in the 'chocolate' column (and candies without chocolate have "No" in the same column).
- 'sugarpercent' provides some indication of the amount of sugar, where higher values signify higher sugar content.
- 'pricepercent' shows the price per unit, relative to the other candies in the dataset.
- 'winpercent' is calculated from the survey results; higher values indicate that the candy was more popular with survey respondents.
Use the first five rows of the data to answer the questions below.
End of explanation
# Scatter plot showing the relationship between 'sugarpercent' and 'winpercent'
____ # Your code here
# Check your answer
step_3.a.check()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_3.a.assert_check_passed()
#%%RM_IF(PROD)%%
sns.regplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_3.a.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_3.a.hint()
#_COMMENT_IF(PROD)_
step_3.a.solution_plot()
Explanation: Step 3: The role of sugar
Do people tend to prefer candies with higher sugar content?
Part A
Create a scatter plot that shows the relationship between 'sugarpercent' (on the horizontal x-axis) and 'winpercent' (on the vertical y-axis). Don't add a regression line just yet -- you'll do that in the next step!
End of explanation
#_COMMENT_IF(PROD)_
step_3.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_3.b.solution()
Explanation: Part B
Does the scatter plot show a strong correlation between the two variables? If so, are candies with more sugar relatively more or less popular with the survey respondents?
End of explanation
# Scatter plot w/ regression line showing the relationship between 'sugarpercent' and 'winpercent'
____ # Your code here
# Check your answer
step_4.a.check()
#%%RM_IF(PROD)%%
sns.regplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_4.a.assert_check_passed()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['sugarpercent'], y=candy_data['winpercent'])
step_4.a.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_4.a.hint()
#_COMMENT_IF(PROD)_
step_4.a.solution_plot()
Explanation: Step 4: Take a closer look
Part A
Create the same scatter plot you created in Step 3, but now with a regression line!
End of explanation
#_COMMENT_IF(PROD)_
step_4.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_4.b.solution()
Explanation: Part B
According to the plot above, is there a slight correlation between 'winpercent' and 'sugarpercent'? What does this tell you about the candy that people tend to prefer?
End of explanation
# Scatter plot showing the relationship between 'pricepercent', 'winpercent', and 'chocolate'
____ # Your code here
# Check your answer
step_5.check()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'], hue=candy_data['chocolate'])
step_5.assert_check_passed()
#%%RM_IF(PROD)%%
#sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'])
#step_5.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_5.hint()
#_COMMENT_IF(PROD)_
step_5.solution_plot()
Explanation: Step 5: Chocolate!
In the code cell below, create a scatter plot to show the relationship between 'pricepercent' (on the horizontal x-axis) and 'winpercent' (on the vertical y-axis). Use the 'chocolate' column to color-code the points. Don't add any regression lines just yet -- you'll do that in the next step!
End of explanation
# Color-coded scatter plot w/ regression lines
____ # Your code here
# Check your answer
step_6.a.check()
#%%RM_IF(PROD)%%
sns.scatterplot(x=candy_data['pricepercent'], y=candy_data['winpercent'])
step_6.a.assert_check_failed()
#%%RM_IF(PROD)%%
sns.lmplot(x="pricepercent", y="winpercent", hue="chocolate", data=candy_data)
step_6.a.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_6.a.hint()
#_COMMENT_IF(PROD)_
step_6.a.solution_plot()
Explanation: Can you see any interesting patterns in the scatter plot? We'll investigate this plot further by adding regression lines in the next step!
Step 6: Investigate chocolate
Part A
Create the same scatter plot you created in Step 5, but now with two regression lines, corresponding to (1) chocolate candies and (2) candies without chocolate.
End of explanation
#_COMMENT_IF(PROD)_
step_6.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_6.b.solution()
Explanation: Part B
Using the regression lines, what conclusions can you draw about the effects of chocolate and price on candy popularity?
End of explanation
# Scatter plot showing the relationship between 'chocolate' and 'winpercent'
____ # Your code here
# Check your answer
step_7.a.check()
#%%RM_IF(PROD)%%
sns.swarmplot(x=candy_data['chocolate'], y=candy_data['winpercent'])
step_7.a.assert_check_passed()
#%%RM_IF(PROD)%%
#sns.swarmplot(x=candy_data['chocolate'], y=candy_data['sugarpercent'])
#step_7.a.assert_check_failed()
#%%RM_IF(PROD)%%
#sns.swarmplot(x=candy_data['fruity'], y=candy_data['winpercent'])
#step_7.a.assert_check_failed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_7.a.hint()
#_COMMENT_IF(PROD)_
step_7.a.solution_plot()
Explanation: Step 7: Everybody loves chocolate.
Part A
Create a categorical scatter plot to highlight the relationship between 'chocolate' and 'winpercent'. Put 'chocolate' on the (horizontal) x-axis, and 'winpercent' on the (vertical) y-axis.
End of explanation
#_COMMENT_IF(PROD)_
step_7.b.hint()
# Check your answer (Run this code cell to receive credit!)
step_7.b.solution()
Explanation: Part B
You decide to dedicate a section of your report to the fact that chocolate candies tend to be more popular than candies without chocolate. Which plot is more appropriate to tell this story: the plot from Step 6, or the plot from Step 7?
End of explanation |
7,119 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transliteration
Transliteration is the conversion of a text from one script to another.
For instance, a Latin transliteration of the Greek phrase "Ελληνική Δημοκρατία", usually translated as 'Hellenic Republic', is "Ellēnikḗ Dēmokratía".
Step1: Languages Coverage
Step2: Downloading Necessary Models
Step4: Example
We tag each word in the text with one part of speech.
Step5: We can query all the tagged words
Step6: Command Line Interface | Python Code:
from polyglot.transliteration import Transliterator
Explanation: Transliteration
Transliteration is the conversion of a text from one script to another.
For instance, a Latin transliteration of the Greek phrase "Ελληνική Δημοκρατία", usually translated as 'Hellenic Republic', is "Ellēnikḗ Dēmokratía".
End of explanation
from polyglot.downloader import downloader
print(downloader.supported_languages_table("transliteration2"))
Explanation: Languages Coverage
End of explanation
%%bash
polyglot download embeddings2.en pos2.en
Explanation: Downloading Necessary Models
End of explanation
from polyglot.text import Text
blob = "We will meet at eight o'clock on Thursday morning."
text = Text(blob)
Explanation: Example
We transliterate each word in the text into the target script.
End of explanation
for x in text.transliterate("ar"):
print(x)
Explanation: We can query the transliteration of each word in the text
End of explanation
!polyglot --lang en tokenize --input testdata/cricket.txt | polyglot --lang en transliteration --target ar | tail -n 30
Explanation: Command Line Interface
End of explanation |
7,120 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling the X-ray image data
In this notebook, we'll take a closer look at the X-ray image data products, and build a simple generative model for the observed data.
Step1: A closer look at the data products
If you haven't done so, download the data as in the FirstLook notebook. Here we read in the three provided files.
Step2: im is the observed data as an image (counts in each pixel), after standard processing. This displays the image on a log scale, which allows us to simultaneously see both the cluster and the much fainter background and other sources in the field.
Step3: ex is the exposure map (units of seconds). This is actually the product of the exposure time used to make the image, and a relative sensitivity map accounting for the vignetting of the telescope, dithering, and bad pixels whose data have been excised. Displaying the exposure map on a linear scale makes the vignetting pattern and other features clear.
Step4: bk is a particle background map. This is not data at all, but a model for the expected counts/pixel in this specific observation due to the quiescent particle background. The map comes out of a blackbox in the processing pipeline -- even though there are surely uncertainties in it, we have no quantitative description of them to work with. Note that the exposure map above does not apply to the particle backround; some particles are vignetted by the telescope optics, but not to the same degree as X-rays. The resulting spatial pattern and the total exposure time are accounted for in bk.
Step5: There are non-cluster sources in this field. To simplify the model-building exercise, we will crudely mask them out for the moment. A convenient way to do this is by setting the exposure map to zero in these locations, since we will not consider such pixels as part of the image data later on. Below, we read in a text file encoding a list of circular regions in the image, and set the pixels within each of those regions in ex to zero.
Step6: As a sanity check, let's have a look at the modified exposure map. Compare the location of the "holes" to the science image above.
Step7: Building a model
Our data are counts, i.e. the number of times a physical pixel in the camera was activated while pointing at the area of sky corresponding to a pixel in our image. We can think of different sky pixels as having different effective exposure times, as encoded by the exposure map. Counts can be produced by
X-rays from our source of interest (the galaxy cluster)
X-rays from other detected sources (i.e. the other sources we've masked out)
X-rays from unresolved background sources (the cosmic X-ray background)
Diffuse X-rays from the Galactic halo and the local bubble (the local X-ray foreground)
Soft protons from the solar wind, cosmic rays, and other undesirables (the particle background)
Of these, the particle background represents a flux of particles that either do not traverse the telescope optics at all, or follow a different optical path than X-rays. In contrast, the X-ray background is vignetted in the same way as X-rays from a source of interest. We will lump these sources (2-4) together, so that our model is composed of a galaxy cluster, the X-ray background, and the particle background.
Since our data are counts in each pixel, our model needs to predict the counts in each pixel. However, physical models will not predict count distributions, but rather intensity (counts per second per pixel per unit effective area of the telescope). The spatial variation of the effective area relative to the aimpoint is accounted for in the exposure map, and we can leave the overall area to one side when fitting (although we would need it to turn our results into physically interesting conclusions).
Cluster model
We will use a common parametric model for the surface brightness of galaxy clusters | Python Code:
import astropy.io.fits as pyfits
import astropy.visualization as viz
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
Explanation: Modeling the X-ray image data
In this notebook, we'll take a closer look at the X-ray image data products, and build a simple generative model for the observed data.
End of explanation
imfits = pyfits.open('a1835_xmm/P0098010101M2U009IMAGE_3000.FTZ')
im = imfits[0].data
bkfits = pyfits.open('a1835_xmm/P0098010101M2X000BKGMAP3000.FTZ')
bk = bkfits[0].data
exfits = pyfits.open('a1835_xmm/P0098010101M2U009EXPMAP3000.FTZ')
ex = exfits[0].data
Explanation: A closer look at the data products
If you haven't done so, download the data as in the FirstLook notebook. Here we read in the three provided files.
End of explanation
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
Explanation: im is the observed data as an image (counts in each pixel), after standard processing. This displays the image on a log scale, which allows us to simultaneously see both the cluster and the much fainter background and other sources in the field.
End of explanation
plt.imshow(ex, cmap='gray', origin='lower');
Explanation: ex is the exposure map (units of seconds). This is actually the product of the exposure time used to make the image, and a relative sensitivity map accounting for the vignetting of the telescope, dithering, and bad pixels whose data have been excised. Displaying the exposure map on a linear scale makes the vignetting pattern and other features clear.
End of explanation
plt.imshow(bk, cmap='gray', origin='lower');
Explanation: bk is a particle background map. This is not data at all, but a model for the expected counts/pixel in this specific observation due to the quiescent particle background. The map comes out of a blackbox in the processing pipeline -- even though there are surely uncertainties in it, we have no quantitative description of them to work with. Note that the exposure map above does not apply to the particle backround; some particles are vignetted by the telescope optics, but not to the same degree as X-rays. The resulting spatial pattern and the total exposure time are accounted for in bk.
End of explanation
mask = np.loadtxt('a1835_xmm/M2ptsrc.txt')
for reg in mask:
# this is inefficient but effective
for i in np.round(reg[1]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):
for j in np.round(reg[0]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):
if (i-reg[1])**2 + (j-reg[0])**2 <= reg[2]**2:
ex[np.int(i-1), np.int(j-1)] = 0.0
Explanation: There are non-cluster sources in this field. To simplify the model-building exercise, we will crudely mask them out for the moment. A convenient way to do this is by setting the exposure map to zero in these locations, since we will not consider such pixels as part of the image data later on. Below, we read in a text file encoding a list of circular regions in the image, and set the pixels within each of those regions in ex to zero.
End of explanation
plt.imshow(ex, cmap='gray', origin='lower');
Explanation: As a sanity check, let's have a look at the modified exposure map. Compare the location of the "holes" to the science image above.
End of explanation
# todo: plot the beta model described below (a sketch is given in the next few lines)
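# A minimal sketch (not part of the original notebook) of the azimuthally
# symmetric beta model described in the text below; S0, rc and beta are
# illustrative placeholder values, not fitted parameters.
def beta_model(r, S0=1.0, rc=20.0, beta=0.67):
    return S0 * (1.0 + (r / rc)**2)**(-3.0 * beta + 0.5)

r = np.linspace(0.0, 200.0, 500)          # projected radius in pixels
plt.plot(r, beta_model(r))
plt.xlabel('projected radius (pixels)')
plt.ylabel('surface brightness (arbitrary units)')
plt.yscale('log');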
Explanation: Building a model
Our data are counts, i.e. the number of times a physical pixel in the camera was activated while pointing at the area of sky corresponding to a pixel in our image. We can think of different sky pixels as having different effective exposure times, as encoded by the exposure map. Counts can be produced by
X-rays from our source of interest (the galaxy cluster)
X-rays from other detected sources (i.e. the other sources we've masked out)
X-rays from unresolved background sources (the cosmic X-ray background)
Diffuse X-rays from the Galactic halo and the local bubble (the local X-ray foreground)
Soft protons from the solar wind, cosmic rays, and other undesirables (the particle background)
Of these, the particle background represents a flux of particles that either do not traverse the telescope optics at all, or follow a different optical path than X-rays. In contrast, the X-ray background is vignetted in the same way as X-rays from a source of interest. We will lump these sources (2-4) together, so that our model is composed of a galaxy cluster, the X-ray background, and the particle background.
Since our data are counts in each pixel, our model needs to predict the counts in each pixel. However, physical models will not predict count distributions, but rather intensity (counts per second per pixel per unit effective area of the telescope). The spatial variation of the effective area relative to the aimpoint is accounted for in the exposure map, and we can leave the overall area to one side when fitting (although we would need it to turn our results into physically interesting conclusions).
Cluster model
We will use a common parametric model for the surface brightness of galaxy clusters: the azimuthally symmetric beta model,
$S(r) = S_0 \left[1.0 + \left(\frac{r}{r_c}\right)^2\right]^{-3\beta + 1/2}$,
where $r$ is projected distance from the cluster center. The parameters of this model are
$x_0$, the $x$ coordinate of the cluster center
$y_0$, the $y$ coordinate of the cluster center
$S_0$, the normalization
$r_c$, a radial scale (called the "core radius")
$\beta$, which determines the slope of the profile
Let's plot it:
End of explanation |
7,121 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic reading and visualization of radar data with Py-ART
Introduction to Jupyter
What you are looking at is a Jupyter Notebook, a web-based interactive computation environment well suited for creating and sharing examples of scientific Python computations.
<img class="logo" src="images/jupyter-logo.svg" height=100 />
One of the key features of Jupyter notebooks is that Python code can be written and executed in cells.
Click on the next cell and press shift+enter to execute the Python code.
Step1: Cells can also be used to create textual materials using the markup language Markdown.
Double click on this cell (or any others) to see the raw markdown which produces the nicely formatted text.
One of the reasons these Notebooks are so well suited for scientific Python work is that Jupyter is well integrated into the Scientific Python ecosystem. For example, plots can be included in notebooks for visualizing data. Execute the next two cells (using ctrl+enter)
Step2: <img class="logo" src="images/python-logo.png" height=100 align='right'/>
Python
General-purpose.
Interpreted.
Focuses on readability.
Excellent for interfacing with C, C++ and Fortran code.
Comprehensive standard library.
Extended with a large number of third-party packages.
Widely used in scientific programming.
This presentation will give a brief intro to some key features of Python and the Scientific Python ecosystem to help those not familiar with the language with the remainder of the class. This is in no way a comprehensive introduction to either topic. Excellent tutorials on Python and Scientific Python can be found online.
We will be using IPython for this class, a package which allows Python code to be run inside a browser. This is in no way the only way to run Python; the Python/IPython shell, scripts and various IDEs can also be used but will not be covered.
The notebook for these materials is available if you wish to follow along on your own computer, but we will be moving fast...
Variables
Integers
Step3: Floating point numbers
Step4: Variables
Complex numbers
Step5: Booleans
Step6: Variables
Strings
Step7: Variables can be cast from one type to another
Step8: Containers
Lists
Step9: Indexing
Step10: Slicing
Step11: Lists can store a different type of variable in each element
Step12: Containers
Dictionaries
Step13: Containers
Tuples
Step14: Flow control
conditional (if, else, elif)
Step15: Flow control
Loops
Step16: Functions
Step17: Classes
Step18: The Scientific Python ecosystem
NumPy
Step19: Arrays can be multi-dimensional
Step20: The Scientific Python ecosystem
SciPy
Step21: The Scientific Python ecosystem
matplotlib | Python Code:
# This is a Python comment
# the next line is a line of Python code
print("Hello World!")
Explanation: Basic reading and visualization of radar data with Py-ART
Introduction to Jupyter
What you are looking at is a Jupyter Notebook, a web-based interactive computation environment well suited for creating and sharing examples of scientific Python computations.
<img class="logo" src="images/jupyter-logo.svg" height=100 />
One of the key features of Jupyter notebooks is that Python code can be written and executed in cells.
Click on the next cell and press shift+enter to execute the Python code.
End of explanation
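# A minimal, hedged sketch of the notebook's title topic -- reading and
# visualizing radar data with Py-ART. It assumes Py-ART is installed and that
# 'radar_file.nc' is a placeholder name for a radar volume available locally.
import pyart
radar = pyart.io.read('radar_file.nc')       # read a radar volume into a Radar object
display = pyart.graph.RadarDisplay(radar)    # plotting helper for Radar objects
display.plot('reflectivity', 0)              # plot reflectivity from the first sweep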
# These two lines turn on inline plotting
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot([1,2,3])
Explanation: Cells can also be used to create textual materials using the markup language Markdown.
Double click on this cell (or any others) to see the raw markdown which produces the nicely formatted text.
One of the reasons these Notebooks are so well suited for scientific Python work is that Jupyter is well integrated into the Scientific Python ecosystem. For example, plots can be included in notebooks for visualizing data. Execute the next two cells (using ctrl+enter)
End of explanation
a = 1
a + 1
Explanation: <img class="logo" src="images/python-logo.png" height=100 align='right'/>
Python
General-purpose.
Interpreted.
Focuses on readability.
Excellent for interfacing with C, C++ and Fortran code.
Comprehensive standard library.
Extended with a large number of third-party packages.
Widely used in scientific programming.
This presentation will give a brief intro to some key features of Python and the Scientific Python ecosystem to help those not familiar with the language with the remainder of the class. This is in no way a comprehensive introduction to either topic. Excellent tutorials on Python and Scientific Python can be found online.
We will be using IPython for this class, a package which allows Python code to be run inside a browser. This is in no way the only way to run Python; the Python/IPython shell, scripts and various IDEs can also be used but will not be covered.
The notebook for these materials is available if you wish to follow along on your own computer, but we will be moving fast...
Variables
Integers
End of explanation
b = 2.1
b + 1
a + b
type(a + b)
Explanation: Floating point numbers
End of explanation
c = 1.5 + 0.5j # complex numbers
print(c.real)
print(c.imag)
Explanation: Variables
Complex numbers
End of explanation
d = 3 > 4
print(d)
type(d)
Explanation: Booleans
End of explanation
s = "Hello everyone"
type(s)
a = "Hello "
b = "World"
print(a + b)
Explanation: Variables
Strings
End of explanation
a = 1
print(a)
print(type(a))
b = float(a)
print(b)
print(type(b))
s = "1.23"
print(s)
print(type(s))
f = float(s)
print(f)
print(type(f))
Explanation: Variables can be cast from one type to another
End of explanation
l = ['red', 'blue', 'green', 'black', 'white']
len(l)
Explanation: Containers
Lists
End of explanation
l
print(l[0])
print(l[1])
print(l[2])
print(l[-1]) # last element
print(l[-2])
l[0] = 'orange'
print(l)
Explanation: Indexing
End of explanation
print(l[2:5])
print(l[2:-1])
print(l[1:6:2])
l[::-1]
Explanation: Slicing
End of explanation
ll = [5, 22.9, 14.8+1j, 'hello', [1,2,3]]
ll
print(ll[0])
print(ll[1])
print(ll[2])
print(ll[3])
print(ll[4])
Explanation: Lists can store a different type of variable in each element
End of explanation
d = {'name': 'Bobby', 'id': 223984, 'location': 'USA'}
d.keys()
d.values()
d['name']
d['id']
d['id'] = 1234
d['id']
Explanation: Containers
Dictionaries
End of explanation
t = ('red', 'blue', 'green')
t[0]
t[1:3]
Explanation: Containers
Tuples
End of explanation
a = 4
if a > 10:
print("a is larger than 10")
elif a < 10:
print("a is less than 10")
else:
print("a is equal to 10")
Explanation: Flow control
conditional (if, else, elif)
End of explanation
for i in range(10):
print(i)
Explanation: Flow control
Loops
End of explanation
def func():
print("Hello world")
func()
Explanation: Functions
End of explanation
class Car(object):
engine = 'V4' # class attribute
def start(self): # class method
print("Starting the car with a", self.engine, "engine")
mycar = Car()
type(mycar)
mycar.engine
mycar.start()
mycar.engine = 'V6'
mycar.engine
mycar.start()
Explanation: Classes
End of explanation
import numpy as np
a = np.array([0, 1, 2, 3, 4, 5, 6, 7])
a
a.shape
a.ndim
a.dtype
a[0::2]
a[a>3]
a * 2 + 100
a.mean()
Explanation: The Scientific Python ecosystem
NumPy
End of explanation
b = np.arange(12).reshape(3,4)
b.shape
b
b[1,2]
b[0:2, ::-1]
Explanation: Arrays can be multi-dimensional
End of explanation
import scipy
print(scipy.__doc__)
Explanation: The Scientific Python ecosystem
SciPy
End of explanation
%pylab inline
plot([1,2,3])
a = np.random.rand(30, 30)
imshow(a)
colorbar()
Explanation: The Scientific Python ecosystem
matplotlib
End of explanation |
7,122 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifying Images using Dropout and Batchnorm Layer
Introduction
In this notebook, you learn how to build a neural network to classify the tf-flowers dataset using dropout and batchnorm layer.
Learning objectives
Define Helper Functions.
Apply dropout and batchnorm layer.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Step1: Define Helper Functions
Reading and Preprocessing image data
Step2: Apply dropout and batchnorm layer
A deep neural network (DNN) is a neural network with more than one hidden layer. Each time you add a layer, the number of trainable parameters increases. Therefore, you need a larger dataset. You still have only 3700 flower images, which might cause overfitting.
Dropout is a regularization technique used to prevent overfitting in the model. Batch normalization is a layer that allows every layer of the network to do learning more independently. The layer is added to the sequential model to standardize the inputs or the outputs. Add a dropout and batchnorm layer after each of the hidden layers.
Dropout
Dropout is one of the oldest regularization techniques in deep learning. At each training iteration, it drops random neurons from the network with a probability p (typically 25% to 50%). In practice, neuron outputs are set to 0. The net result is that these neurons will not participate in the loss computation this time around and they will not get weight updates. Different neurons will be dropped at each training iteration.
Batch normalization
Our input pixel values are in the range [0,1] and this is compatible with the dynamic range of the typical activation functions and optimizers. However, once we add a hidden layer, the resulting output values will no longer lie in the dynamic range of the activation function for subsequent layers. When this happens, the neuron output is zero, and because there is no difference by moving a small amount in either direction, the gradient is zero. There is no way for the network to escape from the dead zone. To fix this, batch norm normalizes neuron outputs across a training batch of data, i.e. it subtracts the average and divides by the standard deviation. This way, the network decides, through machine learning, how much centering and re-scaling to apply at each neuron. In Keras, you can selectively use one or the other | Python Code:
import tensorflow as tf
print(tf.version.VERSION)
Explanation: Classifying Images using Dropout and Batchnorm Layer
Introduction
In this notebook, you learn how to build a neural network to classify the tf-flowers dataset using dropout and batchnorm layer.
Learning objectives
Define Helper Functions.
Apply dropout and batchnorm layer.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
End of explanation
# Helper functions
def training_plot(metrics, history):
f, ax = plt.subplots(1, len(metrics), figsize=(5*len(metrics), 5))
for idx, metric in enumerate(metrics):
ax[idx].plot(history.history[metric], ls='dashed')
ax[idx].set_xlabel("Epochs")
ax[idx].set_ylabel(metric)
ax[idx].plot(history.history['val_' + metric]);
ax[idx].legend([metric, 'val_' + metric])
# Call model.predict() on a few images in the evaluation dataset
def plot_predictions(filename):
f, ax = plt.subplots(3, 5, figsize=(25,15))
dataset = (tf.data.TextLineDataset(filename).
map(decode_csv))
for idx, (img, label) in enumerate(dataset.take(15)):
ax[idx//5, idx%5].imshow((img.numpy()));
batch_image = tf.reshape(img, [1, IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS])
batch_pred = model.predict(batch_image)
pred = batch_pred[0]
label = CLASS_NAMES[label.numpy()]
pred_label_index = tf.math.argmax(pred).numpy()
pred_label = CLASS_NAMES[pred_label_index]
prob = pred[pred_label_index]
ax[idx//5, idx%5].set_title('{}: {} ({:.4f})'.format(label, pred_label, prob))
def show_trained_weights(model):
# CLASS_NAMES is ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
LAYER = 1 # Layer 0 flattens the image, layer=1 is the first dense layer
WEIGHT_TYPE = 0 # 0 for weight, 1 for bias
f, ax = plt.subplots(1, 5, figsize=(15,15))
for flower in range(len(CLASS_NAMES)):
weights = model.layers[LAYER].get_weights()[WEIGHT_TYPE][:, flower]
min_wt = tf.math.reduce_min(weights).numpy()
max_wt = tf.math.reduce_max(weights).numpy()
flower_name = CLASS_NAMES[flower]
print("Scaling weights for {} in {} to {}".format(
flower_name, min_wt, max_wt))
weights = (weights - min_wt)/(max_wt - min_wt)
ax[flower].imshow(weights.reshape(IMG_HEIGHT, IMG_WIDTH, 3));
ax[flower].set_title(flower_name);
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
IMG_HEIGHT = 224
IMG_WIDTH = 224
IMG_CHANNELS = 3
def read_and_decode(filename, reshape_dims):
# Read the file
img = tf.io.read_file(filename)
# Convert the compressed string to a 3D uint8 tensor.
img = tf.image.decode_jpeg(img, channels=IMG_CHANNELS)
# Use `convert_image_dtype` to convert to floats in the [0,1] range.
img = tf.image.convert_image_dtype(img, tf.float32)
# TODO 1: Resize the image to the desired size.
return tf.image.resize(img, reshape_dims)
CLASS_NAMES = [item.numpy().decode("utf-8") for item in
tf.strings.regex_replace(
tf.io.gfile.glob("gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/*"),
"gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/", "")]
CLASS_NAMES = [item for item in CLASS_NAMES if item.find(".") == -1]
print("These are the available classes:", CLASS_NAMES)
# the label is the index into CLASS_NAMES array
def decode_csv(csv_row):
record_defaults = ["path", "flower"]
filename, label_string = tf.io.decode_csv(csv_row, record_defaults)
img = read_and_decode(filename, [IMG_HEIGHT, IMG_WIDTH])
label = tf.argmax(tf.math.equal(CLASS_NAMES, label_string))
return img, label
Explanation: Define Helper Functions
Reading and Preprocessing image data
End of explanation
def train_and_evaluate(batch_size = 32,
lrate = 0.0001,
l1 = 0,
l2 = 0.001,
dropout_prob = 0.4,
num_hidden = [64, 16]):
regularizer = tf.keras.regularizers.l1_l2(l1, l2)
train_dataset = (tf.data.TextLineDataset(
"gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/train_set.csv").
map(decode_csv)).batch(batch_size)
eval_dataset = (tf.data.TextLineDataset(
"gs://practical-ml-vision-book/flowers_5_jpeg/flower_photos/eval_set.csv").
map(decode_csv)).batch(32) # this doesn't matter
# NN with multiple hidden layers
layers = [tf.keras.layers.Flatten(
input_shape=(IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS),
name='input_pixels')]
for hno, nodes in enumerate(num_hidden):
layers.extend([
tf.keras.layers.Dense(nodes,
kernel_regularizer=regularizer,
name='hidden_dense_{}'.format(hno)),
tf.keras.layers.BatchNormalization(scale=False, # ReLU
center=False, # have bias in Dense
name='batchnorm_dense_{}'.format(hno)),
#move activation to come after batchnorm
tf.keras.layers.Activation('relu', name='relu_dense_{}'.format(hno)),
# TODO 2: Apply Dropout to the input
tf.keras.layers.Dropout(rate=dropout_prob,
name='dropout_dense_{}'.format(hno)),
])
layers.append(
tf.keras.layers.Dense(len(CLASS_NAMES),
kernel_regularizer=regularizer,
activation='softmax',
name='flower_prob')
)
model = tf.keras.Sequential(layers, name='flower_classification')
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lrate),
loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=False),
metrics=['accuracy'])
print(model.summary())
history = model.fit(train_dataset, validation_data=eval_dataset, epochs=10)
training_plot(['loss', 'accuracy'], history)
return model
model = train_and_evaluate(dropout_prob=0.4)
Explanation: Apply dropout and batchnorm layer
A deep neural network (DNN) is a neural network with more than one hidden layer. Each time you add a layer, the number of trainable parameters increases. Therefore, you need a larger dataset. You still have only 3700 flower images, which might cause overfitting.
Dropout is a regularization technique used to prevent overfitting in the model. Batch normalization is a layer that allows every layer of the network to do learning more independently. The layer is added to the sequential model to standardize the inputs or the outputs. Add a dropout and batchnorm layer after each of the hidden layers.
Dropout
Dropout is one of the oldest regularization techniques in deep learning. At each training iteration, it drops random neurons from the network with a probability p (typically 25% to 50%). In practice, neuron outputs are set to 0. The net result is that these neurons will not participate in the loss computation this time around and they will not get weight updates. Different neurons will be dropped at each training iteration.
Batch normalization
Our input pixel values are in the range [0,1] and this is compatible with the dynamic range of the typical activation functions and optimizers. However, once we add a hidden layer, the resulting output values will no longer lie in the dynamic range of the activation function for subsequent layers. When this happens, the neuron output is zero, and because there is no difference by moving a small amount in either direction, the gradient is zero. There is no way for the network to escape from the dead zone. To fix this, batch norm normalizes neuron outputs across a training batch of data, i.e. it subtracts the average and divides by the standard deviation. This way, the network decides, through machine learning, how much centering and re-scaling to apply at each neuron. In Keras, you can selectively use one or the other:
tf.keras.layers.BatchNormalization(scale=False, center=True)
When using batch normalization, remember that:
1. Batch normalization goes between the output of a layer and its activation function. So, rather than set activation='relu' in the Dense layer’s constructor, we’d omit the activation function, and then add a separate Activation layer.
2. If you use center=True in batch norm, you do not need biases in your layer. The batch norm offset plays the role of a bias.
3. If you use an activation function that is scale-invariant (i.e. does not change shape if you zoom in on it) then you can set scale=False. ReLu is scale-invariant. Sigmoid is not.
End of explanation |
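# A small, self-contained sketch (not part of the original lab) showing what
# the Dropout and BatchNormalization layers described above do to toy tensors;
# the numbers printed are illustrative only.
import tensorflow as tf
x = tf.ones((1, 8))
dropout = tf.keras.layers.Dropout(rate=0.5)
# training=True enables dropout: roughly half the entries are zeroed and the
# survivors are scaled by 1/(1-rate) so the expected sum is preserved.
print(dropout(x, training=True).numpy())
# At inference time (training=False) the input passes through unchanged.
print(dropout(x, training=False).numpy())
bn = tf.keras.layers.BatchNormalization(scale=False, center=False)
# In training mode the layer normalizes each feature across the batch,
# so the per-feature means of the output are close to zero.
print(bn(tf.random.normal((4, 8)), training=True).numpy().mean(axis=0))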
7,123 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Prediction
For investors it's interesting to know which characteristics of a loan are predictive of a loan ending in charged off. Lending Club has its own algorithms that it uses beforehand to predict which loans are riskier and gives these a grade (A-F). This correlates well with the probability of charged off, as we saw in the exploration of the dataset. The interest rates should reflect the risk (higher interest with more risk) to make the riskier loans still attractive to invest in. Although grade and interest correlate well, it's not a perfect correlation.
We will here use the loans that went to full term to build classifiers that can classify loans into charged off and fully paid. The accuracy measure used is 'f1_weighted' of sklearn. This score can be interpreted as a weighted average of the precision and recall. Also confusion matrices and ROC curves will be used for analysis. Grade is used as baseline prediction for charged off/fully paid. We will look for features that add extra predictive value on top of the grade feature and see if this gives us any insight.
Select loans and features
We selected the loans here that went to full term and added a label indicating whether they were charged off or not. We excluded the one loan that was a joint application. The number of loans left is 255,719, and the percentage of charged_off loans is 18%.
Step1: Select features
We selected features that can be included for the prediction. For this we left out features that are not known at the beginning, like 'total payment', because these are not useful features to help new investors. Non-predictive features like 'id' and features that have all the same values are also excluded, as are all features to do with 'joint' loans, since we do not have joint loans. We did add loan_status and charged_off for the prediction. Furthermore, features that were missing in more than 5% of the loans were excluded, leaving 24 features (excluding the targets loan_status and charged_off).
Step2: Split data
We keep 30% of the data separate for now so we can later use this to reliably test the performance of the classifier. The split is stratified by 'loan_status' in order to equally divide old loans over the split (old loans have a higher 'charged_off' probability). The classes to predict are in the variable 'charged_off'.
Step3: Logistic regression
We will first start with the logistic regression classifier. This is a simple classifier that uses a sigmoidal curve to predict from the features to which class the sample belongs. It has one parameter to tune, namely the C-parameter: the inverse of the regularization strength, where smaller values specify stronger regularization. In sklearn the features we input into this algorithm have to be numerical, so we need to convert the categorical features to numeric. To do this, ordered categorical features will get adjacent numbers and unordered features will get an order as best as possible during conversion to numeric, for instance geographical. Also there cannot be nan/inf/-inf values, hence these will be made 0's. With this algorithm we will also have to scale and normalize the features.
Non-numeric features were converted as follows
Step4: After the categorical features are converted to numeric and normalized/scaled, we will first check what the accuracy is when only using the feature 'grade' (A-F) to predict 'charged off' (True/False). This is the classification Lending Club gave the loans: the closer to F, the higher the chance the loan will end in 'charged off'. For the accuracy estimation we will use 'F1-weighted'. This stands for F1 = 2 * (precision * recall) / (precision + recall). In this way both precision and recall are important for the accuracy. Precision is the number of correct positive results divided by the number of all positive results, and recall is the number of correct positive results divided by the number of positive results that should have been returned. The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst at 0. In this case, using only 'grade' as feature, the default parameter value for C (inverse of regularization strength) and l1/lasso penalization, we get an F1-accuracy of 0.744.
Step5: A score of 0.744 looks not really high but still a lot better than random. Nevertheless, if we look into the confusion matrix and the ROC-curve we see a whole other picture. It turns out the algorithm mostly predicts everything in the not charged off group and therefore gets the majority right, because there are a lot more paid loans than charged off loans (18%). The area under the curve even gives only a score of 0.506 while random is 0.5. The prediction with logistic regression and only feature grade is therefore only as good as random.
Step6: We can now include all the features we selected (24) and see if the prediction will be better. Because we use regularization, the effect of not useful features will be downgraded automatically. This leads to a slightly better F1-score of 0.751. Also the confusion matrix and the ROC-curve/AUC-score are a little better. Although still not great with an AUC score of 0.515. The top-5 features most used by the algorithm are
Step7: We can also pick only 5 features and see if this works better. But it works exactly the same as only grade. So SelectKBest does not work as well with 5 features as using all features.
Step8: To see the statistical relevance of certain features, we can use the statsmodels package. We first use it with the 5 features selected by SelectKBest. We see there that only term, int_rate and installment are relevant. The confidence intervals of all are small, but the coefficients of all are also very close to 0, so they do not seem to have a huge influence.
Subsequently we do the same for the 5 features with the highest coefficients in the regularized logistic regression that uses all features. Of these all features seem useful, except for 'sub_grade'. The coefficients are slightly higher and confidence intervals are small. Although the conclusions are contradictory. Funded_amnt and funded_amnt_inv have the highest coefficients. These two values should be roughly the same, but have a contradictory relation with the target value charged_off. This makes no sense and gives the idea that the algorithm is still pretty random.
Step9: Another way to possibly increase performance is to tune the C (penalization) parameter. We will do this with the GridSearchCV function of sklearn. The best performing C parameter, although really close to the default, is C=1, giving an accuracy of 0.752. (Note that running the grid search takes a long time.)
Step10: Random Forest
To improve accuracy levels we could use a more complicated algorithm that scores well in a lot of cases, namely random forest. This algorithm makes various decision trees from subsets of the samples and uses at each split only a fraction of the features to prevent overfitting. The random forest algorithm is known to be not very sensitive to the values of its parameters
Step11: Trying the algorithm with all the features (24) leads to a slightly higher F1-score of 0.750. But logistic regression with all features was a fraction better than that. Also the confusion matrix and AUC is comparable but slightly worse than the logistic regression algorithm with all features. The random forest classifier does select a different top-5 features, namely 'dti', 'revol_bal', 'revol_util', 'annual_inc' and 'int_rate'.
Step12: Test set
To test the accuracies of our algorithms we first have to do the same transformations on the test set as we did on the training set. So we will transform the categorical features to numerical and replace nan/inf/-inf with 0. Also, for the logistic regression algorithm we normalized and scaled the training set and saved these transformations, so we can do the exact same transformation on the test set.
Step13: logistic regression
For logistic regression we will test both the 'only grade' algorithm (baseline) and the best performing algorithm (C=1, all features with regularization). We find practically the same F-scores/confusion matrices/ROC-curves/AUC-scores as for the training set. Therefore the cross-validation scheme used on the training set gives reliable accuracy measurements. It's clear that the predictive value of the algorithm increases slightly with more features, but it's basically predicting that all loans get fully paid and therefore the accuracy scores are practically random.
Step14: Try to see if top 25% and bottom 25% are ok (or 10%). Can we at least avoid bad loans?
Step15: Random Forest
Also with the random forest algorithm, for both only grade and all features, we find the same accuracy measurements as measured with cross-validation on the training set. Therefore the logistic regression algorithm with all features still performs the best, although it does not perform very well.
Step16: Predict grade | Python Code:
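# Imports used by the cells below, gathered here since no import cell appears
# earlier in this notebook; every name listed is used later in the analysis.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import ttest_ind
from sklearn import preprocessing
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import roc_curve, auc
%matplotlib inline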
loans = pd.read_csv('../data/loan.csv')
closed_loans = loans[loans['loan_status'].isin(['Fully Paid', 'Charged Off'])]
print(closed_loans.shape)
round(sum(closed_loans['loan_status']=='Charged Off')/len(closed_loans['loan_status'])*100)
Explanation: Prediction
For investors it's interesting to know which characteristics of a loan are predictive of a loan ending in charged off. Lending Club has its own algorithms that it uses beforehand to predict which loans are riskier and gives these a grade (A-F). This correlates well with the probability of charged off, as we saw in the exploration of the dataset. The interest rates should reflect the risk (higher interest with more risk) to make the riskier loans still attractive to invest in. Although grade and interest correlate well, it's not a perfect correlation.
We will here use the loans that went to full term to build classifiers that can classify loans into charged off and fully paid. The accuracy measure used is 'f1_weighted' of sklearn. This score can be interpreted as a weighted average of the precision and recall. Also confusion matrices and ROC curves will be used for analysis. Grade is used as baseline prediction for charged off/fully paid. We will look for features that add extra predictive value on top of the grade feature and see if this gives us any insight.
Select loans and features
We selected the loans here that went to full term and added a label indicating whether they were charged off or not. We excluded the one loan that was a joint application. The number of loans left is 255,719, and the percentage of charged_off loans is 18%.
End of explanation
include = ['term', 'int_rate', 'installment', 'grade', 'sub_grade', 'emp_length', 'home_ownership',
'annual_inc', 'purpose', 'zip_code', 'addr_state', 'delinq_2yrs', 'earliest_cr_line', 'inq_last_6mths',
'mths_since_last_delinq', 'mths_since_last_record', 'open_acc', 'pub_rec', 'revol_bal', 'revol_util', 'total_acc',
'mths_since_last_major_derog', 'acc_now_delinq', 'loan_amnt', 'open_il_6m', 'open_il_12m',
'open_il_24m', 'mths_since_rcnt_il', 'total_bal_il', 'dti', 'open_acc_6m', 'tot_cur_bal',
'il_util', 'open_rv_12m', 'open_rv_24m', 'max_bal_bc', 'all_util', 'total_rev_hi_lim', 'inq_fi', 'total_cu_tl',
'inq_last_12m', 'issue_d', 'loan_status']
exclude = ['funded_amnt', 'funded_amnt_inv', 'verfication_status', 'total_pymnt_inv', 'total_rec_prncp', 'total_rec_int', 'total_rec_late_fee',
'recoveries', 'collection_recovery_fee', 'last_pymnt_d', 'last_credit_pull_d', 'collections_12_mths_ex_med',
'initial_list_status', 'id', 'member_id', 'emp_title', 'pymnt_plan', 'url', 'desc', 'title',
'out_prncp', 'out_prncp_inv', 'total_pymnt', 'last_pymnt_amnt', 'next_pymnt_d', 'policy_code',
'application_type', 'annual_inc_joint', 'dti_joint', 'verification_status_joint', 'tot_coll_amt',
]
# exclude the one joint application
closed_loans = closed_loans[closed_loans['application_type'] == 'INDIVIDUAL']
# make id index
closed_loans.index = closed_loans.id
# include only the features above
closed_loans = closed_loans[include]
# exclude features with more than 5% missing values
columns_not_missing = (closed_loans.isnull().apply(sum, 0) / len(closed_loans)) < 0.1
closed_loans = closed_loans.loc[:,columns_not_missing[columns_not_missing].index]
# delete rows with NANs
print(1 - closed_loans.dropna().shape[0] / closed_loans.shape[0]) # ratio deleted rows
closed_loans = closed_loans.dropna()
# calculate nr of days between earliest creditline and issue date of the loan
# delete the two original features
closed_loans['earliest_cr_line'] = pd.to_datetime(closed_loans['earliest_cr_line'])
closed_loans['issue_d'] = pd.to_datetime(closed_loans['issue_d'])
closed_loans['days_since_first_credit_line'] = closed_loans['issue_d'] - closed_loans['earliest_cr_line']
closed_loans['days_since_first_credit_line'] = closed_loans['days_since_first_credit_line'] / np.timedelta64(1, 'D')
closed_loans = closed_loans.drop(['earliest_cr_line', 'issue_d'], axis=1)
# delete redundant features
#closed_loans = closed_loans.drop(['grade'], axis=1)
# round-up annual_inc and cut-off outliers annual_inc at 200.000
closed_loans['annual_inc'] = np.ceil(closed_loans['annual_inc'] / 1000)
closed_loans.loc[closed_loans['annual_inc'] > 200, 'annual_inc'] = 200
closed_loans.shape
closed_loans.head()
closed_loans.columns
plt.hist(closed_loans['annual_inc'], bins=100)
Explanation: Select features
We selected features that can be included for the prediction. For this we left out features that are not known at the beginning, like 'total payment', because these are not useful features to help new investors. Non-predictive features like 'id' and features that have all the same values are also excluded, as are all features to do with 'joint' loans, since we do not have joint loans. We did add loan_status and charged_off for the prediction. Furthermore, features that were missing in more than 5% of the loans were excluded, leaving 24 features (excluding the targets loan_status and charged_off).
End of explanation
X_train, X_test, y_train, y_test = train_test_split(closed_loans, closed_loans['loan_status'],
                                                     test_size=0.3, random_state=123,
                                                     stratify=closed_loans['loan_status'])  # stratify on loan_status, as described in the text
X_train = X_train.drop('loan_status', axis=1)
X_test = X_test.drop('loan_status', axis=1)
Explanation: Split data
We keep 30% of the data separate for now so we can later use this to reliably test the performance of the classifier. The split is stratified by 'loan_status' in order to equally divide old loans over the split (old loans have a higher 'charged_off' probability). The classes to predict are in the variable 'charged_off'.
End of explanation
# features that are not float or int, so not to be converted:
# ordered:
# sub_grade, emp_length, zip_code, term
# unordered:
# home_ownership, purpose, addr_state (ordered geographically)
# term
X_train['term'] = X_train['term'].apply(lambda x: int(x.split(' ')[1]))
# grade
loans['grade'] = loans['grade'].astype('category')
grade_dict = {'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5, 'F': 6, 'G': 7}
X_train['grade'] = X_train['grade'].apply(lambda x: grade_dict[x])
# emp_length
emp_length_dict = {'n/a':0,
'< 1 year':0,
'1 year':1,
'2 years':2,
'3 years':3,
'4 years':4,
'5 years':5,
'6 years':6,
'7 years':7,
'8 years':8,
'9 years':9,
'10+ years':10}
X_train['emp_length'] = X_train['emp_length'].apply(lambda x: emp_length_dict[x])
# zipcode
X_train['zip_code'] = X_train['zip_code'].apply(lambda x: int(x[0:3]))
# subgrade
X_train['sub_grade'] = X_train['grade'] + X_train['sub_grade'].apply(lambda x: float(list(x)[1])/10)
# house
house_dict = {'NONE': 0, 'OTHER': 0, 'ANY': 0, 'RENT': 1, 'MORTGAGE': 2, 'OWN': 3}
X_train['home_ownership'] = X_train['home_ownership'].apply(lambda x: house_dict[x])
# purpose
purpose_dict = {'other': 0, 'small_business': 1, 'renewable_energy': 2, 'home_improvement': 3,
'house': 4, 'educational': 5, 'medical': 6, 'moving': 7, 'car': 8,
'major_purchase': 9, 'wedding': 10, 'vacation': 11, 'credit_card': 12,
'debt_consolidation': 13}
X_train['purpose'] = X_train['purpose'].apply(lambda x: purpose_dict[x])
# states
state_dict = {'AK': 0, 'WA': 1, 'ID': 2, 'MT': 3, 'ND': 4, 'MN': 5,
'OR': 6, 'WY': 7, 'SD': 8, 'WI': 9, 'MI': 10, 'NY': 11,
'VT': 12, 'NH': 13, 'MA': 14, 'CT': 15, 'RI': 16, 'ME': 17,
'CA': 18, 'NV': 19, 'UT': 20, 'CO': 21, 'NE': 22, 'IA': 23,
'KS': 24, 'MO': 25, 'IL': 26, 'IN': 27, 'OH': 28, 'PA': 29,
'NJ': 30, 'KY': 31, 'WV': 32, 'VA': 33, 'DC': 34, 'MD': 35,
'DE': 36, 'AZ': 37, 'NM': 38, 'OK': 39, 'AR': 40, 'TN': 41,
'NC': 42, 'TX': 43, 'LA': 44, 'MS': 45, 'AL': 46, 'GA': 47,
'SC': 48, 'FL': 49, 'HI': 50}
X_train['addr_state'] = X_train['addr_state'].apply(lambda x: state_dict[x])
# make NA's, inf and -inf 0
X_train = X_train.fillna(0)
X_train = X_train.replace([np.inf, -np.inf], 0)
X_train.columns
# scaling and normalizing the features
X_train_scaled = preprocessing.scale(X_train)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
Explanation: Logistic regression
We will first start with the logistic regression classifier. This is a simple classifier that uses a sigmoidal curve to predict from the features to which class the sample belongs. It has one parameter to tune, namely the C-parameter: the inverse of the regularization strength, where smaller values specify stronger regularization. In sklearn the features we input into this algorithm have to be numerical, so we need to convert the categorical features to numeric. To do this, ordered categorical features will get adjacent numbers and unordered features will get an order as best as possible during conversion to numeric, for instance geographical. Also there cannot be nan/inf/-inf values, hence these will be made 0's. With this algorithm we will also have to scale and normalize the features.
Non-numeric features were converted as follows:
- earliest_cr_line: the date was converted to a timestamp number
- grade/sub_grade: order of the letters was kept
- emp_length: nr of years
- zipcode: numbers kept of zipcode (geographical order)
- term: in months
- home_ownership: from none to rent to mortgage to owned
- purpose: from purposes that might make money to purposes that only cost money
- addr_state: ordered geographically from west to east, top to bottom (https://theusa.nl/staten/)
End of explanation
clf = LogisticRegression(penalty='l1')
scores = cross_val_score(clf, X_train_scaled.loc[:,['grade']], y_train, cv=10, scoring='f1_weighted')
print(scores)
print(np.mean(scores))
Explanation: After the categorical features are converted to numeric and normalized/scaled, we will first check what the accuracy is when only using the feature 'grade' (A-F) to predict 'charged off' (True/False). This is the classification Lending Club gave the loans: the closer to F, the higher the chance the loan will end in 'charged off'. For the accuracy estimation we will use 'F1-weighted'. This stands for F1 = 2 * (precision * recall) / (precision + recall). In this way both precision and recall are important for the accuracy. Precision is the number of correct positive results divided by the number of all positive results, and recall is the number of correct positive results divided by the number of positive results that should have been returned. The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst at 0. In this case, using only 'grade' as feature, the default parameter value for C (inverse of regularization strength) and l1/lasso penalization, we get an F1-accuracy of 0.744.
End of explanation
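# A toy illustration (not part of the original analysis) of how the weighted
# F1 score behaves with an imbalanced outcome: always predicting the majority
# class already yields a deceptively high score, close to the 0.744 above.
from sklearn.metrics import f1_score
y_true_toy = ['Fully Paid'] * 82 + ['Charged Off'] * 18   # roughly the 18% charge-off rate
y_pred_toy = ['Fully Paid'] * 100                          # predict the majority class only
print(f1_score(y_true_toy, y_pred_toy, average='weighted'))  # ~0.74 despite no real skill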
from sklearn.model_selection import cross_val_predict
from pandas_confusion import ConfusionMatrix
prediction = cross_val_predict(clf, X_train_scaled.loc[:,['grade']], y_train, cv=10)
confusion_matrix = ConfusionMatrix(y_train, prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
y_score = cross_val_predict(clf, X_train_scaled.loc[:,['grade']], y_train, cv=10, method='predict_proba')
fpr, tpr, thresholds = roc_curve(y_train, y_score[:,0], pos_label='Charged Off')
print(auc(fpr, tpr))
plt.plot(fpr, tpr)
Explanation: A score of 0.744 looks not really high but still a lot better than random. Nevertheless, if we look into the confusion matrix and the ROC-curve we see a whole other picture. It turns out the algorithm mostly predicts everything in the not charged off group and therefore gets the majority right, because there are a lot more paid loans than charged off loans (18%). The area under the curve even gives only a score of 0.506 while random is 0.5. The prediction with logistic regression and only feature grade is therefore only as good as random.
End of explanation
clf = LogisticRegression(penalty='l1')
scores = cross_val_score(clf, X_train_scaled, y_train, cv=10, scoring='f1_weighted')
print(scores)
print(np.mean(scores))
prediction = cross_val_predict(clf, X_train_scaled, y_train, cv=10)
confusion_matrix = ConfusionMatrix(y_train, prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
y_score = cross_val_predict(clf, X_train_scaled, y_train, cv=10, method='predict_proba')
fpr, tpr, thresholds = roc_curve(y_train, y_score[:,0], pos_label='Charged Off')
print(auc(fpr, tpr))
plt.plot(fpr, tpr)
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(X_train_scaled, y_train)
coefs = clf.coef_
# find index of top 5 highest coefficients, aka most used features for prediction
positions = abs(coefs[0]).argsort()[-5:][::-1]
print(X_train_scaled.columns[positions])
print(coefs[0][positions])
Explanation: We can now include all the features we selected (24) and see if the prediction will be better. Because we use regularization, the effect of not useful features will be downgraded automatically. This leads to a slightly better F1-score of 0.751. Also the confusion matrix and the ROC-curve/AUC-score are a little better. Although still not great with an AUC score of 0.515. The top-5 features most used by the algorithm are: 'funded_amnt_inv', 'int_rate', 'sub_grade', 'funded_amnt' and 'annual_inc'. Not even grade itself.
End of explanation
new_X = (SelectKBest(mutual_info_classif, k=5)
.fit_transform(X_train_scaled, y_train))
print(new_X[0]) # term, int_rate, installement, grade, sub_grade
print(X_train_scaled.head())
new_X = pd.DataFrame(new_X, columns=['term', 'int_rate', 'installment', 'grade', 'sub_grade'])
clf = LogisticRegression(penalty='l1')
scores = cross_val_score(clf, new_X, y_train, cv=10, scoring='f1_weighted')
print(scores)
print(np.mean(scores))
prediction = cross_val_predict(clf, new_X, y_train, cv=10)
confusion_matrix = ConfusionMatrix(y_train, prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
y_score = cross_val_predict(clf, new_X, y_train, cv=10, method='predict_proba')
fpr, tpr, thresholds = roc_curve(y_train, y_score[:,0], pos_label='Charged Off')
print(auc(fpr, tpr))
plt.plot(fpr, tpr)
Explanation: We can also pick only 5 features and see if this works better. But it works exactly the same as only grade. So SelectKBest does not work as well with 5 features as using all features.
End of explanation
y_train == 'Charged Off'
import statsmodels.api as sm
print(new_X.columns)
logit = sm.Logit(y_train == 'Charged Off', np.array(new_X))
result = logit.fit()
print(result.summary())
logit = sm.Logit(y_train == 'Charged Off', np.array(
X_train_scaled.loc[:,['int_rate', 'annual_inc', 'sub_grade', 'term', 'dti']]))
result = logit.fit()
print(result.summary())
Explanation: To see the statistical relevance of certain features, we can use the statsmodels package. We first use it with the 5 features selected by SelectKBest. We see there that only term, int_rate and installment are relevant. The confidence intervals of all are small, but the coefficients of all are also very close to 0, so they do not seem to have a huge influence.
Subsequently we do the same for the 5 features with the highest coefficients in the regularized logistic regression that uses all features. Of these all features seem useful, except for 'sub_grade'. The coefficients are slightly higher and confidence intervals are small. Although the conclusions are contradictory. Funded_amnt and funded_amnt_inv have the highest coefficients. These two values should be roughly the same, but have a contradictory relation with the target value charged_off. This makes no sense and gives the idea that the algorithm is still pretty random.
End of explanation
from sklearn.model_selection import GridSearchCV
dict_Cs = {'C': [0.001, 0.1, 1, 10, 100]}
clf = GridSearchCV(LogisticRegression(penalty='l1'), dict_Cs, 'f1_weighted', cv=10)
clf.fit(X_train_scaled, y_train)
print(clf.best_params_)
print(clf.best_score_)
clf = LogisticRegression(penalty='l1', C=10)
scores = cross_val_score(clf, X_train_scaled, y_train, cv=10, scoring='f1_weighted')
print(scores)
print(np.mean(scores))
prediction = cross_val_predict(clf, X_train_scaled, y_train, cv=10)
confusion_matrix = ConfusionMatrix(y_train, prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
y_score = cross_val_predict(clf, X_train_scaled, y_train, cv=10, method='predict_proba')
fpr, tpr, thresholds = roc_curve(y_train, y_score[:,0], pos_label='Charged Off')
print(auc(fpr, tpr))
plt.plot(fpr, tpr)
Explanation: Another way to possibly increase performance is to tune the C (penalization) parameter. We will do this with the GridSearchCV function of sklearn. The best performing C parameter, although really close to the default, is C=1, giving an accuracy of 0.752. (Note that running the grid search takes a long time.)
End of explanation
clf = RandomForestClassifier(n_estimators=100)
scores = cross_val_score(clf, X_train.loc[:,['grade']], y_train, cv=10, scoring='f1_weighted')
print(scores)
print(np.mean(scores))
prediction = cross_val_predict(clf, X_train.loc[:,['grade']], y_train, cv=10)
confusion_matrix = ConfusionMatrix(y_train, prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
y_score = cross_val_predict(clf, X_train.loc[:,['grade']], y_train, cv=10, method='predict_proba')
fpr, tpr, thresholds = roc_curve(y_train, y_score[:,0], pos_label='Charged Off')
print(auc(fpr, tpr))
plt.plot(fpr, tpr)
Explanation: Random Forest
To improve accuracy levels we could use a more complicated algorithm that scores well in a lot of cases, namely random forest. This algorithm makes various decision trees from subsets of the samples and uses at each split only a fraction of the features to prevent overfitting. The random forest algorithm is known to be not very sensitive to the values of its parameters: the number of features used at each split and the number of trees in the forest. Nevertheless, the default of sklearn is so low that we will raise the number of trees to 100. The algorithm has feature selection already builtin (at each split) and scaling/normalization is also not necessary.
We will first run the algorithm with only grade. This does not make that much sense for random forest, since it builds trees and you cannot build a tree from only one feature. Nevertheless, this will be our starting point. The F1 score is 0.739, hence slightly lower than logistic regression. As expected, the confusion matrix is dramatic: the algorithm turns out to just predict everything as fully paid, and that's why the AUC-score is exactly random.
End of explanation
clf = RandomForestClassifier(n_estimators=100)
scores = cross_val_score(clf, X_train, y_train, cv=10, scoring='f1_weighted')
print(scores)
print(np.mean(scores))
prediction = cross_val_predict(clf, X_train, y_train, cv=10)
confusion_matrix = ConfusionMatrix(y_train, prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
y_score = cross_val_predict(clf, X_train, y_train, cv=10, method='predict_proba')
fpr, tpr, thresholds = roc_curve(y_train, y_score[:,0], pos_label='Charged Off')
print(auc(fpr, tpr))
plt.plot(fpr, tpr)
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
feat_imp = clf.feature_importances_
sns.barplot(x=X_train.columns, y=feat_imp, color='turquoise')
plt.xticks(rotation=90)
positions = abs(feat_imp).argsort()[-5:][::-1]
print(X_train.columns[positions])
print(feat_imp[positions])
Explanation: Trying the algorithm with all the features (24) leads to a slightly higher F1-score of 0.750. But logistic regression with all features was a fraction better than that. Also the confusion matrix and AUC is comparable but slightly worse than the logistic regression algorithm with all features. The random forest classifier does select a different top-5 features, namely 'dti', 'revol_bal', 'revol_util', 'annual_inc' and 'int_rate'.
End of explanation
# term
X_test['term'] = X_test['term'].apply(lambda x: int(x.split(' ')[1]))
# grade
X_test['grade'] = X_test['grade'].apply(lambda x: grade_dict[x])
# emp_length
X_test['emp_length'] = X_test['emp_length'].apply(lambda x: emp_length_dict[x])
# zipcode
X_test['zip_code'] = X_test['zip_code'].apply(lambda x: int(x[0:3]))
# subgrade
X_test['sub_grade'] = X_test['grade'] + X_test['sub_grade'].apply(lambda x: float(list(x)[1])/10)
# house
X_test['home_ownership'] = X_test['home_ownership'].apply(lambda x: house_dict[x])
# purpose
X_test['purpose'] = X_test['purpose'].apply(lambda x: purpose_dict[x])
# states
X_test['addr_state'] = X_test['addr_state'].apply(lambda x: state_dict[x])
# make NA's, inf and -inf 0
X_test = X_test.fillna(0)
X_test = X_test.replace([np.inf, -np.inf], 0)
X_test_scaled = scaler.transform(X_test)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
Explanation: Test set
To test the accuracies of our algorithms we first have to do the same transformations on the test set as we did on the training set. So we will transform the categorical features to numerical and replace nan/inf/-inf with 0. Also, for the logistic regression algorithm we normalized and scaled the training set and saved these transformations, so we can do the exact same transformation on the test set.
End of explanation
from sklearn.metrics import f1_score
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(X_train_scaled.loc[:,['grade']], y_train)
prediction = clf.predict(X_test_scaled.loc[:,['grade']])
print(f1_score(y_test, prediction, average='weighted'))
confusion_matrix = ConfusionMatrix(y_test, prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
y_score = clf.predict_proba(X_test_scaled.loc[:,['grade']])
print(clf.classes_)
fpr, tpr, thresholds = roc_curve(y_test, y_score[:,0], pos_label='Charged Off')
print(auc(fpr, tpr))
plt.plot(fpr, tpr)
clf = LogisticRegression(penalty='l1', C=10)
clf.fit(X_train_scaled, y_train)
prediction = clf.predict(X_test_scaled)
print(f1_score(y_test, prediction, average='weighted'))
confusion_matrix = ConfusionMatrix(y_test, prediction)
confusion_matrix.print_stats()
confusion_matrix.plot()
y_score = clf.predict_proba(X_test_scaled)
print(clf.classes_)
fpr, tpr, thresholds = roc_curve(y_test, y_score[:,0], pos_label='Charged Off')
print(auc(fpr, tpr))
plt.plot(fpr, tpr)
Explanation: logistic regression
For logistic regression we will test both the 'only grade' algorithm (baseline) and the best performing algorithm (C=1, all features with regularization). We find practically the same F-scores/confusion matrices/ROC-curves/AUC-scores as for the training set. Therefore the cross-validation scheme used on the training set gives reliable accuracy measurements. It's clear that the predictive value of the algorithm increases slightly with more features, but it's basically predicting that all loans get fully paid and therefore the accuracy scores are practically random.
End of explanation
closed_loans2 = closed_loans.drop(['loan_status'], axis=1)
# term
closed_loans2['term'] = closed_loans2['term'].apply(lambda x: int(x.split(' ')[1]))
# grade
closed_loans2['grade'] = closed_loans2['grade'].apply(lambda x: grade_dict[x])
# emp_length
closed_loans2['emp_length'] = closed_loans2['emp_length'].apply(lambda x: emp_length_dict[x])
# zipcode
closed_loans2['zip_code'] = closed_loans2['zip_code'].apply(lambda x: int(x[0:3]))
# subgrade
closed_loans2['sub_grade'] = closed_loans2['grade'] + closed_loans2['sub_grade'].apply(lambda x: float(list(x)[1])/10)
# house
closed_loans2['home_ownership'] = closed_loans2['home_ownership'].apply(lambda x: house_dict[x])
# purpose
closed_loans2['purpose'] = closed_loans2['purpose'].apply(lambda x: purpose_dict[x])
# states
closed_loans2['addr_state'] = closed_loans2['addr_state'].apply(lambda x: state_dict[x])
# make NA's, inf and -inf 0
closed_loans2 = closed_loans2.fillna(0)
closed_loans2 = closed_loans2.replace([np.inf, -np.inf], 0)
closed_loans_scaled = scaler.transform(closed_loans2)
closed_loans_scaled = pd.DataFrame(closed_loans_scaled, columns=closed_loans2.columns)
closed_loans_scaled.index = closed_loans2.index
closed_loans_scaled
loans['roi'] = ((loans['total_rec_int'] + loans['total_rec_prncp']
+ loans['total_rec_late_fee'] + loans['recoveries']) / loans['funded_amnt']) -1
prof_loans = loans[loans['id'].isin(closed_loans['loan_status'][y_score[:,1] > 0.9].index.tolist())]
roi = loans.groupby('grade')['roi'].mean()
prof_loans = loans[loans['id'].isin(closed_loans.index.tolist())]
roi = prof_loans.groupby('grade')['roi'].mean()
print(roi)
print(prof_loans['roi'].mean())
prof_loans['grade'] = prof_loans['grade'].astype('category', ordered=True)
sns.barplot(data=roi.reset_index(), x='grade', y='roi', color='gray')
plt.show()
roi = prof_loans.groupby('loan_status')['roi'].mean()
sns.barplot(data=roi.reset_index(), x='loan_status', y='roi')
plt.show()
roi = prof_loans.groupby(['grade', 'loan_status'])['roi'].mean()
sns.barplot(data=roi.reset_index(), x='roi', y='grade', hue='loan_status', orient='h')
plt.show()
sns.countplot(data=prof_loans, x='grade', hue='loan_status')
plt.show()
prof_loans
closed_loans.index.tolist()
y_score = clf.predict_proba(closed_loans_scaled)
prediction = clf.predict(closed_loans_scaled)
confusion_matrix = ConfusionMatrix(np.array(closed_loans['loan_status'][y_score[:,1] > 0.9]), prediction[y_score[:,1] > 0.9])
confusion_matrix.print_stats()
confusion_matrix.plot()
np.array(closed_loans['loan_status'][y_score[:,1] > 0.9])
prediction[y_score[:,1] > 0.9]
y_total[y_score[:,1] > 0.9]
prediction[y_score[:,1] > 0.9]
X_total = pd.concat([X_train_scaled, X_test_scaled])
y_total = pd.concat([y_train, y_test])
y_score = clf.predict_proba(X_total)
prediction = clf.predict(X_total)
confusion_matrix = ConfusionMatrix(y_total[y_score[:,1] > 0.9], prediction[y_score[:,1] > 0.9])
confusion_matrix.print_stats()
confusion_matrix.plot()
diff_mean = X_total[y_score[:,1] > 0.9].mean() - X_total.mean()
abs(diff_mean).sort_values(ascending=False)
for col in X_total.columns:
result = ttest_ind(X_total[y_score[:,1] > 0.9][col], X_total[col])
print(col, ':', result)
#X_total[y_score[:,1] > 0.9].mean() - X_total.mean()
X_total[y_score[:,1] > 0.9]['term']
X_total.mean()
X_total #most interesting features: 'int_rate', 'annual_inc', 'sub_grade', 'term', 'dti'
# vergelijken predicted > 0.9 vs. all?
sum(y_score[:,1] > 0.5) / len(y_score[:,1] )
max(y_score[~prediction, 1])
diff_thres = y_score[:,1] > 0.18
print(f1_score(y_test, diff_thres, average='weighted'))
confusion_matrix = ConfusionMatrix(y_test, diff_thres)
confusion_matrix.print_stats()
confusion_matrix.plot()
Explanation: Try to see if top 25% and bottom 25% are ok (or 10%). Can we at least avoid bad loans?
End of explanation
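# A hedged sketch of the question above: rank test-set loans by predicted
# charge-off probability and compare the realised charge-off rate in the
# least- and most-risky quartiles. It assumes `clf` is still the logistic
# regression fitted on all features above, with X_test_scaled / y_test as defined earlier.
charged_off_prob = clf.predict_proba(X_test_scaled)[:, list(clf.classes_).index('Charged Off')]
low_risk = charged_off_prob <= np.percentile(charged_off_prob, 25)
high_risk = charged_off_prob >= np.percentile(charged_off_prob, 75)
print('charge-off rate in least-risky quartile:', np.mean(y_test[low_risk] == 'Charged Off'))
print('charge-off rate in most-risky quartile :', np.mean(y_test[high_risk] == 'Charged Off'))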
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train.loc[:,['grade']], y_train)
prediction = clf.predict(X_test.loc[:,['grade']])
print(f1_score(y_test, prediction, average='weighted'))
confusion_matrix = ConfusionMatrix(y_test, prediction)
print(confusion_matrix)
confusion_matrix.plot()
fpr, tpr, thresholds = roc_curve(y_test, prediction, pos_label=True)
print(auc(fpr, tpr))
plt.plot(fpr, tpr)
clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
prediction = clf.predict(X_test)
print(f1_score(y_test, prediction, average='weighted'))
confusion_matrix = ConfusionMatrix(y_test, prediction)
print(confusion_matrix)
confusion_matrix.plot()
y_score = clf.predict_proba(X_test)
print(clf.classes_)
fpr, tpr, thresholds = roc_curve(y_test, y_score[:,1], pos_label=True)
print(auc(fpr, tpr))
plt.plot(fpr, tpr)
Explanation: Random Forest
With the random forest algorithm, for both the grade-only and the all-features model, we find the same accuracy measurements as obtained with cross-validation on the training set. Therefore the logistic regression algorithm with all features still performs best, although it does not perform very well.
End of explanation
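A short, hedged sketch of how the same comparison could be run with cross-validation on the training set (assuming the classifier imports from earlier in the notebook; cross_val_score lives in sklearn.model_selection in newer scikit-learn versions):
from sklearn.model_selection import cross_val_score  # assumption: scikit-learn >= 0.18
for name, model in [('logistic regression', LogisticRegression()),
                    ('random forest', RandomForestClassifier(n_estimators=100))]:
    scores = cross_val_score(model, X_train_scaled, y_train, cv=5, scoring='f1_weighted')
    print(name, 'mean weighted F1:', scores.mean())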
X_train, X_test, y_train, y_test = train_test_split(closed_loans.iloc[:, 0:24],
closed_loans['grade'], test_size=0.3,
random_state=123, stratify=closed_loans['loan_status'])
X_train = X_train.drop(['grade', 'sub_grade', 'int_rate'], axis=1)
X_test = X_test.drop(['grade', 'sub_grade', 'int_rate'], axis=1)
# features that are not float or int, so not to be converted:
# date:
# earliest_cr_line
# ordered:
# emp_length, zip_code, term
# unordered:
# home_ownership, purpose, addr_state (ordered geographically)
# date
X_train['earliest_cr_line'] = pd.to_datetime(X_train['earliest_cr_line']).dt.strftime("%s")
X_train['earliest_cr_line'] = [0 if date=='NaT' else int(date) for date in X_train['earliest_cr_line']]
# term
X_train['term'] = X_train['term'].apply(lambda x: int(x.split(' ')[1]))
# emp_length
emp_length_dict = {'n/a':0,
'< 1 year':0,
'1 year':1,
'2 years':2,
'3 years':3,
'4 years':4,
'5 years':5,
'6 years':6,
'7 years':7,
'8 years':8,
'9 years':9,
'10+ years':10}
X_train['emp_length'] = X_train['emp_length'].apply(lambda x: emp_length_dict[x])
# zipcode
X_train['zip_code'] = X_train['zip_code'].apply(lambda x: int(x[0:3]))
# house
house_dict = {'NONE': 0, 'OTHER': 0, 'ANY': 0, 'RENT': 1, 'MORTGAGE': 2, 'OWN': 3}
X_train['home_ownership'] = X_train['home_ownership'].apply(lambda x: house_dict[x])
# purpose
purpose_dict = {'other': 0, 'small_business': 1, 'renewable_energy': 2, 'home_improvement': 3,
'house': 4, 'educational': 5, 'medical': 6, 'moving': 7, 'car': 8,
'major_purchase': 9, 'wedding': 10, 'vacation': 11, 'credit_card': 12,
'debt_consolidation': 13}
X_train['purpose'] = X_train['purpose'].apply(lambda x: purpose_dict[x])
# states
state_dict = {'AK': 0, 'WA': 1, 'ID': 2, 'MT': 3, 'ND': 4, 'MN': 5,
'OR': 6, 'WY': 7, 'SD': 8, 'WI': 9, 'MI': 10, 'NY': 11,
'VT': 12, 'NH': 13, 'MA': 14, 'CT': 15, 'RI': 16, 'ME': 17,
'CA': 18, 'NV': 19, 'UT': 20, 'CO': 21, 'NE': 22, 'IA': 23,
'KS': 24, 'MO': 25, 'IL': 26, 'IN': 27, 'OH': 28, 'PA': 29,
'NJ': 30, 'KY': 31, 'WV': 32, 'VA': 33, 'DC': 34, 'MD': 35,
'DE': 36, 'AZ': 37, 'NM': 38, 'OK': 39, 'AR': 40, 'TN': 41,
'NC': 42, 'TX': 43, 'LA': 44, 'MS': 45, 'AL': 46, 'GA': 47,
'SC': 48, 'FL': 49, 'HI': 50}
X_train['addr_state'] = X_train['addr_state'].apply(lambda x: state_dict[x])
# make NA's, inf and -inf 0
X_train = X_train.fillna(0)
X_train = X_train.replace([np.inf, -np.inf], 0)
# date
X_test['earliest_cr_line'] = pd.to_datetime(X_test['earliest_cr_line']).dt.strftime("%s")
X_test['earliest_cr_line'] = [0 if date=='NaT' else int(date) for date in X_test['earliest_cr_line']]
# term
X_test['term'] = X_test['term'].apply(lambda x: int(x.split(' ')[1]))
# emp_length
X_test['emp_length'] = X_test['emp_length'].apply(lambda x: emp_length_dict[x])
# zipcode
X_test['zip_code'] = X_test['zip_code'].apply(lambda x: int(x[0:3]))
# house
X_test['home_ownership'] = X_test['home_ownership'].apply(lambda x: house_dict[x])
# purpose
X_test['purpose'] = X_test['purpose'].apply(lambda x: purpose_dict[x])
# states
X_test['addr_state'] = X_test['addr_state'].apply(lambda x: state_dict[x])
# make NA's, inf and -inf 0
X_test = X_test.fillna(0)
X_test = X_test.replace([np.inf, -np.inf], 0)
from sklearn import preprocessing
X_train_scaled = preprocessing.scale(X_train)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_scaled = pd.DataFrame(X_train_scaled, columns=X_train.columns)
X_test_scaled = scaler.transform(X_test)
X_test_scaled = pd.DataFrame(X_test_scaled, columns=X_test.columns)
from sklearn.preprocessing import LabelBinarizer
from sklearn.multiclass import OneVsRestClassifier
lb = LabelBinarizer()
grades = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
lb.fit(grades)
y_train_2 = lb.transform(y_train)
clf = OneVsRestClassifier(LogisticRegression(penalty='l1'))
predict_y = clf.fit(X_train_scaled, y_train_2).predict(X_test_scaled)
predict_y = lb.inverse_transform(predict_y)
#print(accuracy_score(y_test, predict_y))
confusion_matrix = ConfusionMatrix(np.array(y_test, dtype='<U1'), predict_y)
confusion_matrix.plot()
confusion_matrix.print_stats()
# find index of top 5 highest coefficients, aka most used features for prediction
coefs = clf.coef_
positions = abs(coefs[0]).argsort()[-5:][::-1]
print(X_train_scaled.columns[positions])
print(coefs[0][positions])
clf = OneVsRestClassifier(RandomForestClassifier(n_estimators=100))
predict_y = clf.fit(X_train_scaled, y_train_2).predict(X_test_scaled)
predict_y = lb.inverse_transform(predict_y)
print(accuracy_score(y_test, predict_y))
confusion_matrix = ConfusionMatrix(np.array(y_test, dtype='<U1'), predict_y)
confusion_matrix.plot()
print(confusion_matrix)
confusion_matrix.print_stats()
features = []
for i,j in enumerate(grades):
print('\n',j)
feat_imp = clf.estimators_[i].feature_importances_
positions = abs(feat_imp).argsort()[-5:][::-1]
features.extend(list(X_train.columns[positions]))
print(X_train.columns[positions])
print(feat_imp[positions])
pd.Series(features).value_counts()
Explanation: Predict grade
End of explanation |
7,124 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The basic imports and the variables we'll be using
Step1: Examples and tests
Step2: Sympy can be a little tricky because it caches things, which means that the first implementation of this code silently changed tensors in place, without meaning to. Let's just check that our variables haven't changed | Python Code:
from __future__ import division
import sympy
from sympy import *
from sympy import Rational as frac
import simpletensors
from simpletensors import Vector, TensorProduct, SymmetricTensorProduct, Tensor
init_printing()
var('vartheta, varphi')
var('nu, m, delta, c, t')
# These are related scalar functions of time
var('r, v, Omega', cls=Function)
r = r(t)
v = v(t)
Omega = Omega(t)
# These get redefined momentarily, but have to exist first
var('nHat, lambdaHat, ellHat', cls=Function)
# And now we define them as vector functions of time
nHat = Vector('nHat', r'\hat{n}', [cos(Omega*t),sin(Omega*t),0,])(t)
lambdaHat = Vector('lambdaHat', r'\hat{\lambda}', [-sin(Omega*t),cos(Omega*t),0,])(t)
ellHat = Vector('ellHat', r'\hat{\ell}', [0,0,1,])(t)
# These are the spin functions -- first, the individual components as regular sympy.Function objects; then the vectors themselves
var('S_n, S_lambda, S_ell', cls=Function)
var('Sigma_n, Sigma_lambda, Sigma_ell', cls=Function)
SigmaVec = Vector('SigmaVec', r'\vec{\Sigma}', [Sigma_n(t), Sigma_lambda(t), Sigma_ell(t)])(t)
SVec = Vector('S', r'\vec{S}', [S_n(t), S_lambda(t), S_ell(t)])(t)
Explanation: The basic imports and the variables we'll be using:
End of explanation
nHat
diff(nHat, t)
diff(lambdaHat, t)
diff(lambdaHat, t).components
diff(lambdaHat, t).subs(t,0).components
diff(lambdaHat, t, 2).components
diff(lambdaHat, t, 2).subs(t,0).components
diff(ellHat, t)
diff(nHat, t, 2)
diff(nHat,t, 3)
diff(nHat,t, 4)
diff(SigmaVec,t, 0)
SigmaVec.fdiff()
diff(SigmaVec,t, 1)
diff(SigmaVec,t, 2)
diff(SigmaVec,t, 2) | nHat
T1 = TensorProduct(SigmaVec, SigmaVec, ellHat, coefficient=1)
T2 = TensorProduct(SigmaVec, nHat, lambdaHat, coefficient=1)
tmp = Tensor(T1,T2)
display(T1, T2, tmp)
diff(tmp, t, 1)
T1+T2
T2*ellHat
ellHat*T2
T1.trace(0,1)
T2*ellHat
for k in range(1,4):
display((T2*ellHat).trace(0,k))
for k in range(1,4):
display((T2*ellHat).trace(0,k).subs(t,0))
T1.trace(0,1) * T2
Explanation: Examples and tests
End of explanation
display(T1, T2)
T3 = SymmetricTensorProduct(SigmaVec, SigmaVec, ellHat, coefficient=1)
display(T3)
T3.trace(0,1)
diff(T3, t, 1)
T3.symmetric
T3*ellHat
ellHat*T3
T1+T3
T1 = SymmetricTensorProduct(SigmaVec, SigmaVec, ellHat, nHat, coefficient=1)
display(T1)
display(T1.trace())
T1*T2
type(_)
import simpletensors
isinstance(__, simpletensors.TensorProductFunction)
SymmetricTensorProduct(nHat, nHat, nHat).trace()
diff(T1.trace(), t, 1)
diff(T1.trace(), t, 2)
diff(T1.trace(), t, 2).subs(t,0)
Explanation: Sympy can be a little tricky because it caches things, which means that the first implementation of this code silently changed tensors in place, without meaning to. Let's just check that our variables haven't changed:
End of explanation |
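A minimal sanity check, assuming the Vector objects expose the components attribute used above: re-evaluate the basis vectors at t = 0 and confirm they still match their original definitions.
# hedged check -- cos(0) = 1 and sin(0) = 0, so nothing should have been mutated in place
assert [c.subs(t, 0) for c in nHat.components] == [1, 0, 0]
assert [c.subs(t, 0) for c in lambdaHat.components] == [0, 1, 0]
assert [c.subs(t, 0) for c in ellHat.components] == [0, 0, 1]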
7,125 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NLTK overview
Step2: Context-Free Grammars (CFG)
Noam Chomsky defined a hierarchy of languages and grammars that is commonly used in linguistics and computer science to classify formal languages and grammars. When we want to model linguistic phenomena of natural languages, the most suitable type of grammar is the one known as Type 2, or Context-Free Grammars (CFG).
We will define a grammar simply as a set of rewriting or transformation rules. Without going into much detail about the restrictions that Type 2 grammar rules must satisfy, it is important to keep the following in mind
Step3: Note how we have defined our grammar
Step4: With the grammar1 object created, we build the parser with the nltk.ChartParser method.
Step5: Once our parser has been created, we can use it. The .parse method is available to syntactically parse any sentence given as a list of words. Our grammar is quite limited, but we can use it to parse the sentence I shot an elephant in my pajamas. If we print the result of the method, we get the parse tree.
Step6: In case you had not noticed, the sentence I shot an elephant in my pajamas is ambiguous in English
Step7: Remember that to print the parse tree you have to iterate (with a for loop, for example) over the object returned by the parse() method and use the print function.
Step9: Next I modify my grammar g1 slightly to include a new part of speech, PRO, and add some new vocabulary. Compare both examples
Step10: IMPORTANT NOTE on errors and the behaviour of parse()
When a parser recognizes all of the vocabulary in an input sentence but is unable to parse it, the parse() method does not raise an error but returns an empty object. In this case, the sentence is ungrammatical according to our grammar.
Step11: However, when the parser does not recognize all of the vocabulary (because we use a word that is not defined in the lexicon), the parse() method fails and shows a ValueError message like the following one. Look only at the last line
Step13: Keep this in mind when debugging your code.
Grammars for Spanish
Having seen a first CFG example, let's switch languages and build a parser for simple Spanish sentences. The procedure is the same: we define our grammar in Chomsky's format in a separate file or in a text string, we parse it with the nltk.CFG.fromstring() method, and we create a parser with the nltk.ChartParser() method
Step15: Let's test whether it can parse different sentences in Spanish. To make it more fun, we store several sentences separated by a newline (the \n metacharacter) in a list of strings called oraciones. We iterate over those sentences, print them, then split them into lists of words (with the .split() method) and print the result of parsing them with our parser.
Step18: Let's extend the coverage of our grammar so that it can recognize and parse coordinated sentences. To do so, we modify the rule that defines a sentence by adding a recursive definition: a sentence is a sentence (O) followed by a conjunction (Conj) and another sentence (O). Finally, we also add some new vocabulary
Step21: Remember that a grammar is not a program
Step24: Can we extend g4 so that it recognizes subordinate clauses introduced by verbs of speech or thought? I mean sentences like | Python Code:
from __future__ import print_function
from __future__ import division
import nltk
Explanation: NLTK overview: syntactic parsing
This overview corresponds to chapter 8 of the NLTK Book, Analyzing Sentence Structure. Reading the chapter is highly recommended.
In this overview we will review how to create grammars with NLTK and how to build tools that let us parse simple sentences syntactically.
To get started, we need to import the nltk module, which gives us access to all of its functionality:
End of explanation
g1 = """
S -> NP VP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
"""
Explanation: Context-Free Grammars (CFG)
Noam Chomsky defined a hierarchy of languages and grammars that is commonly used in linguistics and computer science to classify formal languages and grammars. When we want to model linguistic phenomena of natural languages, the most suitable type of grammar is the one known as Type 2, or Context-Free Grammars (CFG).
We will define a grammar simply as a set of rewriting or transformation rules. Without going into much detail about the restrictions that Type 2 grammar rules must satisfy, it is important to keep the following in mind:
Formal grammars work with two kinds of alphabets.
Non-terminal symbols are the intermediate components we use in the rules. Every non-terminal symbol has to be defined as a sequence of other symbols. In our case, the non-terminals will be the syntactic categories.
Terminal symbols are the final components recognized by the grammar. In our case, the terminals will be the words of the sentences we want to parse.
Every rule of a formal grammar has the form Symbol1 -> Symbol2, Symbol3... SymbolN and is read as: Symbol1 is defined as / consists of / is rewritten as the sequence formed by Symbol2, Symbol3, etc.
In context-free grammars, the part to the left of the -> arrow is always a single non-terminal symbol.
Generative Grammars in NLTK
To define our grammars in NLTK we can write them in a separate file or as a text string following the formalism of Chomsky's generative grammars. Let's define a simple grammar able to recognize the famous Marx Brothers line I shot an elephant in my pajamas, and store it as a text string in the variable g1.
End of explanation
grammar1 = nltk.CFG.fromstring(g1)
Explanation: Note how we have defined our grammar:
We enclosed everything between triple double quotes. Remember that this Python syntax lets us create strings that contain line breaks and span more than one line.
For the non-terminals we use the usual conventions for syntactic structures and word categories, and we write them in upper case. The labels are self-explanatory, even though they are in English.
Terminals are written between single quotes.
When a non-terminal can be defined in more than one way, we mark the disjunction with the vertical bar |.
The rules are read as follows: a sentence is defined as a noun phrase plus a verb phrase; a noun phrase is defined as a determiner and a noun, or a determiner, a noun and a prepositional phrase, or the word I, etc.
Starting from our grammar as a text string, we need to create a parser that we can use later. To do so, the string first has to be parsed with the nltk.CFG.fromstring() method.
End of explanation
analyzer = nltk.ChartParser(grammar1)
Explanation: With the grammar1 object created, we build the parser with the nltk.ChartParser method.
End of explanation
oracion = "I shot an elephant in my pajamas".split()
# store all the possible parse trees in trees
trees = analyzer.parse(oracion)
for tree in trees:
print(tree)
Explanation: Once our parser has been created, we can use it. The .parse method is available to syntactically parse any sentence given as a list of words. Our grammar is quite limited, but we can use it to parse the sentence I shot an elephant in my pajamas. If we print the result of the method, we get the parse tree.
End of explanation
print(analyzer.parse_one(oracion))
Explanation: In case you had not noticed, the sentence I shot an elephant in my pajamas is ambiguous in English: it is the classic example of PP attachment (knowing exactly which node a prepositional phrase modifies). There is a double interpretation for the prepositional phrase in my pajamas: at the moment of the shooting, who was wearing the pajamas? The elephant or me? Our grammar captures this ambiguity and can parse the sentence in two different ways, as shown in the previous cell.
If we are only interested in generating one of the possible parses, we can use the parse_one() method, as shown below.
End of explanation
print(analyzer.parse(oracion))
Explanation: Remember that to print the parse tree you have to iterate (with a for loop, for example) over the object returned by the parse() method and use the print function.
End of explanation
g1v2 = """
S -> NP VP
NP -> Det N | Det N PP | PRO
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
PRO -> 'I' | 'you'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
"""
grammar1v2 = nltk.CFG.fromstring(g1v2)
analyzer1v2 = nltk.ChartParser(grammar1v2)
# iterate over the structure returned by parse()
for tree in analyzer1v2.parse(oracion):
print(tree)
print("\n", "-------------------------------", "\n")
for tree in analyzer1v2.parse("you shot my elephant".split()):
print(tree)
Explanation: Next I modify my grammar g1 slightly to include a new part of speech, PRO, and add some new vocabulary. Compare both examples:
End of explanation
for tree in analyzer.parse("shot an pajamas elephant my I".split()):
print("El análisis sintáctico es el siguiente")
print(tree)
Explanation: IMPORTANT NOTE on errors and the behaviour of parse()
When a parser recognizes all of the vocabulary in an input sentence but is unable to parse it, the parse() method does not raise an error but returns an empty object. In this case, the sentence is ungrammatical according to our grammar.
End of explanation
for tree in analyzer.parse("our time is running out".split()):
print("El análisis sintáctico es el siguiente")
print(tree)
Explanation: However, when the parser does not recognize all of the vocabulary (because we use a word that is not defined in the lexicon), the parse() method fails and shows a ValueError message like the following one. Look only at the last line:
End of explanation
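A small hedged helper that makes both failure modes explicit: unknown words raise ValueError, while covered-but-ungrammatical sentences simply yield no trees.
def try_parse(analyzer, sentence):
    # return the list of parse trees, printing a diagnostic when parsing fails
    try:
        trees = list(analyzer.parse(sentence.split()))
    except ValueError as error:
        print("Unknown vocabulary:", error)
        return []
    if not trees:
        print("No parse: the sentence is ungrammatical for this grammar ->", sentence)
    return trees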
g2 = u"""
O -> SN SV
SN -> Det N | Det N Adj | Det Adj N | NProp | SN SP
SV -> V | V SN | V SP | V SN SP
SP -> Prep SN
Det -> 'el' | 'la' | 'un' | 'una'
N -> 'niño' | 'niña' | 'manzana' | 'pera' | 'cuchillo'
NProp -> 'Juan' | 'Ana' | 'Perico'
Adj -> 'bonito' | 'pequeña' | 'verde'
V -> 'come' | 'salta' | 'pela' | 'persigue'
Prep -> 'de' | 'con' | 'desde' | 'a'
"""
grammar2 = nltk.CFG.fromstring(g2)
analizador2 = nltk.ChartParser(grammar2)
Explanation: Keep this in mind when debugging your code.
Grammars for Spanish
Having seen a first CFG example, let's switch languages and build a parser for simple Spanish sentences. The procedure is the same: we define our grammar in Chomsky's format in a separate file or in a text string, we parse it with the nltk.CFG.fromstring() method, and we create a parser with the nltk.ChartParser() method:
End of explanation
oraciones = u"""Ana salta
la niña pela una manzana verde con el cuchillo
Juan come un cuchillo bonito desde el niño
un manzana bonito salta el cuchillo desde el niño verde
el cuchillo verde persigue a la pequeña manzana de Ana
el cuchillo verde persigue a Ana""".split("\n")
for oracion in oraciones:
print(oracion)
for tree in analizador2.parse(oracion.split()):
print(tree, "\n")
Explanation: Let's test whether it can parse different sentences in Spanish. To make it more fun, we store several sentences separated by a newline (the \n metacharacter) in a list of strings called oraciones. We iterate over those sentences, print them, then split them into lists of words (with the .split() method) and print the result of parsing them with our parser.
End of explanation
g3 = u"""
O -> SN SV | O Conj O
SN -> Det N | Det N Adj | Det Adj N | NProp | SN SP
SV -> V | V SN | V SP | V SN SP
SP -> Prep SN
Det -> 'el' | 'la' | 'un' | 'una'
N -> 'niño' | 'niña' | 'manzana' | 'pera' | 'cuchillo'
NProp -> 'Juan' | 'Ana' | 'Perico'
Adj -> 'bonito' | 'pequeña' | 'verde'
V -> 'come' | 'salta' | 'pela' | 'persigue'
Prep -> 'de' | 'con' | 'desde' | 'a'
Conj -> 'y' | 'pero'
"""
# Note how we now create the parser in a single step
# compare it with the previous examples
analizador3 = nltk.ChartParser(nltk.CFG.fromstring(g3))
for tree in analizador3.parse(u"""la manzana salta y el niño come pero el cuchillo
verde persigue a la pequeña manzana de Ana""".split()):
print(tree)
Explanation: Let's extend the coverage of our grammar so that it can recognize and parse coordinated sentences. To do so, we modify the rule that defines a sentence by adding a recursive definition: a sentence is a sentence (O) followed by a conjunction (Conj) and another sentence (O). Finally, we also add some new vocabulary: a couple of conjunctions.
End of explanation
# careful: they are simple, but they contain impersonal sentences, copular verbs and elided subjects
oraciones = u"""mañana es viernes
hoy es jueves
tenéis sueño
hace frío
Pepe hace sueño""".split("\n")
# write your grammar in this cell
g4 = u"""
"""
analyzer4 = nltk.ChartParser(nltk.CFG.fromstring(g4))
# how well does it work?
for oracion in oraciones:
print(oracion)
for tree in analyzer4.parse(oracion.split()):
print(tree, "\n")
Explanation: Remember that a grammar is not a program: it is just a description that establishes which syntactic structures are well formed (the grammatical sentences) and which are not (the ungrammatical ones). When a sentence is recognized by a grammar (and is therefore well formed), the parser can represent its structure as a tree.
NLTK gives access to different kinds of parsers (dependency trees, probabilistic grammars, etc.), although we have only used the simplest of them: nltk.ChartParser(). These parsers are small programs that can read a grammar and parse the sentences we provide as input to the parse() method.
Another example
In class we improvised a bit and proposed the following example grammar. We will make it more complex incrementally. Let's start with a few example sentences.
End of explanation
oraciones = u"""Pepe cree que mañana es viernes
María dice que Pepe cree que mañana es viernes""".split("\n")
# write the extension of your grammar in this cell
g5 = u"""
"""
analyzer5 = nltk.ChartParser(nltk.CFG.fromstring(g5))
# how well does it work?
for oracion in oraciones:
print(oracion)
for tree in analyzer5.parse(oracion.split()):
print(tree, "\n")
Explanation: Can we extend g4 so that it recognizes subordinate clauses introduced by verbs of speech or thought? I mean sentences like: Pepe cree que mañana es viernes, María dice que Pepe cree que mañana es viernes, etc.
Extend your vocabulary by adding as many terminals as you need.
End of explanation |
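One possible (hedged) answer to this last exercise, adding a subordinating conjunction and verbs of speech/thought so that a sentence can embed another sentence:
g5_ejemplo = u"""
O -> SN SV | O Conj O
SN -> NProp | N
SV -> V | V SN | V OSub
OSub -> Subord O
Subord -> 'que'
NProp -> 'Pepe' | 'María'
N -> 'mañana' | 'hoy' | 'viernes' | 'jueves'
V -> 'es' | 'cree' | 'dice'
Conj -> 'y' | 'pero'
"""
analyzer5_ejemplo = nltk.ChartParser(nltk.CFG.fromstring(g5_ejemplo))
for tree in analyzer5_ejemplo.parse(u"María dice que Pepe cree que mañana es viernes".split()):
    print(tree)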
7,126 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute ICA on MEG data and remove artifacts
ICA is fit to MEG raw data.
The sources matching the ECG and EOG are automatically found and displayed.
Subsequently, artifact detection and rejection quality are assessed.
Step1: Setup paths and prepare raw data.
Step2: 1) Fit ICA model using the FastICA algorithm.
Other available choices are picard or infomax.
<div class="alert alert-info"><h4>Note</h4><p>The default method in MNE is FastICA, which along with Infomax is
one of the most widely used ICA algorithms. Picard is a
new algorithm that is expected to converge faster than FastICA and
Infomax, especially when the aim is to recover accurate maps with
a low tolerance parameter, see [1]_ for more information.</p></div>
We pass a float value between 0 and 1 to select n_components based on the
percentage of variance explained by the PCA components.
Step3: 2) identify bad components by analyzing latent sources.
Step4: 3) Assess component selection and unmixing quality. | Python Code:
# Authors: Denis Engemann <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
from mne.datasets import sample
Explanation: Compute ICA on MEG data and remove artifacts
ICA is fit to MEG raw data.
The sources matching the ECG and EOG are automatically found and displayed.
Subsequently, artifact detection and rejection quality are assessed.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, None, fir_design='firwin') # already lowpassed @ 40
raw.set_annotations(mne.Annotations([1], [10], 'BAD'))
raw.plot(block=True)
# For the sake of example we annotate first 10 seconds of the recording as
# 'BAD'. This part of data is excluded from the ICA decomposition by default.
# To turn this behavior off, pass ``reject_by_annotation=False`` to
# :meth:`mne.preprocessing.ICA.fit`.
raw.set_annotations(mne.Annotations([0], [10], 'BAD'))
Explanation: Setup paths and prepare raw data.
End of explanation
ica = ICA(n_components=0.95, method='fastica', random_state=0, max_iter=100)
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
# low iterations -> does not fully converge
ica.fit(raw, picks=picks, decim=3, reject=dict(mag=4e-12, grad=4000e-13))
# maximum number of components to reject
n_max_ecg, n_max_eog = 3, 1 # here we don't expect horizontal EOG components
Explanation: 1) Fit ICA model using the FastICA algorithm.
Other available choices are picard or infomax.
<div class="alert alert-info"><h4>Note</h4><p>The default method in MNE is FastICA, which along with Infomax is
one of the most widely used ICA algorithms. Picard is a
new algorithm that is expected to converge faster than FastICA and
Infomax, especially when the aim is to recover accurate maps with
a low tolerance parameter, see [1]_ for more information.</p></div>
We pass a float value between 0 and 1 to select n_components based on the
percentage of variance explained by the PCA components.
End of explanation
title = 'Sources related to %s artifacts (red)'
# generate ECG epochs use detection via phase statistics
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5, picks=picks)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_scores(scores, exclude=ecg_inds, title=title % 'ecg', labels='ecg')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=ecg_inds, title=title % 'ecg')
ica.plot_components(ecg_inds, title=title % 'ecg', colorbar=True)
ecg_inds = ecg_inds[:n_max_ecg]
ica.exclude += ecg_inds
# detect EOG by correlation
eog_inds, scores = ica.find_bads_eog(raw)
ica.plot_scores(scores, exclude=eog_inds, title=title % 'eog', labels='eog')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=eog_inds, title=title % 'eog')
ica.plot_components(eog_inds, title=title % 'eog', colorbar=True)
eog_inds = eog_inds[:n_max_eog]
ica.exclude += eog_inds
Explanation: 2) identify bad components by analyzing latent sources.
End of explanation
# estimate average artifact
ecg_evoked = ecg_epochs.average()
ica.plot_sources(ecg_evoked, exclude=ecg_inds) # plot ECG sources + selection
ica.plot_overlay(ecg_evoked, exclude=ecg_inds) # plot ECG cleaning
eog_evoked = create_eog_epochs(raw, tmin=-.5, tmax=.5, picks=picks).average()
ica.plot_sources(eog_evoked, exclude=eog_inds) # plot EOG sources + selection
ica.plot_overlay(eog_evoked, exclude=eog_inds) # plot EOG cleaning
# check the amplitudes do not change
ica.plot_overlay(raw) # EOG artifacts remain
# To save an ICA solution you can say:
# ica.save('my_ica.fif')
# You can later load the solution by saying:
# from mne.preprocessing import read_ica
# read_ica('my_ica.fif')
# Apply the solution to Raw, Epochs or Evoked like this:
# ica.apply(epochs)
Explanation: 3) Assess component selection and unmixing quality.
End of explanation |
7,127 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CUSTOMER CHURN
Credits
Step1: We'll be keeping the statistical model pretty simple for this example so the feature space is almost unchanged from what you see above. The following code simply drops irrelevant columns and converts strings to boolean values (since models don't handle "yes" and "no" very well). The rest of the numeric columns are left untouched.
Step2: Many predictors care about the relative size of different features even though those scales might be arbitrary. For instance
Step3: Algorithms compared
Step4: Random forest seems to be the winner, but... ?
Precision and recall
Measurements aren't golden formulas which always spit out high numbers for good models and low numbers for bad ones. Inherently they convey a judgment about a model's performance, and it's the job of the human designer to determine each number's validity.
The problem with accuracy is that outcomes aren't necessarily equal.
If my classifier predicted a customer would churn and they didn't, that's not the best but it's forgivable. However, if my classifier predicted a customer would return, I didn't act, and then they churned... that's really bad.
We'll be using another built-in scikit-learn function to construct a confusion matrix.
A CONFUSION MATRIX is a way of visualizing predictions made by a classifier and is just a table showing the distribution of predictions for a specific class.
The x-axis indicates the true class of each observation (if a customer churned or not)
The y-axis corresponds to the class predicted by the model (if my classifier said a customer would churn or not).
Confusion matrix and confusion tables
Step5: When an individual churns, how often does my classifier predict that correctly?
Precision and Recall
This measurement is called "RECALL" and a quick look at these diagrams can demonstrate that random forest is clearly best for this criterion. Out of all the churn cases (outcome "1") random forest correctly retrieved 330 out of 482. This translates to a churn "recall" of about 68% (330/482≈2/3), far better than support vector machines (≈50%) or k-nearest-neighbors (≈35%).
Another question of importance is "PRECISION", or: when a classifier predicts an individual will churn, how often does that individual actually churn? Random forest again outperforms the other two at about 93% precision (330 out of 356) with support vector machines a little behind at about 87% (235 out of 269). K-nearest-neighbors lags at about 80%.
While, just like accuracy, precision and recall still rank random forest above SVC and KNN, this won't always be true.
When different measurements do return a different pecking order, understanding the values and tradeoffs of each rating should affect how you proceed.
ROC Plots & AUC
Another important metric to consider is ROC plots.
Simply put, the area under the curve (AUC) of a receiver operating characteristic (ROC) curve is a way to reduce ROC performance to a single value representing expected performance.
To explain with a little more detail, a ROC curve plots the true positives (sensitivity) vs. false positives (1 − specificity), for a binary classifier system as its discrimination threshold is varied.
Since a random method describes a horizontal curve through the unit interval, it has an AUC of .5. Minimally, classifiers should perform better than this, and the extent to which they score higher than one another (meaning the area under the ROC curve is larger), they have better expected performance.
Step6: Feature Influence on Customer Behavior
Now that we understand the accuracy of each individual model for our particular dataset, let's dive a little deeper to get a better understanding of what features or behaviours are causing our customers to churn.
We will be using a RandomForestClassifer to build an ensemble of decision trees to predict whether a customer will churn or not churn.
One of the first steps in building a decision tree is calculating the information gain associated with splitting on a particular feature.
Let's look at the Top 10 features in our dataset that contribute to customer churn
Step7: Thinking in terms of Probabilities
Decision making often favors probability over simple classifications. There's plainly more information in statements like "there's a 20% chance of rain tomorrow" and "about 55% of test takers pass the California bar exam" than just saying "it shouldn't rain tomorrow" or "you'll probably pass."
Probability predictions for churn also allow us to gauge a customers expected value, and their expected loss.
Who do you want to reach out to first, the client with a 80% churn risk who pays 20,000 annually, or the client who's worth 100,000 a year with a 40% risk? How much should you spend on each client?
Step8: How good is a good predictor?
Determining how good a predictor which gives probabilities rather than classes is a bit more difficult. If I predict there's a 20% likelihood of rain tomorrow I don't get to live out all the possible outcomes of the universe. It either rains or it doesn't.
What helps is that the predictors aren't making one prediction, they're making 3000+. So for every time I predict an event to occur 20% of the time I can see how often those events actually happen. Here we'll use pandas to compare the predictions made by random forest against the actual outcomes.
Step9: We can see that random forests predicted that 75 individuals would have a 0.9 probability of churn and in actuality that group had a ~0.97 rate.
Calibration and Discrimination
Using the DataFrame above we can draw a pretty simple graph to help visualize probability measurements.
The x axis represents the churn probabilities which random forest assigned to a group of individuals.
The y axis is the actual rate of churn within that group, and each point is scaled relative to the size of the group.
Calibration is a relatively simple measurement and can be summed up as so | Python Code:
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import json
from sklearn.cross_validation import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier as RF
%matplotlib inline
churn_df = pd.read_csv('../data/churn.csv')
col_names = churn_df.columns.tolist()
print "Column names:"
print col_names
to_show = col_names[:6] + col_names[-6:]
print "\nSample data:"
churn_df[to_show].head(6)
Explanation: CUSTOMER CHURN
Credits: yhat blog
"Churn Rate" is a business term describing the rate at which customers leave or cease paying for a product or service. The need to retain customers growing interest among companies to develop better churn-detection techniques, leading many to look to data mining and machine learning.
"Predicting churn is particularly important for businesses w/ subscription models such as cell phone, cable, or merchant credit card processing plans. But modeling churn has wide reaching applications in many domains. For example, casinos have used predictive models to predict ideal room conditions for keeping patrons at the blackjack table and when to reward unlucky gamblers with front row seats to Celine Dion. Similarly, airlines may offer first class upgrades to complaining customers. The list goes on".
THE DATASET
The data set we'll be using is a longstanding telecom customer data set. The data is straightforward. Each row represents a subscribing telephone customer. Each column contains customer attributes such as phone number, call minutes used during different times of day, charges incurred for services, lifetime account duration, and whether or not the customer is still a customer.
End of explanation
# Isolate target data
churn_result = churn_df['Churn?']
y = np.where(churn_result == 'True.',1,0)
# We don't need these columns
to_drop = ['State','Area Code','Phone','Churn?']
churn_feat_space = churn_df.drop(to_drop,axis=1)
# 'yes'/'no' has to be converted to boolean values
# NumPy converts these from boolean to 1. and 0. later
yes_no_cols = ["Int'l Plan","VMail Plan"]
churn_feat_space[yes_no_cols] = churn_feat_space[yes_no_cols] == 'yes'
# Pull out features for future use
features = churn_feat_space.columns
print features
X = churn_feat_space.as_matrix().astype(np.float)
# This is important
scaler = StandardScaler()
X = scaler.fit_transform(X)
print "Feature space holds %d observations and %d features" % X.shape
print "Unique target labels:", np.unique(y)
Explanation: We'll be keeping the statistical model pretty simple for this example so the feature space is almost unchanged from what you see above. The following code simply drops irrelevant columns and converts strings to boolean values (since models don't handle "yes" and "no" very well). The rest of the numeric columns are left untouched.
End of explanation
from sklearn.cross_validation import KFold
def run_cv(X,y,clf_class,**kwargs):
# Construct a kfolds object
kf = KFold(len(y),n_folds=3,shuffle=True)
y_pred = y.copy()
# Iterate through folds
for train_index, test_index in kf:
X_train, X_test = X[train_index], X[test_index]
y_train = y[train_index]
# Initialize a classifier with key word arguments
clf = clf_class(**kwargs)
clf.fit(X_train,y_train)
y_pred[test_index] = clf.predict(X_test)
return y_pred
Explanation: Many predictors care about the relative size of different features even though those scales might be arbitrary. For instance: the number of points a basketball team scores per game will naturally be a couple orders of magnitude larger than their win percentage. But this doesn't mean that the latter is 100 times less signifigant. StandardScaler fixes this by normalizing each feature to a range of around 1.0 to -1.0 thereby preventing models from misbehaving.
feature space X
set of target values y
EVALUATING MODEL PERFORMANCE
Express, test, cycle. A machine learning pipeline should be anything but static. There are always new features to design, new data to use, new classifiers to consider each with unique parameters to tune. And for every change it's critical to be able to ask, "Is the new version better than the last?" So how do I do that?
CROSS VALIDATION
As a good start, CROSS VALIDATION will be used throughout this example. Cross validation attempts to avoid OVERFITTING (training on and predicting the same datapoint) while still producing a prediction for each observation in the dataset. This is accomplished by systematically hiding different subsets of the data while training a set of models. After training, each model predicts on the subset that had been hidden from it, emulating multiple train-test splits. When done correctly, every observation will have a 'fair' corresponding prediction.
End of explanation
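For reference, newer scikit-learn versions expose the same out-of-fold idea directly (a hedged sketch; this notebook targets the older sklearn.cross_validation API):
from sklearn.model_selection import cross_val_predict  # assumption: scikit-learn >= 0.18
y_pred_oof = cross_val_predict(SVC(), X, y, cv=3)       # one held-out prediction per observation, like run_cv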
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier as RF
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.linear_model import LogisticRegression as LR
from sklearn.ensemble import GradientBoostingClassifier as GBC
from sklearn.metrics import average_precision_score
def accuracy(y_true,y_pred):
# NumPy interpretes True and False as 1. and 0.
return np.mean(y_true == y_pred)
print "Logistic Regression:"
print "%.3f" % accuracy(y, run_cv(X,y,LR))
print "Gradient Boosting Classifier"
print "%.3f" % accuracy(y, run_cv(X,y,GBC))
print "Support vector machines:"
print "%.3f" % accuracy(y, run_cv(X,y,SVC))
print "Random forest:"
print "%.3f" % accuracy(y, run_cv(X,y,RF))
print "K-nearest-neighbors:"
print "%.3f" % accuracy(y, run_cv(X,y,KNN))
Explanation: Algorithms compared:
support vector machines
random forest
k-nearest-neighbors.
Then we pass each to cross-validation and determine how often the classifier predicted the correct class.
End of explanation
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
def draw_confusion_matrices(confusion_matricies,class_names):
class_names = class_names.tolist()
for cm in confusion_matrices:
classifier, cm = cm[0], cm[1]
print(cm)
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm)
plt.title('Confusion matrix for %s' % classifier)
fig.colorbar(cax)
ax.set_xticklabels([''] + class_names)
ax.set_yticklabels([''] + class_names)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
y = np.array(y)
class_names = np.unique(y)
confusion_matrices = [
( "Support Vector Machines", confusion_matrix(y,run_cv(X,y,SVC)) ),
( "Random Forest", confusion_matrix(y,run_cv(X,y,RF)) ),
( "K-Nearest-Neighbors", confusion_matrix(y,run_cv(X,y,KNN)) ),
( "Gradient Boosting Classifier", confusion_matrix(y,run_cv(X,y,GBC)) ),
( "Logisitic Regression", confusion_matrix(y,run_cv(X,y,LR)) )
]
# Pyplot code not included to reduce clutter
# from churn_display import draw_confusion_matrices
%matplotlib inline
draw_confusion_matrices(confusion_matrices,class_names)
Explanation: Random forest seems to be the winner, but... ?
Precision and recall
Measurements aren't golden formulas which always spit out high numbers for good models and low numbers for bad ones. Inherently they convey a judgment about a model's performance, and it's the job of the human designer to determine each number's validity.
The problem with accuracy is that outcomes aren't necessarily equal.
If my classifier predicted a customer would churn and they didn't, that's not the best but it's forgivable. However, if my classifier predicted a customer would return, I didn't act, and then they churned... that's really bad.
We'll be using another built-in scikit-learn function to construct a confusion matrix.
A CONFUSION MATRIX is a way of visualizing predictions made by a classifier and is just a table showing the distribution of predictions for a specific class.
The x-axis indicates the true class of each observation (if a customer churned or not)
The y-axis corresponds to the class predicted by the model (if my classifier said a customer would churn or not).
Confusion matrix and confusion tables:
The columns represent the actual class and the rows represent the predicted class.
Lets evaluate performance:
| | Condition true | Condition false |
|------|----------------|---------------|
| Prediction true | True Positive | False Positive |
| Prediction false | False Negative | True Negative |
Sensitivity, Recall or True Positive Rate quantifies the model's ability to predict our positive classes.
$$TPR = \frac{ TP}{TP + FN}$$
Specificity or True Negative Rate quantifies the model's ability to predict our negative classes.
$$TNR = \frac{ TN}{FP + TN}$$
End of explanation
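These rates can also be read off programmatically with the precision_score and recall_score imports above; a hedged sketch using the helpers defined earlier in the notebook:
for name, clf_class in [("Random Forest", RF), ("Support Vector Machines", SVC), ("K-Nearest-Neighbors", KNN)]:
    y_pred = run_cv(X, y, clf_class)
    print name, "recall:", recall_score(y, y_pred), "precision:", precision_score(y, y_pred)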
from sklearn.metrics import roc_curve, auc
from scipy import interp
def plot_roc(X, y, clf_class, **kwargs):
kf = KFold(len(y), n_folds=5, shuffle=True)
y_prob = np.zeros((len(y),2))
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
for i, (train_index, test_index) in enumerate(kf):
X_train, X_test = X[train_index], X[test_index]
y_train = y[train_index]
clf = clf_class(**kwargs)
clf.fit(X_train,y_train)
# Predict probabilities, not classes
y_prob[test_index] = clf.predict_proba(X_test)
fpr, tpr, thresholds = roc_curve(y[test_index], y_prob[test_index, 1])
mean_tpr += interp(mean_fpr, fpr, tpr)
mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, label='ROC fold %d (area = %0.2f)' % (i, roc_auc))
mean_tpr /= len(kf)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, 'k--',label='Mean ROC (area = %0.2f)' % mean_auc, lw=2)
plt.plot([0, 1], [0, 1], '--', color=(0.6, 0.6, 0.6), label='Random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
print "Support vector machines:"
plot_roc(X,y,SVC,probability=True)
print "Random forests:"
plot_roc(X,y,RF,n_estimators=18)
print "K-nearest-neighbors:"
plot_roc(X,y,KNN)
print "Gradient Boosting Classifier:"
plot_roc(X,y,GBC)
Explanation: When an individual churns, how often does my classifier predict that correctly?
Precision and Recall
This measurement is called "RECALL" and a quick look at these diagrams can demonstrate that random forest is clearly best for this criteria. Out of all the churn cases (outcome "1") random forest correctly retrieved 330 out of 482. This translates to a churn "recall" of about 68% (330/482≈2/3), far better than support vector machines (≈50%) or k-nearest-neighbors (≈35%).
Another question of importance is "PRECISION" or, When a classifier predicts an individual will churn, how often does that individual actually churn? Random forest again out preforms the other two at about 93% precision (330 out of 356) with support vector machines a little behind at about 87% (235 out of 269). K-nearest-neighbors lags at about 80%.
While, just like accuracy, precision and recall still rank random forest above SVC and KNN, this won't always be true.
When different measurements do return a different pecking order, understanding the values and tradeoffs of each rating should affect how you proceed.
ROC Plots & AUC
Another important metric to consider is ROC plots.
Simply put, the area under the curve (AUC) of a receiver operating characteristic (ROC) curve is a way to reduce ROC performance to a single value representing expected performance.
To explain with a little more detail, a ROC curve plots the true positives (sensitivity) vs. false positives (1 − specificity), for a binary classifier system as its discrimination threshold is varied.
Since a random method describes a horizontal curve through the unit interval, it has an AUC of .5. Minimally, classifiers should perform better than this, and the extent to which they score higher than one another (meaning the area under the ROC curve is larger), they have better expected performance.
End of explanation
train_index,test_index = train_test_split(churn_df.index)
forest = RF()
forest_fit = forest.fit(X[train_index], y[train_index])
forest_predictions = forest_fit.predict(X[test_index])
importances = forest_fit.feature_importances_[:10]
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(10):
print("%d. %s (%f)" % (f + 1, features[f], importances[indices[f]]))
# Plot the feature importances of the forest
#import pylab as pl
plt.figure()
plt.title("Feature importances")
plt.bar(range(10), importances[indices], yerr=std[indices], color="r", align="center")
plt.xticks(range(10), indices)
plt.xlim([-1, 10])
plt.show()
Explanation: Feature Influence on Customer Behavior
Now that we understand the accuracy of each individual model for our particular dataset, let's dive a little deeper to get a better understanding of what features or behaviours are causing our customers to churn.
We will be using a RandomForestClassifer to build an ensemble of decision trees to predict whether a customer will churn or not churn.
One of the first steps in building a decision tree is calculating the information gain associated with splitting on a particular feature.
Let's look at the Top 10 features in our dataset that contribute to customer churn:
End of explanation
def run_prob_cv(X, y, clf_class, roc=False, **kwargs):
kf = KFold(len(y), n_folds=5, shuffle=True)
y_prob = np.zeros((len(y),2))
for train_index, test_index in kf:
X_train, X_test = X[train_index], X[test_index]
y_train = y[train_index]
clf = clf_class(**kwargs)
clf.fit(X_train,y_train)
# Predict probabilities, not classes
y_prob[test_index] = clf.predict_proba(X_test)
return y_prob
Explanation: Thinking in terms of Probabilities
Decision making often favors probability over simple classifications. There's plainly more information in statements like "there's a 20% chance of rain tomorrow" and "about 55% of test takers pass the California bar exam" than just saying "it shouldn't rain tomorrow" or "you'll probably pass."
Probability predictions for churn also allow us to gauge a customers expected value, and their expected loss.
Who do you want to reach out to first, the client with a 80% churn risk who pays 20,000 annually, or the client who's worth 100,000 a year with a 40% risk? How much should you spend on each client?
End of explanation
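A hedged sketch of turning those probabilities into a contact-priority list; annual_value is a hypothetical per-customer revenue array, not a column of this dataset:
churn_prob = run_prob_cv(X, y, RF, n_estimators=10)[:, 1]
expected_loss = churn_prob * annual_value            # annual_value: assumed revenue per customer (hypothetical)
priority = np.argsort(expected_loss)[::-1]           # customers to reach out to first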
import warnings
warnings.filterwarnings('ignore')
# Use 10 estimators so predictions are all multiples of 0.1
pred_prob = run_prob_cv(X, y, RF, n_estimators=10)
pred_churn = pred_prob[:,1]
is_churn = y == 1
# Number of times a predicted probability is assigned to an observation
counts = pd.value_counts(pred_churn)
counts[:]
from collections import defaultdict
true_prob = defaultdict(float)
# calculate true probabilities
for prob in counts.index:
true_prob[prob] = np.mean(is_churn[pred_churn == prob])
true_prob = pd.Series(true_prob)
# pandas-fu
counts = pd.concat([counts,true_prob], axis=1).reset_index()
counts.columns = ['pred_prob', 'count', 'true_prob']
counts
Explanation: How good is a good predictor?
Determining how good a predictor which gives probabilities rather than classes is a bit more difficult. If I predict there's a 20% likelihood of rain tomorrow I don't get to live out all the possible outcomes of the universe. It either rains or it doesn't.
What helps is that the predictors aren't making one prediction, they're making 3000+. So for every time I predict an event to occur 20% of the time I can see how often those events actually happen. Here we'll use pandas to compare the predictions made by random forest against the actual outcomes.
End of explanation
from churn_measurements import calibration, discrimination
from sklearn.metrics import roc_curve, auc
from scipy import interp
from __future__ import division
from operator import idiv
def print_measurements(pred_prob):
churn_prob, is_churn = pred_prob[:,1], y == 1
print " %-20s %.4f" % ("Calibration Error", calibration(churn_prob, is_churn))
print " %-20s %.4f" % ("Discrimination", discrimination(churn_prob,is_churn))
print "Note -- Lower calibration is better, higher discrimination is better"
print "Support vector machines:"
print_measurements(run_prob_cv(X,y,SVC,probability=True))
print "Random forests:"
print_measurements(run_prob_cv(X,y,RF,n_estimators=18))
print "K-nearest-neighbors:"
print_measurements(run_prob_cv(X,y,KNN))
print "Gradient Boosting Classifier:"
print_measurements(run_prob_cv(X,y,GBC))
print "Random Forest:"
print_measurements(run_prob_cv(X,y,RF))
Explanation: We can see that random forests predicted that 75 individuals would have a 0.9 probability of churn and in actuality that group had a ~0.97 rate.
Calibration and Discrimination
Using the DataFrame above we can draw a pretty simple graph to help visualize probability measurements.
The x axis represents the churn probabilities which random forest assigned to a group of individuals.
The y axis is the actual rate of churn within that group, and each point is scaled relative to the size of the group.
Calibration is a relatively simple measurement and can be summed up as so: Events predicted to happen 60% of the time should happen 60% of the time. For all individuals I predict to have a churn risk of between 30 and 40%, the true churn rate for that group should be about 35%. For the graph above think of it as, How close are my predictions to the red line?
DISCRIMINATION MEASURES How far are my predictions away from the green line? Why is that important?
Well, if we assign a churn probability of 15% to every individual we'll have near perfect calibration due to averages, but I'll be lacking any real insight. Discrimination gives a model a better score if it's able to isolate groups which are further from the base set.
Approach sources:
https://www.google.com/search?q=Measures+of+Discrimination+Skill+in+Probabilistic+Judgment&oq=Measures+of+Discrimination+Skill+in+Probabilistic+Judgment and https://github.com/EricChiang/churn/blob/master/churn_measurements.py.
End of explanation |
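For intuition, a hedged sketch of what a binned calibration error could look like (the actual churn_measurements implementation may differ):
def calibration_sketch(pred_prob, is_churn, bins=10):
    # weight each bin's |predicted rate - observed rate| by the fraction of observations in it
    edges = np.linspace(0., 1., bins + 1)
    error = 0.
    for low, high in zip(edges[:-1], edges[1:]):
        in_bin = (pred_prob >= low) & (pred_prob < high)
        if in_bin.any():
            error += in_bin.mean() * abs(pred_prob[in_bin].mean() - is_churn[in_bin].mean())
    return error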
7,128 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example jupyter_spark notebook
This is an example notebook to demonstrate the jupyter_spark notebook plugin.
It is based on the approximating pi example in the pyspark documentation. This works by sampling random numbers in a square and counting the number that fall inside the unit circle.
Step1: Create a SparkSession and give it a name.
Note
Step2: partitions is the number of spark workers to partition the work into.
Step3: n is the number of random samples to calculate
Step4: This is the sampling function. It generates numbers in the square from (-1, -1) to (1, 1), and returns 1 if it falls inside the unit circle, and 0 otherwise.
Step5: Here's where we farm the work out to Spark.
Step6: Shut down the spark server. | Python Code:
import sys
from random import random
from operator import add
from pyspark.sql import SparkSession
Explanation: Example jupyter_spark notebook
This is an example notebook to demonstrate the jupyter_spark notebook plugin.
It is based on the approximating pi example in the pyspark documentation. This works by sampling random numbers in a square and counting the number that fall inside the unit circle.
End of explanation
spark = SparkSession \
.builder \
.appName("PythonPi") \
.getOrCreate()
Explanation: Create a SparkSession and give it a name.
Note: This will start the spark client console -- there is no need to run spark-shell directly.
End of explanation
partitions = 2
Explanation: partitions is the number of spark workers to partition the work into.
End of explanation
n = 100000000
Explanation: n is the number of random samples to calculate
End of explanation
def f(_):
x = random() * 2 - 1
y = random() * 2 - 1
return 1 if x ** 2 + y ** 2 <= 1 else 0
Explanation: This is the sampling function. It generates numbers in the square from (-1, -1) to (1, 1), and returns 1 if it falls inside the unit circle, and 0 otherwise.
End of explanation
count = spark.sparkContext \
.parallelize(range(1, n + 1), partitions) \
.map(f) \
.reduce(add)
print("Pi is roughly %f" % (4.0 * count / n))
Explanation: Here's where we farm the work out to Spark.
End of explanation
spark.stop()
Explanation: Shut down the spark server.
End of explanation |
7,129 | Given the following text description, write Python code to implement the functionality described.
Description:
Everyone knows the Fibonacci sequence; it was studied deeply by mathematicians over
the last couple of centuries. However, what people don't know is the Tribonacci sequence.
Tribonacci sequence is defined by the recurrence:
tri(1) = 3
tri(n) = 1 + n / 2, if n is even.
tri(n) = tri(n - 1) + tri(n - 2) + tri(n + 1), if n is odd.
For example:
tri(2) = 1 + (2 / 2) = 2
tri(4) = 3
tri(3) = tri(2) + tri(1) + tri(4)
= 2 + 3 + 3 = 8
You are given a non-negative integer number n, you have to return a list of the
first n + 1 numbers of the Tribonacci sequence.
Examples:
tri(3) = [1, 3, 2, 8]
| Python Code:
def tri(n):
    # Only tri(0) is requested.
    if n == 0:
        return [1]
    # Seed the sequence: tri(0) = 1, tri(1) = 3.
    my_tri = [1, 3]
    for i in range(2, n + 1):
        if i % 2 == 0:
            # Even index: tri(i) = 1 + i / 2.
            my_tri.append(i / 2 + 1)
        else:
            # Odd index: tri(i) = tri(i - 1) + tri(i - 2) + tri(i + 1),
            # where tri(i + 1) = 1 + (i + 1) / 2 = (i + 3) / 2 because i + 1 is even.
            my_tri.append(my_tri[i - 1] + my_tri[i - 2] + (i + 3) / 2)
    return my_tri
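A quick check of the helper above against the docstring example (note the branches use true division, so some terms come back as floats):

```python
print(tri(3))  # -> [1, 3, 2.0, 8.0], i.e. the docstring's [1, 3, 2, 8] up to int/float formatting
```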
7,130 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RCM modeling with varying reactor volume
This example is available as an ipynb (Jupyter Notebook) file in the main GitHub repository at https
Step1: Next, we have to load the ChemKED file and retrieve the first element of the datapoints
list. Although this file only encodes a single experiment, the datapoints attribute will
always be a list (in this case, of length 1). The elements of the
datapoints list are instances of the DataPoint class, which we store in the variable
dp. To load the YAML file from the web, we also import and use the PyYAML package, and the built-in urllib package, and use the dict_input argument to ChemKED to read the information.
Step2: The initial temperature, pressure, and mixture composition can be read from the
instance of the DataPoint class. PyKED uses instances of the Pint Quantity class to
store values with units, while Cantera expects a floating-point value in SI
units as input. Therefore, we use the built-in capabilities of Pint to convert
the units from those specified in the ChemKED file to SI units, and we use the magnitude
attribute of the Quantity class to take only the numerical part. We also retrieve the
initial mixture mole fractions in a format Cantera will understand
Step3: With these properties defined, we have to create the objects in Cantera that represent the physical
state of the system to be studied. In Cantera, the Solution class stores the thermodynamic,
kinetic, and transport data from an input file in the CTI format. After the Solution object
is created, we can set the initial temperature, pressure, and mole fractions using the TPX
attribute of the Solution class. In this example, we will use the GRI-3.0 as the chemical kinetic mechanism for H<sub>2</sub>/CO combustion. GRI-3.0 is built-in to Cantera, so no other input files are needed.
Step4: With the thermodynamic and kinetic data loaded and the initial conditions defined, we need to
install the Solution instance into an IdealGasReactor which implements the equations
for mass, energy, and species conservation. In addition, we create a Reservoir to represent
the environment external to the reaction chamber. The input file used for the environment,
air.xml, is also included with Cantera and represents an average composition of air.
Step5: To apply the effect of the volume trace to the IdealGasReactor, a Wall must be
installed between the reactor and environment and assigned a velocity. The Wall allows the
environment to do work on the reactor (or vice versa) and change the reactor's thermodynamic state;
we use a Reservoir for the environment because in Cantera, Reservoirs always have a
constant thermodynamic state and composition. Using a Reservoir accelerates the solution
compared to using two IdealGasReactors, since the composition and state of the environment
are typically not necessary for the solution of autoignition problems. Although we do not show the
details here, a reference implementation of a class that computes the wall velocity given the volume
history of the reactor is available in CanSen, in the
cansen.profiles.VolumeProfile class, which we import here
Step6: Then, the IdealGasReactor is installed in a ReactorNet. The ReactorNet
implements the connection to the numerical solver (CVODES is
used in Cantera) to solve the energy and species equations. For this example, it is best practice
to set the maximum time step allowed in the solution to be the minimum time difference in the time array from the volume trace
Step7: To calculate the ignition delay, we will follow the definition specified in the ChemKED file for
this experiment, where the experimentalists used the maximum of the time derivative of the pressure
to define the ignition delay. To calculate this derivative, we need to store the state variables and the composition on each time step, so we initialize several Python lists to act as storage
Step8: Finally, the problem is integrated using the step method of the ReactorNet. The
step method takes one timestep forward on each call, with step size determined by the CVODES
solver (CVODES uses an adaptive time-stepping algorithm). On each step, we add the relevant variables
to their respective lists. The problem is integrated until a user-specified end time, in this case
50 ms, although in principle, the user could end the simulation on any condition
they choose
Step9: At this point, the user would post-process the information in the pressure list to calculate
the derivative by whatever algorithm they choose. We will plot the pressure versus the time of the simulation using the Matplotlib library
Step10: We can also plot the volume trace and compare to the values derived from the ChemKED file. | Python Code:
import cantera as ct
import numpy as np
from pyked import ChemKED
Explanation: RCM modeling with varying reactor volume
This example is available as an ipynb (Jupyter Notebook) file in the main GitHub repository at https://github.com/pr-omethe-us/PyKED/blob/master/docs/rcm-example.ipynb
The ChemKED file that will be used in this example can be found in the
tests directory of the PyKED
repository at https://github.com/pr-omethe-us/PyKED/blob/master/pyked/tests/testfile_rcm.yaml.
Examining that file, we find the first section specifies the information about
the ChemKED file itself:
yaml
file-authors:
- name: Kyle E Niemeyer
ORCID: 0000-0003-4425-7097
file-version: 0
chemked-version: 0.4.0
Then, we find the information regarding the article in the literature from which
this data was taken. In this case, the dataset comes from the work of
Mittal et al.:
yaml
reference:
doi: 10.1002/kin.20180
authors:
- name: Gaurav Mittal
- name: Chih-Jen Sung
ORCID: 0000-0003-2046-8076
- name: Richard A Yetter
journal: International Journal of Chemical Kinetics
year: 2006
volume: 38
pages: 516-529
detail: Fig. 6, open circle
experiment-type: ignition delay
apparatus:
kind: rapid compression machine
institution: Case Western Reserve University
facility: CWRU RCM
Finally, this file contains just a single datapoint, which describes the experimental
ignition delay, initial mixture composition, initial temperature, initial pressure,
compression time, ignition type, and volume history that specifies
how the volume of the reactor varies with time, for simulating the compression
stroke and post-compression processes:
yaml
datapoints:
- temperature:
- 297.4 kelvin
ignition-delay:
- 1.0 ms
pressure:
- 958.0 torr
composition:
kind: mole fraction
species:
- species-name: H2
InChI: 1S/H2/h1H
amount:
- 0.12500
- species-name: O2
InChI: 1S/O2/c1-2
amount:
- 0.06250
- species-name: N2
InChI: 1S/N2/c1-2
amount:
- 0.18125
- species-name: Ar
InChI: 1S/Ar
amount:
- 0.63125
ignition-type:
target: pressure
type: d/dt max
rcm-data:
compression-time:
- 38.0 ms
time-histories:
- type: volume
time:
units: s
column: 0
volume:
units: cm3
column: 1
values:
- [0.00E+000, 5.47669375000E+002]
- [1.00E-003, 5.46608789894E+002]
- [2.00E-003, 5.43427034574E+002]
...
The values for the volume history in the time-histories key are truncated here to save space. One application of the
data stored in this file is to perform a simulation using Cantera to
calculate the ignition delay, including the facility-dependent effects represented in the volume
trace. All information required to perform this simulation is present in the ChemKED file, with the
exception of a chemical kinetic model for H<sub>2</sub>/CO combustion.
In Python, additional functionality can be imported into a script or session by the import
keyword. Cantera, NumPy, and PyKED must be imported into the session so that we can work with the
code. In the case of Cantera and NumPy, we will use many functions from these libraries, so we
assign them abbreviations (ct and np, respectively) for convenience. From PyKED, we
will only be using the ChemKED class, so this is all that is imported:
End of explanation
from urllib.request import urlopen
import yaml
rcm_link = 'https://raw.githubusercontent.com/pr-omethe-us/PyKED/master/pyked/tests/testfile_rcm.yaml'
with urlopen(rcm_link) as response:
testfile_rcm = yaml.safe_load(response.read())
ck = ChemKED(dict_input=testfile_rcm)
dp = ck.datapoints[0]
Explanation: Next, we have to load the ChemKED file and retrieve the first element of the datapoints
list. Although this file only encodes a single experiment, the datapoints attribute will
always be a list (in this case, of length 1). The elements of the
datapoints list are instances of the DataPoint class, which we store in the variable
dp. To load the YAML file from the web, we also import and use the PyYAML package, and the built-in urllib package, and use the dict_input argument to ChemKED to read the information.
End of explanation
T_initial = dp.temperature.to('K').magnitude
P_initial = dp.pressure.to('Pa').magnitude
X_initial = dp.get_cantera_mole_fraction()
Explanation: The initial temperature, pressure, and mixture composition can be read from the
instance of the DataPoint class. PyKED uses instances of the Pint Quantity class to
store values with units, while Cantera expects a floating-point value in SI
units as input. Therefore, we use the built-in capabilities of Pint to convert
the units from those specified in the ChemKED file to SI units, and we use the magnitude
attribute of the Quantity class to take only the numerical part. We also retrieve the
initial mixture mole fractions in a format Cantera will understand:
End of explanation
gas = ct.Solution('gri30.xml')
gas.TPX = T_initial, P_initial, X_initial
Explanation: With these properties defined, we have to create the objects in Cantera that represent the physical
state of the system to be studied. In Cantera, the Solution class stores the thermodynamic,
kinetic, and transport data from an input file in the CTI format. After the Solution object
is created, we can set the initial temperature, pressure, and mole fractions using the TPX
attribute of the Solution class. In this example, we will use the GRI-3.0 as the chemical kinetic mechanism for H<sub>2</sub>/CO combustion. GRI-3.0 is built-in to Cantera, so no other input files are needed.
End of explanation
reac = ct.IdealGasReactor(gas)
env = ct.Reservoir(ct.Solution('air.xml'))
Explanation: With the thermodynamic and kinetic data loaded and the initial conditions defined, we need to
install the Solution instance into an IdealGasReactor which implements the equations
for mass, energy, and species conservation. In addition, we create a Reservoir to represent
the environment external to the reaction chamber. The input file used for the environment,
air.xml, is also included with Cantera and represents an average composition of air.
End of explanation
from cansen.profiles import VolumeProfile
exp_time = dp.volume_history.time.magnitude
exp_volume = dp.volume_history.volume.magnitude
keywords = {'vproTime': exp_time, 'vproVol': exp_volume}
ct.Wall(reac, env, velocity=VolumeProfile(keywords));
Explanation: To apply the effect of the volume trace to the IdealGasReactor, a Wall must be
installed between the reactor and environment and assigned a velocity. The Wall allows the
environment to do work on the reactor (or vice versa) and change the reactor's thermodynamic state;
we use a Reservoir for the environment because in Cantera, Reservoirs always have a
constant thermodynamic state and composition. Using a Reservoir accelerates the solution
compared to using two IdealGasReactors, since the composition and state of the environment
are typically not necessary for the solution of autoignition problems. Although we do not show the
details here, a reference implementation of a class that computes the wall velocity given the volume
history of the reactor is available in CanSen, in the
cansen.profiles.VolumeProfile class, which we import here:
End of explanation
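For readers who do not have CanSen available, the snippet below is a rough, illustrative stand-in for such a velocity profile. It is not CanSen's actual implementation; it assumes the volume trace is normalised by its initial value and that the wall area is Cantera's default of 1 m², so the wall velocity is simply the time derivative of the normalised volume:

```python
import numpy as np

class SimpleVolumeProfile:
    """Illustrative wall-velocity callable built from a (time, volume) trace."""
    def __init__(self, time, volume):
        self.time = np.asarray(time)
        norm_volume = np.asarray(volume) / volume[0]
        # Finite-difference derivative on the (possibly non-uniform) time grid.
        self.dvdt = np.gradient(norm_volume, self.time)

    def __call__(self, t):
        # Hold the volume constant once the experimental trace has ended.
        if t >= self.time[-1]:
            return 0.0
        return float(np.interp(t, self.time, self.dvdt))
```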
netw = ct.ReactorNet([reac])
netw.set_max_time_step(np.min(np.diff(exp_time)))
Explanation: Then, the IdealGasReactor is installed in a ReactorNet. The ReactorNet
implements the connection to the numerical solver (CVODES is
used in Cantera) to solve the energy and species equations. For this example, it is best practice
to set the maximum time step allowed in the solution to be the minimum time difference in the time array from the volume trace:
End of explanation
time = []
temperature = []
pressure = []
volume = []
mass_fractions = []
Explanation: To calculate the ignition delay, we will follow the definition specified in the ChemKED file for
this experiment, where the experimentalists used the maximum of the time derivative of the pressure
to define the ignition delay. To calculate this derivative, we need to store the state variables and the composition on each time step, so we initialize several Python lists to act as storage:
End of explanation
while netw.time < 0.05:
time.append(netw.time)
temperature.append(reac.T)
pressure.append(reac.thermo.P)
volume.append(reac.volume)
mass_fractions.append(reac.Y)
netw.step()
Explanation: Finally, the problem is integrated using the step method of the ReactorNet. The
step method takes one timestep forward on each call, with step size determined by the CVODES
solver (CVODES uses an adaptive time-stepping algorithm). On each step, we add the relevant variables
to their respective lists. The problem is integrated until a user-specified end time, in this case
50 ms, although in principle, the user could end the simulation on any condition
they choose:
End of explanation
%matplotlib notebook
import matplotlib.pyplot as plt
plt.figure()
plt.plot(time, pressure)
plt.ylabel('Pressure [Pa]')
plt.xlabel('Time [s]');
Explanation: At this point, the user would post-process the information in the pressure list to calculate
the derivative by whatever algorithm they choose. We will plot the pressure versus the time of the simulation using the Matplotlib library:
End of explanation
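As one possible post-processing sketch (not part of the original notebook): using the maximum-pressure-rise definition given in the ChemKED file, and taking the 38 ms compression time from the rcm-data block as the end of compression, the ignition delay could be extracted roughly as follows:

```python
time_arr = np.array(time)
pressure_arr = np.array(pressure)
dpdt = np.gradient(pressure_arr, time_arr)  # dP/dt on the adaptive solver grid

end_of_compression = 38.0e-3  # s, the compression time reported in the ChemKED file
mask = time_arr > end_of_compression
ignition_delay = time_arr[mask][np.argmax(dpdt[mask])] - end_of_compression
print('Ignition delay: %.2f ms' % (ignition_delay * 1e3))
```

A real analysis might smooth the pressure trace before differentiating; that detail is omitted here.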
plt.figure()
plt.plot(exp_time, exp_volume/exp_volume[0], label='Experimental volume', linestyle='--')
plt.plot(time, volume, label='Simulated volume')
plt.legend(loc='best')
plt.ylabel('Volume [m^3]')
plt.xlabel('Time [s]');
Explanation: We can also plot the volume trace and compare to the values derived from the ChemKED file.
End of explanation |
7,131 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Adaptive Streaming
Luc Trudeau
On the menu
Step1: Llama Drama Low (1920x1080)
256 kbits/second
Step2: Llama Drama Medium (1920x1080)
512 kbits/second
Step3: Llama Drama High (1920x1080)
1024 kbits/second
The Manifest
HLS uses the m3u8 format
/LlamaDrama.m3u8
```#EXTM3U
EXT-X-STREAM-INF
Step4: Playlist
Each playlist points to the segments of the sequence
/high/LlamaDrama.m3u8
```#EXTM3U
EXT-X-VERSION
Step5: StreamEngine
The StreamEngine chooses the next segment based on the measured transfer speed
Step6: Real-world example
The content of the segments received by HLS is piped into GStreamer
Even though GStreamer has an input buffer, we still keep an internal buffer of our own so that a download is not blocked while GStreamer's buffer is full.
Since the call to player.stdin.write is blocking, it acts as the flow-control mechanism for the HLS requests. | Python Code:
!ffmpeg -i LlamaDrama.mp4 -movflags faststart -b:v 256000 -maxrate 256000 -x264opts "fps=24:keyint=48:min-keyint=48:no-scenecut" -hls_list_size 0 -hls_time 4 -hls_base_url http://192.168.3.14:8000/low/ low/LlamaDrama.m3u8
Explanation: Introduction to Adaptive Streaming
Luc Trudeau
On the menu:
HTTP Live Streaming implementation
https://tools.ietf.org/html/draft-pantos-http-live-streaming-07
Required software
FFMPEG (encoding tool)
https://ffmpeg.org/ffmpeg.html
GStreamer (playback)
http://gstreamer.freedesktop.org/
Video sequence
http://www.caminandes.com/
Adaptive Streaming
Basic idea:
End of explanation
!ffmpeg -i LlamaDrama.mp4 -movflags faststart -b:v 512000 -maxrate 512000 -x264opts "fps=24:keyint=48:min-keyint=48:no-scenecut" -hls_list_size 0 -hls_time 4 -hls_base_url http://192.168.3.14:8000/medium/ medium/LlamaDrama.m3u8
Explanation: Llama Drama Low (1920x1080)
256 kbits/second
End of explanation
!ffmpeg -i LlamaDrama.mp4 -movflags faststart -b:v 1024000 -maxrate 1024000 -x264opts "fps=24:keyint=48:min-keyint=48:no-scenecut" -hls_list_size 0 -hls_time 4 -hls_base_url http://192.168.3.14:8000/high/ high/LlamaDrama.m3u8
Explanation: Llama Drama Medium (1920x1080)
512 kbits/second
End of explanation
from collections import namedtuple
from io import BytesIO
from requests import get
import m3u8
from time import time
from io import BytesIO
from subprocess import call
Stream = namedtuple('Stream',['bandwidth', 'uri'])
Explanation: Llama Drama High (1920x1080)
1024 kbits/second
The Manifest
HLS uses the m3u8 format
/LlamaDrama.m3u8
```#EXTM3U
EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=256000
http://localhost:8000/low/LlamaDrama.m3u8
EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=512000
http://localhost:8000/medium/LlamaDrama.m3u8
EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1024000
http://localhost:8000/high/LlamaDrama.m3u8```
End of explanation
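As a small illustration of how this master playlist is consumed later on (a sketch only; the host address is the one used throughout this example):

```python
import m3u8

master = m3u8.load('http://localhost:8000/LlamaDrama.m3u8')
for playlist in master.playlists:
    # Each variant advertises its bandwidth and the URI of its media playlist.
    print(playlist.stream_info.bandwidth, playlist.uri)
```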
class HLS:
speed = 0 # Bits / second
i = 0
def __init__(self, uri):
self.selector = StreamEngine(uri)
def __iter__(self):
return self
def __next__(self):
stream = self.selector.selectStream(self.speed)
if self.i < len(stream.segments):
startTime = time()
buf = getSegment(stream.segments[self.i])
self.speed = round((buf.getbuffer().nbytes*8) / (time() - startTime))
print('%d bits/s' %self.speed)
self.i += 1
return buf
else:
raise StopIteration
Explanation: Playlist
Each playlist points to the segments of the sequence
/high/LlamaDrama.m3u8
```#EXTM3U
EXT-X-VERSION:3
EXT-X-TARGETDURATION:4
EXT-X-MEDIA-SEQUENCE:0
EXTINF:4.000000,
http://localhost:8000/high/LlamaDrama0.ts
EXTINF:4.000000,
http://localhost:8000/high/LlamaDrama1.ts
EXTINF:4.000000,
http://localhost:8000/high/LlamaDrama2.ts
EXTINF:4.000000,
http://localhost:8000/high/LlamaDrama3.ts
...
EXTINF:4.000000,
http://localhost:8000/high/LlamaDrama21.ts
EXTINF:1.875000,
http://localhost:8000/high/LlamaDrama22.ts
EXT-X-ENDLIST```
HLS client
The HLS class makes it possible to iterate over the segments.
The time is measured while each segment is downloaded.
Combining that time with the file size gives the transfer speed.
The StreamEngine then chooses the appropriate stream based on that speed.
@startuml
skinparam style strictuml
skinparam dpi 300
class HLS << iterable >> {
ByteIO next()
}
class StreamEngine {
Stream selectStream(speed)
}
HLS -right-> StreamEngine
@enduml
HLS class
The Iterator pattern is used to iterate through the whole set of segments of the sequence.
End of explanation
class StreamEngine:
currentStream = None
streamM3 = None
streams = None
def __init__(self, uri):
self.streams = sorted([Stream(playlist.stream_info.bandwidth, playlist.uri)
for playlist in m3u8.load(uri).playlists])
self.currentStream = self.streams[0]
self.streamM3 = m3u8.load(self.currentStream.uri)
def selectStream(self, speed):
newStream = self.currentStream
for stream in self.streams:
if stream.bandwidth < speed:
newStream = stream
else:
break
if newStream != self.currentStream:
self.currentStream = newStream
self.streamM3 = m3u8.load(newStream.uri)
print('Changing Streams: New BitRate %d' %newStream.bandwidth)
return self.streamM3
def getSegment(segment):
buf = BytesIO()
r = get(segment.uri, stream=True)
for chunk in r.iter_content(chunk_size=2048):
if chunk:
buf.write(chunk)
return buf
Explanation: StreamEngine
The StreamEngine chooses the next segment based on the measured transfer speed
End of explanation
from subprocess import Popen, PIPE, STDOUT
hls = HLS('http://192.168.3.14:8000/LlamaDrama.m3u8')
player = Popen("/usr/local/bin/gst-play-1.0 fd://0".split(), stdout=PIPE, stdin=PIPE)
for segment in hls:
player.stdin.write(segment.getvalue())
Explanation: Real-world example
The content of the segments received by HLS is piped into GStreamer.
Even though GStreamer has an input buffer, we still keep an internal buffer of our own so that a download is not blocked while GStreamer's buffer is full.
Since the call to player.stdin.write is blocking, it acts as the flow-control mechanism for the HLS requests.
End of explanation |
7,132 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image classification with Convolutional Neural Networks
Welcome to the first week of the second deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to our first task
Step1: Here we import the libraries we need. We'll learn about what each does during the course.
Step2: Extra steps if NOT using Crestle or our scripts
The dataset is available at http
Step3: First look at cat pictures
Our library will assume that you have train and valid directories. It also assumes that each dir will have subdirs for each class you wish to recognize (in this case, 'cats' and 'dogs').
Step4: Here is how the raw data looks like
Step5: Our first model
Step6: How good is this model? Well, as we mentioned, prior to this competition, the state of the art was 80% accuracy. But the competition resulted in a huge jump to 98.9% accuracy, with the author of a popular deep learning library winning the competition. Extraordinarily, less than 4 years later, we can now beat that result in seconds! Even last year in this same course, our initial model had 98.3% accuracy, which is nearly double the error we're getting just a year later, and that took around 10 minutes to compute.
Analyzing results
Step7: Choosing a learning rate
The learning rate determines how quickly or how slowly you want to update the weights (or parameters). The learning rate is one of the most difficult parameters to set, because it significantly affects model performance.
The method learn.lr_find() helps you find an optimal learning rate. It uses the technique developed in the 2015 paper Cyclical Learning Rates for Training Neural Networks, where we simply keep increasing the learning rate from a very small value until the loss stops decreasing. We can plot the learning rate across batches to see what this looks like.
We first create a new learner, since we want to know how to set the learning rate for a new (untrained) model.
Step8: Our learn object contains an attribute sched that contains our learning rate scheduler, and has some convenient plotting functionality including this one
Step9: Note that in the previous plot iteration is one iteration (or minibatch) of SGD. In one epoch there are
(num_train_samples/num_iterations) of SGD.
We can see the plot of loss versus learning rate to see where our loss stops decreasing
Step10: The loss is still clearly improving at lr=1e-2 (0.01), so that's what we use. Note that the optimal learning rate can change as we train the model, so you may want to re-run this function from time to time.
Improving our model
Data augmentation
If you try training for more epochs, you'll notice that we start to overfit, which means that our model is learning to recognize the specific images in the training set, rather than generalizing such that we also get good results on the validation set. One way to fix this is to effectively create more data, through data augmentation. This refers to randomly changing the images in ways that shouldn't impact their interpretation, such as horizontal flipping, zooming, and rotating.
We can do this by passing aug_tfms (augmentation transforms) to tfms_from_model, with a list of functions to apply that randomly change the image however we wish. For photos that are largely taken from the side (e.g. most photos of dogs and cats, as opposed to photos taken from the top down, such as satellite imagery) we can use the pre-defined list of functions transforms_side_on. We can also specify random zooming of images up to specified scale by adding the max_zoom parameter.
Step11: Let's create a new data object that includes this augmentation in the transforms.
Step12: By default when we create a learner, it sets all but the last layer to frozen. That means that it's still only updating the weights in the last layer when we call fit.
Step13: What is that cycle_len parameter? What we've done here is used a technique called stochastic gradient descent with restarts (SGDR), a variant of learning rate annealing, which gradually decreases the learning rate as training progresses. This is helpful because as we get closer to the optimal weights, we want to take smaller steps.
However, we may find ourselves in a part of the weight space that isn't very resilient - that is, small changes to the weights may result in big changes to the loss. We want to encourage our model to find parts of the weight space that are both accurate and stable. Therefore, from time to time we increase the learning rate (this is the 'restarts' in 'SGDR'), which will force the model to jump to a different part of the weight space if the current area is "spikey". Here's a picture of how that might look if we reset the learning rates 3 times (in this paper they call it a "cyclic LR schedule")
Step14: Our validation loss isn't improving much, so there's probably no point further training the last layer on its own.
Since we've got a pretty good model at this point, we might want to save it so we can load it again later without training it from scratch.
Step15: Fine-tuning and differential learning rate annealing
Now that we have a good final layer trained, we can try fine-tuning the other layers. To tell the learner that we want to unfreeze the remaining layers, just call (surprise surprise!) unfreeze().
Step16: Note that the other layers have already been trained to recognize imagenet photos (whereas our final layers were randomly initialized), so we want to be careful not to destroy the carefully tuned weights that are already there.
Generally speaking, the earlier layers (as we've seen) have more general-purpose features. Therefore we would expect them to need less fine-tuning for new datasets. For this reason we will use different learning rates for different layers
Step17: Another trick we've used here is adding the cycle_mult parameter. Take a look at the following chart, and see if you can figure out what the parameter is doing
Step18: Note that what's being plotted above is the learning rate of the final layers. The learning rates of the earlier layers are fixed at the same multiples of the final layer rates as we initially requested (i.e. the first layers have 100x smaller, and middle layers 10x smaller learning rates, since we set lr=np.array([1e-4,1e-3,1e-2])).
Step19: There is something else we can do with data augmentation
Step20: I generally see about a 10-20% reduction in error on this dataset when using TTA at this point, which is an amazing result for such a quick and easy technique!
Analyzing results
Confusion matrix
Step21: A common way to analyze the result of a classification model is to use a confusion matrix. Scikit-learn has a convenient function we can use for this purpose
Step22: We can just print out the confusion matrix, or we can show a graphical view (which is mainly useful for datasets with a larger number of categories).
Step23: Looking at pictures again
Step24: Review
Step25: We need a <b>path</b> that points to the dataset. In this path we will also store temporary data and final results. ImageClassifierData.from_paths reads data from a provided path and creates a dataset ready for training.
Step26: ConvLearner.pretrained builds a learner that contains a pre-trained model. The last layer of the model needs to be replaced with the layer of the right dimensions. The pretrained model was trained for 1000 classes, therefore the final layer predicts a vector of 1000 probabilities. The model for cats and dogs needs to output a two dimensional vector. The diagram below shows in an example how this was done in one of the earliest successful CNNs. The layer "FC8" here would get replaced with a new layer with 2 outputs.
<img src="images/pretrained.png" width="500">
original image
Step27: Parameters are learned by fitting a model to the data. Hyperparameters are another kind of parameter that cannot be directly learned from the regular training process. These parameters express “higher-level” properties of the model such as its complexity or how fast it should learn. Two examples of hyperparameters are the learning rate and the number of epochs.
During iterative training of a neural network, a batch or mini-batch is a subset of training samples used in one iteration of Stochastic Gradient Descent (SGD). An epoch is a single pass through the entire training set which consists of multiple iterations of SGD.
We can now fit the model; that is, use gradient descent to find the best parameters for the fully connected layer we added, that can separate cat pictures from dog pictures. We need to pass two hyperparameters
Step28: Analyzing results | Python Code:
# Put these at the top of every notebook, to get automatic reloading and inline plotting
%reload_ext autoreload
%autoreload 2
%matplotlib inline
Explanation: Image classification with Convolutional Neural Networks
Welcome to the first week of the second deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
Introduction to our first task: 'Dogs vs Cats'
We're going to try to create a model to enter the Dogs vs Cats competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): "State of the art: The current literature suggests machine classifiers can score above 80% accuracy on this task". So if we can beat 80%, then we will be at the cutting edge as of 2013!
End of explanation
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
PATH = "data/dogscats/"
sz=224
Explanation: Here we import the libraries we need. We'll learn about what each does during the course.
End of explanation
# os.makedirs('data/dogscats/models', exist_ok=True)
# !ln -s /datasets/fast.ai/dogscats/train {PATH}
# !ln -s /datasets/fast.ai/dogscats/test {PATH}
# !ln -s /datasets/fast.ai/dogscats/valid {PATH}
# os.makedirs('/cache/tmp', exist_ok=True)
# !ln -fs /cache/tmp {PATH}
# os.makedirs('/cache/tmp', exist_ok=True)
# !ln -fs /cache/tmp {PATH}
Explanation: Extra steps if NOT using Crestle or our scripts
The dataset is available at http://files.fast.ai/data/dogscats.zip. You can download it directly on your server by running the following line in your terminal. wget http://files.fast.ai/data/dogscats.zip. You should put the data in a subdirectory of this notebook's directory, called data/.
Extra steps if using Crestle
Crestle has the datasets required for fast.ai in /datasets, so we'll create symlinks to the data we want for this competition. (NB: we can't write to /datasets, but we need a place to store temporary files, so we create our own writable directory to put the symlinks in, and we also take advantage of Crestle's /cache/ faster temporary storage space.)
To run these commands (which you should only do if using Crestle) remove the # characters from the start of each line.
End of explanation
!ls {PATH}
!ls {PATH}valid
files = !ls {PATH}valid/cats | head
files
img = plt.imread(f'{PATH}valid/cats/{files[0]}')
plt.imshow(img);
Explanation: First look at cat pictures
Our library will assume that you have train and valid directories. It also assumes that each dir will have subdirs for each class you wish to recognize (in this case, 'cats' and 'dogs').
End of explanation
img.shape
img[:4,:4]
Explanation: Here is how the raw data looks like
End of explanation
# Uncomment the below if you need to reset your precomputed activations
# !rm -rf {PATH}tmp
arch=resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 3)
learn.fit(0.01,3)
learn.fit(0.01,2)
Explanation: Our first model: quick start
We're going to use a <b>pre-trained</b> model, that is, a model created by some one else to solve a different problem. Instead of building a model from scratch to solve a similar problem, we'll use a model trained on ImageNet (1.2 million images and 1000 classes) as a starting point. The model is a Convolutional Neural Network (CNN), a type of Neural Network that builds state-of-the-art models for computer vision. We'll be learning all about CNNs during this course.
We will be using the <b>resnet34</b> model. resnet34 is a version of the model that won the 2015 ImageNet competition. Here is more info on resnet models. We'll be studying them in depth later, but for now we'll focus on using them effectively.
Here's how to train and evaluate a dogs vs cats model in 3 lines of code, and under 20 seconds:
End of explanation
# This is the label for a val data
data.val_y
# from here we know that 'cats' is label 0 and 'dogs' is label 1.
data.classes
# this gives prediction for validation set. Predictions are in log scale
log_preds = learn.predict()
log_preds.shape
log_preds[:10]
preds = np.argmax(log_preds, axis=1) # from log probabilities to 0 or 1
probs = np.exp(log_preds[:,1]) # pr(dog)
def rand_by_mask(mask): return np.random.choice(np.where(mask)[0], 4, replace=False)
def rand_by_correct(is_correct): return rand_by_mask((preds == data.val_y)==is_correct)
# def plot_val_with_title(idxs, title):
# imgs = np.stack([data.val_ds[x][0] for x in idxs])
# title_probs = [probs[x] for x in idxs]
# print(title)
# return plots(data.val_ds.denorm(imgs), rows=1, titles=title_probs)
def plots(ims, figsize=(12,6), rows=1, titles=None):
f = plt.figure(figsize=figsize)
for i in range(len(ims)):
sp = f.add_subplot(rows, len(ims)//rows, i+1)
sp.axis('Off')
if titles is not None: sp.set_title(titles[i], fontsize=16)
plt.imshow(ims[i])
def load_img_id(ds, idx): return np.array(PIL.Image.open(PATH+ds.fnames[idx]))
def plot_val_with_title(idxs, title):
imgs = [load_img_id(data.val_ds,x) for x in idxs]
title_probs = [probs[x] for x in idxs]
print(title)
return plots(imgs, rows=1, titles=title_probs, figsize=(16,8))
# 1. A few correct labels at random
plot_val_with_title(rand_by_correct(True), "Correctly classified")
# 2. A few incorrect labels at random
plot_val_with_title(rand_by_correct(False), "Incorrectly classified")
def most_by_mask(mask, mult):
idxs = np.where(mask)[0]
return idxs[np.argsort(mult * probs[idxs])[:4]]
def most_by_correct(y, is_correct):
mult = -1 if (y==1)==is_correct else 1
return most_by_mask((preds == data.val_y)==is_correct & (data.val_y == y), mult)
plot_val_with_title(most_by_correct(0, True), "Most correct cats")
plot_val_with_title(most_by_correct(1, True), "Most correct dogs")
plot_val_with_title(most_by_correct(0, False), "Most incorrect cats")
plot_val_with_title(most_by_correct(1, False), "Most incorrect dogs")
most_uncertain = np.argsort(np.abs(probs -0.5))[:4]
plot_val_with_title(most_uncertain, "Most uncertain predictions")
Explanation: How good is this model? Well, as we mentioned, prior to this competition, the state of the art was 80% accuracy. But the competition resulted in a huge jump to 98.9% accuracy, with the author of a popular deep learning library winning the competition. Extraordinarily, less than 4 years later, we can now beat that result in seconds! Even last year in this same course, our initial model had 98.3% accuracy, which is nearly double the error we're getting just a year later, and that took around 10 minutes to compute.
Analyzing results: looking at pictures
As well as looking at the overall metrics, it's also a good idea to look at examples of each of:
1. A few correct labels at random
2. A few incorrect labels at random
3. The most correct labels of each class (ie those with highest probability that are correct)
4. The most incorrect labels of each class (ie those with highest probability that are incorrect)
5. The most uncertain labels (ie those with probability closest to 0.5).
End of explanation
learn = ConvLearner.pretrained(arch, data, precompute=True)
lrf=learn.lr_find()
Explanation: Choosing a learning rate
The learning rate determines how quickly or how slowly you want to update the weights (or parameters). The learning rate is one of the most difficult parameters to set, because it significantly affects model performance.
The method learn.lr_find() helps you find an optimal learning rate. It uses the technique developed in the 2015 paper Cyclical Learning Rates for Training Neural Networks, where we simply keep increasing the learning rate from a very small value until the loss stops decreasing. We can plot the learning rate across batches to see what this looks like.
We first create a new learner, since we want to know how to set the learning rate for a new (untrained) model.
End of explanation
learn.sched.plot_lr()
Explanation: Our learn object contains an attribute sched that contains our learning rate scheduler, and has some convenient plotting functionality including this one:
End of explanation
learn.sched.plot()
Explanation: Note that in the previous plot iteration is one iteration (or minibatch) of SGD. In one epoch there are
(num_train_samples/num_iterations) of SGD.
We can see the plot of loss versus learning rate to see where our loss stops decreasing:
End of explanation
tfms = tfms_from_model(resnet34, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
def get_augs():
data = ImageClassifierData.from_paths(PATH, bs=2, tfms=tfms, num_workers=1)
x,_ = next(iter(data.aug_dl))
return data.trn_ds.denorm(x)[1]
ims = np.stack([get_augs() for i in range(6)])
plots(ims, rows=2)
Explanation: The loss is still clearly improving at lr=1e-2 (0.01), so that's what we use. Note that the optimal learning rate can change as we train the model, so you may want to re-run this function from time to time.
Improving our model
Data augmentation
If you try training for more epochs, you'll notice that we start to overfit, which means that our model is learning to recognize the specific images in the training set, rather than generalizing such that we also get good results on the validation set. One way to fix this is to effectively create more data, through data augmentation. This refers to randomly changing the images in ways that shouldn't impact their interpretation, such as horizontal flipping, zooming, and rotating.
We can do this by passing aug_tfms (augmentation transforms) to tfms_from_model, with a list of functions to apply that randomly change the image however we wish. For photos that are largely taken from the side (e.g. most photos of dogs and cats, as opposed to photos taken from the top down, such as satellite imagery) we can use the pre-defined list of functions transforms_side_on. We can also specify random zooming of images up to specified scale by adding the max_zoom parameter.
End of explanation
data = ImageClassifierData.from_paths(PATH, tfms=tfms)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(1e-2, 1)
learn.precompute=False
Explanation: Let's create a new data object that includes this augmentation in the transforms.
End of explanation
learn.fit(1e-2, 3, cycle_len=1)
Explanation: By default when we create a learner, it sets all but the last layer to frozen. That means that it's still only updating the weights in the last layer when we call fit.
End of explanation
learn.sched.plot_lr()
Explanation: What is that cycle_len parameter? What we've done here is used a technique called stochastic gradient descent with restarts (SGDR), a variant of learning rate annealing, which gradually decreases the learning rate as training progresses. This is helpful because as we get closer to the optimal weights, we want to take smaller steps.
However, we may find ourselves in a part of the weight space that isn't very resilient - that is, small changes to the weights may result in big changes to the loss. We want to encourage our model to find parts of the weight space that are both accurate and stable. Therefore, from time to time we increase the learning rate (this is the 'restarts' in 'SGDR'), which will force the model to jump to a different part of the weight space if the current area is "spikey". Here's a picture of how that might look if we reset the learning rates 3 times (in this paper they call it a "cyclic LR schedule"):
<img src="images/sgdr.png" width="80%">
(From the paper Snapshot Ensembles).
The number of epochs between resetting the learning rate is set by cycle_len, and the number of times this happens is referred to as the number of cycles, and is what we're actually passing as the 2nd parameter to fit(). So here's what our actual learning rates looked like:
End of explanation
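Separately from the actual schedule plotted above, a small self-contained sketch of cosine annealing with restarts can make the shape concrete (an illustration of the idea only, not fastai's scheduler; the iteration count is made up):

```python
import numpy as np
import matplotlib.pyplot as plt

lr_max, iters_per_epoch, cycle_len, n_cycles = 1e-2, 100, 1, 3
lrs = []
for _ in range(n_cycles):
    steps = cycle_len * iters_per_epoch
    for i in range(steps):
        # Cosine decay from lr_max towards 0 within each cycle, then restart.
        lrs.append(lr_max / 2 * (1 + np.cos(np.pi * i / steps)))
plt.plot(lrs); plt.xlabel('iteration'); plt.ylabel('learning rate');
```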
learn.save('224_lastlayer')
learn.load('224_lastlayer')
Explanation: Our validation loss isn't improving much, so there's probably no point further training the last layer on its own.
Since we've got a pretty good model at this point, we might want to save it so we can load it again later without training it from scratch.
End of explanation
learn.unfreeze()
Explanation: Fine-tuning and differential learning rate annealing
Now that we have a good final layer trained, we can try fine-tuning the other layers. To tell the learner that we want to unfreeze the remaining layers, just call (surprise surprise!) unfreeze().
End of explanation
lr=np.array([1e-4,1e-3,1e-2])
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
Explanation: Note that the other layers have already been trained to recognize imagenet photos (whereas our final layers were randomly initialized), so we want to be careful not to destroy the carefully tuned weights that are already there.
Generally speaking, the earlier layers (as we've seen) have more general-purpose features. Therefore we would expect them to need less fine-tuning for new datasets. For this reason we will use different learning rates for different layers: the first few layers will be at 1e-4, the middle layers at 1e-3, and our FC layers we'll leave at 1e-2 as before. We refer to this as differential learning rates, although there's no standard name for this technique in the literature that we're aware of.
End of explanation
learn.sched.plot_lr()
Explanation: Another trick we've used here is adding the cycle_mult parameter. Take a look at the following chart, and see if you can figure out what the parameter is doing:
End of explanation
learn.save('224_all')
learn.load('224_all')
Explanation: Note that what's being plotted above is the learning rate of the final layers. The learning rates of the earlier layers are fixed at the same multiples of the final layer rates as we initially requested (i.e. the first layers have 100x smaller, and middle layers 10x smaller learning rates, since we set lr=np.array([1e-4,1e-3,1e-2])).
End of explanation
log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
accuracy(probs, y)
Explanation: There is something else we can do with data augmentation: use it at inference time (also known as test time). Not surprisingly, this is known as test time augmentation, or just TTA.
TTA simply makes predictions not just on the images in your validation set, but also makes predictions on a number of randomly augmented versions of them too (by default, it uses the original image along with 4 randomly augmented versions). It then takes the average prediction from these images, and uses that. To use TTA on the validation set, we can use the learner's TTA() method.
End of explanation
preds = np.argmax(probs, axis=1)
probs = probs[:,1]
Explanation: I generally see about a 10-20% reduction in error on this dataset when using TTA at this point, which is an amazing result for such a quick and easy technique!
Analyzing results
Confusion matrix
End of explanation
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y, preds)
Explanation: A common way to analyze the result of a classification model is to use a confusion matrix. Scikit-learn has a convenient function we can use for this purpose:
End of explanation
plot_confusion_matrix(cm, data.classes)
Explanation: We can just print out the confusion matrix, or we can show a graphical view (which is mainly useful for datasets with a larger number of categories).
End of explanation
plot_val_with_title(most_by_correct(0, False), "Most incorrect cats")
plot_val_with_title(most_by_correct(1, False), "Most incorrect dogs")
Explanation: Looking at pictures again
End of explanation
tfms = tfms_from_model(resnet34, sz)
Explanation: Review: easy steps to train a world-class image classifier
Enable data augmentation, and precompute=True
Use lr_find() to find highest learning rate where loss is still clearly improving
Train last layer from precomputed activations for 1-2 epochs
Train last layer with data augmentation (i.e. precompute=False) for 2-3 epochs with cycle_len=1
Unfreeze all layers
Set earlier layers to 3x-10x lower learning rate than next higher layer
Use lr_find() again
Train full network with cycle_mult=2 until over-fitting
Understanding the code for our first model
Let's look at the Dogs v Cats code line by line.
tfms stands for transformations. tfms_from_model takes care of resizing, image cropping, initial normalization (creating data with (mean,stdev) of (0,1)), and more.
End of explanation
data = ImageClassifierData.from_paths(PATH, tfms=tfms)
Explanation: We need a <b>path</b> that points to the dataset. In this path we will also store temporary data and final results. ImageClassifierData.from_paths reads data from a provided path and creates a dataset ready for training.
End of explanation
learn = ConvLearner.pretrained(resnet34, data, precompute=True)
Explanation: ConvLearner.pretrained builds a learner that contains a pre-trained model. The last layer of the model needs to be replaced with the layer of the right dimensions. The pretrained model was trained for 1000 classes, therefore the final layer predicts a vector of 1000 probabilities. The model for cats and dogs needs to output a two dimensional vector. The diagram below shows in an example how this was done in one of the earliest successful CNNs. The layer "FC8" here would get replaced with a new layer with 2 outputs.
<img src="images/pretrained.png" width="500">
original image
End of explanation
learn.fit(1e-2, 1)
Explanation: Parameters are learned by fitting a model to the data. Hyperparameters are another kind of parameter that cannot be directly learned from the regular training process. These parameters express “higher-level” properties of the model such as its complexity or how fast it should learn. Two examples of hyperparameters are the learning rate and the number of epochs.
During iterative training of a neural network, a batch or mini-batch is a subset of training samples used in one iteration of Stochastic Gradient Descent (SGD). An epoch is a single pass through the entire training set which consists of multiple iterations of SGD.
We can now fit the model; that is, use gradient descent to find the best parameters for the fully connected layer we added, that can separate cat pictures from dog pictures. We need to pass two hyperparameters: the learning rate (generally 1e-2 or 1e-3 is a good starting point, we'll look more at this next) and the number of epochs (you can pass in a higher number and just stop training when you see it's no longer improving, then re-run it with the number of epochs you found works well.)
End of explanation
def binary_loss(y, p):
return np.mean(-(y * np.log(p) + (1-y)*np.log(1-p)))
acts = np.array([1, 0, 0, 1])
preds = np.array([0.95, 0.1, 0.2, 0.8])
binary_loss(acts, preds)
Explanation: Analyzing results: loss and accuracy
When we run learn.fit we print 3 performance values (see above.) Here 0.03 is the value of the loss in the training set, 0.0226 is the value of the loss in the validation set and 0.9927 is the validation accuracy. What is the loss? What is accuracy? Why not just show accuracy?
Accuracy is the ratio of correct predictions to the total number of predictions.
In machine learning the loss function or cost function represents the price paid for inaccuracy of predictions.
The loss associated with one example in binary classification is given by:
-(y * log(p) + (1-y) * log (1-p))
where y is the true label of x and p is the probability predicted by our model that the label is 1.
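As a sanity check (scikit-learn is already used elsewhere in this notebook), the hand-rolled loss above should agree with sklearn's log_loss:

```python
from sklearn.metrics import log_loss
print(log_loss(acts, preds))  # should match binary_loss(acts, preds)
```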
End of explanation |
7,133 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get the data
2MASS => J, H, K, angular resolution ~4"
WISE => 3.4, 4.6, 12, and 22 μm (W1, W2, W3, W4) with an angular resolution of 6.1", 6.4", 6.5", & 12.0"
GALEX imaging => Five imaging surveys in a Far UV band (1350-1750Å) and Near UV band (1750-2800Å) with 6-8 arcsecond resolution (80% encircled energy) and 1 arcsecond astrometry, and a cosmic UV background map.
Step1: Matching coordinates
Step2: Plot $W_1-J$ vs $W_1$
Step3: W1-J < -1.7 => galaxy
W1-J > -1.7 => stars
Filter all Cats
Step4: Collect relevant data
Step5: Analysis
we can try
Step6: DBSCAN
Step7: Plot $W_1$ vs $J$
Step8: t-SNE | Python Code:
#obj = ["3C 454.3", 343.49062, 16.14821, 1.0]
obj = ["PKS J0006-0623", 1.55789, -6.39315, 1.0]
#obj = ["M87", 187.705930, 12.391123, 1.0]
#### name, ra, dec, radius of cone
obj_name = obj[0]
obj_ra = obj[1]
obj_dec = obj[2]
cone_radius = obj[3]
obj_coord = coordinates.SkyCoord(ra=obj_ra, dec=obj_dec, unit=(u.deg, u.deg), frame="icrs")
# Query data
data_2mass = Irsa.query_region(obj_coord, catalog="fp_psc", radius=cone_radius * u.deg)
data_wise = Irsa.query_region(obj_coord, catalog="allwise_p3as_psd", radius=cone_radius * u.deg)
__data_galex = Vizier.query_region(obj_coord, catalog='II/335', radius=cone_radius * u.deg)
data_galex = __data_galex[0]
num_2mass = len(data_2mass)
num_wise = len(data_wise)
num_galex = len(data_galex)
print("Number of object in (2MASS, WISE, GALEX): ", num_2mass, num_wise, num_galex)
Explanation: Get the data
2MASS => J, H, K, angular resolution ~4"
WISE => 3.4, 4.6, 12, and 22 μm (W1, W2, W3, W4) with an angular resolution of 6.1", 6.4", 6.5", & 12.0"
GALEX imaging => Five imaging surveys in a Far UV band (1350-1750Å) and Near UV band (1750-2800Å) with 6-8 arcsecond resolution (80% encircled energy) and 1 arcsecond astrometry, and a cosmic UV background map.
End of explanation
# use only coordinate columns
ra_2mass = data_2mass['ra']
dec_2mass = data_2mass['dec']
c_2mass = coordinates.SkyCoord(ra=ra_2mass, dec=dec_2mass, unit=(u.deg, u.deg), frame="icrs")
ra_wise = data_wise['ra']
dec_wise = data_wise['dec']
c_wise = coordinates.SkyCoord(ra=ra_wise, dec=dec_wise, unit=(u.deg, u.deg), frame="icrs")
ra_galex = data_galex['RAJ2000']
dec_galex = data_galex['DEJ2000']
c_galex = coordinates.SkyCoord(ra=ra_galex, dec=dec_galex, unit=(u.deg, u.deg), frame="icrs")
####
sep_min = 1.0 * u.arcsec # minimum separation in arcsec
# Only 2MASS and WISE matching
#
idx_2mass, idx_wise, d2d, d3d = c_wise.search_around_sky(c_2mass, sep_min)
# select only one nearest if there are more in the search reagion (minimum seperation parameter)!
print("Only 2MASS and WISE: ", len(idx_2mass))
Explanation: Matching coordinates
End of explanation
# from matching of 2 cats (2MASS and WISE) coordinate
data_2mass_matchwith_wise = data_2mass[idx_2mass]
data_wise_matchwith_2mass = data_wise[idx_wise] # WISE dataset
w1 = data_wise_matchwith_2mass['w1mpro']
j = data_2mass_matchwith_wise['j_m']
w1j = w1-j
cutw1j = -1.7 # https://academic.oup.com/mnras/article/448/2/1305/1055284
# WISE galaxy data -> from cut
galaxy = data_wise_matchwith_2mass[w1j < cutw1j]
print("Number of galaxy from cut W1-J:", len(galaxy))
w1j_galaxy = w1j[w1j<cutw1j]
w1_galaxy = w1[w1j<cutw1j]
plt.scatter(w1j, w1, marker='o', color='blue')
plt.scatter(w1j_galaxy, w1_galaxy, marker='.', color="red")
plt.axvline(x=cutw1j) # https://academic.oup.com/mnras/article/448/2/1305/1055284
Explanation: Plot $W_1-J$ vs $W_1$
End of explanation
# GALEX
###
# coord of object in 2mass which match wise (first objet/nearest in sep_min region)
c_2mass_matchwith_wise = c_2mass[idx_2mass]
c_wise_matchwith_2mass = c_wise[idx_wise]
#Check with 2mass cut
idx_2mass_wise_galex, idx_galex1, d2d, d3d = c_galex.search_around_sky(c_2mass_matchwith_wise, sep_min)
num_galex1 = len(idx_galex1)
#Check with wise cut
idx_wise_2mass_galex, idx_galex2, d2d, d3d = c_galex.search_around_sky(c_wise_matchwith_2mass, sep_min)
num_galex2 = len(idx_galex2)
print("Number of GALEX match in 2MASS cut (with WISE): ", num_galex1)
print("Number of GALEX match in WISE cut (with 2MASS): ", num_galex2)
# diff/average
print("Confusion level: ", abs(num_galex1 - num_galex2)/np.mean([num_galex1, num_galex2])*100, "%")
# Choose which one is smaller!
if num_galex1 < num_galex2:
select_from_galex = idx_galex1
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# 2MASS from GALEX_selected
_idx_galex1, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(c_selected_galex, sep_min)
match_2mass = data_2mass[_idx_2mass]
# WISE from 2MASS_selected
_ra_match_2mass = match_2mass['ra']
_dec_match_2mass = match_2mass['dec']
_c_match_2mass = coordinates.SkyCoord(ra=_ra_match_2mass, dec=_dec_match_2mass, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_wise, d2d, d3d = c_wise.search_around_sky(_c_match_2mass, sep_min)
match_wise = data_wise[_idx_wise]
else:
select_from_galex = idx_galex2
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# WISE from GALEX_selected
_idx_galex1, _idx_wise, d2d, d3d = c_wise.search_around_sky(c_selected_galex, sep_min)
match_wise = data_wise[_idx_wise]
# 2MASS from WISE_selected
_ra_match_wise = match_wise['ra']
_dec_match_wise = match_wise['dec']
_c_match_wise = coordinates.SkyCoord(ra=_ra_match_wise, dec=_dec_match_wise, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(_c_match_wise, sep_min)
match_2mass = data_2mass[_idx_2mass]
print("Number of match in GALEX: ", len(match_galex))
print("Number of match in 2MASS: ", len(match_2mass))
print("Number of match in WISE : ", len(match_wise))
Explanation: W1-J < -1.7 => galaxy
W1-J > -1.7 => stars
Filter all Cats
End of explanation
joindata = np.array([match_2mass['j_m'], match_2mass['h_m'], match_2mass['k_m'],
match_wise['w1mpro'], match_wise['w2mpro'], match_wise['w3mpro'], match_wise['w4mpro'],
match_galex['NUVmag']])
joindata = joindata.T
Explanation: Collect relevant data
End of explanation
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
X = scale(joindata)
pca = PCA(n_components=4)
X_r = pca.fit(X).transform(X)
print(pca.components_)
print(pca.explained_variance_)
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC3
plt.scatter(X_r[:,0], X_r[:,2], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,2], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC4
plt.scatter(X_r[:,0], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,3], marker=".", color="red")
# plot PCA result
# Plot data using PC2 vs PC3
plt.scatter(X_r[:,1], X_r[:,2], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,1], X_r[i,2], marker=".", color="red")
# plot PCA result
# Plot data using PC2 vs PC4
plt.scatter(X_r[:,1], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,1], X_r[i,3], marker=".", color="red")
# plot PCA result
# Plot data using PC3 vs PC4
plt.scatter(X_r[:,2], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,2], X_r[i,3], marker=".", color="red")
Explanation: Analysis
we can try:
- dimensionality reduction
- clustering
- classification
- data embedding
PCA
End of explanation
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
X = scale(joindata)
db = DBSCAN(eps=1, min_samples=3).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
#print(labels)
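As a small optional follow-up (not part of the original cell), the noise fraction can be read straight off the labels:
# Count the points DBSCAN flagged as noise (label -1)
n_noise = list(labels).count(-1)
print('Estimated number of noise points: %d' % n_noise)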
Explanation: DBSCAN
End of explanation
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (labels == k)
## J vs J-W1
xy = X[class_member_mask & core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=14)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=8)
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.plot(X[i,3], X[i,0], marker="X", markerfacecolor='red', markeredgecolor='none', markersize=8)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
Explanation: Plot $W_1$ vs $J$
End of explanation
from sklearn.manifold import TSNE
X = scale(joindata)
X_r = TSNE(n_components=2).fit_transform(X)
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color="blue")
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker='.', color="red")
Explanation: t-SNE
End of explanation |
7,134 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DAT210x - Programming with Python for DS
Module3 - Lab1
Step1: Load up the wheat seeds dataset into a dataframe. We've stored a copy in the Datasets directory.
Step2: Create a slice from your dataframe and name the variable s1. It should only include the area and perimeter features.
Step3: Create another slice from your dataframe and call it s2 this time. Slice out only the groove and asymmetry features.
Step4: Create a histogram plot using the first slice, and another histogram plot using the second slice. Be sure to set alpha=0.75. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
Explanation: DAT210x - Programming with Python for DS
Module3 - Lab1
End of explanation
# .. your code here ..
Explanation: Load up the wheat seeds dataset into a dataframe. We've stored a copy in the Datasets directory.
End of explanation
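A minimal sketch of one way to do this step; the exact filename under the Datasets directory is an assumption:
# Hedged sketch -- adjust the path/filename to your copy of the seeds data
df = pd.read_csv('Datasets/wheat.data', index_col=0)
df.head()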
# .. your code here ..
Explanation: Create a slice from your dataframe and name the variable s1. It should only include the area and perimeter features.
End of explanation
# .. your code here ..
Explanation: Create another slice from your dataframe and call it s2 this time. Slice out only the groove and asymmetry features:
End of explanation
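A hedged sketch of both slices, assuming the columns are named 'area', 'perimeter', 'groove' and 'asymmetry':
# Hedged sketch of the two slices
s1 = df[['area', 'perimeter']]
s2 = df[['groove', 'asymmetry']]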
# .. your code here ..
# Display the graphs:
plt.show()
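For reference, a sketch of what the plotting cell above could contain (the histograms would be drawn before the plt.show() call):
# Hedged sketch: one histogram per slice, with the requested alpha
s1.plot.hist(alpha=0.75)
s2.plot.hist(alpha=0.75)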
Explanation: Create a histogram plot using the first slice, and another histogram plot using the second slice. Be sure to set alpha=0.75.
End of explanation |
7,135 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1.1 Trim sequence to multiples of three characters
Write a function trim(s) that trims the sequence s (which is a Seq object) to a multiple of three characters so that its translation happens without an error. Use it to translate the given sequence.
Step1: Exercise 1.2 GC content of a Seq sequence
Write a function GC(s) which calculates the GC content of a Seq sequence s and returns it as a real number in the range 0-1. The GC content is the proportion of "G" and "C" characters in the sequence. Make sure that your function works correctly also for lower-cased sequences.
Step2: Exercise 1.3 Hamming distance of two sequences
Write a function hamming(s1, s2) that calculates the hamming distance of the two sequences s1 and s2. The Hamming distance is the number of positions in which the two sequences differ. The distance is undefined if the sequences have unequal length.
Step3: Exercise 1.4 Find possible coding tables
DNA sequences are translated into protein (amino acid sequences) three letters at a time. So every three letters of the DNA sequence produce a single letter of the protein sequence. The translation is governed by a coding table, of which there are many. See https://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi .
Step4: Exercise 1.5 Remove ambiguous alphabet letter
Write a function clean(s) where s is a string and the function returns the string with all characters removed which are not A, C, G, or T.
Step5: Exercise 1.6 Sort sequences
Write a function sort_by_unknown(s) that sorts and returns the given DNA sequences in ascending order by the number of unknown bases. Here, s is an iterable containing Seq objects. | Python Code:
def trim(s):
# implement this function
pass
# test case
import Bio.Seq as BS
s = BS.Seq("ACGCGGCGTG")
print(s, "has length", len(s))
# write a piece of code here which will
# print the translated sequence 'TRR'
# without any errors
Explanation: Exercise 1.1 Trim sequence to multiples of three characters
Write a function trim(s) that trims the sequence s (which is a Seq object) to a multiple of three characters so that its translation happens without an error. Use it to translate the given sequence.
End of explanation
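A minimal sketch of trim, simply dropping the trailing one or two bases; Seq slicing and .translate() are standard Biopython:
def trim(s):
    # drop trailing bases so the length is a multiple of three
    return s[: len(s) - (len(s) % 3)]

print(trim(s).translate())  # expected: TRR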
def GC(s):
# implement this function
pass
# test case
import Bio.Seq as BS
s = BS.Seq("ACGATTAA")
print("GC content of", s, "is", GC(s))
Explanation: Exercise 1.2 GC content of a Seq sequence
Write a function GC(s) which calculates the GC content of a Seq sequence s and returns it as a real number in the range 0-1. The GC content is the proportion of "G" and "C" characters in the sequence. Make sure that your function works correctly also for lower-cased sequences.
End of explanation
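One possible implementation, counting G and C on the upper-cased string form so lower-cased input works too:
def GC(s):
    s = str(s).upper()
    return (s.count("G") + s.count("C")) / len(s)

print(GC(BS.Seq("ACGATTAA")))  # 0.25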
def hamming(s1, s2):
# implement this function
pass
# test case
import Bio.Seq as BS
s1 = BS.Seq("ACGCAGTTGCAGTAG")
s2 = BS.Seq("ACGCACTTGCAGAAG")
s3 = BS.Seq("AAAAAAAAAA")
print("Hamming distance of", s1, "and", s2, "is", hamming(s1,s2))
print("Hamming distance of", s1, "and", s3, "is", hamming(s1,s3))
Explanation: Exercise 1.3 Hamming distance of two sequences
Write a function hamming(s1, s2) that calculates the hamming distance of the two sequences s1 and s2. The Hamming distance is the number of positions in which the two sequences differ. The distance is undefined if the sequences have unequal length.
End of explanation
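A hedged sketch that also raises for unequal lengths, since the distance is undefined there:
def hamming(s1, s2):
    if len(s1) != len(s2):
        raise ValueError("Hamming distance is undefined for unequal lengths")
    return sum(a != b for a, b in zip(s1, s2))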
import Bio.Seq as BS
s_dna = BS.Seq("ATGGTCGATGACCTGTGAACTTAA")
s_protein = BS.Seq("MVDDLCT*")  # Bio.Seq has no IUPAC attribute; explicit alphabets were removed in Biopython 1.78+
# Hint: Bio.Data.CodonTable.generic_by_id
# print the number(s) of the table(s) which
# could have produced s_protein from s_dna
Explanation: Exercise 1.4 Find possible coding tables
DNA sequences are translated into protein (amino acid sequences) three letters at a time. So every three letters of the DNA sequence produce a single letter of the protein sequence. The translation is governed by a coding table, of which there are many. See https://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi .
Let us say you have a DNA sequence and its translation, and you would like to find out which coding table was used in the translation. Given a sequence s_dna and its translation s_protein, find the coding table(s) which could have produced it. Print the number of the table (or one number per line if there are many possible tables).
The tables are defined in Bio.Data.CodonTable, and they are numbered. In particular, generic_by_id has a mapping from these numbers to the tables.
End of explanation
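One way to run the comparison, as a hedged sketch: translate s_dna with every generic NCBI table and print the ids that reproduce s_protein.
from Bio.Data import CodonTable

# try every generic table and keep the ones whose translation matches
for table_id in sorted(CodonTable.generic_by_id):
    if str(s_dna.translate(table=table_id)) == str(s_protein):
        print(table_id)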
def clean(s):
# implement this function
pass
# test case
print("ACGTHABJHHBAGATGATB")
print(clean("ACGTHABJHHBAGATGATB"))
Explanation: Exercise 1.5 Remove ambiguous alphabet letter
Write a function clean(s) where s is a string and the function returns the string with all characters removed which are not A, C, G, or T.
End of explanation
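A one-line sketch using a generator expression:
def clean(s):
    # keep only the unambiguous bases
    return "".join(c for c in s if c in "ACGT")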
def sort_by_unknown(s):
# implement this function
pass
# test case
import Bio.Seq as BS
s = [BS.Seq('NGTACCTTGCTACTC'),
BS.Seq('NCGTGNN'),
BS.Seq('NNNNN'),
BS.Seq('ACGGT'),
BS.Seq('ANNTGGT'),
BS.Seq('ACGNGT'),
BS.Seq('AACGTCCGTNNN'),
]
print(s)
print(sort_by_unknown(s))
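A hedged sketch, counting 'N' (the unknown base) in each sequence and sorting ascending:
def sort_by_unknown(s):
    # sorted() returns a new list, leaving the input iterable untouched
    return sorted(s, key=lambda seq: str(seq).upper().count("N"))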
Explanation: Exercise 1.6 Sort sequences
Write a function sort_by_unknown(s) that sorts and returns the given DNA sequences in ascending order by the number of unknown bases. Here, s is an iterable containing Seq objects.
End of explanation |
7,136 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Since the annotation channel of this data is somewhat suspect, I decided to load some alternate annotation files to compare.
Interestingly, the labels are exactly 3 times the z span of the p1 data. Could this mean they are just stacked p1, p2, p3?
Ok, so the annotations are legit. The next thing I want to be sure about is that we are checking precision, recall, and f1 correctly for this data.
Step1: I think I may see where the issue was arising. I was checking to see if the overlap between the label and the prediction was greater than the overlap ratio times the volume of the prediction. Since, in this data, our predictions are massive compared to the labels, this would not work super well.
I made a change such that the label and the prediction need merely be nondisjoint (overlap by at least one voxel).
Revised Pipeline
Step2: This looks more like it. So far so good
Step3: Now let's take a look at the max and average intensity in each of the clusters. I have a feeling that there will be some level of bimodality
Step4: That's what I like to see. The cluster maxima exhibit clear bimodality. Time to get the Otsu threshold for the optimum class breakdown here, and disregard any clusters whose max is under the thresh
def otsuVox(argVox):
probVox = np.nan_to_num(argVox)
bianVox = np.zeros_like(probVox)
for zIndex, curSlice in enumerate(probVox):
#if the array contains all the same values
if np.max(curSlice) == np.min(curSlice):
#otsu thresh will fail here, leave bianVox as all 0's
continue
thresh = threshold_otsu(curSlice)
bianVox[zIndex] = curSlice > thresh
return bianVox
def precision_recall_f1(labels, predictions):
if len(predictions) == 0:
print 'ERROR: prediction list is empty'
return 0., 0., 0.
labelFound = np.zeros(len(labels))
truePositives = 0
falsePositives = 0
for prediction in predictions:
#casting to set is ok here since members are unique
predictedMembers = set([tuple(elem) for elem in prediction.getMembers()])
detectionCutoff = 1
found = False
for idx, label in enumerate(labels):
labelMembers = set([tuple(elem) for elem in label.getMembers()])
#if the predictedOverlap is over the detectionCutoff ratio
if len(predictedMembers & labelMembers) >= detectionCutoff:
truePositives +=1
found=True
labelFound[idx] = 1
if not found:
falsePositives +=1
precision = truePositives/float(truePositives + falsePositives)
recall = np.count_nonzero(labelFound)/float(len(labels))
f1 = 0
try:
f1 = 2 * (precision*recall)/(precision + recall)
except ZeroDivisionError:
f1 = 0
return precision, recall, f1
Explanation: Since the annotation channel of this data is somewhat suspect, I decided to load some alternate annotation files to compare.
Interestingly, the labels are exactly 3 times the z span of the p1 data. Could this mean they are just stacked p1, p2, p3?
Ok, so the annotations are legit. The next thing I want to be sure about is that we are checking precision, recall, and f1 correctly for this data.
End of explanation
gaba = procData[5][1]
otsuOut = otsuVox(gaba)
Explanation: I think I may see where the issue was arising. I was checking to see if the overlap between the label and the prediction was greater than the overlap ratio times the volume of the prediction. Since, in this data, our predictions are massive compared to the labels, this would not work super well.
I made a change such that the label and the prediction need merely be nondisjoint (overlap by at least one voxel).
Revised Pipeline
End of explanation
clusterList = cLib.clusterThresh(otsuOut, 500, 1000000)
Explanation: This looks more like it. So far so good
End of explanation
aveList = []
maxList = []
for cluster in clusterList:
curClusterDist = []
for member in cluster.members:
curClusterDist.append(gaba[member[0]][member[1]][member[2]])
aveList.append(np.mean(curClusterDist))
maxList.append(np.max(curClusterDist))
plt.figure()
plt.hist(aveList, bins=40)
plt.title('Averages Of Clusters')
plt.show()
plt.figure()
plt.hist(maxList, bins=30)
plt.title('Max Of Clusters')
plt.show()
plt.figure()
plt.scatter(range(len(aveList)), aveList, c='b')
plt.scatter(range(len(maxList)), maxList, c='r')
plt.show()
Explanation: Now let's take a look at the max and average intensity in each of the clusters. I have a feeling that there will be some level of bimodality
End of explanation
#thresh = threshold_otsu(np.array(maxList))
thresh = 23
finalClusters = []
for i in range(len(maxList)): #this is bad and i should feel bad
if aveList[i] > thresh:
finalClusters.append(clusterList[i])
outVol = np.zeros_like(gaba)
for cluster in finalClusters:
for member in cluster.members:
outVol[member[0]][member[1]][member[2]] = 1
labelClusters = cLib.clusterThresh(procData[0][1], 0, 10000000)
print precision_recall_f1(labelClusters, finalClusters)
Explanation: That's what I like to see. The cluster maxima exhibit clear bimodality. Time to get the Otsu threshold for the optimum class breakdown here, and disregard any clusters whose max is under the thresh
End of explanation |
7,137 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Libraries
Step1: NOTE
Step2: Analysis
RASLseqAnalysis_STAR
Step3: Demultiplexing and Aligning FASTQ Reads
Step4: SUMMARY REPORT | Python Code:
import pandas as pd
import os, sys, time, random
import numpy as np
from scipy import stats
sys.path.append('../')
from RASLseqTools import *
sys.path.append('../RASLseqTools')
import RASLseqAnalysis_STAR
import seaborn
%pylab inline
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
Explanation: Libraries
End of explanation
fastq_path = '../data/sample.fastq.gz'
probes = '../data/sample.probes'
well_annot = '../data/sample.bc'
aligner_path = '../STAR_bin/'
Explanation: NOTE: THIS SAMPLE NOTEBOOK REQUIRES THE FOLLOWING DEPENDENCIES:
STAR binary: https://github.com/alexdobin/STAR/tree/master/bin
python-Levenshtein: https://pypi.python.org/pypi/python-Levenshtein/
pandas: https://github.com/pydata/pandas
seaborn: https://github.com/mwaskom/seaborn
NumPy: http://www.numpy.org
SciPy: http://www.scipy.org
Data
End of explanation
#Initiate RASLseqAnalysis
seqRun1 = RASLseqAnalysis_STAR.RASLseqAnalysis_STAR(fastq_path, probes, aligner_path, well_annot, '../data/temp/', write_file='../data/temp/test_alignment.txt' ,print_on=False, \
n_jobs=1, offset_5p=25, offset_3p=21, wellbc_start=0, wellbc_end=8, write_alignments=False )
#Well Annotation DataFrame
seqRun1.RASLseqBCannot_obj.well_annot_df
#Probe DataFrame
seqRun1.RASLseqProbes_obj.on_off_target_probes_df.head()
Explanation: Analysis
RASLseqAnalysis_STAR
End of explanation
#FASTQ Analysis
%time seqRun1.get_target_counts_df()
#RESULTS
#Test data with approximately balanced read counts for each on-target probe
seqRun1.RASLseqAnalysis_df
Explanation: Demultiplexing and Aligning FASTQ Reads
End of explanation
#Pandas DataFrame of Probe Counts
seqRun1_counts = seqRun1.RASLseqAnalysis_df[seqRun1.probe_columns].fillna(value=0)
seqRun1_counts['well_count'] = seqRun1_counts.apply(np.sum, axis=1) #total read counts by well
#READS BY WELL
seqRun1_counts.well_count.hist(bins=10)
#READS BY PLATE
seqRun1_counts.groupby(level=0)['well_count'].aggregate(np.sum).plot(kind='bar')
#ADDING ROW AND COL ANNOTATIONS
seqRun1_counts.insert(0,'Row', [1,2,3,4,1,2,3,4,1,2,3,4,1,2,3,4])
seqRun1_counts.insert(1,'Col', [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4])
#MEDIAN READ COUNT BY WELL
median_well_count = seqRun1_counts.pivot_table(index='Row', columns='Col', values='well_count', aggfunc=np.median)
seaborn.heatmap(median_well_count.values, \
cmap='spectral', square=False, annot=True)
plt.show()
plt.close()
median_well_count.stack().hist(bins=10)
#MEAN WELL READ COUNT
mean_well_count = seqRun1_counts.pivot_table(index='Row', columns='Col', values='well_count', aggfunc=np.mean)
seaborn.heatmap(mean_well_count.values, \
cmap='spectral', square=False, annot=True)
plt.show()
plt.close()
mean_well_count.stack().hist(bins=10)
#PCA FOR EACH WELL BY PLATE
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA  # sklearn.lda was removed in newer scikit-learn; LDA is imported but unused below
from sklearn.preprocessing import StandardScaler
X = seqRun1_counts[seqRun1.probe_columns]
target_names = seqRun1_counts.index.get_level_values(1).tolist()
X = StandardScaler().fit_transform(X)
y = target_names
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
# Percentage of variance explained for each components
print('explained variance ratio (first two components): %s'
% str(pca.explained_variance_ratio_))
fig, ax = plt.subplots(figsize=(15,8))
#colored according to plate
colors = ['r', 'r', 'r', 'r', 'g', 'g', 'g', 'g', 'b', 'b', 'b', 'b', 'gray', 'gray', 'gray', 'gray', ]
ax.scatter(X_r[0:,0], X_r[0:,1], s=800, alpha=0.7, c=colors, cmap=cm.spectral)
for i, txt, c in zip(X_r, target_names, colors):
ax.annotate(txt, xy=(i[0],i[1]), textcoords='offset points', va='bottom', ha='center', xytext=(0, -5), fontsize=20, color=c)
plt.title('PCA Wells From Each Plate', fontsize=20)
plt.grid(color='gray')
plt.show()
Explanation: SUMMARY REPORT
End of explanation |
7,138 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
So in numpy arrays there is a built-in function for getting the diagonal indices, but I can't seem to figure out how to get the diagonal ending at the bottom left rather than the bottom right (it might not be on the corner for a non-square matrix). | Problem:
import numpy as np
a = np.array([[ 0, 1, 2, 3, 4, 5],
[ 5, 6, 7, 8, 9, 10],
[10, 11, 12, 13, 14, 15],
[15, 16, 17, 18, 19, 20],
[20, 21, 22, 23, 24, 25]])
dim = min(a.shape)
b = a[:dim,:dim]
result = np.vstack((np.diag(b), np.diag(np.fliplr(b)))) |
7,139 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sigma to Pressure Interpolation
By using metpy.interpolate.log_interpolate_1d, data with sigma as the vertical coordinate can be
interpolated to isobaric coordinates.
Step1: Data
The data for this example comes from the outer domain of a WRF-ARW model forecast
initialized at 1200 UTC on 03 June 1980. Model data courtesy Matthew Wilson, Valparaiso
University Department of Geography and Meteorology.
Step2: Array of desired pressure levels
Step3: Interpolate The Data
Now that the data is ready, we can interpolate to the new isobaric levels. The data is
interpolated from the irregular pressure values for each sigma level to the new input
mandatory isobaric levels. log_interpolate_1d will interpolate over a specified dimension
with the axis argument. In this case, axis=1 will correspond to interpolation on the
vertical axis. The interpolated data is output in a list, so we will pull out each
variable for plotting.
Step4: Plotting the Data for 700 hPa. | Python Code:
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from netCDF4 import Dataset, num2date
from metpy.cbook import get_test_data
from metpy.interpolate import log_interpolate_1d
from metpy.plots import add_metpy_logo, add_timestamp
from metpy.units import units
Explanation: Sigma to Pressure Interpolation
By using metpy.interpolate.log_interpolate_1d, data with sigma as the vertical coordinate can be
interpolated to isobaric coordinates.
End of explanation
data = Dataset(get_test_data('wrf_example.nc', False))
lat = data.variables['lat'][:]
lon = data.variables['lon'][:]
time = data.variables['time']
vtimes = num2date(time[:], time.units)
temperature = units.Quantity(data.variables['temperature'][:], 'degC')
pres = units.Quantity(data.variables['pressure'][:], 'Pa')
hgt = units.Quantity(data.variables['height'][:], 'meter')
Explanation: Data
The data for this example comes from the outer domain of a WRF-ARW model forecast
initialized at 1200 UTC on 03 June 1980. Model data courtesy Matthew Wilson, Valparaiso
University Department of Geography and Meteorology.
End of explanation
plevs = [700.] * units.hPa
Explanation: Array of desired pressure levels
End of explanation
height, temp = log_interpolate_1d(plevs, pres, hgt, temperature, axis=1)
Explanation: Interpolate The Data
Now that the data is ready, we can interpolate to the new isobaric levels. The data is
interpolated from the irregular pressure values for each sigma level to the new input
mandatory isobaric levels. log_interpolate_1d will interpolate over a specified dimension
with the axis argument. In this case, axis=1 will correspond to interpolation on the
vertical axis. The interpolated data is output in a list, so we will pull out each
variable for plotting.
End of explanation
# Set up our projection
crs = ccrs.LambertConformal(central_longitude=-100.0, central_latitude=45.0)
# Set the forecast hour
FH = 1
# Create the figure and grid for subplots
fig = plt.figure(figsize=(17, 12))
add_metpy_logo(fig, 470, 320, size='large')
# Plot 700 hPa
ax = plt.subplot(111, projection=crs)
ax.add_feature(cfeature.COASTLINE.with_scale('50m'), linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Plot the heights
cs = ax.contour(lon, lat, height[FH, 0, :, :], transform=ccrs.PlateCarree(),
colors='k', linewidths=1.0, linestyles='solid')
ax.clabel(cs, fontsize=10, inline=1, inline_spacing=7,
fmt='%i', rightside_up=True, use_clabeltext=True)
# Contour the temperature
cf = ax.contourf(lon, lat, temp[FH, 0, :, :], range(-20, 20, 1), cmap=plt.cm.RdBu_r,
transform=ccrs.PlateCarree())
cb = fig.colorbar(cf, orientation='horizontal', extend='max', aspect=65, shrink=0.5,
pad=0.05, extendrect='True')
cb.set_label('Celsius', size='x-large')
ax.set_extent([-106.5, -90.4, 34.5, 46.75], crs=ccrs.PlateCarree())
# Make the axis title
ax.set_title('{:.0f} hPa Heights (m) and Temperature (C)'.format(plevs[0].m), loc='center',
fontsize=10)
# Set the figure title
fig.suptitle('WRF-ARW Forecast VALID: {:s} UTC'.format(str(vtimes[FH])), fontsize=14)
add_timestamp(ax, vtimes[FH], y=0.02, high_contrast=True)
plt.show()
Explanation: Plotting the Data for 700 hPa.
End of explanation |
7,140 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple charts in pandas
In this session, we're gonna revisit our MLB data to rock a quick chart in pandas. Our goal: a horizontal bar chart of the top 10 teams by payroll.
Step1: Let Jupyter know that you're gonna be charting inline
(Don't worry if you get a warning about building a font library.)
Read in MLB data
Step2: Prep data for charting
Let's chart total team payroll, most to least. Let's repeat the grouping we did before.
Step3: Make a horizontal bar chart
# import a ticker formatting class from matplotlib
Explanation: Simple charts in pandas
In this session, we're gonna revisit our MLB data to rock a quick chart in pandas. Our goal: A horizontal bar chart of the top 10 teams by payroll.
Import pandas and some chart formatting help
End of explanation
# create a data frame
# use head to check it out
Explanation: Let Jupyter know that you're gonna be charting inline
(Don't worry if you get a warning about building a font library.)
Read in MLB data
End of explanation
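A hedged sketch of this step; the CSV filename is an assumption:
# Hedged sketch -- point this at your MLB salary CSV
df = pd.read_csv('mlb.csv')
df.head()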
# group by team, aggregate on sum
# get top 10
Explanation: Prep data for charting
Let's chart total team payroll, most to least. Let's repeat the grouping we did before.
End of explanation
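A sketch of the grouping, assuming 'team' and 'salary' column names (building on the hypothetical df above):
# total payroll by team, largest first, top 10
top10 = df.groupby('team')['salary'].sum().sort_values(ascending=False).head(10)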
# make a horizontal bar chart
# set the figure size
# sort the bars top to bottom
# set the title
# kill the legend
# kill y axis label
# define a function to format x axis ticks
# otherwise they'd all run together (100000000)
# via https://stackoverflow.com/a/46454637
# format the x axis ticks using the function we just defined
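A hedged sketch of the whole chart cell, mirroring the outline in the comments above (top10 is the hypothetical series from the previous sketch):
from matplotlib.ticker import FuncFormatter

# sort so the biggest payroll sits at the top of the horizontal bars
ax = top10.sort_values().plot.barh(figsize=(8, 6), legend=False,
                                   title='Top 10 MLB teams by payroll')
ax.set_ylabel('')

def millions(x, pos):
    # 100000000 -> $100M
    return '${:,.0f}M'.format(x * 1e-6)

ax.xaxis.set_major_formatter(FuncFormatter(millions))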
Explanation: Make a horizontal bar chart
End of explanation |
7,141 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Function definitions and some other parameters
Step1: Load a mean pulse profile for B1957+20, and fit a gaussian to the main pulse
Step2: Isolate the main pulse, sqrt it, and center
Step3: Invent some sinusoidal wave at frequency of observed band. Assuming waves of the form e^(i \omega t)
Step4: Compute an electric field
Step5: Now we have to delay the phase and the pulse(group) and integrate. We will parallelize it so we don't have to wait too long.
Step6: Now, we have to backfit the integrated pulse back onto the observational data | Python Code:
res = 10000 # 1 sample is 1/res*1.6ms, 1e7 to resolve 311Mhz
n = 40 * res # grid size, total time
n = n-1 # To get the wave periodicity edge effects to work out
#freq = 311.25 # MHz, observed band
freq = 0.5 # Test, easier on computation
p_spin = 1.6 # ms, spin period
freq *= 1e6 #MHz to Hz
p_spin *= 1e-3 #ms to s
p_phase = 1./freq # s, E field oscillation period
R = 6.273 # s, pulsar-companion distance
gradient = 0.00133
#gradient = 60.0e-5/0.12
intime = p_spin/res # sample -> time in s conversion factor
insample = res/p_spin # time in s -> samples conversion factor
ipulsar = 18*n/32 # Pulsar position for the geometric delay, will be scanned through
t = np.linspace(0., n*intime, n) #time array
# Gaussian for fitting
def gaussian(x,a,b,c):
return a * np.exp(-(x - c)**2/(2 * b**2))
# x in the following functions should be in s
# The small features in DM change are about 7.5ms wide
def tau_str(x):
# return np.piecewise(x, [x < (n/2)*intime, x >= (n/2)*intime], [1.0e-5, 1.0e-5])
return np.piecewise(x, [x < (n/2)*intime, x >= (n/2)*intime], [1.0e-5, lambda x: 1.0e-5+gradient*(x-(n/2)*intime)])
def tau_geom(x, xpulsar):
return (xpulsar-x)**2/(2.*R)
def tau_phase(x, xpulsar):
return -tau_str(x) + tau_geom(x, xpulsar)
def tau_group(x, xpulsar):
return tau_str(x) + tau_geom(x, xpulsar)
#def tau_str(x):
# return Piecewise((1.0e-5, x < (n/2)*intime), (1.0e-5+gradient*(x-(n/2)*intime), x >= (n/2)*intime))
#plt.figure(figsize=(14,4))
plt.plot(t,tau_str(t), label="Dispersive delay")
plt.plot(t,tau_geom(t, ipulsar*intime), label="Geometric delay")
plt.plot(t,tau_phase(t, ipulsar*intime), label="Phase delay")
plt.plot(t,tau_group(t, ipulsar*intime), label="Group delay")
plt.xlim(0,n*intime)
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.legend(loc="best")
plt.xlabel('sample', fontsize=16)
plt.ylabel(r't (s)', fontsize=16)
plt.savefig('images/delays.png')
Explanation: Function definitions and some other parameters
End of explanation
# The grid size of mean_profile.npy is 1000
mean_profile = np.load('mean_profile.npy')
meanprof_Jy = (mean_profile / np.median(mean_profile) - 1) * 12.
xarr = np.arange(1000)
popt, pcov = curve_fit(gaussian, xarr, meanprof_Jy, bounds=([-np.inf,-np.inf,800.],[np.inf,np.inf,890.]))
plt.plot(meanprof_Jy, label="Mean profile")
plt.plot(xarr, gaussian(xarr, *popt), label="Gaussian fitted to main pulse")
plt.legend(loc="best", fontsize=8)
plt.xlabel('sample', fontsize=16)
plt.ylabel(r'$F_{\nu}$ (Jy)', fontsize=16)
plt.xlim(0,1000)
plt.ylim(-.1, 0.8)
plt.savefig('images/gaussfit_MP.png')
Explanation: Load a mean pulse profile for B1957+20, and fit a gaussian to the main pulse
End of explanation
xarr = np.arange(n)
pulse_params = popt
pulse_params[1] *= res/1000 # resizing the width to current gridsize
pulse_params[2] = n/2 # centering
pulse_ref = gaussian(xarr, *pulse_params)
envel_ref = np.sqrt(pulse_ref)
plt.plot(envel_ref, label="sqrt of the fitted gaussian")
plt.legend(loc="upper right", fontsize=8)
plt.xlabel('sample', fontsize=16)
plt.ylabel(r'$\sqrt{F_{\nu}}$ (rt Jy)', fontsize=16)
plt.xlim(0,n)
plt.ylim(-.1, 0.8)
plt.savefig('images/gaussfit_center.png')
Explanation: Isolate the main pulse, sqrt it, and center
End of explanation
angular_freq = 2*np.pi*freq
phase_ref = np.exp(1j*angular_freq*t)
print phase_ref[0], phase_ref[-1]
#phase_ref = np.roll(phase_ref,3)
plt.plot(t, phase_ref.imag, label="Imaginary part")
plt.plot(t, phase_ref.real, label="Real part")
plt.xlim(0, 1*1./freq)
plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))
plt.legend(loc="upper right")
plt.xlabel('t (s)', fontsize=16)
plt.ylabel(r'Amplitude', fontsize=16)
plt.savefig('images/oscillations.png')
Explanation: Invent some sinusoidal wave at frequency of observed band. Assuming waves of the form e^(i \omega t)
End of explanation
a = 109797 # rt(kg)*m/s^2/A; a = sqrt(2*16MHz/(c*n*epsilon_0)), conversion factor between
# sqrt(Jansky) and E field strength assuming n=1 and a 16MHz bandwidth
b = 1e-13 # rt(kg)/s; a*b = 1.1e-8 V/m
E_field_ref = a*b*envel_ref*phase_ref
E_field_ref = np.roll(E_field_ref, (int)(1e-5*insample))
plt.figure(figsize=(14,4))
#plt.plot(t, np.abs(E_field_ref), label="\'Theoretical\' $E$ field")
plt.plot(t, np.real(E_field_ref), label="im \'Theoretical\' $E$ field")
plt.plot(t, np.imag(E_field_ref), label="real \'Theoretical\' $E$ field")
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.xlim((n/2-7*pulse_params[1])*intime,(n/2+7*pulse_params[1])*intime)
plt.legend(loc="best")
plt.xlabel('t (s)', fontsize=16)
plt.ylabel(r'$E$ (V/m)', fontsize=16)
plt.savefig('pulse_E_field.png')
Explanation: Compute an electric field
End of explanation
k_int = 30 # Integration range = +- period*k_int
lim = (int)(np.sqrt(k_int*p_phase*2*R)*insample) # How far does tau_geom have to go to get to k_int periods of E oscillation
# The last argument should be >> 2*k_int (m) to resolve oscillations of E
#int_domain = np.linspace(ipulsar-lim, ipulsar+lim, 3000*np.sqrt(k_int))
int_domain = np.linspace(0,0.064*insample,20000)
#int_domain = (np.random.random(10000)*2*lim)-lim
#print lim, (int)(tau_geom(np.amax(int_domain)*intime, n/2*intime)*insample)
def delay_field(i):
phase = np.roll(phase_ref, (int)(tau_phase(i*intime, ipulsar*intime)*insample))
padsize = (int)(tau_group(i*intime, ipulsar*intime)*insample)
envel = np.pad(envel_ref, (padsize,0), mode='constant')[:-padsize]
#envel = np.roll(envel_ref, (int)(tau_group(i*intime, ipulsar*intime)*insample))
E_field = a*b*phase*envel
return E_field
def delay_field_flat(i):
phase = np.roll(phase_ref, (int)((-1.0e-5+tau_geom(i*intime, ipulsar*intime))*insample))
padsize = (int)(( 1.0e-5+tau_geom(i*intime, ipulsar*intime))*insample)
envel = np.pad(envel_ref, (padsize,0), mode='constant')[:-padsize]
#envel = np.roll(envel_ref, (int)(( 1.0e-5+tau_geom(i*intime, ipulsar*intime))*insample))
E_field = a*b*phase*envel
return E_field
#This was used to find the optimal k_int, it seems to be around 30 for this particular frequency
#heightprev = 10
#error = 10
#k = 31
#heights = []
#while error > 0.0001 and k < 100:
# E_tot = np.zeros(n, dtype=np.complex128)
# lim = (int)(np.sqrt(k*p_phase*2*R)*insample)
# int_domain = np.linspace(-lim, lim, 10000*np.sqrt(k))
# for i in int_domain:
# E_tot += delay_field(i)
#
# popt3, pcov3 = curve_fit(gaussian, xarr, np.abs(E_tot), bounds=([-np.inf,-np.inf,n/2-n/15],[np.inf,np.inf,n/2+n/15]))
# heights.append(popt3[0])
# error = np.abs(popt3[0]-heightprev)/heightprev
# print k, popt3[0], popt3[1], popt3[2]
# heightprev = popt3[0]
# k += 1
#print "Stopped with error = ", error
#plt.plot(heights)
E_tot_flat = np.zeros(n, dtype=np.complex128)
E_tot = np.zeros(n, dtype=np.complex128)
print 'Integrating with', int_domain.size, 'samples'
print 'Total size', n
print 'Integration range', ipulsar, "+-", lim
print 'Integration range', ipulsar*intime*1e3, "+-", lim*intime*1e3, 'ms'
print 'Integration range in y +-', tau_geom(lim*intime, ipulsar*intime)*1e6, 'us'
tick = time()
for i in int_domain:
E_tot += delay_field(i)
E_tot_flat += delay_field_flat(i)
print 'Time elapsed:', time() - tick, 's'
print np.amax(np.abs(E_field_ref)), np.amax(np.abs(E_tot))
# This parallelizes really terribly for some reason.... Maybe take a look later.
# Possibly because this is in a notebook?
# The if is required apparently so we don't get an infinite loop (in Windows in particular)
#import multiprocessing
#E_tot = np.zeros(n, dtype=np.complex128)
#if __name__ == '__main__':
# pool = multiprocessing.Pool()
# print multiprocessing.cpu_count()
# currenttime = time()
# E_tot = sum(pool.imap_unordered(delay_field, ((i) for i in int_domain), chunksize=int_domain.size/16))
# print 'Time elapsed:', time() - currenttime
# pool.close()
# pool.join()
plt.figure(figsize=(14,4))
#plt.plot(t.reshape(-1,1e3).mean(axis=1), E_tot.reshape(-1,1e3).mean(axis=1)) # downsampling before plotting.
plt.plot(t, np.abs(E_tot_flat), label="Flat lens $E$ field")
plt.plot(t, np.abs(E_tot), label="Kinked lens $E$ field")
plt.xlim((n/2-10*pulse_params[1])*intime,(n/2+50*pulse_params[1])*intime)
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.legend(loc="best")
plt.xlabel('t (s)', fontsize=16)
plt.ylabel(r'$E$ (V/m)', fontsize=16)
plt.savefig('images/lensed_pulse.png')
Explanation: Now we have to delay the phase and the pulse(group) and integrate. We will parallelize it so we don't have to wait too long.
End of explanation
#E_tot *= np.amax(np.abs(E_field_ref))/np.amax(np.abs(E_tot)) # this scaling is ad-hoc, really should do it for the pre-lensed E field
#pulse_tot = E_tot * np.conjugate(E_tot) / (a*b)**2
#if np.imag(pulse_tot).all() < 1e-50:
# pulse_tot = np.real(pulse_tot)
#else:
# print 'Not purely real, what\'s going on?'
# Check if it's still gaussian by fitting a gaussian
#pulse_tot = pulse_tot.reshape(-1,n/1000).mean(axis=1)
#popt2, pcov2 = curve_fit(gaussian, x, pulse_tot, bounds=([-np.inf,-np.inf,400.],[np.inf,np.inf,600.]))#
#plt.plot(np.roll(meanprof_Jy,-1000/3-22), label="Mean profile main pulse") # This fitting is also ad-hoc, should do least square?
#plt.plot(xarr, gaussian(xarr, popt[0], popt[1], popt2[2]), label="Gaussian fitted to main pulse")
#plt.plot(pulse_tot, label="Integrated refitted curve")
#plt.plot(xarr, gaussian(xarr, *popt2), label="Gaussian fitted to integrated curve (overlaps)") # Looks like it's still gaussian
#plt.legend(loc="best", fontsize=7)
#plt.xlabel('sample', fontsize=16)
#plt.ylabel(r'$F_{\nu}$ (Jy)', fontsize=16)
#plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
#plt.xlim(450,580)
#plt.ylim(-.1, .8)
Explanation: Now, we have to backfit the integrated pulse back onto the observational data
End of explanation |
7,142 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example implementation of the ESU subgraph enumeration algorithm in python
Load the packages that we will need
Step1: Set the random number seed to 1337
Step2: Define the extend_subgraph function, which takes six arguments
Step3: Define the enumerate_subgraphs function, which takes two arguments
Step4: Make an undirected Barabasi-Albert graph G with 20 vertices and 3 edges per step (using igraph.Graph.Barabasi); as usual, print the graph summary
Step5: Let's take a look at the structure of this graph that we made, using igraph.drawing.plot
Step6: Let's print the list of subgraphs
Step7: We don't know the isomorphism class of each of these subgraphs. Let's use igraph.Graph.isoclass for that | Python Code:
import igraph
import random
import collections
Explanation: Example implementation of the ESU subgraph enumeration algorithm in python
Load the packages that we will need
End of explanation
def exclusive_neighborhood(graph, v, Vp):
assert type(graph)==igraph.Graph
assert type(v)==int
assert type(Vp)==set
Nv = set(graph.neighborhood(v))
NVpll = graph.neighborhood(list(Vp))
NVp = set([u for sublist in NVpll for u in sublist])
return Nv - NVp
Explanation: Set the random number seed to 1337:
Define an exclusive_neighborhood function which takes three arguments:
- graph the whole graph object
- v a single vertex, as an integer
- Vp a set of vertices
returns: a set of vertex IDs of the set difference N(v)-N(Vp)
side effects: none
End of explanation
def extend_subgraph(graph, Vsubgraph, Vextension, v, k, k_subgraphs):
assert type(graph)==igraph.Graph
assert type(Vsubgraph)==set
assert type(Vextension)==set
assert type(v)==int
assert type(k)==int
assert type(k_subgraphs)==list
if len(Vsubgraph) == k:
k_subgraphs.append(Vsubgraph)
assert 1==len(set(graph.subgraph(Vsubgraph).clusters(mode=igraph.WEAK).membership))
return
while len(Vextension) > 0:
w = random.choice(tuple(Vextension))
Vextension.remove(w)
## obtain the "exclusive neighborhood" Nexcl(w, vsubgraph)
NexclwVsubgraph = exclusive_neighborhood(graph, w, Vsubgraph)
VpExtension = Vextension | set([u for u in NexclwVsubgraph if u > v])
extend_subgraph(graph, Vsubgraph | set([w]), VpExtension, v, k, k_subgraphs)
return
Explanation: Define the extend_subgraph function, which takes six arguments:
- graph the whole graph object
- Vsubgraph which is a set of vertices (cardinality 1--k)
- Vextension which is a set of vertices (cardinality 0--N)
- v which is the start vertex from which we are to extend
- k the integer number of vertices in the motif (only sane values are 3 or 4)
- k_subgraphs a list of subgraph objects (modified)
Returns: nothing (but see k_subgraphs which is really the return data)
side effects: Vextension and k_subgraphs are modified
End of explanation
def enumerate_subgraphs(graph, k):
assert type(graph)==igraph.Graph
assert type(k)==int
k_subgraphs = []
for vertex_obj in graph.vs:
v = vertex_obj.index
Vextension = set([u for u in graph.neighbors(v) if u > v])  # use the graph argument, not the global G
extend_subgraph(graph, set([v]), Vextension, v, k, k_subgraphs)
return k_subgraphs
Explanation: Define the enumerate_subgraphs function, which takes two arguments:
- graph, the whole graph object
- k, the integer number of vertices in the motif (only sane values are 3 or 4)
returns: a list of set objects containing the vertices of each of the size k subgraphs
side effects: none
End of explanation
N = 6
K = 2
G =
Explanation: Make an undirected Barabasi-Albert graph G with 20 vertices and 3 edges per step (using igraph.Graph.Barabasi); as usual, print the graph summary
End of explanation
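A hedged completion of the cell above; note the prose mentions 20 vertices and 3 edges per step, while the cell defines N = 6 and K = 2, so this sketch simply uses those variables:
# Hedged sketch: build the Barabasi-Albert graph from N and K defined above
G = igraph.Graph.Barabasi(n=N, m=K)
print(G.summary())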
sgset =
Explanation: Let's take a look at the structure of this graph that we made, using igraph.drawing.plot:
Now let's run our ESU algorithm code with k=4, and get back the list of subgraphs:
End of explanation
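A hedged completion of the sgset cell:
# enumerate all connected 4-vertex subgraphs
sgset = enumerate_subgraphs(G, 4)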
sgset
Explanation: Let's print the list of subgraphs: (What type is each list element?)
End of explanation
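A hedged completion of the isoclass cell; Graph.isoclass is defined for 3- and 4-vertex graphs:
# isomorphism class of each enumerated 4-vertex subgraph
subgraph_isoclass_list = [G.subgraph(list(vs)).isoclass() for vs in sgset]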
subgraph_isoclass_list =
subgraph_isoclass_list
Explanation: We don't know the isomorphism class of each of these subgraphs. Let's use igraph.Graph.isoclass for that:
End of explanation |
7,143 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Introduction
In this notebook, I present my solution to the VinID recruitment assignment. A CNN model is used to classify the 10 handwritten digits of the MNIST dataset. The notebook covers the following parts
Step1: 2. Preprocessing
We read the MNIST dataset and split it into train/valid/test sets, normalizing the pixel values to the [0-1] range to speed up convergence. The validation set holds 20% of the training data.
Step2: Check the label distribution
We can see that the number of samples per label is roughly balanced.
Step3: Let's look at a few samples from the training set. Most images are sharp and fairly easy to recognize.
Step4: Define the number of training epochs and the batch size
Step5: 3. Data Augmentation
Data augmentation generates additional training samples by applying image-processing transformations to the existing images. These small transformations must not change the label of the image.
Some common augmentation techniques are
Step6: 4. Building the CNN model
A CNN is built from a set of basic layer types, including
Step7: 5. Hyper-parameter tuning
We use Hyperas to tune the hyper-parameters. Hyperas generates parameter sets from the declarations above, trains the model, and evaluates it on the validation set. The parameter set with the highest validation accuracy is recorded.
5.1 Declaring the hyper-parameter search space
There are many hyper-parameters to tune, such as
Step8: 5.2 Optimize to find the best parameter set
Hyperas generates parameter sets from the search space we defined above, and the library makes the search straightforward through a few ready-made APIs.
Step9: Run the parameter search. The best set of hyper-parameters is recorded so we can use it in the final model.
Step10: Retrain the model with the best parameters found above.
Step11: The validation result is quite high, with accuracy > 99%
6. Comparing optimizers and losses
6.1 Comparing optimizers
The goal of training an ML model is to reduce the loss function, computed from the difference between the model's predictions and the ground truth. To do this we usually use gradient descent, which updates the model weights in the direction opposite to the gradient so that the loss decreases.
We typically use three popular optimizers (Adam, SGD, and RMSProp) to update the model weights.
Stochastic Gradient Descent is a variant of Gradient Descent that requires shuffling the data before training, while RMSProp and Adam aim to adjust the learning rate automatically during training.
RMSprop (Root Mean Square Propagation) was introduced by Geoffrey Hinton. It addresses Adagrad's ever-shrinking learning rate by normalizing the update with only the recent gradients: the learning rate is divided by an exponentially decaying average of squared gradients.
Adam is currently the most popular optimizer. Like RMSProp and Adagrad, it computes a separate learning rate for each parameter, normalizing each update with the first and second order moments of the gradient.
Step12: Plot the training curves for the three different optimizers.
Step13: 6.2 Comparing loss functions
For multi-class classification we commonly use the following two loss functions
Step14: Plot the training curves for the two different loss functions.
Step15: We see no clear difference in convergence speed between the cross-entropy and KL-divergence losses for this problem.
7. Evaluating the model.
We examine some of the trained model's mistakes. Many errors are easy to spot with a confusion matrix, which shows the probability/number of images misclassified into other classes.
Step18: Các giá trị trên đường chéo rất cao, chúng ta mô hình chúng ta có độ chính xác rất tốt.
Nhìn vào confusion matrix ở trên, chúng ta có một số nhận xét như sau
Step19: Với các mẫu ảnh sai, chúng ta có thể thấy rằng những mẫu này rất khó nhận dạng nhầm lẫn sáng các lớp khác. ví dụ số 9 và 4 hay là 3 và 8
8. Kfold, Predict và submit kết quả
Chúng ta huấn luyện lại mô hình sử dụng kfold, kết quả cuối dự đoán cuối cùng là trung bình cộng sự đoán của các mô hình huấn luyện trên mỗi fold.
Chúng ta chọn nhãn là lớp được dự đoán có xác suất cao nhất mà mô hình nhận dạng được | Python Code:
!pip install hyperas
# Basic compuational libaries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sns
%matplotlib inline
np.random.seed(2)
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
import itertools
from sklearn.model_selection import KFold
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.layers import Dense, Dropout, Conv2D, GlobalAveragePooling2D, Flatten, GlobalMaxPooling2D
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPool2D, BatchNormalization
from keras.models import Sequential
from keras.optimizers import RMSprop, Adam, SGD, Nadam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from keras import regularizers
# Import hyperopt for tunning hyper params
from hyperopt import hp, tpe, fmin
from hyperopt import space_eval
sns.set(style='white', context='notebook', palette='deep')
# Set the random seed
random_seed = 2
Explanation: 1. Giới thiệu
Trong notebook này, mình sẽ trình bày cách giải quyết đề tài tuyển dụng của VinID. Mô hình CNN được sử dụng để phân loại 10 số viết tay trong bộ MNIST. Trong notebook này,bao gồm các phần sau:
1. Giới thiệu
2. Tiền xử lý dữ liệu
2.1 Load dữ liệu
2.2 Kiểm tra missing value
2.3 Chuẩn hóa
2.5 Label encoding
2.6 Xây dựng tập train/test
3. Data augmentation
4. Xây dựng mô hình
4.1 Xây dựng mô hình
5. Tunning parameter
5.1 Khai báo không gian tìm kiếm siêu tham số
5.2 Grid search
6. So sánh các optimizer và loss function
6.1 So sánh optimizer
6.2 So sánh loss function
7. Đánh giá mô hình
7.1 Confusion matrix
8. Dự đoán
8.1 Predict and Submit results
Cài đặt thư viện hyperas để hỗ trỡ quá trình tunning siêu tham số. Hyperas cung cấp các api rất tiện lợi cho quá trình theo huấn luyện và theo dõi độ chính xác của model tại mỗi bộ tham số.
End of explanation
def data():
# Load the data
train = pd.read_csv("../input/digit-recognizer/train.csv")
test = pd.read_csv("../input/digit-recognizer/test.csv")
Y_train = train["label"]
# Drop 'label' column
X_train = train.drop(labels = ["label"],axis = 1)
# Normalize the data
X_train = X_train / 255.0
test = test / 255.0
# Reshape image in 3 dimensions (height = 28px, width = 28px , canal = 1)
X_train = X_train.values.reshape(-1,28,28,1)
test = test.values.reshape(-1,28,28,1)
# Encode labels to one hot vectors (ex : 2 -> [0,0,1,0,0,0,0,0,0,0])
Y_train = to_categorical(Y_train, num_classes = 10)
return X_train, Y_train, test
X, Y, X_test = data()
Explanation: 2. Tiền xử lý
Đọc bộ dữ liệu MNIST, chúng ta chia bộ dữ liệu thành tập train/valid/test. Đồng thời chuẩn hóa dữ liệu về khoảng [0-1] để giúp tăng tốc quá trình hội tụ. Tập valid của chúng ta sẽ gồm 20% tập train.
End of explanation
g = sns.countplot(np.argmax(Y, axis=1))
Explanation: Kiểm tra phân bố của nhãn
Chúng ta thấy rằng số lượng mẫu dữ liệu cho mỗi nhãn tương đương nhau.
End of explanation
for i in range(0, 9):
plt.subplot(330 + (i+1))
plt.imshow(X[i][:,:,0], cmap=plt.get_cmap('gray'))
plt.title(np.argmax(Y[i]));
plt.axis('off')
plt.tight_layout()
Explanation: Thử nhìn qua một số mẫu trong tập huấn luyện. Chúng ta thấy rằng hầu hết các ảnh đều rõ nét và tương đối dễ dàng để nhận dạng.
End of explanation
epochs = 30 # Turn epochs to 30 to get 0.9967 accuracy
batch_size = 64
Explanation: Định nghĩa số epochs cần huấn luyện và bachsize
End of explanation
# With data augmentation to prevent overfitting (accuracy 0.99286)
train_aug = ImageDataGenerator(
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
zoom_range = 0.1, # Randomly zoom image
width_shift_range=0.1, # randomly shift images horizontally (fraction of total width)
height_shift_range=0.1, # randomly shift images vertically (fraction of total height)
)
test_aug = ImageDataGenerator()
Explanation: 3. Data Augmentation
Kĩ thuật data augmentation được sử dụng để phát sinh thêm những mẫu dữ liệu mới bằng cách áp dụng các kĩ thuật xử lý ảnh trên bức ảnh. Các phép biến đổi nhỏ này phải đảm bảo không làm thay đổi nhãn của bức ảnh.
Một số kĩ thuật phổ biến của data augmentation như là:
* Rotation: Xoay một góc nhỏ
* Translation: Tính tiến
* Brightness, Staturation: Thay đổi độ sáng, tương phản
* Zoom: zoom to/nhỏ bức ảnh
* Elastic Distortion: biến dạng bức ảnh
* Flip: lật trái/phải/trên/dưới.
Ở dưới đây, chúng ta sẽ chọn xoay 1 góc trong 0-10 độ. Zoom ảnh 0.1 lần, tịnh tiến 0.1 lần mỗi chiều.
End of explanation
# Set the CNN model
def train_model(train_generator, valid_generator, params):
model = Sequential()
model.add(Conv2D(filters = params['conv1'], kernel_size = params['kernel_size_1'], padding = 'Same',
activation ='relu', input_shape = (28,28,1)))
model.add(BatchNormalization())
model.add(Conv2D(filters = params['conv2'], kernel_size = params['kernel_size_2'], padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size = params['pooling_size_1']))
model.add(Dropout(params['dropout1']))
model.add(BatchNormalization())
model.add(Conv2D(filters = params['conv3'], kernel_size = params['kernel_size_3'], padding = 'Same',
activation ='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters = params['conv4'], kernel_size = params['kernel_size_4'], padding = 'Same',
activation ='relu'))
model.add(MaxPool2D(pool_size = params['pooling_size_1'], strides=(2,2)))
model.add(Dropout(params['dropout2']))
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(params['dense1'], activation = "relu"))
model.add(Dropout(params['dropout3']))
model.add(Dense(10, activation = "softmax"))
if params['opt'] == 'rmsprop':
opt = RMSprop()
elif params['opt'] == 'sgd':
opt = SGD()
elif params['opt'] == 'nadam':
opt = Nadam()
else:
opt = Adam()
model.compile(loss=params['loss'], optimizer=opt, metrics=['acc'])
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.3, patience=2, mode='auto', cooldown=2, min_lr=1e-7)
early = EarlyStopping(monitor='val_loss', patience=3)
callbacks_list = [reduce_lr, early]
history = model.fit_generator(train_generator,
validation_data=valid_generator,
steps_per_epoch=len(train_generator),
validation_steps=len(valid_generator),
callbacks=callbacks_list, epochs = epochs,
verbose=2)
score, acc = model.evaluate_generator(valid_generator, steps=len(valid_generator), verbose=0)
return acc, model, history
Explanation: 4. Xây dưng mô hình CNN
CNN bao gồm tập hợp các lớp cơ bản bao gồm: convolution layer + nonlinear layer, pooling layer, fully connected layer. Các lớp này liên kết với nhau theo một thứ tự nhất định. Thông thường, một ảnh sẽ được lan truyền qua tầng convolution layer + nonlinear layer đầu tiên, sau đó các giá trị tính toán được sẽ lan truyền qua pooling layer, bộ ba convolution layer + nonlinear layer + pooling layer có thể được lặp lại nhiều lần trong network. Và sau đó được lan truyền qua tầng fully connected layer và softmax để tính sác xuất ảnh đó chứa vật thế gì.
Định nghĩa mô hình
Chúng ta sử dụng Keras Sequential API để định nghĩa mô hình. Các layer được thêm vào rất dễ dàng và tương đối linh động.
Đầu tiên chúng ta sử dụng layer Conv2D trên ảnh đầu vào. Conv2D bao gồm một tập các filters cần phải học. Mỗi filters sẽ trược qua toàn bộ bức ảnh để detect các đặt trưng trên bức ảnh đó.
Pooling layer là tầng quan trọng và thường đứng sau tầng Conv. Tầng này có chức năng giảm chiều của feature maps trước đó. Đối với max-pooling, tầng này chỉ đơn giản chọn giá trị lớn nhất trong vùng có kích thước pooling_size x pooling_size (thường là 2x2). Tầng pooling này được sử dụng để giảm chi phí tính toán và giảm được overfit của mô hình.
Đồng thời, Dropout cũng được sử dụng để hạn chế overfit. Dropout sẽ bỏ đi ngẫu nhiên các neuron bằng cách nhân với mask zeros, do đó, giúp mô hình học được những đặc trưng hữu ích. Dropout trong hầu hết các trường hợp đều giúp tăng độ chính xác và hạn chết overfit của mô hình.
Ở tầng cuối cùng, chúng ta flatten feature matrix thành một vector, sau đó sử dụng các tầng fully connected layers để phân loại ảnh thành các lớp cho trước.
Để giúp mô hình hội tụ gần với gobal minima chúng ta sử dụng annealing learning rate. Learning sẽ được điều chỉnh nhỏ dần sau mỗi lần cập nhật nếu như sau một số bước nhất định mà loss của mô hình không giảm nữa. Để giảm thời gian tính toán, chúng ta có thể sử dụng learning ban đầu lớn, sau đó giảm dần để mô hình hội tụ nhanh hơn.
Ngoài ra, chúng ta sử dụng early stopping để hạn chế hiện tượng overfit của mô hình. early stopping sẽ dừng quá trình huấn luyện nếu như loss trên tập validation tăng dần trong khi trên tập lại giảm.
Sử dụng hyperas để tunning siêu tham số
Trong quá trình định nghĩa mô hình, chúng ta sẽ lồng vào đó các đoạn mã để hỗ trợ quá trình search siêu tham số đã được định nghĩa ở trên. Chúng ta sẽ cần search các tham số như filter_size, pooling_size, dropout rate, dense size. Đồng thời chúng ta cũng thử việc điều chỉnh cả optimizer của mô hình.
End of explanation
#This is the space of hyperparameters that we will search
space = {
'opt':hp.choice('opt', ['adam', 'sgd', 'rmsprop']),
'conv1':hp.choice('conv1', [16, 32, 64, 128]),
'conv2':hp.choice('conv2', [16, 32, 64, 128]),
'kernel_size_1': hp.choice('kernel_size_1', [3, 5]),
'kernel_size_2': hp.choice('kernel_size_2', [3, 5]),
'dropout1': hp.choice('dropout1', [0, 0.25, 0.5]),
'pooling_size_1': hp.choice('pooling_size_1', [2, 3]),
'conv3':hp.choice('conv3', [32, 64, 128, 256, 512]),
'conv4':hp.choice('conv4', [32, 64, 128, 256, 512]),
'kernel_size_3': hp.choice('kernel_size_3', [3, 5]),
'kernel_size_4': hp.choice('kernel_size_4', [3, 5]),
'dropout2':hp.choice('dropout2', [0, 0.25, 0.5]),
'pooling_size_2': hp.choice('pooling_size_2', [2, 3]),
'dense1':hp.choice('dense1', [128, 256, 512, 1024]),
'dropout3':hp.choice('dropout3', [0, 0.25, 0.5]),
'loss': hp.choice('loss', ['categorical_crossentropy', 'kullback_leibler_divergence']),
}
Explanation: 5. Hyper-params tunning
Chúng ta sử dụng Hyperas để tunning các tham số. Hyperas sẽ phát sinh bộ tham số dựa trên khai báo ở trên. Sau đó huấn luyện mô hình và đánh giá trên tập validation. Bộ tham số có độ chính xác cao nhất trên tập validation sẽ được ghi nhận lại.
5.1 Khai báo không gian tìm kiếm siêu tham số
Có rất nhiều siêu tham số cần được tunning như: kiến trúc mạng, số filter, kích thước mỗi filters, kích thước pooling, các cách khởi tạo, hàm kích hoạt, tỉ lệ dropout,... Trong phần này, chúng ta sẽ tập trung vào các tham số như kích thước filter, số filters, pooling size.
Đầu tiên, chúng ta cần khai báo các siêu tham để hyperas có thể tìm kiếm trong tập đấy. Ở mỗi tầng conv, chúng ta sẽ tunning kích thước filter, filter size. Ở tầng pooling, kích thước pooling size sẽ được tunning. Đồng thời, tỉ lệ dropout ở tầng Dropout cũng được tunning. Số filters ở tầng conv thường từ 16 -> 1024, kích thước filter hay thường dùng nhất trong là 3 với 5. Còn tỉ lệ dropout nằm trong đoạn 0-1
End of explanation
X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size = 0.2, random_state=random_seed)
# only apply data augmentation with train data
train_gen = train_aug.flow(X_train, Y_train, batch_size=batch_size)
valid_gen = test_aug.flow(X_val, Y_val, batch_size=batch_size)
def optimize(params):
acc, model, history = train_model(train_gen, valid_gen, params)
return -acc
Explanation: 5.2 Optimze để tìm bộ tham số tốt nhất
Hyperas sẽ phát sinh các bộ tham số giữ trên không gian tìm kiếm định nghĩa trước của chúng ta. Sau đó thư viện sẽ hỗ trợ quá trình tìm kiếm các tham số này đơn giản bằng một số API có sẵn.
End of explanation
best = fmin(fn = optimize, space = space,
algo = tpe.suggest, max_evals = 50) # change to 50 to search more
best_params = space_eval(space, best)
print('best hyper params: \n', best_params)
Explanation: Chạy quá trình search tham số. Bộ siêu tham số tốt nhất sẽ được ghi nhận lại để chúng ta sử dụng trong mô hình cuối cùng.
End of explanation
acc, model, history = train_model(train_gen, valid_gen, best_params)
print("validation accuracy: {}".format(acc))
Explanation: Huấn luyện lại mô hình với bộ tham số tốt nhất ở trên.
End of explanation
optimizers = ['rmsprop', 'sgd', 'adam']
hists = []
params = best_params
for optimizer in optimizers:
params['opt'] = optimizer
print("Train with optimizer: {}".format(optimizer))
_, _, history = train_model(train_gen, valid_gen, params)
hists.append((optimizer, history))
Explanation: The result on the validation set is quite high, with accuracy > 99%.
6. Comparing optimizers and loss functions
6.1 Comparing optimizers
The goal of training an ML model is to reduce the error of the loss function, which measures the difference between the model's predictions and the ground-truth values. To achieve this we usually use gradient descent, which updates the model's weights in the direction opposite to the gradient in order to reduce the loss.
We commonly use the following three popular optimizers to update the model's weights: Adam, SGD, and RMSProp.
Stochastic Gradient Descent is a variant of Gradient Descent that requires us to shuffle the data before training. RMSProp and Adam, on the other hand, are optimizers that adapt the learning rate automatically during training.
RMSProp (Root Mean Square Propagation) was introduced by Geoffrey Hinton. It addresses Adagrad's ever-shrinking learning rate by normalizing the update using only gradients close to the current step: the learning rate is divided by a decaying sum of squared gradients.
Adam is currently the most popular optimizer. Like RMSProp and Adagrad, it computes a separate learning rate for each parameter, normalizing each parameter's update by the first and second order moments of the gradient.
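As an illustration only (not from the original notebook), the three update rules can be sketched in a few lines of NumPy; the hyperparameter values below are just common defaults:
import numpy as np
def sgd_step(w, grad, lr=0.01):
    # plain stochastic gradient descent: step against the gradient
    return w - lr * grad
def rmsprop_step(w, grad, cache, lr=0.001, rho=0.9, eps=1e-7):
    # a decaying average of squared gradients normalizes the step size
    cache = rho * cache + (1 - rho) * grad ** 2
    return w - lr * grad / (np.sqrt(cache) + eps), cache
def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-7):
    # first and second order moments of the gradient, with bias correction
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v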
End of explanation
for name, history in hists:
plt.plot(history.history['val_acc'], label=name)
plt.legend(loc='best', shadow=True)
plt.tight_layout()
Explanation: Plot the training history of the model with the three different optimizers.
End of explanation
loss_functions = ['categorical_crossentropy', 'kullback_leibler_divergence']
hists = []
params = best_params
for loss_funct in loss_functions:
params['loss'] = loss_funct
print("Train with loss function : {}".format(loss_funct))
_, _, history = train_model(train_gen, valid_gen, params)
hists.append((loss_funct, history))
Explanation: 6.2 Comparing loss functions
For multi-class classification problems we usually use the following two loss functions:
* Cross entropy
* Kullback-Leibler divergence loss
Cross entropy is the most widely used loss for our problem. It has its mathematical roots in maximum likelihood and is computed from the discrepancy between the predicted values and the true values; a cross-entropy error of 0 is the best possible value.
KL loss (Kullback-Leibler divergence) measures the difference between two probability distributions. A KL loss of 0 means the two distributions are identical.
For multi-class classification, cross entropy is mathematically very close to KL loss, so the two losses can be treated as equivalent in our problem; a small numerical sketch follows.
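As a small numerical sketch (illustrative, not from the original notebook): for a one-hot target, KL divergence equals cross entropy minus the entropy of the target, and that entropy is 0, so the two losses coincide.
import numpy as np
y_true = np.array([0.0, 1.0, 0.0])      # one-hot label
y_pred = np.array([0.1, 0.7, 0.2])      # predicted probabilities
eps = 1e-12
cross_entropy = -np.sum(y_true * np.log(y_pred + eps))
kl_divergence = np.sum(y_true * np.log((y_true + eps) / (y_pred + eps)))
print(cross_entropy, kl_divergence)     # both ~0.357 for this one-hot target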
End of explanation
for name, history in hists:
plt.plot(history.history['val_acc'], label=name)
plt.legend(loc='best', shadow=True)
plt.tight_layout()
Explanation: Plot the training history of the model with the two different loss functions.
End of explanation
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Predict the values from the validation dataset
Y_pred = model.predict(X_val)
# Convert predictions classes to one hot vectors
Y_pred_classes = np.argmax(Y_pred,axis = 1)
# Convert validation observations to one hot vectors
Y_true = np.argmax(Y_val,axis = 1)
# compute the confusion matrix
confusion_mtx = confusion_matrix(Y_true, Y_pred_classes)
# plot the confusion matrix
plot_confusion_matrix(confusion_mtx, classes = range(10))
Explanation: We can see that there is no clear difference in convergence speed between cross-entropy and KL loss in our problem.
7. Evaluating the model
We will examine some of the errors made by the trained model. Some errors are easy to spot with a confusion matrix, which shows the probability/number of images misclassified as another class.
End of explanation
# Display some error results
# Errors are difference between predicted labels and true labels
errors = (Y_pred_classes - Y_true != 0)
Y_pred_classes_errors = Y_pred_classes[errors]
Y_pred_errors = Y_pred[errors]
Y_true_errors = Y_true[errors]
X_val_errors = X_val[errors]
def display_errors(errors_index,img_errors,pred_errors, obs_errors):
This function shows 6 images with their predicted and real labels
n = 0
nrows = 2
ncols = 3
fig, ax = plt.subplots(nrows,ncols,sharex=True,sharey=True)
for row in range(nrows):
for col in range(ncols):
error = errors_index[n]
ax[row,col].imshow((img_errors[error]).reshape((28,28)))
ax[row,col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error],obs_errors[error]))
n += 1
fig.tight_layout()
# Probabilities of the wrong predicted numbers
Y_pred_errors_prob = np.max(Y_pred_errors,axis = 1)
# Predicted probabilities of the true values in the error set
true_prob_errors = np.diagonal(np.take(Y_pred_errors, Y_true_errors, axis=1))
# Difference between the probability of the predicted label and the true label
delta_pred_true_errors = Y_pred_errors_prob - true_prob_errors
# Sorted list of the delta prob errors
sorted_dela_errors = np.argsort(delta_pred_true_errors)
# Top 6 errors
most_important_errors = sorted_dela_errors[-6:]
# Show the top 6 errors
display_errors(most_important_errors, X_val_errors, Y_pred_classes_errors, Y_true_errors)
Explanation: The values on the diagonal are very high, which shows that our model is very accurate.
Looking at the confusion matrix above, we can make a few observations:
* The digit 4 is often confused with 9, because their handwritten strokes are quite similar.
* The digits 3 and 8 suffer from the same issue.
Displaying some misclassified examples
To get a clearer picture of some of the misclassified samples, we look at the top samples whose predicted values differ most from the true labels.
End of explanation
kf = KFold(n_splits=5)
preds = []
for train_index, valid_index in kf.split(X):
X_train, Y_train, X_val, Y_val = X[train_index], Y[train_index], X[valid_index], Y[valid_index]
train_gen = train_aug.flow(X_train, Y_train, batch_size=batch_size)
valid_gen = test_aug.flow(X_val, Y_val, batch_size=batch_size)
acc, model, history = train_model(train_gen, valid_gen, best_params)
pred = model.predict(X_test)
preds.append(pred)
# predict results
results = np.mean(preds, axis=0)
# select the indix with the maximum probability
results = np.argmax(results,axis = 1)
results = pd.Series(results,name="Label")
submission = pd.concat([pd.Series(range(1,28001),name = "ImageId"),results],axis = 1)
submission.to_csv("cnn_mnist_datagen.csv",index=False)
Explanation: Looking at the misclassified images, we can see that these samples are genuinely hard to recognize and are easily confused with other classes, for example 9 vs. 4 or 3 vs. 8.
8. K-fold, predict and submit the results
We retrain the model using k-fold cross-validation; the final prediction is the average of the predictions of the models trained on each fold.
We pick as the label the class with the highest predicted probability.
End of explanation |
7,144 | Given the following text description, write Python code to implement the functionality described.
Description:
Program to find the smallest element among three elements
Python implementation to find the smallest of three elements
| Python Code:
a, b, c = 5, 7, 10
if a <= b and a <= c:
    print(a, "is the smallest")
elif b <= a and b <= c:
    print(b, "is the smallest")
else:
    print(c, "is the smallest")
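As a side note (not part of the original snippet), the built-in min() gives the same result in one line:
print(min(a, b, c), "is the smallest")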
|
7,145 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Testing the Numerical Gradient
This notebook assesses the differences and validity in calculating the spectral slope using finite differences or np.gradient().
In Figueira et al. 2016 the slope is calculated as dy/dx = np.diff(flux)/ np.diff(wav). The np.diff() function shrinks the array by 1 which can be significant when slicing the wavelengths into many chunks for the telluric masking as each loses 1 pixel.
The np.gradient function does not drop the end pixel.
From the numpy documentation
- "The gradient is computed using second order accurate central differences in the interior points and either first or second order accurate one-sides (forward or backwards) differences at the boundaries. The returned gradient hence has the same shape as the input array."
Step4: Here we plot a small section of the spectrum and the slopes from the gradient and dy/dx methods.
Step5: There is a wavelength offset between the dy/dx (blue) and gradient (green) due to the wavelength points. The wavelength and flux are adjusted to the center of each difference (orange).
We later show that the difference between the blue and orange creates a change in precision of around 0.1% (e.g. the wavelength difference is squared).
The gradient is less sharp than the dy/dx method and so produces a lower precision (higher rms).
We also show that the difference in RV between the gradient and dy/dx method is between $2-7$% in the given bands!
Step6: Numerical Gradient Effect on Bands
We need to calculate the relative difference on the same wavelength and flux sections. They are all normalized differently due to the maximum in the given range. | Python Code:
import matplotlib
matplotlib.rcParams["text.usetex"] = False
matplotlib.rcParams["text.latex.unicode"] = True
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
from eniric.utilities import load_aces_spectrum
# from eniric.precision import slope, slope_grad
def slope(wavelength, flux):
Original version used to calculate the slope. [Loses one value of the array.]
delta_flux = np.diff(flux)
delta_lambda = np.diff(wavelength)
return delta_flux / delta_lambda
def slope_grad(wavelength, flux):
Slope using gradient.
return np.gradient(flux, wavelength) # Yes they should be opposite order
def slope_adjusted(wavelength, flux):
Slope that adjust the wave and flux values to match diff.
derivf_over_lambda = np.diff(flux) / np.diff(wavelength)
wav_new = (wavelength[:-1] + wavelength[1:]) / 2
flux_new = (flux[:-1] + flux[1:]) / 2
return derivf_over_lambda, wav_new, flux_new
# Load spectrum
wl_, flux_ = load_aces_spectrum([3900, 4.5, 0.0, 0]) # M0 star
wl_ = wl_ * 1000
Explanation: Testing the Numerical Gradient
This notebook assesses the differences and validity in calculating the spectral slope using finite differences or np.gradient().
In Figueira et al. 2016 the slope is calculated as dy/dx = np.diff(flux)/ np.diff(wav). The np.diff() function shrinks the array by 1 which can be significant when slicing the wavelengths into many chunks for the telluric masking as each loses 1 pixel.
The np.gradient function does not drop the end pixel.
From the numpy documentation
- "The gradient is computed using second order accurate central differences in the interior points and either first or second order accurate one-sides (forward or backwards) differences at the boundaries. The returned gradient hence has the same shape as the input array."
End of explanation
mask = ((wl_ / 1000) > 2.20576) & ((wl_ / 1000) < 2.20586)
wl, flux = wl_[mask], flux_[mask]
flux = flux / max(flux) # Normalize maximum to 1
f_slope = slope(wl, flux)
f_grad = slope_grad(wl, flux)
f_adj, new_wl, new_flux = slope_adjusted(wl, flux)
# plt.figure(figsize=(15,10))
ax1 = plt.subplot(211)
plt.plot(wl, flux, "o-", label="Spectrum")
# plt.plot(new_wl, new_flux, "+--", label="Diff adjusted")
plt.ylabel("Normalized Flux")
plt.ticklabel_format(axis="x", style="plain")
ax1.ticklabel_format
plt.legend()
ax2 = plt.subplot(212, sharex=ax1)
plt.plot(wl[:-1], f_slope, "s--", label="FFD")
plt.plot(new_wl, f_adj, "o-.", label="FFD(shifted)")
plt.plot(wl, f_grad, "*--", label="Numpy")
plt.ylim([-30, 32])
plt.xlabel("Wavelength (nm)")
plt.ylabel("Gradient")
ax2.ticklabel_format(useOffset=False)
ax1.tick_params(labelbottom=False)
plt.legend()
plt.tight_layout()
plt.savefig("spectra_gradient_example1.pdf")
plt.show()
Explanation: Here we plot a small section of the spectrum and the slopes from the gradient and dy/dx methods.
End of explanation
# Another example
mask = ((wl_ / 1000) > 2.10580) & ((wl_ / 1000) < 2.1059)
wl, flux = wl_[mask], flux_[mask]
flux = flux / max(flux)
f_slope = slope(wl, flux)
f_grad = slope_grad(wl, flux)
f_adj, new_wl, new_flux = slope_adjusted(wl, flux)
# plt.figure(figsize=(15,10))
ax1 = plt.subplot(211)
plt.plot(wl, flux, "o-", label="Spectrum")
# plt.plot(new_wl, new_flux, "+--", label="Diff adjusted")
plt.ylabel("Normalized Flux")
plt.legend()
ax2 = plt.subplot(212, sharex=ax1)
plt.plot(wl[:-1], f_slope, "s--", label="FFD")
plt.plot(new_wl, f_adj, "o-.", label="FFD(shifted)")
plt.plot(wl, f_grad, "*--", label="Numpy")
plt.ylim([-2, 3])
plt.xlabel("Wavelength (nm)")
plt.ylabel("Gradient")
ax2.ticklabel_format(useOffset=False)
ax1.tick_params(labelbottom=False)
plt.tight_layout()
plt.savefig("spectra_gradient_example2.pdf")
plt.legend()
plt.show()
# Gradient method is ~7 times slower than the difference.
# %timeit np.diff(flux_) / np.diff(wl_)
# %timeit np.gradient(wl_, flux_, axis=0)
# Functions to calculate wis, q and RV from the slopes.
def slope_wis(wavelength, flux):
derivf_over_lambda = slope(wavelength, flux)
wis = np.sqrt(
np.nansum(wavelength[:-1] ** 2.0 * derivf_over_lambda ** 2.0 / flux[:-1])
)
return wis
def grad_wis(wavelength, flux):
derivf_over_lambda = slope_grad(wavelength, flux)
wis = np.sqrt(np.nansum(wavelength ** 2.0 * derivf_over_lambda ** 2.0 / flux))
return wis
def slope_adjusted_wis(wavelength, flux):
derivf_over_lambda, wav_new, flux_new = slope_adjusted(wavelength, flux)
wis = np.sqrt(np.nansum(wav_new ** 2.0 * derivf_over_lambda ** 2.0 / flux_new))
return wis
def q(wavelength, flux):
"Quality"
return slope_wis(wavelength, flux) / np.sqrt(np.nansum(flux))
def grad_q(wavelength, flux):
"Quality"
return grad_wis(wavelength, flux) / np.sqrt(np.nansum(flux))
from astropy.constants import c
def slope_rv(wavelength, flux):
return c.value / slope_wis(wavelength, flux)
def grad_rv(wavelength, flux):
return c.value / grad_wis(wavelength, flux)
def slope_adjusted_rv(wavelength, flux):
return c.value / slope_adjusted_wis(wavelength, flux)
Explanation: There is a wavelength offset between the dy/dx (blue) and gradient (green) due to the wavelength points. The wavelength and flux are adjusted to the center of each difference (orange).
We later show that the difference between the blue and orange creates a change in precision of around 0.1% (e.g. the wavelength difference is squared).
The gradient is less sharp than the dy/dx method and so produces a lower precision (higher rms).
We also show that the difference in RV between the gradient and dy/dx method is between $2-7$% in the given bands!
End of explanation
import eniric
from eniric.utilities import band_limits
wl_, flux_ = load_aces_spectrum([3900, 4.5, 0.0, 0]) # M0 star
wl_ = wl_ * 1000
bands = ["VIS", "CARMENES_VIS", "Z", "Y", "J", "H", "K", "CARMENES_NIR", "NIR"]
print(
"Band wl_min wl_max dy/dx gradient Q(dy/dx) Q(grad)"
" Q(frac) RV(dy/dx) RV_adj RV(grad) RV(frac_grad) RV(frac_adj)"
)
for band in bands:
wl_min, wl_max = band_limits(band)
mask = ((wl_ / 1000) > wl_min) & ((wl_ / 1000) < wl_max)
wl, flux = wl_[mask], flux_[mask]
flux = flux / max(flux)
s = slope_wis(wl, flux)
sa = slope_adjusted_wis(wl, flux)
g = grad_wis(wl, flux)
# Quality
qs = q(wl, flux)
qg = grad_q(wl, flux)
qfraction = (qg - qs) / qs
# RV_rms
rvs = slope_rv(wl, flux)
rvg = grad_rv(wl, flux)
rvsa = slope_adjusted_rv(wl, flux)
rv_fraction = (rvg - rvs) / rvs
adj_rv_fraction = (rvsa - rvs) / rvs
if "CARMENES" in band:
band = "CARM" + band[-4:]
print(
(
"{0:8} {1:1.02f} {2:1.02f} {3:7.2e} {4:7.2e} {5:7.1f} {6:7.1f} "
"{7:-6.03f} {8:7.01f} {9:7.01f} {10:7.01f} {11:-6.03f} {12:-6.03f}"
).format(
band,
wl_min,
wl_max,
s,
g,
qs,
qg,
qfraction,
rvs,
rvsa,
rvg,
rv_fraction,
adj_rv_fraction,
)
)
Explanation: Numerical Gradient Effect on Bands
We need to calculate the relative difference on the same wavelength and flux sections. They are all normalized differently due to the maximum in the given range.
End of explanation |
7,146 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ensemble of Decision Trees
By Parijat Mazumdar (GitHub ID
Step1: Next, we decide the parameters of our Random Forest.
Step2: In the above code snippet, we decided to create a forest using 10 trees in which each split in individual trees will be using a randomly chosen subset of 4 features. Note that 4 here is the square root of the total available features (16) and is hence the usually chosen value as mentioned in the introductory paragraph. The strategy for combination chosen is Majority Vote which, as the name suggests, chooses the mode of all the individual tree outputs. The given features are all continuous in nature and hence feature types are all set false (i.e. not nominal). Next, we train our Random Forest and use it to classify letters in our test dataset.
Step3: We have with us the labels predicted by our Random Forest model. Let us also get the predictions made by a single tree. For this purpose, we train a CART-flavoured decision tree.
Step4: With both results at our disposal, let us find out which one is better.
Step5: As it is clear from the results above, we see a significant improvement in the predictions. The reason for the improvement is clear when one looks at the training accuracy. The single decision tree was over-fitting on the training dataset and hence was not generic. Random Forest on the other hand appropriately trades off training accuracy for the sake of generalization of the model. Impressed already? Let us now see what happens if we increase the number of trees in our forest.
Random Forest parameters
Step6: The method above takes the number of trees and subset size as inputs and returns the evaluated accuracy as output. Let us use this method to get the accuracy for different number of trees keeping the subset size constant at 4.
Step7: NOTE
Step8: NOTE
Step9: As we can see from the above plot, the subset size does not have a major impact on the saturated accuracy obtained in this particular dataset. While this is true in many datasets, this is not a generic observation. In some datasets, the random feature sample size does have a measurable impact on the test accuracy. A simple strategy to find the optimal subset size is to use cross-validation. But with Random Forest model, there is actually no need to perform cross-validation. Let us see how in the next section.
Out-of-bag error
The individual trees in a Random Forest are trained over data vectors randomly chosen with replacement. As a result, some of the data vectors are left out of training by each of the individual trees. These vectors form the out-of-bag (OOB) vectors of the corresponding trees. A data vector can be part of OOB classes of multiple trees. While calculating OOB error, a data vector is applied to only those trees of which it is a part of OOB class and the results are combined. This combined result averaged over similar estimate for all other vectors gives the OOB error. The OOB error is an estimate of the generalization bound of the Random Forest model. Let us see how to compute this OOB estimate in Shogun.
Step10: The above OOB accuracy is found to be slightly less than the test accuracy evaluated in the previous section (see plot for num_trees=100 and rand_subset_size=2). This is because the OOB estimate depicts the expected error for any generalized set of data vectors. It is only natural that for some sets of vectors the actual accuracy is slightly greater than the OOB estimate, while in other cases the observed accuracy is a bit lower.
Let us now apply the Random Forest model to the wine dataset. This dataset is different from the previous one in the sense that this dataset is small and has no separate test dataset. Hence OOB (or equivalently cross-validation) is the only viable strategy available here. Let us read the dataset first.
Step11: Next let us find out the appropriate feature subset size. For this we will make use of OOB error.
Step12: From the above plot it is clear that a subset size of 2 or 3 produces maximum accuracy for wine classification. At this value of subset size, the expected classification accuracy of the model is 98.87%. Finally, as a sanity check, let us plot the accuracy vs number of trees curve to ensure that 400 is indeed a sufficient value, i.e. the OOB error saturates before 400. | Python Code:
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../../data')
import shogun as sg
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def load_file(feat_file,label_file):
feats=sg.create_features(sg.read_csv(feat_file))
labels=sg.create_labels(sg.read_csv(label_file))
return (feats, labels)
trainfeat_file=os.path.join(SHOGUN_DATA_DIR, 'uci/letter/train_fm_letter.dat')
trainlab_file=os.path.join(SHOGUN_DATA_DIR, 'uci/letter/train_label_letter.dat')
train_feats,train_labels=load_file(trainfeat_file,trainlab_file)
Explanation: Ensemble of Decision Trees
By Parijat Mazumdar (GitHub ID: mazumdarparijat)
This notebook illustrates the use of Random Forests in Shogun for classification and regression. We will understand the functioning of Random Forests, discuss about the importance of its various parameters and appreciate the usefulness of this learning method.
What is Random Forest?
Random Forest is an ensemble learning method in which a collection of decision trees are grown during training and the combination of the outputs of all the individual trees are considered during testing or application. The strategy for combination can be varied but generally, in case of classification, the mode of the output classes is used and, in case of regression, the mean of the outputs is used. The randomness in the method, as the method's name suggests, is infused mainly by the random subspace sampling done while training individual trees. While choosing the best split during tree growing, only a small randomly chosen subset of all the features is considered. The subset size is a user-controlled parameter and is usually the square root of the total number of available features. The purpose of the random subset sampling method is to decorrelate the individual trees in the forest, thus making the overall model more generic; i.e. decrease the variance without increasing the bias (see bias-variance trade-off). The purpose of Random Forest, in summary, is to reduce the generalization error of the model as much as possible.
Random Forest vs Decision Tree
In this section, we will appreciate the importance of training a Random Forest over a single decision tree. In the process, we will also learn how to use Shogun's Random Forest class. For this purpose, we will use the letter recognition dataset. This dataset contains pixel information (16 features) of 20000 samples of the English alphabet. This is a 26-class classification problem where the task is to predict the alphabet given the 16 pixel features. We start by loading the training dataset.
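To make the combination step concrete, here is a small illustrative sketch of majority voting over per-tree outputs (plain NumPy, not Shogun code; the predictions are made up):
import numpy as np
tree_outputs = np.array([[0, 1, 2, 1],   # predictions of tree 1 for four samples
                         [0, 2, 2, 1],   # predictions of tree 2
                         [1, 1, 2, 2]])  # predictions of tree 3
majority_vote = np.array([np.bincount(col).argmax() for col in tree_outputs.T])
print(majority_vote)  # [0 1 2 1]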
End of explanation
def setup_random_forest(num_trees,rand_subset_size,combination_rule,feature_types):
rf=sg.create_machine("RandomForest", num_bags=num_trees,
combination_rule=combination_rule)
rf.get("machine").put("m_randsubset_size", rand_subset_size)
rf.get("machine").put("nominal", feature_types)
return rf
comb_rule=sg.create_combination_rule("MajorityVote")
feat_types=np.array([False]*16)
rand_forest=setup_random_forest(10,4,comb_rule,feat_types)
Explanation: Next, we decide the parameters of our Random Forest.
End of explanation
# train forest
rand_forest.put('labels', train_labels)
rand_forest.train(train_feats)
# load test dataset
testfeat_file= os.path.join(SHOGUN_DATA_DIR, 'uci/letter/test_fm_letter.dat')
testlab_file= os.path.join(SHOGUN_DATA_DIR, 'uci/letter/test_label_letter.dat')
test_feats,test_labels=load_file(testfeat_file,testlab_file)
# apply forest
output_rand_forest_train=rand_forest.apply_multiclass(train_feats)
output_rand_forest_test=rand_forest.apply_multiclass(test_feats)
Explanation: In the above code snippet, we decided to create a forest using 10 trees in which each split in individual trees will be using a randomly chosen subset of 4 features. Note that 4 here is the square root of the total available features (16) and is hence the usually chosen value as mentioned in the introductory paragraph. The strategy for combination chosen is Majority Vote which, as the name suggests, chooses the mode of all the individual tree outputs. The given features are all continuous in nature and hence feature types are all set false (i.e. not nominal). Next, we train our Random Forest and use it to classify letters in our test dataset.
End of explanation
def train_cart(train_feats,train_labels,feature_types,problem_type):
c=sg.create_machine("CARTree", nominal=feature_types,
mode=problem_type,
folds=2,
apply_cv_pruning=False,
labels=train_labels)
c.train(train_feats)
return c
# train CART
cart=train_cart(train_feats,train_labels,feat_types,"PT_MULTICLASS")
# apply CART model
output_cart_train=cart.apply_multiclass(train_feats)
output_cart_test=cart.apply_multiclass(test_feats)
Explanation: We have with us the labels predicted by our Random Forest model. Let us also get the predictions made by a single tree. For this purpose, we train a CART-flavoured decision tree.
End of explanation
accuracy=sg.create_evaluation("MulticlassAccuracy")
rf_train_accuracy=accuracy.evaluate(output_rand_forest_train,train_labels)*100
rf_test_accuracy=accuracy.evaluate(output_rand_forest_test,test_labels)*100
cart_train_accuracy=accuracy.evaluate(output_cart_train,train_labels)*100
cart_test_accuracy=accuracy.evaluate(output_cart_test,test_labels)*100
print('Random Forest training accuracy : '+str(round(rf_train_accuracy,3))+'%')
print('CART training accuracy : '+str(round(cart_train_accuracy,3))+'%')
print
print('Random Forest test accuracy : '+str(round(rf_test_accuracy,3))+'%')
print('CART test accuracy : '+str(round(cart_test_accuracy,3))+'%')
Explanation: With both results at our disposal, let us find out which one is better.
End of explanation
def get_rf_accuracy(num_trees,rand_subset_size):
rf=setup_random_forest(num_trees,rand_subset_size,comb_rule,feat_types)
rf.put('labels', train_labels)
rf.train(train_feats)
out_test=rf.apply_multiclass(test_feats)
acc=sg.create_evaluation("MulticlassAccuracy")
return acc.evaluate(out_test,test_labels)
Explanation: As it is clear from the results above, we see a significant improvement in the predictions. The reason for the improvement is clear when one looks at the training accuracy. The single decision tree was over-fitting on the training dataset and hence was not generic. Random Forest on the other hand appropriately trades off training accuracy for the sake of generalization of the model. Impressed already? Let us now see what happens if we increase the number of trees in our forest.
Random Forest parameters : Number of trees and random subset size
In the last section, we trained a forest of 10 trees. What happens if we make our forest with 20 trees? Let us try to answer this question in a generic way.
End of explanation
num_trees4=[5,10,20,50,100]
rf_accuracy_4=[round(get_rf_accuracy(i,4)*100,3) for i in num_trees4]
print('Random Forest accuracies (as %) :' + str(rf_accuracy_4))
# plot results
x4=[1]
y4=[86.48] # accuracy for single tree-CART
x4.extend(num_trees4)
y4.extend(rf_accuracy_4)
plt.plot(x4,y4,'--bo')
plt.xlabel('Number of trees')
plt.ylabel('Multiclass Accuracy (as %)')
plt.xlim([0,110])
plt.ylim([85,100])
plt.show()
Explanation: The method above takes the number of trees and subset size as inputs and returns the evaluated accuracy as output. Let us use this method to get the accuracy for different number of trees keeping the subset size constant at 4.
End of explanation
# subset size 2
num_trees2=[10,20,50,100]
rf_accuracy_2=[round(get_rf_accuracy(i,2)*100,3) for i in num_trees2]
print('Random Forest accuracies (as %) :' + str(rf_accuracy_2))
# subset size 8
num_trees8=[5,10,50,100]
rf_accuracy_8=[round(get_rf_accuracy(i,8)*100,3) for i in num_trees8]
print('Random Forest accuracies (as %) :' + str(rf_accuracy_8))
Explanation: NOTE : The above code snippet takes about a minute to execute. Please wait patiently.
We see from the above plot that the accuracy of the model keeps on increasing as we increase the number of trees in our Random Forest and eventually saturates at some value. Extrapolating the above plot qualitatively, the saturation value will be somewhere around 96.5%. The jump of accuracy from 86.48% for a single tree to 96.5% for a Random Forest with about 100 trees definitely highlights the importance of the Random Forest algorithm.
The inevitable question at this point is whether it is possible to achieve higher accuracy saturation by working with lesser (or greater) random feature subset size. Let us figure this out by repeating the above procedure for random subset size as 2 and 8.
End of explanation
x2=[1]
y2=[86.48]
x2.extend(num_trees2)
y2.extend(rf_accuracy_2)
x8=[1]
y8=[86.48]
x8.extend(num_trees8)
y8.extend(rf_accuracy_8)
plt.plot(x2,y2,'--bo',label='Subset Size = 2')
plt.plot(x4,y4,'--r^',label='Subset Size = 4')
plt.plot(x8,y8,'--gs',label='Subset Size = 8')
plt.xlabel('Number of trees')
plt.ylabel('Multiclass Accuracy (as %) ')
plt.legend(bbox_to_anchor=(0.92,0.4))
plt.xlim([0,110])
plt.ylim([85,100])
plt.show()
Explanation: NOTE : The above code snippets take about a minute each to execute. Please wait patiently.
Let us plot all the results together and then comprehend the results.
End of explanation
rf=setup_random_forest(100,2,comb_rule,feat_types)
rf.put('labels', train_labels)
rf.train(train_feats)
# set evaluation strategy
rf.put("oob_evaluation_metric", sg.create_evaluation("MulticlassAccuracy"))
oobe=rf.get("oob_error")
print('OOB accuracy : '+str(round(oobe*100,3))+'%')
Explanation: As we can see from the above plot, the subset size does not have a major impact on the saturated accuracy obtained in this particular dataset. While this is true in many datasets, this is not a generic observation. In some datasets, the random feature sample size does have a measurable impact on the test accuracy. A simple strategy to find the optimal subset size is to use cross-validation. But with Random Forest model, there is actually no need to perform cross-validation. Let us see how in the next section.
Out-of-bag error
The individual trees in a Random Forest are trained over data vectors randomly chosen with replacement. As a result, some of the data vectors are left out of training by each of the individual trees. These vectors form the out-of-bag (OOB) vectors of the corresponding trees. A data vector can be part of OOB classes of multiple trees. While calculating OOB error, a data vector is applied to only those trees of which it is a part of OOB class and the results are combined. This combined result averaged over similar estimate for all other vectors gives the OOB error. The OOB error is an estimate of the generalization bound of the Random Forest model. Let us see how to compute this OOB estimate in Shogun.
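As a quick illustration (not part of the original notebook) of why every tree has OOB vectors: a bootstrap sample of size N drawn with replacement leaves roughly 1 - 1/e (about 36.8%) of the vectors unused by that tree.
import numpy as np
N = 20000                               # size of the letter training set
rng = np.random.default_rng(0)
bootstrap = rng.integers(0, N, size=N)  # sample N indices with replacement
oob_fraction = 1 - np.unique(bootstrap).size / N
print(oob_fraction)                     # ~0.368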
End of explanation
trainfeat_file= os.path.join(SHOGUN_DATA_DIR, 'uci/wine/fm_wine.dat')
trainlab_file= os.path.join(SHOGUN_DATA_DIR, 'uci/wine/label_wine.dat')
train_feats,train_labels=load_file(trainfeat_file,trainlab_file)
Explanation: The above OOB accuracy is found to be slightly less than the test accuracy evaluated in the previous section (see plot for num_trees=100 and rand_subset_size=2). This is because the OOB estimate depicts the expected error for any generalized set of data vectors. It is only natural that for some sets of vectors the actual accuracy is slightly greater than the OOB estimate, while in other cases the observed accuracy is a bit lower.
Let us now apply the Random Forest model to the wine dataset. This dataset is different from the previous one in the sense that this dataset is small and has no separate test dataset. Hence OOB (or equivalently cross-validation) is the only viable strategy available here. Let us read the dataset first.
End of explanation
def get_oob_errors_wine(num_trees,rand_subset_size):
feat_types=np.array([False]*13)
rf=setup_random_forest(num_trees,rand_subset_size,sg.create_combination_rule("MajorityVote"),feat_types)
rf.put('labels', train_labels)
rf.train(train_feats)
rf.put("oob_evaluation_metric", sg.create_evaluation("MulticlassAccuracy"))
return rf.get("oob_error")
size=[1,2,4,6,8,10,13]
oobe=[round(get_oob_errors_wine(400,i)*100,3) for i in size]
print('Out-of-bag Accuracies (as %) : '+str(oobe))
plt.plot(size,oobe,'--bo')
plt.xlim([0,14])
plt.xlabel('Random subset size')
plt.ylabel('Multiclass accuracy')
plt.show()
Explanation: Next let us find out the appropriate feature subset size. For this we will make use of OOB error.
End of explanation
size=[50,100,200,400,600]
oobe=[round(get_oob_errors_wine(i,2)*100,3) for i in size]
print('Out-of-bag Accuracies (as %) : '+str(oobe))
plt.plot(size,oobe,'--bo')
plt.xlim([40,650])
plt.ylim([90,100])
plt.xlabel('Number of trees')
plt.ylabel('Multiclass accuracy')
plt.show()
Explanation: From the above plot it is clear that a subset size of 2 or 3 produces maximum accuracy for wine classification. At this value of subset size, the expected classification accuracy of the model is 98.87%. Finally, as a sanity check, let us plot the accuracy vs number of trees curve to ensure that 400 is indeed a sufficient value, i.e. the OOB error saturates before 400.
End of explanation |
7,147 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: We define the model, adapted from the Keras CIFAR-10 example
Step2: We train the model using the
RMSprop
optimizer
Step3: Now let's train the model again, using the XLA compiler.
To enable the compiler in the middle of the application, we need to reset the Keras session. | Python Code:
import tensorflow as tf
# Check that GPU is available: cf. https://colab.research.google.com/notebooks/gpu.ipynb
assert(tf.test.is_gpu_available())
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(False) # Start with XLA disabled.
def load_data():
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype('float32') / 256
x_test = x_test.astype('float32') / 256
# Convert class vectors to binary class matrices.
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
return ((x_train, y_train), (x_test, y_test))
(x_train, y_train), (x_test, y_test) = load_data()
Explanation: <table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/xla/tutorials/autoclustering_xla"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/tutorials/autoclustering_xla.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Classifying CIFAR-10 with XLA
This tutorial trains a TensorFlow model to classify the CIFAR-10 dataset, and we compile it using XLA.
Load and normalize the dataset using the Keras API:
End of explanation
def generate_model():
return tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3, 3), padding='same', input_shape=x_train.shape[1:]),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(32, (3, 3)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Conv2D(64, (3, 3), padding='same'),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(64, (3, 3)),
tf.keras.layers.Activation('relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(10),
tf.keras.layers.Activation('softmax')
])
model = generate_model()
Explanation: We define the model, adapted from the Keras CIFAR-10 example:
End of explanation
def compile_model(model):
opt = tf.keras.optimizers.RMSprop(lr=0.0001, decay=1e-6)
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
return model
model = compile_model(model)
def train_model(model, x_train, y_train, x_test, y_test, epochs=25):
model.fit(x_train, y_train, batch_size=256, epochs=epochs, validation_data=(x_test, y_test), shuffle=True)
def warmup(model, x_train, y_train, x_test, y_test):
# Warm up the JIT, we do not wish to measure the compilation time.
initial_weights = model.get_weights()
train_model(model, x_train, y_train, x_test, y_test, epochs=1)
model.set_weights(initial_weights)
warmup(model, x_train, y_train, x_test, y_test)
train_model(model, x_train, y_train, x_test, y_test)
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
Explanation: We train the model using the
RMSprop
optimizer:
End of explanation
# We need to clear the session to enable JIT in the middle of the program.
tf.keras.backend.clear_session()
tf.config.optimizer.set_jit(True) # Enable XLA.
model = compile_model(generate_model())
(x_train, y_train), (x_test, y_test) = load_data()
warmup(model, x_train, y_train, x_test, y_test)
%time train_model(model, x_train, y_train, x_test, y_test)
Explanation: Now let's train the model again, using the XLA compiler.
To enable the compiler in the middle of the application, we need to reset the Keras session.
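As a side note (this assumes a newer TF release and is not part of the original tutorial), XLA can also be enabled per function instead of globally through tf.function's jit_compile argument:
@tf.function(jit_compile=True)  # compile only this function with XLA
def xla_dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)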
End of explanation |
7,148 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Loading text with tf.data
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Example 1
Step3: The train/csharp, train/java, train/python and train/javascript directories contain many text files, each of which is a Stack Overflow question.
Print a sample file and inspect the data.
Step4: Load the dataset
Next, load the data off disk and prepare it into a format suitable for training. To do so, use the tf.keras.utils.text_dataset_from_directory utility to create a labeled tf.data.Dataset. This is a powerful collection of tools for building input pipelines. If you are new to tf.data
Step5: As the previous cell output shows, there are 8,000 examples in the training folder, of which 80% (6,400) are used for training. You can train a model by passing a tf.data.Dataset directly to Model.fit; you will see how later on.
First, iterate over the dataset and print a few examples to get a feel for the data.
Note
Step6: The labels are 0, 1, 2 or 3. To check which of these corresponds to which string label, inspect the dataset's class_names property.
Step7: Next, create validation and test datasets using tf.keras.utils.text_dataset_from_directory. The remaining 1,600 reviews from the training set are used for validation.
Note
Step8: Prepare the training dataset
Next, standardize, tokenize and vectorize the data with the tf.keras.layers.TextVectorization layer.
Standardization refers to preprocessing the text, typically removing punctuation or HTML elements to simplify the dataset.
Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words by splitting on whitespace).
Vectorization refers to converting tokens into numbers so they can be fed into a neural network.
All of these tasks can be accomplished with this layer. For more details, refer to the tf.keras.layers.TextVectorization API documentation.
Things to note
Step9: For 'int' mode, in addition to the maximum vocabulary size, you need to set an explicit maximum sequence length (MAX_SEQUENCE_LENGTH), which will cause the layer to pad or truncate sequences to exactly output_sequence_length values.
Step10: Next, call TextVectorization.adapt to fit the state of the preprocessing layer to the dataset. This will cause the model to build an index of strings to integers.
Note
Step11: Print the result of using these layers to preprocess the data.
Step12: As shown above, TextVectorization's 'binary' mode returns an array denoting which tokens exist at least once in the input, while the 'int' mode replaces each token with an integer, thus preserving their order.
You can look up the token (string) that each integer corresponds to by calling TextVectorization.get_vocabulary on the layer.
Step13: You are nearly ready to train your model.
As a final preprocessing step, apply the TextVectorization layers you created earlier to the training, validation, and test datasets.
Step14: Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.
Dataset.cache keeps data in memory after it's loaded off disk. This ensures the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
Dataset.prefetch overlaps data preprocessing and model execution while training.
For more details on both methods, as well as how to cache data to disk, refer to the <em>Prefetching</em> section of the <a>Data performance guide</a>.
Step15: Train the model
It's time to create your neural network.
For the 'binary' vectorized data, define a simple bag-of-words linear model, then configure and train it.
Step16: Next, you will use the 'int' vectorized layer to build a 1D ConvNet.
Step17: Compare the two models.
Step18: Evaluate both models on the test data.
Step19: Note
Step20: Now your model can take raw strings as input and predict a score for each label using Model.predict. Define a function to find the label with the maximum score.
Step21: Run inference on new data
Step22: Including the text preprocessing logic inside your model enables you to export a model for production that simplifies deployment, and reduces the potential for train/test skew.
There is a performance difference to keep in mind when choosing where to apply tf.keras.layers.TextVectorization. Using it outside of your model enables you to do asynchronous CPU processing and buffering of your data when training on GPU. So, if you're training your model on the GPU, you probably want to go with this option to get the best performance while developing your model, then switch to including the TextVectorization layer inside your model when you're ready to prepare for deployment.
Visit the Save and load models tutorial to learn more about saving models.
Example 2
Step23: Load the dataset
Previously, with tf.keras.utils.text_dataset_from_directory all contents of a file were treated as a single example. Here, you will use tf.data.TextLineDataset, which is designed to create a tf.data.Dataset from a text file where each example is a line of text from the original file. TextLineDataset is useful mainly for line-based text data (for example, poetry or error logs).
Iterate through these files, loading each one into its own dataset. Each example needs to be individually labeled, so use Dataset.map to apply a labeler function to each one. This will iterate over every example in the dataset, returning (example, label) pairs.
Step24: Next, combine these labeled datasets into a single dataset using Dataset.concatenate, and shuffle it with Dataset.shuffle.
Step25: Print out a few examples as before. The dataset hasn't been batched yet, hence each entry in all_labeled_data corresponds to one data point.
Step26: Prepare the dataset for training
Instead of using tf.keras.layers.TextVectorization to preprocess the text dataset, you will now use the TensorFlow Text APIs to standardize and tokenize the data, build a vocabulary and use tf.lookup.StaticVocabularyTable to map tokens to integers to feed to the model. (Learn more about TensorFlow Text.)
Define a function to convert the text to lower-case and tokenize it.
TensorFlow Text provides various tokenizers. In this example, you will use text.UnicodeScriptTokenizer to tokenize the dataset.
Use Dataset.map to apply the tokenization to the dataset.
Step27: Iterate over the dataset and print out a few tokenized examples.
Step28: Next, build a vocabulary by sorting tokens by frequency and keeping the top VOCAB_SIZE tokens.
Step29: To convert the tokens into integers, use the vocab set to create a tf.lookup.StaticVocabularyTable. You will map tokens to integers in the range [2, vocab_size + 2]. As with the TextVectorization layer, 0 is reserved to denote padding and 1 is reserved to denote an out-of-vocabulary (OOV) token.
Step30: Finally, define a function to standardize, tokenize and vectorize the dataset using the tokenizer and lookup table.
Step31: Try this on a single example to check the output.
Step32: Next, run the preprocess function on the dataset using Dataset.map.
Step33: Split the dataset into training and test sets
The Keras TextVectorization layer also batches and pads the vectorized data. Padding is required because the examples inside of a batch need to be the same size and shape, but the examples in these datasets are not all the same size; each line of text has a different number of words.
tf.data.Dataset supports splitting and padded-batching of datasets
Step34: validation_data and train_data are not collections of (example, label) pairs, but collections of batches. Each batch is a pair of (many examples, many labels) represented as arrays.
To illustrate this:
Step35: Since 0 is used for padding and 1 for out-of-vocabulary (OOV) tokens, the vocabulary size has increased by two.
Step36: Configure the datasets for better performance as before.
Step37: Train the model
You can train a model on this dataset as before.
Step38: Export the model
To make the model capable of taking raw strings as input, you will create a TextVectorization layer that performs the same steps as your custom preprocessing function. Since you already trained a vocabulary, you can use TextVectorization.set_vocabulary instead of TextVectorization.adapt, which would train a new vocabulary.
Step39: The loss and accuracy for the model on the encoded validation set and the exported model on the raw validation set are the same, as expected.
Run inference on new data
Step40: Download more datasets using TensorFlow Datasets (TFDS)
You can download many more datasets from TensorFlow Datasets.
In this example, you will use the IMDB Large Movie Review dataset to train a model for sentiment classification.
Step41: Print a few examples.
Step42: You can now preprocess the data and train a model as before.
Note
Step43: Create, configure and train the model
Step44: Export the model | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
!pip install "tensorflow-text==2.8.*"
import collections
import pathlib
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import losses
from tensorflow.keras import utils
from tensorflow.keras.layers import TextVectorization
import tensorflow_datasets as tfds
import tensorflow_text as tf_text
Explanation: Loading text with tf.data
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/text"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/text.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/load_data/text.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/load_data/text.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
This tutorial demonstrates two ways to load and preprocess text.
First, you will use Keras utilities and preprocessing layers. These include tf.keras.utils.text_dataset_from_directory to turn data into a tf.data.Dataset and tf.keras.layers.TextVectorization for data standardization, tokenization, and vectorization. If you are new to TensorFlow, you should start with these.
Then, you will load text files with lower-level utilities like tf.data.TextLineDataset, and preprocess the data with TensorFlow Text APIs such as text.UnicodeScriptTokenizer and text.case_fold_utf8 for finer-grained control.
End of explanation
data_url = 'https://storage.googleapis.com/download.tensorflow.org/data/stack_overflow_16k.tar.gz'
dataset_dir = utils.get_file(
origin=data_url,
untar=True,
cache_dir='stack_overflow',
cache_subdir='')
dataset_dir = pathlib.Path(dataset_dir).parent
list(dataset_dir.iterdir())
train_dir = dataset_dir/'train'
list(train_dir.iterdir())
Explanation: Example 1: Predict the tag for a Stack Overflow question
As a first example, download a dataset of programming questions from Stack Overflow. Each question ("How do I sort a dictionary by value?") is labeled with exactly one tag (<code>Python</code>, CSharp, JavaScript, or Java). The task is to develop a model that predicts the tag for a question. This is an example of multi-class classification, an important and widely applicable kind of machine learning problem.
Download and explore the dataset
Begin by downloading the Stack Overflow dataset using tf.keras.utils.get_file, and exploring the directory structure.
End of explanation
sample_file = train_dir/'python/1755.txt'
with open(sample_file) as f:
print(f.read())
Explanation: The train/csharp, train/java, train/python and train/javascript directories contain many text files, each of which is a Stack Overflow question.
Print a sample file and inspect the data.
End of explanation
batch_size = 32
seed = 42
raw_train_ds = utils.text_dataset_from_directory(
train_dir,
batch_size=batch_size,
validation_split=0.2,
subset='training',
seed=seed)
Explanation: Load the dataset
Next, you will load the data off disk and prepare it into a format suitable for training. To do so, you will use the tf.keras.utils.text_dataset_from_directory utility to create a labeled tf.data.Dataset. This is a powerful collection of tools for building input pipelines. If you are new to tf.data, refer to tf.data: Build TensorFlow input pipelines.
The tf.keras.utils.text_dataset_from_directory API expects a directory structure as follows.
train/
...csharp/
......1.txt
......2.txt
...java/
......1.txt
......2.txt
...javascript/
......1.txt
......2.txt
...python/
......1.txt
......2.txt
When running a machine learning experiment, it is a best practice to divide your dataset into three splits: training, validation, and test.
The Stack Overflow dataset has already been divided into training and test sets, but it lacks a validation set.
Create a validation set using an 80:20 split of the training data with tf.keras.utils.text_dataset_from_directory and validation_split set to 0.2 (20%).
End of explanation
for text_batch, label_batch in raw_train_ds.take(1):
for i in range(10):
print("Question: ", text_batch.numpy()[i])
print("Label:", label_batch.numpy()[i])
Explanation: As the previous cell output shows, there are 8,000 examples in the training folder, of which you will use 80% (6,400) for training. You can train a model by passing a tf.data.Dataset directly to Model.fit; you will see how later on.
First, iterate over the dataset and print out a few examples to get a feel for the data.
Note: To increase the difficulty of the classification problem, the dataset author replaced occurrences of the words Python, CSharp, JavaScript, or Java in the programming question with the word blank.
End of explanation
for i, label in enumerate(raw_train_ds.class_names):
print("Label", i, "corresponds to", label)
Explanation: The labels are 0, 1, 2 or 3. To check which of these corresponds to which string label, you can inspect the class_names property on the dataset.
End of explanation
# Create a validation set.
raw_val_ds = utils.text_dataset_from_directory(
train_dir,
batch_size=batch_size,
validation_split=0.2,
subset='validation',
seed=seed)
test_dir = dataset_dir/'test'
# Create a test set.
raw_test_ds = utils.text_dataset_from_directory(
test_dir,
batch_size=batch_size)
Explanation: Next, you will create validation and test datasets using tf.keras.utils.text_dataset_from_directory. You will use the remaining 1,600 reviews from the training set for validation.
Note: When using the validation_split and subset arguments of tf.keras.utils.text_dataset_from_directory, make sure to either specify a random seed or pass shuffle=False, so that the validation and training splits have no overlap.
End of explanation
VOCAB_SIZE = 10000
binary_vectorize_layer = TextVectorization(
max_tokens=VOCAB_SIZE,
output_mode='binary')
Explanation: Prepare the dataset for training
Next, you will standardize, tokenize, and vectorize the data using the tf.keras.layers.TextVectorization layer.
Standardization refers to preprocessing the text, typically to remove punctuation or HTML elements to simplify the dataset.
Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words by splitting on whitespace).
Vectorization refers to converting tokens into numbers so they can be fed into a neural network.
All of these tasks can be accomplished with this layer. You can learn more about each of these in the tf.keras.layers.TextVectorization API docs.
Things to note:
The default standardization converts text to lowercase and removes punctuation (standardize='lower_and_strip_punctuation').
The default tokenizer splits on whitespace (split='whitespace').
The default vectorization mode is int (output_mode='int'). This outputs integer indices (one per token). This mode can be used to build models that take word order into account. You can also use other modes, like binary, to build bag-of-words models.
You will build two models to learn more about standardization, tokenization, and vectorization with TextVectorization:
First, you will use the 'binary' vectorization mode to build a bag-of-words model.
Then, you will use the 'int' mode with a 1D ConvNet.
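If the defaults are not enough, TextVectorization also accepts a custom standardization callable; a minimal sketch (illustrative, not part of the original tutorial):
import re
import string
def custom_standardization(input_text):
    lowercase = tf.strings.lower(input_text)
    # lowercase and strip punctuation, mirroring the default behaviour explicitly
    return tf.strings.regex_replace(
        lowercase, '[%s]' % re.escape(string.punctuation), '')
custom_vectorize_layer = TextVectorization(
    max_tokens=VOCAB_SIZE,
    standardize=custom_standardization,
    output_mode='int')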
End of explanation
MAX_SEQUENCE_LENGTH = 250
int_vectorize_layer = TextVectorization(
max_tokens=VOCAB_SIZE,
output_mode='int',
output_sequence_length=MAX_SEQUENCE_LENGTH)
Explanation: For 'int' mode, in addition to the maximum vocabulary size, you need to set an explicit maximum sequence length (MAX_SEQUENCE_LENGTH), which will cause the layer to pad or truncate sequences to exactly output_sequence_length values.
End of explanation
# Make a text-only dataset (without labels), then call `TextVectorization.adapt`.
train_text = raw_train_ds.map(lambda text, labels: text)
binary_vectorize_layer.adapt(train_text)
int_vectorize_layer.adapt(train_text)
Explanation: Next, call TextVectorization.adapt to fit the state of the preprocessing layer to the dataset. This will cause the model to build an index of strings to integers.
Note: It's important to only use your training data when calling TextVectorization.adapt (using the test set would leak information).
End of explanation
def binary_vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return binary_vectorize_layer(text), label
def int_vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return int_vectorize_layer(text), label
# Retrieve a batch (of 32 reviews and labels) from the dataset.
text_batch, label_batch = next(iter(raw_train_ds))
first_question, first_label = text_batch[0], label_batch[0]
print("Question", first_question)
print("Label", first_label)
print("'binary' vectorized question:",
binary_vectorize_text(first_question, first_label)[0])
print("'int' vectorized question:",
int_vectorize_text(first_question, first_label)[0])
Explanation: Print the result of using these layers to preprocess the data.
End of explanation
print("1289 ---> ", int_vectorize_layer.get_vocabulary()[1289])
print("313 ---> ", int_vectorize_layer.get_vocabulary()[313])
print("Vocabulary size: {}".format(len(int_vectorize_layer.get_vocabulary())))
Explanation: As shown above, TextVectorization's 'binary' mode returns an array denoting which tokens exist at least once in the input, while the 'int' mode replaces each token with an integer, thus preserving their order.
You can look up the token (string) that each integer corresponds to by calling TextVectorization.get_vocabulary on the layer.
End of explanation
binary_train_ds = raw_train_ds.map(binary_vectorize_text)
binary_val_ds = raw_val_ds.map(binary_vectorize_text)
binary_test_ds = raw_test_ds.map(binary_vectorize_text)
int_train_ds = raw_train_ds.map(int_vectorize_text)
int_val_ds = raw_val_ds.map(int_vectorize_text)
int_test_ds = raw_test_ds.map(int_vectorize_text)
Explanation: You are nearly ready to train your model.
As a final preprocessing step, apply the TextVectorization layers you created earlier to the training, validation, and test datasets.
End of explanation
AUTOTUNE = tf.data.AUTOTUNE
def configure_dataset(dataset):
return dataset.cache().prefetch(buffer_size=AUTOTUNE)
binary_train_ds = configure_dataset(binary_train_ds)
binary_val_ds = configure_dataset(binary_val_ds)
binary_test_ds = configure_dataset(binary_test_ds)
int_train_ds = configure_dataset(int_train_ds)
int_val_ds = configure_dataset(int_val_ds)
int_test_ds = configure_dataset(int_test_ds)
Explanation: Configure the dataset for performance
These are two important methods you should use when loading data to make sure that I/O does not become blocking.
Dataset.cache keeps data in memory after it's loaded off disk. This ensures the dataset does not become a bottleneck while training your model. If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.
Dataset.prefetch overlaps data preprocessing and model execution while training.
You can learn more about both methods, as well as how to cache data to disk, in the <em>Prefetching</em> section of the <a>Data performance guide</a>.
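If the dataset does not fit in memory, the same API can cache to a file on disk instead of RAM; a minimal sketch (illustrative, not part of the original tutorial):
def configure_dataset_on_disk(dataset, cache_file):
    # passing a filename to Dataset.cache builds an on-disk cache instead of an in-memory one
    return dataset.cache(cache_file).prefetch(buffer_size=AUTOTUNE)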
End of explanation
binary_model = tf.keras.Sequential([layers.Dense(4)])
binary_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
history = binary_model.fit(
binary_train_ds, validation_data=binary_val_ds, epochs=10)
Explanation: Train the model
It's time to create your neural network.
For the 'binary' vectorized data, define a simple bag-of-words linear model, then configure and train it.
End of explanation
def create_model(vocab_size, num_labels):
model = tf.keras.Sequential([
layers.Embedding(vocab_size, 64, mask_zero=True),
layers.Conv1D(64, 5, padding="valid", activation="relu", strides=2),
layers.GlobalMaxPooling1D(),
layers.Dense(num_labels)
])
return model
# `vocab_size` is `VOCAB_SIZE + 1` since `0` is used additionally for padding.
int_model = create_model(vocab_size=VOCAB_SIZE + 1, num_labels=4)
int_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
history = int_model.fit(int_train_ds, validation_data=int_val_ds, epochs=5)
Explanation: Next, you will use the 'int' vectorized layer to build a 1D ConvNet.
End of explanation
print("Linear model on binary vectorized data:")
print(binary_model.summary())
print("ConvNet model on int vectorized data:")
print(int_model.summary())
Explanation: Compare the two models.
End of explanation
binary_loss, binary_accuracy = binary_model.evaluate(binary_test_ds)
int_loss, int_accuracy = int_model.evaluate(int_test_ds)
print("Binary model accuracy: {:2.2%}".format(binary_accuracy))
print("Int model accuracy: {:2.2%}".format(int_accuracy))
Explanation: Evaluate both models on the test data.
End of explanation
export_model = tf.keras.Sequential(
[binary_vectorize_layer, binary_model,
layers.Activation('sigmoid')])
export_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=False),
optimizer='adam',
metrics=['accuracy'])
# Test it with `raw_test_ds`, which yields raw strings
loss, accuracy = export_model.evaluate(raw_test_ds)
print("Accuracy: {:2.2%}".format(binary_accuracy))
Explanation: Note: This example dataset represents a rather simple classification problem. More complex datasets and problems bring out subtle but significant differences in preprocessing strategies and model architectures. Be sure to try out different hyperparameters and epochs to compare various approaches.
Export the model
In the code above, you applied the tf.keras.layers.TextVectorization layer to the dataset before feeding text to the model. If you want to make your model capable of processing raw strings (for example, to simplify deploying it), you can include the TextVectorization layer inside your model.
To do so, you can create a new model using the weights you have just trained.
End of explanation
def get_string_labels(predicted_scores_batch):
predicted_int_labels = tf.argmax(predicted_scores_batch, axis=1)
predicted_labels = tf.gather(raw_train_ds.class_names, predicted_int_labels)
return predicted_labels
Explanation: Your model can now take raw strings as input and predict a score for each label using Model.predict. Define a function to find the label with the maximum score:
End of explanation
inputs = [
"how do I extract keys from a dict into a list?", # 'python'
"debug public static void main(string[] args) {...}", # 'java'
]
predicted_scores = export_model.predict(inputs)
predicted_labels = get_string_labels(predicted_scores)
for input, label in zip(inputs, predicted_labels):
print("Question: ", input)
print("Predicted label: ", label.numpy())
Explanation: Run inference on new data
End of explanation
DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']
for name in FILE_NAMES:
text_dir = utils.get_file(name, origin=DIRECTORY_URL + name)
parent_dir = pathlib.Path(text_dir).parent
list(parent_dir.iterdir())
Explanation: Including the text preprocessing logic inside your model enables you to export the model for production, which simplifies deployment and reduces the potential for train/test skew.
There is a performance difference to keep in mind when choosing where to apply tf.keras.layers.TextVectorization. Using it outside of your model enables you to do asynchronous CPU processing and buffering of your data when training on a GPU. So, if you are training your model on a GPU, you probably want to go with this option to get the best performance while developing your model, then switch to including the TextVectorization layer inside your model when you are ready to prepare for deployment.
Visit the Save and load models tutorial to learn more about saving models.
Example 2: Predict the translator of Iliad translations
The following provides an example of using tf.data.TextLineDataset to load examples from text files, and TensorFlow Text to preprocess the data. This example uses three different English translations of Homer's Iliad, and trains a model to identify the translator given a single line of text.
Download and explore the dataset
The translators of the three texts are:
William Cowper — text
Edward, Earl of Derby — text
Samuel Butler — text
The text files used in this tutorial have undergone some typical preprocessing tasks like removing the header and footer, line numbers, and chapter titles.
Download these lightly preprocessed files locally.
End of explanation
def labeler(example, index):
return example, tf.cast(index, tf.int64)
labeled_data_sets = []
for i, file_name in enumerate(FILE_NAMES):
lines_dataset = tf.data.TextLineDataset(str(parent_dir/file_name))
labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))
labeled_data_sets.append(labeled_dataset)
Explanation: Load the dataset
Previously, with tf.keras.utils.text_dataset_from_directory, all contents of a file were treated as a single example. Here, you will use tf.data.TextLineDataset, which is designed to create a tf.data.Dataset from a text file in which each example is a line of text from the original file. TextLineDataset is useful for text data that is primarily line-based (for example, poetry or error logs).
Iterate through these files, loading each one into its own dataset. Each example needs to be individually labeled, so use Dataset.map to apply a labeler function to each one. This will iterate over every example in the dataset, returning (example, label) pairs.
End of explanation
BUFFER_SIZE = 50000
BATCH_SIZE = 64
VALIDATION_SIZE = 5000
all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
all_labeled_data = all_labeled_data.concatenate(labeled_dataset)
all_labeled_data = all_labeled_data.shuffle(
BUFFER_SIZE, reshuffle_each_iteration=False)
Explanation: Next, combine these labeled datasets into a single dataset using Dataset.concatenate, and shuffle it with Dataset.shuffle.
End of explanation
for text, label in all_labeled_data.take(10):
print("Sentence: ", text.numpy())
print("Label:", label.numpy())
Explanation: Print out a few examples as before. The dataset has not been batched yet, so each entry in all_labeled_data corresponds to one data point.
End of explanation
tokenizer = tf_text.UnicodeScriptTokenizer()
def tokenize(text, unused_label):
lower_case = tf_text.case_fold_utf8(text)
return tokenizer.tokenize(lower_case)
tokenized_ds = all_labeled_data.map(tokenize)
Explanation: Prepare the dataset for training
Instead of using tf.keras.layers.TextVectorization to preprocess the text dataset, you will now use the TensorFlow Text APIs to standardize and tokenize the data, build a vocabulary, and use tf.lookup.StaticVocabularyTable to map tokens to integers to feed to the model (refer to TensorFlow Text for details).
Define a function to convert the text to lower-case and tokenize it.
TensorFlow Text provides various tokenizers. In this example, you will use text.UnicodeScriptTokenizer to tokenize the dataset.
Use Dataset.map to apply the tokenization to the dataset.
End of explanation
for text_batch in tokenized_ds.take(5):
print("Tokens: ", text_batch.numpy())
Explanation: Iterate over the dataset and print out a few tokenized examples.
End of explanation
tokenized_ds = configure_dataset(tokenized_ds)
vocab_dict = collections.defaultdict(lambda: 0)
for toks in tokenized_ds.as_numpy_iterator():
for tok in toks:
vocab_dict[tok] += 1
vocab = sorted(vocab_dict.items(), key=lambda x: x[1], reverse=True)
vocab = [token for token, count in vocab]
vocab = vocab[:VOCAB_SIZE]
vocab_size = len(vocab)
print("Vocab size: ", vocab_size)
print("First five vocab entries:", vocab[:5])
Explanation: Next, build a vocabulary by sorting tokens by frequency and keeping the top VOCAB_SIZE tokens.
End of explanation
keys = vocab
values = range(2, len(vocab) + 2) # Reserve `0` for padding, `1` for OOV tokens.
init = tf.lookup.KeyValueTensorInitializer(
keys, values, key_dtype=tf.string, value_dtype=tf.int64)
num_oov_buckets = 1
vocab_table = tf.lookup.StaticVocabularyTable(init, num_oov_buckets)
Explanation: To convert the tokens into integers, use the vocab set to create a tf.lookup.StaticVocabularyTable. You will map tokens to integers in the range [2, vocab_size + 2]. As with the TextVectorization layer, 0 is reserved to denote padding and 1 is reserved to denote an out-of-vocabulary (OOV) token.
End of explanation
def preprocess_text(text, label):
standardized = tf_text.case_fold_utf8(text)
tokenized = tokenizer.tokenize(standardized)
vectorized = vocab_table.lookup(tokenized)
return vectorized, label
Explanation: Finally, define a function to standardize, tokenize and vectorize the dataset using the tokenizer and lookup table.
End of explanation
example_text, example_label = next(iter(all_labeled_data))
print("Sentence: ", example_text.numpy())
vectorized_text, example_label = preprocess_text(example_text, example_label)
print("Vectorized sentence: ", vectorized_text.numpy())
Explanation: You can try this on a single example to check the output.
End of explanation
all_encoded_data = all_labeled_data.map(preprocess_text)
Explanation: Now run the preprocess function on the dataset using Dataset.map.
End of explanation
train_data = all_encoded_data.skip(VALIDATION_SIZE).shuffle(BUFFER_SIZE)
validation_data = all_encoded_data.take(VALIDATION_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE)
validation_data = validation_data.padded_batch(BATCH_SIZE)
Explanation: Split the dataset into training and test sets
The Keras TextVectorization layer also batches and pads the vectorized data. Padding is required because the examples inside a batch need to be the same size and shape, but the examples in these datasets are not all the same size: each line of text has a different number of words.
tf.data.Dataset supports splitting and padded-batching of datasets:
End of explanation
sample_text, sample_labels = next(iter(validation_data))
print("Text batch shape: ", sample_text.shape)
print("Label batch shape: ", sample_labels.shape)
print("First text example: ", sample_text[0])
print("First label example: ", sample_labels[0])
Explanation: Now, validation_data and train_data are not collections of (example, label) pairs, but collections of batches. Each batch is a pair of (many examples, many labels) represented as arrays.
To illustrate this:
End of explanation
vocab_size += 2
Explanation: Since you use 0 for padding and 1 for out-of-vocabulary (OOV) tokens, the vocabulary size has increased by two.
End of explanation
train_data = configure_dataset(train_data)
validation_data = configure_dataset(validation_data)
Explanation: Configure the datasets for better performance as before.
End of explanation
model = create_model(vocab_size=vocab_size, num_labels=3)
model.compile(
optimizer='adam',
loss=losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(train_data, validation_data=validation_data, epochs=3)
loss, accuracy = model.evaluate(validation_data)
print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
Explanation: Train the model
You can train a model on this dataset as before.
End of explanation
preprocess_layer = TextVectorization(
max_tokens=vocab_size,
standardize=tf_text.case_fold_utf8,
split=tokenizer.tokenize,
output_mode='int',
output_sequence_length=MAX_SEQUENCE_LENGTH)
preprocess_layer.set_vocabulary(vocab)
export_model = tf.keras.Sequential(
[preprocess_layer, model,
layers.Activation('sigmoid')])
export_model.compile(
loss=losses.SparseCategoricalCrossentropy(from_logits=False),
optimizer='adam',
metrics=['accuracy'])
# Create a test dataset of raw strings.
test_ds = all_labeled_data.take(VALIDATION_SIZE).batch(BATCH_SIZE)
test_ds = configure_dataset(test_ds)
loss, accuracy = export_model.evaluate(test_ds)
print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
Explanation: Export the model
To make the model capable of taking raw strings as input, create a TextVectorization layer that performs the same steps as your custom preprocessing function. Since you have already trained a vocabulary, you can use TextVectorization.set_vocabulary instead of TextVectorization.adapt, which would train a new vocabulary.
End of explanation
inputs = [
"Join'd to th' Ionians with their flowing robes,", # Label: 1
"the allies, and his armour flashed about him so that he seemed to all", # Label: 2
"And with loud clangor of his arms he fell.", # Label: 0
]
predicted_scores = export_model.predict(inputs)
predicted_labels = tf.argmax(predicted_scores, axis=1)
for input, label in zip(inputs, predicted_labels):
print("Question: ", input)
print("Predicted label: ", label.numpy())
Explanation: The loss and accuracy of the model on the encoded validation set and of the exported model on the raw validation set are the same, as expected.
Run inference on new data
End of explanation
# Training set.
train_ds = tfds.load(
'imdb_reviews',
split='train[:80%]',
batch_size=BATCH_SIZE,
shuffle_files=True,
as_supervised=True)
# Validation set.
val_ds = tfds.load(
'imdb_reviews',
split='train[80%:]',
batch_size=BATCH_SIZE,
shuffle_files=True,
as_supervised=True)
Explanation: Download more datasets using TensorFlow Datasets (TFDS)
You can download many more datasets from TensorFlow Datasets.
In this example, you will use the IMDB Large Movie Review dataset to train a model for sentiment classification.
End of explanation
for review_batch, label_batch in val_ds.take(1):
for i in range(5):
print("Review: ", review_batch[i].numpy())
print("Label: ", label_batch[i].numpy())
Explanation: Print a few examples.
End of explanation
vectorize_layer = TextVectorization(
max_tokens=VOCAB_SIZE,
output_mode='int',
output_sequence_length=MAX_SEQUENCE_LENGTH)
# Make a text-only dataset (without labels), then call `TextVectorization.adapt`.
train_text = train_ds.map(lambda text, labels: text)
vectorize_layer.adapt(train_text)
def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text), label
train_ds = train_ds.map(vectorize_text)
val_ds = val_ds.map(vectorize_text)
# Configure datasets for performance as before.
train_ds = configure_dataset(train_ds)
val_ds = configure_dataset(val_ds)
Explanation: You can now preprocess the data and train a model as before.
Note: Since this is a binary classification problem, you will use tf.keras.losses.BinaryCrossentropy instead of tf.keras.losses.SparseCategoricalCrossentropy for the model.
Prepare the dataset for training
End of explanation
model = create_model(vocab_size=VOCAB_SIZE + 1, num_labels=1)
model.summary()
model.compile(
loss=losses.BinaryCrossentropy(from_logits=True),
optimizer='adam',
metrics=['accuracy'])
history = model.fit(train_ds, validation_data=val_ds, epochs=3)
loss, accuracy = model.evaluate(val_ds)
print("Loss: ", loss)
print("Accuracy: {:2.2%}".format(accuracy))
Explanation: Create, configure and train the model
End of explanation
export_model = tf.keras.Sequential(
[vectorize_layer, model,
layers.Activation('sigmoid')])
export_model.compile(
loss=losses.BinaryCrossentropy(from_logits=False),
optimizer='adam',
metrics=['accuracy'])
# 0 --> negative review
# 1 --> positive review
inputs = [
"This is a fantastic movie.",
"This is a bad movie.",
"This movie was so bad that it was good.",
"I will never say yes to watching this movie.",
]
predicted_scores = export_model.predict(inputs)
predicted_labels = [int(round(x[0])) for x in predicted_scores]
for input, label in zip(inputs, predicted_labels):
print("Question: ", input)
print("Predicted label: ", label)
Explanation: Export the model
End of explanation |
7,149 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Φ<sub>Flow</sub> Math
The phi.math module provides abstract access to tensor operations.
It internally uses NumPy/SciPy, TensorFlow or PyTorch to execute the actual operations, depending on which backend is selected (see below).
This ensures that code written against phi.math functions produces equal results on all backends.
To that end, phi.math provides a new Tensor class which should be used instead of directly accessing native tensors from NumPy, TensorFlow or PyTorch.
While similar to the native tensor classes, phi.math.Tensors have named and typed dimensions.
When performing operations such as +, -, *, /, %, ** or calling math functions on Tensors, dimensions are matched by name and type.
This eliminates the need for manual reshaping or the use of singleton dimensions.
Step1: Shapes
The shape of a Tensor is represented by a Shape object which can be accessed as tensor.shape.
In addition to the dimension sizes, the shape also stores the dimension names which determine their types.
There are four types of dimensions
| Dimension type | Description | Examples |
|----------------|-------------|----------|
Step2: Shape objects should be considered immutable.
Do not change any property of a Shape directly.
Important Shape properties (see the API documentation for a full list)
Step3: There are a couple of functions in the phi.math module for creating basic tensors.
zeros()
ones()
linspace()
random_normal()
random_uniform()
meshgrid()
Most functions allow the shape of the tensor to be specified via a Shape object or alternatively through the keyword arguments.
In the latter case, the dimension types are inferred from the names.
Step4: Backend Selection
The phi.math library does not implement basic operators directly but rather delegates the calls to another computing library.
Currently, it supports three such libraries
Step5: Indexing, Slicing, Unstacking
Indexing is read-only.
The recommended way of indexing or slicing tensors is using the syntax
python
tensor.<dim>[start
Step6: Alternatively tensors can be indexed using a dictionary of the form tensor[{'dim'
Step7: Dimensions can be iterated over or unstacked.
Step8: Non-uniform Tensors
The math package allows tensors of varying sizes to be stacked into a single tensor.
This tensor then has dimension sizes of type Tensor where the source tensors vary in size.
One use case of this are StaggeredGrids where the tensors holding the vector components have different shapes.
Step9: Data Types and Precision
The package phi.math provides a custom DataType class that can be used with all backends.
There are no global variables for common data types; instead you can create one by specifying the kind and length in bits. | Python Code:
from phi import math
Explanation: Φ<sub>Flow</sub> Math
The phi.math module provides abstract access to tensor operations.
It internally uses NumPy/SciPy, TensorFlow or PyTorch to execute the actual operations, depending on which backend is selected (see below).
This ensures that code written against phi.math functions produces equal results on all backends.
To that end, phi.math provides a new Tensor class which should be used instead of directly accessing native tensors from NumPy, TensorFlow or PyTorch.
While similar to the native tensor classes, phi.math.Tensors have named and typed dimensions.
When performing operations such as +, -, *, /, %, ** or calling math functions on Tensors, dimensions are matched by name and type.
This eliminates the need for manual reshaping or the use of singleton dimensions.
End of explanation
from phi.math import batch, spatial, instance, channel
channel(vector='x,y')
batch(examples=10)
spatial(x=4, y=3)
instance(points=5)
Explanation: Shapes
The shape of a Tensor is represented by a Shape object which can be accessed as tensor.shape.
In addition to the dimension sizes, the shape also stores the dimension names which determine their types.
There are four types of dimensions
| Dimension type | Description | Examples |
|------------------------------------------------|---------------------------------------------------------------:|-----------------------|
| spatial | Spans a grid with equidistant sample points. | x, y, z |
| channel | Set of properties sampled at per sample point per instance. | vector, color |
| instance | Collection of (interacting) objects belonging to one instance. | points, particles |
| batch | Lists non-interacting instances. | batch, frames |
The default dimension order is (batch, instance, channel, spatial).
When a dimension is not present on a tensor, values are assumed to be constant along that dimension.
Based on these rules, operators and functions may add dimensions to tensors as needed.
Many math functions handle dimensions differently depending on their type, or only work with certain types of dimensions.
Batch dimensions are ignored by all operations.
The result is equal to calling the function on each slice.
Spatial operations, such as spatial_gradient() or divergence() operate on spatial dimensions by default, ignoring all others.
When operating on multiple spatial tensors, these tensors are typically required to have the same spatial dimensions, else an IncompatibleShapes error may be raised.
The function join_spaces() can be used to add the missing spatial dimensions so that these errors are avoided.
| Operation | Batch | instance | Spatial | Channel |
|---------------------------------------------------------|:-----------:|:-----------:|:-----------:|:-------------:|
| convolve | - | - | ★ | ⟷ |
| nonzero | - | ★/⟷ | ★/⟷ | ⟷ |
| scatter (grid)<br>scatter (indices)<br>scatter (values) | -<br>-<br>- | 🗙<br>⟷<br>⟷ | ★<br>🗙<br>🗙 | -<br>⟷/🗙<br>- |
| gather/sample (grid)<br>gather/sample (indices) | -<br>- | 🗙<br>- | ★/⟷<br>- | -<br>⟷/🗙 |
In the above table, - denotes batch-type dimensions, 🗙 are not allowed, ⟷ are reduced in the operation, ★ are active
The preferred way to define a Shape is via the shape() function.
It takes the dimension sizes as keyword arguments.
End of explanation
math.tensor((1, 2, 3))
import numpy
math.tensor(numpy.zeros([1, 5, 4, 2]), batch('batch'), spatial('x,y'), channel(vector='x,y'))
math.reshaped_tensor(numpy.zeros([1, 5, 4, 2]), [batch(), *spatial('x,y'), channel(vector='x,y')])
Explanation: Shape objects should be considered immutable.
Do not change any property of a Shape directly.
Important Shape properties (see the API documentation for a full list):
.sizes: tuple enumerates the sizes as ints or None, similar to NumPy's shapes.
.names: tuple enumerates the dimension names.
.rank: int or len(shape) number of dimensions.
.batch, .spatial, .instance, .channel: Shape or math.batch(shape) Filter by dimension type.
.non_batch: Shape etc. Filter by dimension type.
.volume number of elements a tensor of this shape contains.
Important Shape methods:
get_size(dim) returns the size of a dimension.
get_item_names(dim) returns the names given to slices along a dimension.
without(dims) drops the specified dimensions.
only(dims) drops all other dimensions.
Additional tips and tricks
'x' in shape tests whether a dimension by the name of 'x' is present.
shape1 == shape2 tests equality including names, types and order of dimensions.
shape1 & shape2 or math.merge_shapes() combines the shapes.
Tensor Creation
The tensor() function
converts a scalar, a list, a tuple, a NumPy array or a TensorFlow/PyTorch tensor to a Tensor.
The dimension names can be specified using the names keyword and dimension types are inferred from the names.
Otherwise, they are determined automatically.
End of explanation
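For a quick check, a few of the properties and tricks listed above can be combined (a minimal sketch using only the names documented here):
s = spatial(x=4, y=3) & channel(vector='x,y')
s.names     # the dimension names as a tuple
s.volume    # total number of elements (4 * 3 * 2 = 24)
s.spatial   # filter down to the spatial dimensions
'x' in s    # check whether a dimension named 'x' is present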
math.zeros(spatial(x=5, y=4))
math.random_uniform(channel(vector='x,y'))
math.random_normal(batch(examples=6), dtype=math.DType(int, 32))
Explanation: There are a couple of functions in the phi.math module for creating basic tensors.
zeros()
ones()
linspace()
random_normal()
random_uniform()
meshgrid()
Most functions allow the shape of the tensor to be specified via a Shape object or alternatively through the keyword arguments.
In the latter case, the dimension types are inferred from the names.
End of explanation
from phi.math import backend
backend.default_backend()
from phi.torch import TORCH
with TORCH:
print(math.zeros().default_backend)
backend.set_global_default_backend(backend.NUMPY)
Explanation: Backend Selection
The phi.math library does not implement basic operators directly but rather delegates the calls to another computing library.
Currently, it supports three such libraries: NumPy/SciPy, TensorFlow and PyTorch.
These are referred to as backends.
The easiest way to use a certain backend is via the import statement:
phi.flow → NumPy/SciPy
phi.tf.flow → TensorFlow
phi.torch.flow → PyTorch
phi.jax.flow → Jax
This determines what backend is used to create new tensors.
Existing tensors created with a different backend will keep using that backend.
For example, even if TensorFlow is set as the default backend, NumPy-backed tensors will continue using NumPy functions.
The global backend can be set directly using math.backend.set_global_default_backend().
Backends also support context scopes, i.e. tensors created within a with backend: block will use that backend to back the new tensors.
The three backends can be referenced via the global variables phi.math.NUMPY, phi.tf.TENSORFLOW and phi.torch.TORCH.
When passing tensors of different backends to one function, an automatic conversion will be performed,
e.g. NumPy arrays will be converted to TensorFlow or PyTorch tensors.
End of explanation
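As a small, hedged sketch of the automatic conversion mentioned above: with NumPy as the global default and a PyTorch-backed tensor created inside a TORCH scope, combining the two yields a result backed by a single backend.
with TORCH:
    t_torch = math.ones(spatial(x=3))
t_numpy = math.zeros(spatial(x=3))
(t_numpy + t_torch).default_backend  # inspect which backend backs the combined tensor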
data = math.random_uniform(spatial(x=10, y=10, z=10), channel(vector='x,y,z'))
data.x[0].y[1:-1].vector['x']
Explanation: Indexing, Slicing, Unstacking
Indexing is read-only.
The recommended way of indexing or slicing tensors is using the syntax
python
tensor.<dim>[start:end:step]
where start >= 0, end and step > 0 are integers.
The access tensor.<dim> returns a temporary TensorDim
object which can be used for slicing and unstacking along a specific dimension.
This syntax can be chained to index or slice multiple dimensions.
End of explanation
data[{'x': 0, 'y': slice(1, -1), 'vector': 'x'}]
Explanation: Alternatively tensors can be indexed using a dictionary of the form tensor[{'dim': slice or int}].
End of explanation
for slice in data.x:
print(slice)
tuple(data.x)
Explanation: Dimensions can be iterated over or unstacked.
End of explanation
t0 = math.zeros(spatial(a=4, b=2))
t1 = math.ones(spatial(b=2, a=5))
stacked = math.stack([t0, t1], channel('c'))
stacked
stacked.shape.is_uniform
Explanation: Non-uniform Tensors
The math package allows tensors of varying sizes to be stacked into a single tensor.
This tensor then has dimension sizes of type Tensor where the source tensors vary in size.
One use case of this are StaggeredGrids where the tensors holding the vector components have different shapes.
End of explanation
from phi.math import DType
DType(float, 32)
DType(complex, 128)
DType(bool)
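# A DType can also be passed when creating tensors (as with random_normal earlier);
# 64-bit float here is just an illustrative choice.
math.zeros(spatial(x=3), dtype=DType(float, 64))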
Explanation: Data Types and Precision
The package phi.math provides a custom DataType class that can be used with all backends.
There are no global variables for common data types; instead you can create one by specifying the kind and length in bits.
End of explanation |
7,150 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Number of papers over time
data source
We load the data from the Competence Centre for Bibliometrics
Step1: set parameter
Step24: load data from SQL database
Step25: merging data
Step26: grouping data
Step27: visualize with plotly | Python Code:
import cx_Oracle #ensure that OS, InstantClient (Basic, ODBC, SDK) and cx_Oracle are all 64 bit. Install with "pip install cx_Oracle". Add link to InstantClient in Path variable!
import pandas as pd
import re
import plotly.plotly as py
import plotly.graph_objs as go
Explanation: Number of papers over time
data source
We load the data from the Competence Centre for Bibliometrics: http://www.bibliometrie.info/.
They licence access to the Web of Science and Scopus bibliometric databases, spanning a high proportion of all peer-reviewed research literature. The Competence Centre for Bibliometrics further processes both databases' data, so that it can be queried with SQL.
load libraries:
End of explanation
#parameter:
searchterm="big data" #lowercase!
colorlist=["#01be70","#586bd0","#c0aa12","#0183e6","#f69234","#0095e9","#bd8600","#007bbe","#bb7300","#63bcfc","#a84a00","#01bedb","#82170e","#00c586","#a22f1f","#3fbe57","#3e4681","#9bc246","#9a9eec","#778f00","#00aad9","#fc9e5e","#01aec1","#832c1e","#55c99a","#dd715b","#017c1c","#ff9b74","#009556","#83392a","#00b39b","#8e5500","#50a7c6","#f4a268","#02aca7","#532b00","#67c4bd","#5e5500","#f0a18f","#007229","#d2b073","#005d3f","#a5be6b","#2a4100","#8cb88c","#2f5c00","#007463","#5b7200","#787c48","#3b7600"]
Explanation: set parameter
End of explanation
dsn_tns=cx_Oracle.makedsn('127.0.0.1','6025',service_name='bibliodb01.fiz.karlsruhe') #due to licence requirements,
# access is only allowed for members of the Competence Centre for Bibliometrics and cooperation partners. You can still
# continue with the resulting csv below.
#open connection:
db=cx_Oracle.connect(<username>, <password>, dsn_tns)
print(db.version)
#%% define sql-query function:
def read_query(connection, query):
cursor = connection.cursor()
try:
cursor.execute( query )
names = [ x[0] for x in cursor.description]
rows = cursor.fetchall()
return pd.DataFrame( rows, columns=names)
finally:
if cursor is not None:
cursor.close()
#%% load paper titles from WOSdb:
database="wos_B_2016"
command="""SELECT DISTINCT(ARTICLE_TITLE), PUBYEAR
FROM """+database+""".KEYWORDS, """+database+""".ITEMS_KEYWORDS, """+database+""".ITEMS
WHERE
"""+database+""".ITEMS_KEYWORDS.FK_KEYWORDS="""+database+""".KEYWORDS.PK_KEYWORDS
AND """+database+""".ITEMS.PK_ITEMS="""+database+""".ITEMS_KEYWORDS.FK_ITEMS
AND (lower("""+database+""".KEYWORDS.KEYWORD) LIKE '%"""+searchterm+"""%' OR lower(ARTICLE_TITLE) LIKE '%"""+searchterm+"""%')"""
dfWOS=read_query(db,command)
dfWOS['wos']=True #to make the source identifiable
dfWOS.to_csv("all_big_data_titles_year_wos.csv", sep=';')
#%% load paper titles from SCOPUSdb:
database="SCOPUS_B_2016"
command="""SELECT DISTINCT(ARTICLE_TITLE), PUBYEAR
FROM """+database+""".KEYWORDS, """+database+""".ITEMS_KEYWORDS, """+database+""".ITEMS
WHERE
"""+database+""".ITEMS_KEYWORDS.FK_KEYWORDS="""+database+""".KEYWORDS.PK_KEYWORDS
AND """+database+""".ITEMS.PK_ITEMS="""+database+""".ITEMS_KEYWORDS.FK_ITEMS
AND (lower("""+database+""".KEYWORDS.KEYWORD) LIKE '%"""+searchterm+"""%' OR lower(ARTICLE_TITLE) LIKE '%"""+searchterm+"""%')"""
dfSCOPUS=read_query(db,command)
dfSCOPUS['scopus']=True #to make the source identifiable
dfSCOPUS.to_csv("all_big_data_titles_year_scopus.csv", sep=';')
#this takes some time, we will work with the exported CSV from here on
Explanation: load data from SQL database:
End of explanation
dfWOS=pd.read_csv("all_big_data_titles_year_wos.csv",sep=";")
dfSCOPUS=pd.read_csv("all_big_data_titles_year_scopus.csv",sep=";")
df=pd.merge(dfWOS,dfSCOPUS,on='ARTICLE_TITLE',how='outer')
#get PUBYEAR in one column:
df.loc[df['wos'] == 1, 'PUBYEAR_y'] = df['PUBYEAR_x']
#save resulting csv again:
df=df[['ARTICLE_TITLE','PUBYEAR_y','wos','scopus']]
df.to_csv("all_big_data_titles_with_year.csv", sep=';')
df
Explanation: merging data
End of explanation
grouped=df.groupby(['PUBYEAR_y'])
df2=grouped.agg('count').reset_index()
df2
Explanation: grouping data
End of explanation
#set data for horizontal bar plot:
data = [go.Bar(
x=[pd.DataFrame.sum(df2)['wos'],pd.DataFrame.sum(df2)['scopus'],pd.DataFrame.sum(df2)['ARTICLE_TITLE']],
y=['Web of Science', 'Scopus', 'Total'],
orientation = 'h',
marker=dict(
color=colorlist
)
)]
#py.plot(data, filename='big_data_papers_horizontal') #for uploading to plotly
py.iplot(data, filename='horizontal-bar')
#set data for stacked bar plot:
trace1 = go.Bar(
x=df2['PUBYEAR_y'],
y=df2['wos'],
name='Web of Science',
marker=dict(
color=colorlist[0]
)
)
trace2 = go.Bar(
x=df2['PUBYEAR_y'],
y=df2['scopus'],
name='Scopus',
marker=dict(
color=colorlist[1]
)
)
trace3 = go.Bar(
x=df2['PUBYEAR_y'],
y=df2['ARTICLE_TITLE'],
name='All Papers',
marker=dict(
color=colorlist[2]
)
)
data = [trace1, trace2,trace3]
#set layout for stacked bar chart with logarithmic y scale:
#set layout for stacked bar chart with normal y scale:
layout_no_log = go.Layout(
title='Big data papers over time',
barmode='group',
xaxis=dict(
title='year',
titlefont=dict(
family='Arial, sans-serif',
size=14,
color='lightgrey'
),
tickfont=dict(
family='Arial, sans-serif',
size=10,
color='black'
),
showticklabels=True,
dtick=1,
tickangle=45,
)
)
#plot:
fig1 = go.Figure(data=data, layout=layout_no_log)
py.iplot(fig1, filename='big_data_papers_no_log')
layout_log = go.Layout(
title='Big data papers over time (log y-scale)',
barmode='group',
xaxis=dict(
title='year',
titlefont=dict(
family='Arial, sans-serif',
size=14,
color='lightgrey'
),
tickfont=dict(
family='Arial, sans-serif',
size=10,
color='black'
),
showticklabels=True,
dtick=1,
tickangle=45,
),
yaxis=dict(
type='log'
)
)
fig2 = go.Figure(data=data, layout=layout_log)
py.iplot(fig2, filename='big_data_papers_log')
Explanation: visualize with plotly:
we make three diagrams:
1) a horizontal bar plot comparing the overall papers per db
2) a vertical bar plot differentiating time and db
3) a vertical bar plot differentiating time and db with a logarithmic y-scale (allows for better
inspection of smaller numbers)
End of explanation |
7,151 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
syncID
Step1: First we will define the extents of the rectangular array containing the section from each BRDF flightline.
Step2: Next we will define the coordinates of the target of interest. These can be set as any coordinate pair that falls within the rectangle above, therefore the coordinates must be in UTM Zone 16 N.
Step3: To prevent the function from failing, we will first check to ensure the coordinates are within the rectangular bounding box. If they are not, we throw an error message and exit from the script.
Step4: Now we will define the location of all the subset NEON AOP h5 files from the BRDF flight
Step5: Now we will grab all files / folders within the defined directory and then cycle through them and retain only the h5files
Step6: Now we will print the h5 files to make sure they have been included and set up a figure for plotting all of the reflectance curves
Step7: Now we will begin cycling through all of the h5 files and retrieving the information we need; we will also print the file that is currently being processed
Inside the for loop we will
1) read in the reflectance data and the associated metadata, but construct the file name from the generated file list
2) Determine the indexes of the water vapor bands (bad band windows) in order to mask out all of the bad bands
3) Read in the reflectance dataset using the NEON AOP H5 reader function
4) Check the first value of the reflectance curve (any value would work). If it is equal to the NO DATA value, then the chosen coordinate did not intersect a pixel for that flight line. We will just continue and move to the next line.
5) Apply NaN values to the areas containing the bad bands
6) Split the contents of the file name so we can get the line number for labelling in the plot.
7) Plot the curve
Step8: This plots the reflectance curves from all lines onto the same plot. Now, we will add the appropriate legend and plot labels, display and save the plot with the coordaintes in the file name so we can repeat the position of the target | Python Code:
import h5py
import csv
import numpy as np
import os
import gdal
import matplotlib.pyplot as plt
import sys
from math import floor
import time
import warnings
warnings.filterwarnings('ignore')
def h5refl2array(h5_filename):
hdf5_file = h5py.File(h5_filename,'r')
#Get the site name
file_attrs_string = str(list(hdf5_file.items()))
file_attrs_string_split = file_attrs_string.split("'")
sitename = file_attrs_string_split[1]
refl = hdf5_file[sitename]['Reflectance']
reflArray = refl['Reflectance_Data']
refl_shape = reflArray.shape
wavelengths = refl['Metadata']['Spectral_Data']['Wavelength']
#Create dictionary containing relevant metadata information
metadata = {}
metadata['shape'] = reflArray.shape
metadata['mapInfo'] = refl['Metadata']['Coordinate_System']['Map_Info']
#Extract no data value & set no data value to NaN
metadata['scaleFactor'] = float(reflArray.attrs['Scale_Factor'])
metadata['noDataVal'] = float(reflArray.attrs['Data_Ignore_Value'])
metadata['bad_band_window1'] = (refl.attrs['Band_Window_1_Nanometers'])
metadata['bad_band_window2'] = (refl.attrs['Band_Window_2_Nanometers'])
metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value
metadata['EPSG'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)
mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value
mapInfo_string = str(mapInfo); #print('Map Info:',mapInfo_string)
mapInfo_split = mapInfo_string.split(",")
#Extract the resolution & convert to floating decimal number
metadata['res'] = {}
metadata['res']['pixelWidth'] = mapInfo_split[5]
metadata['res']['pixelHeight'] = mapInfo_split[6]
#Extract the upper left-hand corner coordinates from mapInfo
xMin = float(mapInfo_split[3]) #convert from string to floating point number
yMax = float(mapInfo_split[4])
#Calculate the xMax and yMin values from the dimensions
xMax = xMin + (refl_shape[1]*float(metadata['res']['pixelWidth'])) #xMax = left edge + (# of columns * resolution)
yMin = yMax - (refl_shape[0]*float(metadata['res']['pixelHeight'])) #yMin = top edge - (# of rows * resolution)
metadata['extent'] = (xMin,xMax,yMin,yMax)
metadata['ext_dict'] = {}
metadata['ext_dict']['xMin'] = xMin
metadata['ext_dict']['xMax'] = xMax
metadata['ext_dict']['yMin'] = yMin
metadata['ext_dict']['yMax'] = yMax
hdf5_file.close()
return reflArray, metadata, wavelengths
print('Starting BRDF Analysis')
Explanation: syncID: bb90898de165446f9a0e92e1399f4697
title: "Hyperspectral Variation Uncertainty Analysis in Python"
description: "Learn to analyze the difference between rasters taken a few days apart to assess the uncertainty between days."
dateCreated: 2017-06-21
authors: Tristan Goulden
contributors: Donal O'Leary
estimatedTime: 0.5 hour
packagesLibraries: numpy, gdal, matplotlib
topics: hyperspectral-remote-sensing, remote-sensing
languagesTool: python
dataProduct:
code1: Python/remote-sensing/uncertainty/hyperspectral_variation_py.ipynb
tutorialSeries: rs-uncertainty-py-series
urlTitle: hyperspectral-variation-py
This tutorial teaches how to open a NEON AOP HDF5 file with a function,
batch processing several HDF5 files, relative comparison between several
NIS observations of the same target from different view angles, error checking.
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Open NEON AOP HDF5 files using a function
* Batch process several HDF5 files
* Complete relative comparisons between several imaging spectrometer observations of the same target from different view angles
* Error check the data.
### Install Python Packages
* **numpy**
* **csv**
* **gdal**
* **matplotlib.pyplot**
* **h5py**
* **time**
### Download Data
To complete this tutorial, you will use data available from the NEON 2017 Data
Institute teaching dataset available for download.
This tutorial will use the files contained in the 'F07A' Directory in <a href="https://neondata.sharefile.com/share/view/cdc8242e24ad4517/fo0c2f24-c7d2-4c77-b297-015366afa9f4" target="_blank">this ShareFile Directory</a>. You will want to download the entire directory as a single ZIP file, then extract that file into a location where you store your data.
<a href="https://neondata.sharefile.com/share/view/cdc8242e24ad4517/fo0c2f24-c7d2-4c77-b297-015366afa9f4" class="link--button link--arrow">
Download Dataset</a>
Caution: This dataset includes all the data for the 2017 Data Institute,
including hyperspectral and lidar datasets and is therefore a large file (12 GB).
Ensure that you have sufficient space on your
hard drive before you begin the download. If not, download to an external
hard drive and make sure to correct for the change in file path when working
through the tutorial.
The LiDAR and imagery data used to create this raster teaching data subset
were collected over the
<a href="http://www.neonscience.org/" target="_blank"> National Ecological Observatory Network's</a>
<a href="http://www.neonscience.org/science-design/field-sites/" target="_blank" >field sites</a>
and processed at NEON headquarters.
The entire dataset can be accessed on the
<a href="http://data.neonscience.org" target="_blank"> NEON data portal</a>.
These data are a part of the NEON 2017 Remote Sensing Data Institute. The complete archive may be found here -<a href="https://neondata.sharefile.com/d-s11d5c8b9c53426db"> NEON Teaching Data Subset: Data Institute 2017 Data Set</a>
### Recommended Prerequisites
We recommend you complete the following tutorials prior to this tutorial to have
the necessary background.
1. <a href="https://www.neonscience.org/neon-aop-hdf5-py"> *NEON AOP Hyperspectral Data in HDF5 format with Python*</a>
1. <a href="https://www.neonscience.org/neon-hsi-aop-functions-python"> *Band Stacking, RGB & False Color Images, and Interactive Widgets in Python*</a>
1. <a href="https://www.neonscience.org/plot-spec-sig-python/"> *Plot a Spectral Signature in Python*</a>
</div>
The NEON AOP has flown several special flight plans called BRDF
(bi-directional reflectance distribution function) flights. These flights were
designed to quantify the effect of observing targets from a variety of
different look-angles, and with varying surface roughness. This allows an
assessment of the sensitivity of the NEON imaging spectrometer (NIS) results to these parameters. The BRDF
flight plan takes the form of a star pattern with repeating overlapping flight
lines in each direction. In the center of the pattern is an area where nearly
all the flight lines overlap. This area allows us to retrieve a reflectance
curve of the same target from the many different flight lines to visualize
how they change for each acquisition. The following figure displays a BRDF
flight plan as well as the number of flightlines (samples) which are
overlapping.
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/ORNL_BRDF_flightlines.jpg">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/ORNL_BRDF_flightlines.jpg"></a>
</figure>
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/ORNL_NumberSamples.png">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/ORNL_NumberSamples.png"></a>
<figcaption> Top: Flight lines from a bi-directional reflectance distribution
function flight at ORNL. Bottom: A graphical representation of the number of
samples in each area of the sampling.
Source: National Ecological Observatory Network (NEON)
</figcaption>
</figure>
To date (June 2017), the NEON AOP has flown a BRDF flight at SJER and SOAP (D17) and
ORNL (D07). We will work with the ORNL BRDF flight and retrieve reflectance
curves from up to 18 lines and compare them to visualize the differences in the
resulting curves. To reduce the file size, each of the BRDF flight lines have
been reduced to a rectangular area covering where all lines are overlapping,
additionally several of the ancillary rasters normally included have been
removed in order to reduce file size.
We'll start off by again adding necessary libraries and our NEON AOP HDF5 reader
function.
End of explanation
BRDF_rectangle = np.array([[740315,3982265],[740928,3981839]],np.float)
Explanation: First we will define the extents of the rectangular array containing the section from each BRDF flightline.
End of explanation
x_coord = 740600
y_coord = 3982000
Explanation: Next we will define the coordinates of the target of interest. These can be set as any coordinate pair that falls within the rectangle above, therefore the coordinates must be in UTM Zone 16 N.
End of explanation
if BRDF_rectangle[0,0] <= x_coord <= BRDF_rectangle[1,0] and BRDF_rectangle[1,1] <= y_coord <= BRDF_rectangle[0,1]:
print('Point in bounding area')
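# Map coordinates to array offsets (this assumes 1 m pixels, as in the NEON reflectance subsets used here)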
y_index = floor(x_coord - BRDF_rectangle[0,0])
x_index = floor(BRDF_rectangle[0,1] - y_coord)
else:
print('Point not in bounding area, exiting')
raise Exception('exit')
Explanation: To prevent the function from failing, we will first check to ensure the coordinates are within the rectangular bounding box. If they are not, we throw an error message and exit from the script.
End of explanation
## You will need to update this filepath for your local data directory
h5_directory = "/Users/olearyd/Git/data/F07A/"
Explanation: Now we will define the location of all the subset NEON AOP h5 files from the BRDF flight
End of explanation
files = os.listdir(h5_directory)
h5_files = [i for i in files if i.endswith('.h5')]
Explanation: Now we will grab all files / folders within the defined directory and then cycle through them and retain only the h5files
End of explanation
print(h5_files)
fig=plt.figure()
ax = plt.subplot(111)
Explanation: Now we will print the h5 files to make sure they have been included and set up a figure for plotting all of the reflectance curves
End of explanation
for file in h5_files:
print('Working on ' + file)
[reflArray,metadata,wavelengths] = h5refl2array(h5_directory+file)
bad_band_window1 = (metadata['bad_band_window1'])
bad_band_window2 = (metadata['bad_band_window2'])
index_bad_window1 = [i for i, x in enumerate(wavelengths) if x > bad_band_window1[0] and x < bad_band_window1[1]]
index_bad_window2 = [i for i, x in enumerate(wavelengths) if x > bad_band_window2[0] and x < bad_band_window2[1]]
index_bad_windows = index_bad_window1+index_bad_window2
reflectance_curve = np.asarray(reflArray[y_index,x_index,:], dtype=np.float32)
if reflectance_curve[0] == metadata['noDataVal']:
continue
reflectance_curve[index_bad_windows] = np.nan
filename_split = (file).split("_")
ax.plot(wavelengths,reflectance_curve/metadata['scaleFactor'],label = filename_split[5]+' Reflectance')
Explanation: Now we will begin cycling through all of the h5 files and retrieving the information we need; we will also print the file that is currently being processed
Inside the for loop we will
1) read in the reflectance data and the associated metadata, but construct the file name from the generated file list
2) Determine the indexes of the water vapor bands (bad band windows) in order to mask out all of the bad bands
3) Read in the reflectance dataset using the NEON AOP H5 reader function
4) Check the first value of the reflectance curve (any value would work). If it is equal to the NO DATA value, then the chosen coordinate did not intersect a pixel for that flight line. We will just continue and move to the next line.
5) Apply NaN values to the areas containing the bad bands
6) Split the contents of the file name so we can get the line number for labelling in the plot.
7) Plot the curve
End of explanation
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.legend(loc='center left',bbox_to_anchor=(1,0.5))
plt.title('BRDF Reflectance Curves at ' + str(x_coord) +' '+ str(y_coord))
plt.xlabel('Wavelength (nm)'); plt.ylabel('Reflectance (%)')
fig.savefig('BRDF_uncertainty_at_' + str(x_coord) +'_'+ str(y_coord)+'.png',dpi=500,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.show()
Explanation: This plots the reflectance curves from all lines onto the same plot. Now, we will add the appropriate legend and plot labels, then display and save the plot with the coordinates in the file name so we can repeat the position of the target
End of explanation |
7,152 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Index - Back - Next
Widget List
Step1: Numeric widgets
There are many widgets distributed with IPython that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing Float with Int in the widget name, you can find the Integer equivalent.
IntSlider
Step2: FloatSlider
Step3: Sliders can also be displayed vertically.
Step4: FloatLogSlider
The FloatLogSlider has a log scale, which makes it easy to have a slider that covers a wide range of positive magnitudes. The min and max refer to the minimum and maximum exponents of the base, and the value refers to the actual value of the slider.
Step5: IntRangeSlider
Step6: FloatRangeSlider
Step7: IntProgress
Step8: FloatProgress
Step9: The numerical text boxes that impose some limit on the data (range, integer-only) impose that restriction when the user presses enter.
BoundedIntText
Step10: BoundedFloatText
Step11: IntText
Step12: FloatText
Step13: Boolean widgets
There are three widgets that are designed to display a boolean value.
ToggleButton
Step14: Checkbox
Step15: Valid
The valid widget provides a read-only indicator.
Step16: Selection widgets
There are several widgets that can be used to display single selection lists, and two that can be used to select multiple values. All inherit from the same base class. You can specify the enumeration of selectable options by passing a list (options are either (label, value) pairs, or simply values for which the labels are derived by calling str).
Dropdown
Step17: RadioButtons
Step18: Select
Step19: SelectionSlider
Step20: SelectionRangeSlider
The value, index, and label keys are 2-tuples of the min and max values selected. The options must be nonempty.
Step21: ToggleButtons
Step22: SelectMultiple
Multiple values can be selected with <kbd>shift</kbd> and/or <kbd>ctrl</kbd> (or <kbd>command</kbd>) pressed and mouse clicks or arrow keys.
Step23: String widgets
There are several widgets that can be used to display a string value. The Text and Textarea widgets accept input. The HTML and HTMLMath widgets display a string as HTML (HTMLMath also renders math). The Label widget can be used to construct a custom control label.
Text
Step24: Textarea
Step25: Label
The Label widget is useful if you need to build a custom description next to a control using similar styling to the built-in control descriptions.
Step26: HTML
Step27: HTML Math
Step28: Image
Step29: Button
Step30: Output
The Output widget can capture and display stdout, stderr and rich output generated by IPython. For detailed documentation, see the output widget examples.
Play (Animation) widget
The Play widget is useful to perform animations by iterating on a sequence of integers with a certain speed. The value of the slider below is linked to the player.
Step31: Date picker
The date picker widget works in Chrome, Firefox and IE Edge, but does not currently work in Safari because it does not support the HTML date input field.
Step32: Color picker
Step33: Controller
The Controller allows a game controller to be used as an input device.
Step34: Container/Layout widgets
These widgets are used to hold other widgets, called children. Each has a children property that may be set either when the widget is created or later.
Box
Step35: HBox
Step36: VBox
Step37: Accordion
Step38: Tabs
In this example the children are set after the tab is created. Titles for the tabs are set in the same way they are for Accordion.
Step39: Accordion and Tab use selected_index, not value
Unlike the rest of the widgets discussed earlier, the container widgets Accordion and Tab update their selected_index attribute when the user changes which accordion or tab is selected. That means that you can both see what the user is doing and programmatically set what the user sees by setting the value of selected_index.
Setting selected_index = None closes all of the accordions or deselects all tabs.
In the cells below try displaying or setting the selected_index of the tab and/or accordion.
Step40: Nesting tabs and accordions
Tabs and accordions can be nested as deeply as you want. If you have a few minutes, try nesting a few accordions or putting an accordion inside a tab or a tab inside an accordion.
The example below makes a couple of tabs with an accordion children in one of them | Python Code:
import ipywidgets as widgets
Explanation: Index - Back - Next
Widget List
End of explanation
widgets.IntSlider(
value=7,
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
Explanation: Numeric widgets
There are many widgets distributed with IPython that are designed to display numeric values. Widgets exist for displaying integers and floats, both bounded and unbounded. The integer widgets share a similar naming scheme to their floating point counterparts. By replacing Float with Int in the widget name, you can find the Integer equivalent.
IntSlider
End of explanation
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
Explanation: FloatSlider
End of explanation
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='vertical',
readout=True,
readout_format='.1f',
)
Explanation: Sliders can also be displayed vertically.
End of explanation
widgets.FloatLogSlider(
value=10,
base=10,
min=-10, # min exponent of base
max=10, # max exponent of base
step=0.2, # exponent step
description='Log Slider'
)
Explanation: FloatLogSlider
The FloatLogSlider has a log scale, which makes it easy to have a slider that covers a wide range of positive magnitudes. The min and max refer to the minimum and maximum exponents of the base, and the value refers to the actual value of the slider.
End of explanation
widgets.IntRangeSlider(
value=[5, 7],
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d',
)
Explanation: IntRangeSlider
End of explanation
widgets.FloatRangeSlider(
value=[5, 7.5],
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
Explanation: FloatRangeSlider
End of explanation
widgets.IntProgress(
value=7,
min=0,
max=10,
step=1,
description='Loading:',
bar_style='', # 'success', 'info', 'warning', 'danger' or ''
orientation='horizontal'
)
Explanation: IntProgress
End of explanation
widgets.FloatProgress(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Loading:',
bar_style='info',
orientation='horizontal'
)
Explanation: FloatProgress
End of explanation
widgets.BoundedIntText(
value=7,
min=0,
max=10,
step=1,
description='Text:',
disabled=False
)
Explanation: The numerical text boxes that impose some limit on the data (range, integer-only) impose that restriction when the user presses enter.
BoundedIntText
End of explanation
widgets.BoundedFloatText(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Text:',
disabled=False
)
Explanation: BoundedFloatText
End of explanation
widgets.IntText(
value=7,
description='Any:',
disabled=False
)
Explanation: IntText
End of explanation
widgets.FloatText(
value=7.5,
description='Any:',
disabled=False
)
Explanation: FloatText
End of explanation
widgets.ToggleButton(
value=False,
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check'
)
Explanation: Boolean widgets
There are three widgets that are designed to display a boolean value.
ToggleButton
End of explanation
widgets.Checkbox(
value=False,
description='Check me',
disabled=False
)
Explanation: Checkbox
End of explanation
widgets.Valid(
value=False,
description='Valid!',
)
Explanation: Valid
The valid widget provides a read-only indicator.
End of explanation
widgets.Dropdown(
options=['1', '2', '3'],
value='2',
description='Number:',
disabled=False,
)
Explanation: Selection widgets
There are several widgets that can be used to display single selection lists, and two that can be used to select multiple values. All inherit from the same base class. You can specify the enumeration of selectable options by passing a list (options are either (label, value) pairs, or simply values for which the labels are derived by calling str).
Dropdown
End of explanation
widgets.RadioButtons(
options=['pepperoni', 'pineapple', 'anchovies'],
# value='pineapple',
description='Pizza topping:',
disabled=False
)
Explanation: RadioButtons
End of explanation
widgets.Select(
options=['Linux', 'Windows', 'OSX'],
value='OSX',
# rows=10,
description='OS:',
disabled=False
)
Explanation: Select
End of explanation
widgets.SelectionSlider(
options=['scrambled', 'sunny side up', 'poached', 'over easy'],
value='sunny side up',
description='I like my eggs ...',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True
)
Explanation: SelectionSlider
End of explanation
import datetime
dates = [datetime.date(2015,i,1) for i in range(1,13)]
options = [(i.strftime('%b'), i) for i in dates]
widgets.SelectionRangeSlider(
options=options,
index=(0,11),
description='Months (2015)',
disabled=False
)
Explanation: SelectionRangeSlider
The value, index, and label keys are 2-tuples of the min and max values selected. The options must be nonempty.
End of explanation
widgets.ToggleButtons(
options=['Slow', 'Regular', 'Fast'],
description='Speed:',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltips=['Description of slow', 'Description of regular', 'Description of fast'],
# icons=['check'] * 3
)
Explanation: ToggleButtons
End of explanation
widgets.SelectMultiple(
options=['Apples', 'Oranges', 'Pears'],
value=['Oranges'],
#rows=10,
description='Fruits',
disabled=False
)
Explanation: SelectMultiple
Multiple values can be selected with <kbd>shift</kbd> and/or <kbd>ctrl</kbd> (or <kbd>command</kbd>) pressed and mouse clicks or arrow keys.
End of explanation
widgets.Text(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
Explanation: String widgets
There are several widgets that can be used to display a string value. The Text and Textarea widgets accept input. The HTML and HTMLMath widgets display a string as HTML (HTMLMath also renders math). The Label widget can be used to construct a custom control label.
Text
End of explanation
widgets.Textarea(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
Explanation: Textarea
End of explanation
widgets.HBox([widgets.Label(value="The $m$ in $E=mc^2$:"), widgets.FloatSlider()])
Explanation: Label
The Label widget is useful if you need to build a custom description next to a control using similar styling to the built-in control descriptions.
End of explanation
widgets.HTML(
value="Hello <b>World</b>",
placeholder='Some HTML',
description='Some HTML',
)
Explanation: HTML
End of explanation
widgets.HTMLMath(
value=r"Some math and <i>HTML</i>: \(x^2\) and $$\frac{x+1}{x-1}$$",
placeholder='Some HTML',
description='Some HTML',
)
Explanation: HTML Math
End of explanation
file = open("images/WidgetArch.png", "rb")
image = file.read()
widgets.Image(
value=image,
format='png',
width=300,
height=400,
)
Explanation: Image
End of explanation
widgets.Button(
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me',
icon='check'
)
Explanation: Button
End of explanation
play = widgets.Play(
# interval=10,
value=50,
min=0,
max=100,
step=1,
description="Press play",
disabled=False
)
slider = widgets.IntSlider()
widgets.jslink((play, 'value'), (slider, 'value'))
widgets.HBox([play, slider])
Explanation: Output
The Output widget can capture and display stdout, stderr and rich output generated by IPython. For detailed documentation, see the output widget examples.
Play (Animation) widget
The Play widget is useful to perform animations by iterating on a sequence of integers with a certain speed. The value of the slider below is linked to the player.
End of explanation
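The Output widget described above has no code cell of its own here, so the following is a minimal sketch of capturing printed text with it (standard ipywidgets usage):
out = widgets.Output(layout={'border': '1px solid black'})
with out:
    print('Hello from inside the Output widget')
out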
widgets.DatePicker(
description='Pick a Date',
disabled=False
)
Explanation: Date picker
The date picker widget works in Chrome, Firefox and IE Edge, but does not currently work in Safari because it does not support the HTML date input field.
End of explanation
widgets.ColorPicker(
concise=False,
description='Pick a color',
value='blue',
disabled=False
)
Explanation: Color picker
End of explanation
widgets.Controller(
index=0,
)
Explanation: Controller
The Controller allows a game controller to be used as an input device.
End of explanation
items = [widgets.Label(str(i)) for i in range(4)]
widgets.Box(items)
Explanation: Container/Layout widgets
These widgets are used to hold other widgets, called children. Each has a children property that may be set either when the widget is created or later.
Box
End of explanation
items = [widgets.Label(str(i)) for i in range(4)]
widgets.HBox(items)
Explanation: HBox
End of explanation
items = [widgets.Label(str(i)) for i in range(4)]
left_box = widgets.VBox([items[0], items[1]])
right_box = widgets.VBox([items[2], items[3]])
widgets.HBox([left_box, right_box])
Explanation: VBox
End of explanation
accordion = widgets.Accordion(children=[widgets.IntSlider(), widgets.Text()])
accordion.set_title(0, 'Slider')
accordion.set_title(1, 'Text')
accordion
Explanation: Accordion
End of explanation
tab_contents = ['P0', 'P1', 'P2', 'P3', 'P4']
children = [widgets.Text(description=name) for name in tab_contents]
tab = widgets.Tab()
tab.children = children
for i in range(len(children)):
tab.set_title(i, str(i))
tab
Explanation: Tabs
In this example the children are set after the tab is created. Titles for the tabs are set in the same way they are for Accordion.
End of explanation
tab.selected_index = 3
accordion.selected_index = None
Explanation: Accordion and Tab use selected_index, not value
Unlike the rest of the widgets discussed earlier, the container widgets Accordion and Tab update their selected_index attribute when the user changes which accordion or tab is selected. That means that you can both see what the user is doing and programmatically set what the user sees by setting the value of selected_index.
Setting selected_index = None closes all of the accordions or deselects all tabs.
In the cells below try displaying or setting the selected_index of the tab and/or accordion.
End of explanation
tab_nest = widgets.Tab()
tab_nest.children = [accordion, accordion]
tab_nest.set_title(0, 'An accordion')
tab_nest.set_title(1, 'Copy of the accordion')
tab_nest
Explanation: Nesting tabs and accordions
Tabs and accordions can be nested as deeply as you want. If you have a few minutes, try nesting a few accordions or putting an accordion inside a tab or a tab inside an accordion.
The example below makes a couple of tabs, each holding the accordion from earlier as its child
End of explanation |
7,153 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Decision Tree of Observable Operators
Part 3
Step1: ...by emitting all of the items emitted by corresponding Observables
flat_map(flat_map)
Step2: flat_map_latest(select_switch)
Step3: concat_map
Step4: many_select
manySelect internally transforms each item emitted by the source Observable into an Observable that emits that item and all items subsequently emitted by the source Observable, in the same order.
So, for example, it internally transforms an Observable that emits the numbers 1,2,3 into three Observables
Step5: ... based on ALL of the items that preceded them scan
Step6: ... by attaching a timestamp to them timestamp
Step7: ... into an indicator of the amount of time that lapsed before the emission of the item time_interval | Python Code:
reset_start_time(O.map, title='map') # alias is "select"
# warming up:
d = subs(O.from_((1, 2 , 3)).map(lambda x: x * 2))
rst(O.pluck, title='pluck')
d = subs(O.from_([{'x': 1, 'y': 2}, {'x': 3, 'y': 4}]).pluck('y'))
class Coord:
def __init__(self, x, y):
self.x = x
self.y = y
rst(title='pluck_attr')
d = subs(O.from_([Coord(1, 2), Coord(3, 4)]).pluck_attr('y'))
Explanation: A Decision Tree of Observable Operators
Part 3: Transformation
source: http://reactivex.io/documentation/operators.html#tree.
(transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, axiros)
This tree can help you find the ReactiveX Observable operator you’re looking for.
See Part 1 for Usage and Output Instructions.
We also require acquaintance with the marble diagrams feature of RxPy.
<h2 id="tocheading">Table of Contents</h2>
<div id="toc"></div>
I want emit the items from an Observable after transforming them
... one at a time with a function map / pluck / pluck_attr
End of explanation
rst(O.flat_map)
stream = O.range(1, 2)\
.flat_map(lambda x: O.range(x, 2)) # alias: flat_map
d = subs(stream)
rst() # from an array
s1 = O.from_(('a', 'b', 'c'))
d = subs(s1.flat_map(lambda x: x))
d = subs(s1.flat_map(lambda x, i: (x, i)))
#d = subs(O.from_(('a', 'b', 'c')).flat_map(lambda x, i: '%s%s' % (x, i))) # ident, a string is iterable
header('using a result mapper')
def res_sel(*a):
# in contrast to the RxJS example I get only 3 parameters, see output
return '-'.join([str(s) for s in a])
# for every el of the original stream we get *additional* two elements: the el and its index:
d = subs(s1.flat_map(lambda x, i: (x, i) , res_sel))
# ident, flat_map flattens the inner stream:
d = subs(s1.flat_map(lambda x, i: O.from_((x, i)), res_sel))
Explanation: ...by emitting all of the items emitted by corresponding Observables
flat_map(flat_map)
End of explanation
rst(O.flat_map_latest) # alias: select_switch
d = subs(O.range(1, 2).flat_map_latest(lambda x: O.range(x, 2)))
# maybe better to understand: A, B, C are emitted always more recent, then the inner streams' elements
d = subs(O.from_(('A', 'B', 'C')).flat_map_latest(
lambda x, i: O.from_(('%s%s-a' % (x, i),
'%s%s-b' % (x, i),
'%s%s-c' % (x, i),
))))
# with emission delays: now the inner stream emits faster than the outer one:
outer = O.from_marbles('A--B--C|').to_blocking()
inner = O.from_marbles('a-b-c|').to_blocking()
# the inner .map is to show also outer's value
d = subs(outer.flat_map_latest(lambda X: inner.map(lambda x: '%s%s' % (X, x))))
Explanation: flat_map_latest(select_switch)
End of explanation
rst(O.for_in)
abc = O.from_marbles('a-b|').to_blocking()
# abc times 3, via:
d = subs(O.for_in([1, 2, 3],
lambda i: abc.map(
# just to get the results of array and stream:
lambda letter: '%s%s' % (letter, i) )))
sleep(0.5)
# we can also for_in from an observable.
# TODO: Don't understand the output though - __doc__ says only arrays.
d = subs(O.for_in(O.from_((1, 2, 3)),
lambda i: abc.map(lambda letter: '%s%s' % (letter, i) )).take(2))
Explanation: concat_map
End of explanation
rst(O.many_select)
stream = O.from_marbles('a-b-c|')
# TODO: more use cases
d = subs(stream.many_select(lambda x: x.first()).merge_all())
Explanation: many_select
manySelect internally transforms each item emitted by the source Observable into an Observable that emits that item and all items subsequently emitted by the source Observable, in the same order.
So, for example, it internally transforms an Observable that emits the numbers 1,2,3 into three Observables: one that emits 1,2,3, one that emits 2,3, and one that emits 3.
Then manySelect passes each of these Observables into a function that you provide, and emits, as the emissions from the Observable that manySelect returns, the return values from those function calls.
In this way, each item emitted by the resulting Observable is a function of the corresponding item in the source Observable and all of the items emitted by the source Observable after it.
End of explanation
rst(O.scan)
s = O.from_marbles("1-2-3-4---5").to_blocking()
d = subs(s.scan(lambda x, y: int(x) + int(y), seed=10000))
Explanation: ... based on ALL of the items that preceded them scan
End of explanation
rst(O.timestamp)
# the timestamps are objects, not dicts:
d = subs(marble_stream('a-b-c|').timestamp().pluck_attr('timestamp'))
Explanation: ... by attaching a timestamp to them timestamp
End of explanation
rst(O.time_interval)
d = subs(marble_stream('a-b--c|').time_interval().map(lambda x: x.interval))
Explanation: ... into an indicator of the amount of time that lapsed before the emission of the item time_interval
End of explanation |
7,154 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solving the HJB
The HJB equation is used in dynamic programming to solve optimisation problems. Optimisation problems occur in all walks of life and some can even be solved. And some of those that can be solved are best solved with dynamic programming, a recursive evaluation of the best decision.
This notebook aims to explain by way of demonstration how to solve the HJB and find the optimal set of decisions, but also to provide an easy-to-use base to formulate and solve your optimisation problems.
Step11: $$
v(s) = \min_x(cost(x,s) + v(new state(x,s)))
$$
Step15: We can also solve classic dynamic programming problems such as the knapsack problem, the towers of Hanoi or the Fibonacci number calculation. Blank functions are outlined below.
The functions must fulfill a range of conditions
Step16: We can solve a very simple pump optimisation where the state of water in a tank is given by h and described by
Step17: We can have more than one state variable. For example we can add a second tank and now pump to either of them
Step21: One final example is the checkerboard problem as outlined here
Step25: An example we can solve is the water allocation problem from the tutorial sheets
Step30: Stochastic Programming
$$
v(s,i) = \min_x( cost(x,s,i)+\sum_j (p_{i,j} \times v(newstate(x,s,j))) )
$$
The probability $p_{i,j}$ is the probability of jumping from state $i$ to state $j$. Currently the transition matrix is time-invariant; however, a time-varying matrix could easily be implemented with P as a list of lists.
Step31: The cost of operating a pump with the wind turbine power input implied by a given error state is given by
Step33: The cost of operating a pump with a given wind turbine power input is given by
Step34: $$
s_{new} = \begin{cases} (t+1,h-1,i) & \text{if } x = 0 \\ (t+1,h+1,i) & \text{if } x = 1 \\ (t+1,h+1.5,i) & \text{if } x = 2\end{cases}
$$
Step35: state is given by $s = (t,h,i)$
$$
v(s) = \min_x(cost(x,s) + \sum_j p_{ij} v(new state(x,s)))
$$ | Python Code:
import numpy as np
import time
import matplotlib.pyplot as plt
Explanation: Solving the HJB
The HJB equation is used in dynamic programming to solve optimisation problems. Optimisation problems occur in all walks of life and some can even be solved. And some of those that can be solved are best solved with dynamic programming, a recursive evaluation of the best decision.
This notebook aims to explain by way of demonstration how to solve the HJB and find the optimal set of decisions, but also to provide an easy-to-use base to formulate and solve your optimisation problems.
End of explanation
class DynamicProgram(object):
Generate a dynamic program to find a set of optimal decisions using the HJB.
Define the program by:
Setting initial states via: set_inital_state(list or int)
Setting the number of steps via: set_step_number(int)
Add a set of decisions: add_decisions_set(set)
Add a cost function: add_cost_function(function in terms of state )
Add a state change equation: add_state_eq(function(state))
Add an expression for the last value: add_final_value_expression(function(state,settings))
Add limits on the states: add_state_limits(lower=list or int,upper = list or int)
See below for examples:
def __init__(self):
self.settings = {
'Lower state limits' : [],
'Upper state limits' : [],
'x_set' : set(),
'cache' : {},}
def add_state_eq(self,function):
Returns a tuple describing the states.
Remember to increment the first state, which by convention is the step number.
Load additional parameters (usually needed for the cost and state equations) via global variables.
Example of a function that changes the state by the decision:
def new_state(x,s):
return (s[0]+1,s[1]+x) #Return a tuple, use (s[:-1]+(5,)) to slice tuples.
self.settings['State eq.'] = function
def add_cost_function(self,function):
Returns a float or integer describing the cost value.
Load additional parameters (usually needed for the cost and state equations) via global variables.
Example: a function that simply returns the decision as the cost:
def cost(x,s):
return x
self.settings['Cost eq.'] = function
def add_final_value_expression(self, function,):
Returns a float or integer as the final value:
Example is a function that returns the ratio of the initial state and the final state:
def val_T(s,settings):
return s[1]/float(settings['Initial state'][1])
self.settings['Final value'] = function
def set_step_number(self,step_number):
Number of stages / steps. Integer
self.settings['T'] = step_number
def set_inital_state(self,intial_values):
Provide the initial state of the states other than the stage number
if type(intial_values) is list:
self.settings['Initial state'] = intial_values
self.settings['Initial state'].insert(0,0)
elif type(intial_values) is int:
self.settings['Initial state'] = [intial_values]
self.settings['Initial state'].insert(0,0)
self.settings['Initial state'] = tuple(self.settings['Initial state'])
def add_state_limits(self,lower=[],upper=[]):
Add the limits on the state other than the stage number, leave empty if none
if type(lower) is list:
self.settings['Lower state limits'].extend(lower)
self.settings['Upper state limits'].extend(upper)
elif type(lower) is int:
self.settings['Lower state limits'] = [lower]
self.settings['Upper state limits'] = [upper]
def solve(self):
Solves the HJB. Returns the optimal value.
Path and further info is stored in the cache. Access it via
retrieve_decisions()
self.settings['cache'] ={}
return self._hjb_(self.settings['Initial state'])
def retrieve_decisions(self):
Retrieve the decisions that led to the optimal value
Returns the cost for the different states, the optimal schedule and the states
that the schedule results in.
sched = np.ones(self.settings['T'])*np.nan
cost_calc= 0
states = []
s = self.settings['Initial state']
t = 0
while t < self.settings['T']:
sched[t] = self.settings['cache'][s][1]
cost_calc += self.settings['Cost eq.'](sched[t],s)
states.append(s[1:])
s = self.settings['State eq.'](sched[t],s)
t += 1
states.append(s[1:])
return cost_calc, sched, states
def return_settings(self):
return self.settings
def return_cache(self):
return self.settings['cache']
def add_decisions_set(self,set_of_decisions):
Add a set of permissible decisions. Must be a set of unique integers.
if set(set_of_decisions) != set_of_decisions:
raise TypeError('Expected a set of unique values, use set() to declare a set')
self.settings['x_set'] = set(set_of_decisions)
def add_setting(self,key,value):
self.settings[key] = value
def _hjb_(self,s):
if self.settings['cache'].has_key(s):
return self.settings['cache'][s][0]
# check state bounds
for c,i in enumerate(s[1:]):
if i < self.settings['Lower state limits'][c] or i > self.settings['Upper state limits'][c]:
return float('inf')
#Check if reached time step limit:
if s[0] == self.settings['T']:
m = self.settings['Final value'](s,self.settings)
self.settings['cache'][s] = [m, np.nan]
return m
# Else enter recursion
else:
### Make decision variable vector ###
######################################################################################
# Faster but only with integer decisions
p=[]
for x in self.settings['x_set']:
p.append(self.settings['Cost eq.'](x,s)+self._hjb_(self.settings['State eq.'](x,s)))
m = min(p)
### Slower but with any immutable decisions, uncomment if desired: ###
# p ={}
# for x in self.settings['x_set']:
# p[x] = self.settings['Cost eq.'](x,s)+self._hjb_(self.settings['State eq.'](x,s))
# m = min(p, key=p.get)
########################################################################################
##############################################################
## Finding the index of the best solution ##
for x in self.settings['x_set']:
if m == p[x]:
pp = x
################################################################
self.settings['cache'][s] = [m, pp]
return m
Explanation: $$
v(s) = \min_x(cost(x,s) + v(new state(x,s)))
$$
End of explanation
#help(DynamicProgram)
def cost(x,s):
Return a float or integer
pass
def new_state(x,s):
Return a tuple
pass
def val_T(s,settings):
Return a float or int
pass
Explanation: We can also solve classic dynamic programming problems such as the knapsack problem, the towers of Hanoi or the Fibonacci number calculation. Blank functions are outlined below.
The functions must fulfill a range of conditions:
$$
f:\mathcal{S}^n\times x \rightarrow \mathbb{R}
$$
where $\mathcal{S}$ is the set of permissible states, $x$ the decision variable and $n$ the number of dimensions of the state. These are defined by $x \in \mathcal{X}$, the \texttt{set()} of permissible decisions. The state $s \in \mathcal{S}^n$ where $n \geq 2$ and $\mathcal{S} \subset \mathbb{R} $ and is finite.
The new state is defined by a function such that:
$$
f:\mathcal{S}^n\times x \rightarrow \mathcal{S}^n
$$
The value of the final stage is defined as:
$$
f:\mathcal{S}^n \rightarrow \mathbb{R}
$$
which as an implementation detail has the required argument settings, to which features can be added through add_setting(key,value) where key must be a new and unique dictionary key and value may be any permissible dictionary entry.
End of explanation
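As a small illustration of how the blank template above can be filled in, here is a made-up toy problem (not from the original notebook): reach position 4 or higher in 3 steps, with step sizes 0, 1 or 2, while minimising the sum of squared step sizes.
def step_cost(x, s):
    return x ** 2

def step_state(x, s):
    return (s[0] + 1, s[1] + x)

def step_val_T(s, settings):
    return 0 if s[1] >= 4 else float('inf')

toy = DynamicProgram()
toy.set_step_number(3)
toy.add_decisions_set({0, 1, 2})
toy.add_cost_function(step_cost)
toy.add_state_eq(step_state)
toy.add_final_value_expression(step_val_T)
toy.add_state_limits(lower=0, upper=10)
toy.set_inital_state(0)
print toy.solve()                # optimal total cost (6 here, e.g. steps 2, 1, 1)
print toy.retrieve_decisions()   # (cost, schedule, visited states)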
def simple_cost(x,s):
tariff = [19, 8, 20, 3, 12, 14, 0, 4, 3, 13, 11, 13, 13, 11, 16, 14, 16,
19, 1, 8, 0, 4, 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
12, 3, 18, 15, 3, 10, 12, 6, 3, 5, 11, 0, 11, 8, 10, 11, 5,
15, 8, 2, 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
9, 10, 13, 7, 7, 1, 12, 2, 2, 1, 5, 8, 4, 0, 11, 2, 5,
16, 8, 1, 17, 16, 3, 0, 4, 16, 0, 7]
return tariff[s[0]]*x
def val_T(s,settings):
if s[1] < settings['Initial state'][1]:
return float('inf')
else:
return 0
def simple_state(x,s):
#print s
if x == 0:
return (s[0]+1,s[1]-1)
elif x == 1:
return (s[0]+1,s[1]+1)
elif x == 2:
return (s[0]+1,s[1]+1.5)
pumping = DynamicProgram()
pumping.set_step_number(96)
pumping.add_decisions_set({0,1,2})
pumping.add_cost_function(simple_cost)
pumping.add_state_eq(simple_state)
pumping.add_final_value_expression(val_T)
pumping.add_state_limits(lower=0,upper = 200)
pumping.set_inital_state(100)
pumping.return_settings()
#pumping.retrieve_decisions()
Explanation: We can solve a very simple pump optimisation where the state of water in a tank is given by h and described by:
$$
s_{new} = \begin{cases} (t+1,h-1) & \text{if } x = 0 \\ (t+1,h+1) & \text{if } x = 1 \\ (t+1,h+1.5) & \text{if } x = 2\end{cases}
$$
The operating costs are described by:
$$
cost = tariff(t)\times x
$$
where $x$ is the decision variable.
The final value is given by:
$$
V_T = \begin{cases} 0 & \text{if } h_T \geq h_0 \\ \infty & \text{otherwise} \end{cases}
$$
End of explanation
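The cell above only builds the program; a short usage sketch would be:
print pumping.solve()                                   # optimal pumping cost
cost_calc, sched, levels = pumping.retrieve_decisions()
print sched    # 0 = no pumping (level falls by 1), 1 = pump, 2 = pump at the higher rate
print levels   # resulting tank states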
def simple_state2(x,s):
if x == 0:
return (s[0]+1,s[1]-1,s[2]-1)
elif x == 1:
return (s[0]+1,s[1]+1,s[2])
elif x == 2:
return (s[0]+1,s[1] ,s[2]+2)
# We also need to update the final value function.
def val_T2(s,settings):
if s[1] < settings['Initial state'][1] or s[2] < settings['Initial state'][2]:
return float('inf')
else:
return 0
# For now we leave the cost function the same, but that could also be changed.
Explanation: We can have more than one state variable. For example we can add a second tank and now pump to either of them:
Here they have similar equations, but they can be completely independent of each other. Any cost and state function that meets the requirements above is allowed.
$$
s_{new} = \begin{cases} (t+1,h_1-1,h_2-1) & \text{if } x = 0 \\ (t+1,h_1+1,h_2) & \text{if } x = 1 \\ (t+1,h_1,h_2+2) & \text{if } x = 2\end{cases}
$$
End of explanation
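A sketch of wiring the two-tank functions above into a program. The 24-step horizon, the initial levels of 100 in each tank and the 0-200 limits are illustrative assumptions, not values from the original notebook:
pumping2 = DynamicProgram()
pumping2.set_step_number(24)               # shorter horizon keeps the state space small
pumping2.add_decisions_set({0, 1, 2})
pumping2.add_cost_function(simple_cost)    # reuse the single-tank tariff cost
pumping2.add_state_eq(simple_state2)
pumping2.add_final_value_expression(val_T2)
pumping2.add_state_limits(lower=[0, 0], upper=[200, 200])
pumping2.set_inital_state([100, 100])
print pumping2.solve()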
board = np.array([[1,3,4,6],
[2,6,2,1],
[7,3,2,1],
[0,4,2,9]])
def cost(x,s):
Return a float or integer
global board
return board[s[0],s[1]]
def new_state(x,s):
Return a tuple
global board
return (int(s[0]+1),int(s[1]+x))
def val_T(s,settings):
Return a float or int
global board
return board[s[0],s[1]]
Checkerboard = DynamicProgram()
Checkerboard.add_cost_function(cost)
Checkerboard.add_decisions_set({-1,0,1})
Checkerboard.add_final_value_expression(val_T)
Checkerboard.add_state_eq(new_state)
Checkerboard.add_state_limits(lower=-1,upper=3)
Checkerboard.set_inital_state(2)
Checkerboard.set_step_number(3)
Checkerboard.solve()
print Checkerboard.retrieve_decisions()
Explanation: One final example is the checkerboard problem as outlined here: https://en.wikipedia.org/wiki/Dynamic_programming#Checkerboard
End of explanation
import math
Costs = np.array([[100,0.1, 10,0.6],
[50, 0.4, 10,0.8],
[100,0.2, 25,0.4]])
def value(x,s):
global Costs
return Costs[s[0],0]*(1-math.exp(-Costs[s[0],1]*x))
def cost(x,s):
Return a float or integer: the negative net benefit, since the HJB minimises
global Costs
return -value(x,s) + Costs[s[0],2]*x**Costs[s[0],3]
def new_state(x,s):
Return a tuple: (next user index, water still unallocated)
return (s[0]+1, s[1]-x)
def val_T(s,settings):
Return a float or int: any left-over water is simply unused
return 0
Explanation: An example we can solve is the water allocation problem from the tutorial sheets:
Consider a water supply allocation problem. Suppose that a quantity Q can be
allocated to three water users (indices j=1, 2 and 3): what is the allocation $x_j$ that
maximises the total net benefits?
The gross benefit resulting from the allocation of $x_j$ to j is:
$$
a_j(1-exp(-b_jx_j))
$$
Subject to $ (a_j, b_j > 0). $
Moreover, the costs of this allocation are:
$c_jx_j^{d_j}$ ($c_j$, $d_j >0$ and $d_j<1$, because of economy of scale).
The values of the constants are:
Q=5
| | $a_j$|$b_j$|$c_j$|$d_j$|
|---|---|---|---|---|
|j=1 |100|0.1| 10|0.6|
|j=2 | 50|0.4| 10|0.8|
|j=3 |100|0.2| 25|0.4|
Solve the problem in the discrete case (i.e. assuming that $x_j$ is an integer).
The optimisation problem is therefore:
$$
Maximise \sum_j \left( a_j(1-exp(-b_jx_j)) - c_jx_j^{d_j} \right)
$$
$$
subject to: \sum_j x_j \leq Q
$$
$$
x_j \geq 0
$$
This problem is a non-linear mixed integer problem and quite expensive to solve with branch and bound. But with DP it is much easier. (The continuous case may be much easier as an NLP than as a DP; this shows how important it is to pay attention to the problem type.)
The decisions are 0 ... 5
The states are the amount of water still available for allocation.
End of explanation
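A sketch of wiring the allocation functions above into the DynamicProgram class; the stage is the user index and the second state entry is the water still unallocated:
alloc = DynamicProgram()
alloc.set_step_number(3)                    # one stage per user
alloc.add_decisions_set({0, 1, 2, 3, 4, 5}) # integer allocations 0..Q
alloc.add_cost_function(cost)
alloc.add_state_eq(new_state)
alloc.add_final_value_expression(val_T)
alloc.add_state_limits(lower=0, upper=5)
alloc.set_inital_state(5)                   # Q = 5 units available
print -alloc.solve()                        # maximum total net benefit (sign flipped back)
print alloc.retrieve_decisions()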
class StochasticProgram(DynamicProgram):
Adds a stochastic component to the dynamic program.
The state is now s, where s[0] is the step, s[1:-1] are the states of the system and s[-1] is the stochastic state.
The transition matrix for the Markov chain describing the stochastic behaviour is added by:
add_transition_matrix(P) with P as a list of lists.
def add_transition_matrix(self,P):
Add the transition matrix as list of lists
eg. P = [[0.4,0.5,0.1],
[0.2,0.6,0.2],
[0.1,0.5,0.4]]
self.settings['P'] = np.array(P)
self.settings['Len P'] = len(P)
def retrieve_decisions(self):
Retrieve the decisions that led to the optimal value
Returns the cost for the different states, the optimal schedule and the states that the schedule results in.
schedule = []
cost_calc= np.zeros(self.settings['Len P'])
states = []
for i in range(self.settings['Len P']):
schedule_part = []
states_part = []
s = self.settings['Initial state']
t = 0
while t < self.settings['T']:
schedule_part.append(self.settings['cache'][s][1])
cost_calc[i] += self.settings['Cost eq.'](schedule_part[t],s)
states_part.append(s[1:])
s = self.settings['State eq.'](schedule_part[t],(s[:-1]+(i,) ) )
t += 1
states_part.append(s[1:])
states.append(states_part)
schedule.append(schedule_part)
return cost_calc, schedule, states
def solve(self):
Solves the HJB. Returns the optimal value.
Path and further info is stored in the cache. Access it via
retrieve_decisions()
return self._hjb_stoch_(self.settings['Initial state'])
def _hjb_stoch_(self,s):
if self.settings['cache'].has_key(s):
return self.settings['cache'][s][0]
# check state bounds
for c,i in enumerate(s[1:-1]):
if i < self.settings['Lower state limits'][c] or i > self.settings['Upper state limits'][c]:
return float('inf')
#Check if reached time step limit:
if s[0] == self.settings['T']:
m = self.settings['Final value'](s,self.settings)
self.settings['cache'][s] = [m, np.nan]
return m
# Else enter recursion
else:
p=[]
for x in self.settings['x_set']:
#future = 0
# for i in range(self.settings['Len P']):
# #print self.hjb_stoch(self.settings['State eq.'](x,(s[0:-1]+(i,)))), self.settings['P'][s[-1],i]
# future += self.hjb_stoch(self.settings['State eq.'](x,(s[0:-1]+(i,))))*self.settings['P'][s[-1],i]
future = sum(self._hjb_stoch_(self.settings['State eq.'](x,(s[0:-1]+ (i,))))
*self.settings['P'][s[-1],i] for i in range(self.settings['Len P']))
p.append(self.settings['Cost eq.'](x,s) + future)
m = min(p)
for x in self.settings['x_set']:
if m == p[x]:
pp = x
self.settings['cache'][s] = [m, pp]
return m
help(StochasticProgram)
Explanation: Stochastic Programming
$$
v(s,i) = \min_x( cost(x,s,i)+\sum_j (p_{i,j} \times v(newstate(x,s,j))) )
$$
The probability $p_{i,j}$ is the probability of jumping from state $i$ to state $j$. Currently the transition matrix is time-invariant; however, a time-varying matrix could easily be implemented with P as a list of lists.
End of explanation
# Convention s = t,h,j
def stoch_simple_state(x,s):
#print s
if x == 0:
return (s[0]+1,s[1]-1,s[2])
elif x == 1:
return (s[0]+1,s[1]+1,s[2])
elif x == 2:
return (s[0]+1,s[1]+1.5,s[2])
def err_corr_wind_power_cost(x,s):
Tariff = [5,5,5,5,5,8,8,8,8,8,12,12,12,12,12,50,50,50,50,20,20,6,5,5]
Wind = [46, 1, 3, 36, 30, 19, 9, 26, 35, 5, 49, 3, 6, 36, 43, 36, 14,
34, 2, 0, 0, 30, 13, 36]
diff = np.array([-1,0,1])*3
Export_price = 5.5
wind_out = Wind[s[0]]+diff[s[2]]
if wind_out <= 0:
wind_out = 0
power_con = x*60-wind_out
if power_con >= 0:
return power_con*Tariff[s[0]]
else:
return power_con*Export_price
def val_T(s,settings):
if s[1] < settings['Initial state'][1]:
return float('inf')
else:
return 0
transition = np.array([[0.4,0.5,0.1],[0.2,0.6,0.2],[0.1,0.5,0.4]])
pumping_stoch = StochasticProgram()
pumping_stoch.add_decisions_set({0,1,2})
pumping_stoch.add_cost_function(err_corr_wind_power_cost)
pumping_stoch.add_state_eq(stoch_simple_state)
pumping_stoch.add_final_value_expression(val_T)
pumping_stoch.add_state_limits(lower=[0,0],upper = [200,3])
pumping_stoch.set_inital_state([100,1])
pumping_stoch.set_step_number(24)
pumping_stoch.add_transition_matrix(transition)
#print pumping_stoch.settings
pumping_stoch.solve()
#pumping_stoch.retrieve_decisions()
pumping_stoch.retrieve_decisions()
Explanation: The cost of operating the pump with the wind turbine power input implied by a given error state is:
$$
cost(x,t,h,j) := \begin{cases} T(t) \times (x \times P_p - W(t,j)) & \text{if } +ve \\
E_{xp} \times (x \times P_p - W(t,j)) & \text{if } -ve\end{cases}
$$
where $W(t,j)$ is the wind power output at time $t$ with an error state $j$.
End of explanation
def wind_power_cost(x,s):
Very simple cost function for a pump with wind turbine power
Tariff = [5,5,5,5,5,8,8,8,8,8,12,12,12,12,12,50,50,50,50,20,20,6,5,5]
Wind = [46, 1, 3, 36, 30, 19, 9, 26, 35, 5, 49, 3, 6, 36, 43, 36, 14,
34, 2, 0, 0, 30, 13, 36]
Export_price = 5.5
power_con = x*60-Wind[s[0]]
if power_con >= 0:
return power_con*Tariff[s[0]]
else:
return power_con*Export_price
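# Note: stoch_wind_power_cost below relies on a module-level `settings` dict
# (with the transition matrix stored under 'P'), carried over from an earlier
# function-based draft of this notebook; it is not defined by the class-based
# code above, so that cell is only a sketch of the expected-wind variant.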
def stoch_wind_power_cost(x,s):
Tariff = [5,5,5,5,5,8,8,8,8,8,12,12,12,12,12,50,50,50,50,20,20,6,5,5]
Wind = [46, 1, 3, 36, 30, 19, 9, 26, 35, 5, 49, 3, 6, 36, 43, 36, 14,
34, 2, 0, 0, 30, 13, 36]
Export_price = 5.5
wind_out = sum(Wind[s[0]]*i for i in settings['P'][s[2]])
power_con = x*60-wind_out
if power_con >= 0:
return power_con*Tariff[s[0]]
else:
return power_con*Export_price
Explanation: The cost of operating a pump with a given wind turbine power input is given by:
$$
cost(x,t,h) := \begin{cases} T(t) \times (x \times P_p - W(t)) & \text{if } +ve \\
E_{xp} \times (x \times P_p - W(t)) & \text{if } -ve\end{cases}
$$
where $x$ is the decision variable, $W(t)$ is the wind turbine output in time step $t$, $P_p$ is the pump power and $E_{xp}$ is the export price.
End of explanation
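# The remaining cells are scratch work kept from an earlier function-based
# draft: they assume a module-level `settings` dict and hjb()/stoch_hjb()
# functions rather than the DynamicProgram/StochasticProgram classes above.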
assert(hjb((settings['T'],settings['H_init']-1)) == 10000)
assert(hjb((settings['T'],settings['H_init'])) == 0)
assert(hjb((settings['T']-1,settings['H_min']-1)) == 10000)
assert(hjb((settings['T']-1,settings['H_max']+1)) == 10000)
Explanation: $$
s_{new} = \begin{cases} (t+1,h-1,i) & \text{if } x = 0 \\ (t+1,h+1,i) & \text{if } x = 1 \\ (t+1,h+1.5,i) & \text{if } x = 2\end{cases}
$$
End of explanation
def make_schedule(settings):
sched = np.ones(settings['T'])*np.nan
cost_calc= 0
elev = np.ones(settings['T']+1)*np.nan
s = settings['Initial state']
t = 0
while t < settings['T']:
sched[t] = settings['cache'][s][1]
print sched[t]
cost_calc += settings['Cost eq.'](sched[t],s)
elev[t] = s[1]
s = settings['State eq.'](sched[t],s)
t += 1
elev[settings['T']] = s[1]
return cost_calc, sched, elev
def make_schedule2(settings):
sched_stack = []
cost_summary = []
string_stack = []
elev = np.ones(settings['T']+1)*np.nan
for ij in [0,1,2]:
cost_calc = 0
string_stack.insert(ij,[])
s = settings['Initial state']
t = 0
#string_stack[ij].insert(0,[])
#string_stack[ij].insert(0,'{0:2} {1} {2}'.format(t,settings['cache'][s][1], s[1:]))
string_stack[ij].insert(0,'{0}'.format(s))
while t < settings['T']:
state = tuple(sk if sk in s[:-1] else ij for sk in s )
print s, s[:-1], state
x = settings['cache'][state][1]
print x
cost_calc += settings['Cost eq.'](x,s)
elev[t] = s[1]
s = settings['State eq.'](x,s)
t += 1
string_stack[ij].insert(t,'{0} {1} {2}'.format(x,s[1:],ij) )
elev[settings['T']] = s[1]
return string_stack, cost_calc
make_schedule2(settings)
settings['cache']
def stoch_hjb(s):
global settings
if settings['cache'].has_key(s):
return settings['cache'][s][0]
if s[0] == settings['T'] and s[1] < settings['H_init']:
return 10000
elif s[0] == settings['T'] and s[1] >= settings['H_init']:
return 0
elif s[1] < settings['H_min'] or s[1] > settings['H_max']:
return 10000
else:
p=[]
for x in settings['x_set']:
future = sum(stoch_hjb(settings['State eq.'](x,(s[0],s[1],i)))
*settings['P'][s[2]][i] for i in [0,1,2])
p.append(settings['Cost eq.'](x,s) + future)
m = min(p)
for x in settings['x_set']:
if m == p[x]:
pp = x
settings['cache'][s] = [m, pp]
return m
A = []
for i in [0,1,2]:
A.insert(i,[])
print A
for t in range(5):
A[i].insert(t,i*t**2)
A
cost_calc, sched, elev = make_schedule(settings)
print sched
print elev
print cost_calc
settings['cache'][settings['Initial state']]
cost_calc, sched, elev = make_schedule(settings)
print sched
print elev
print cost_calc
dic = {'blub': simple_cost
}
dic['blub']
dic['blub'](1,(0,1))
len([5,5,5,5,5,8,8,8,8,8,12,12,12,12,12,50,50,50,50,20,20,6,5,5])
np.random.randint(50,size=24)
x = [100,7,90,787]
x_lim_min = [0,10,0,0]
x_lim_max = [100,10,100,1000]
for c,i in enumerate(x):
if i < x_lim_min[c] or i > x_lim_max[c]:
print 1000
Explanation: state is given by $s = (t,h,i)$
$$
v(s) = \min_x(cost(x,s) + \sum_j p_{ij} v(new state(x,s)))
$$
End of explanation |
7,155 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with Graphs using Networkx Intro
When creating a graph object, it can either be empty (default) or you can pass data as an argument. The data can take multiple forms
Step1: Notice that the edge list is empty, since we haven't added any edges yet. Also, because the number of edges in a graph can become very large, there is an iterator method for returning edges
Step2: If we had a script that needs to add a single edge from a tuple, we would use * preceeding the tuple or it's assigned variable in the add_edge method
Step3: We can also add nodes and edges using objects called nbunch and ebunch. These objects are any iterables or generators of nodes or edge tuples. We will do this below using a couple different methods.
Step4: We can also remove nodes or edges using similar methods, just replacing 'add' with 'remove'
Step5: Notice that removing nodes automatically removed the related edges for us. One last basic inspection method is to get a list of neighbors (adjacent nodes) for a specific node in a graph.
Step6: While we have been using numbers to represent nodes, we can use any hashable object as a node. For example, this means that lists, sets and arrays can't be nodes, but frozensets can
Step7: Edge properties
We can add weights or other properties to edges in a graph in different ways. The first is to add properties at creation time by passing triples instead of doubles for each edge. The third value will be the edge property.
Step8: Subscript notation for accessing edges
We can use subscript notiation on a graph object to easily get edge data. Access edge data for a node by entering that node as the subscript. This will return a dict with connected nodes as keys, and their respective edge weights as values.
Step9: We can also modify specific edge attributes
Step10: And we can add other attributes
Step11: 1. Create a complete graph with 7 nodes and verify that it is complete by looking at the edges. Do this manually and using a built-in method.
Remember, a complete graph is an undirected graph that has every pair of nodes connected by a unique edge. In other words, every node is adjacent to every other.
Step12: 2. Create a function that will draw a given graph that has a layout type parameter and labels the nodes. Now draw the graph created in the last problem using circular layout.
Step13: 3. Create a graph with 10 nodes and 3 components. Draw this graph.
Recall that a component is a connected subgraph.
Step14: 4. Create a simple connected digraph with 4 nodes and a diameter of 2.
Step15: 5. Create another 4 node digraph with weighted edges. Draw this graph with node and edge weight labels.
Step16: 6. Create the adjacnency matrix from the edge data in edges_1.pkl
Step17: 7. Using networkx built-in functions, create the distance matrix for the same graph from the previous problem
Step18: 8. Identify and remove a cutpoint from this graph and re-draw it
Step19: 9. Use edges_2 to create a graph. List any subgraphs that are maximal cliques
Step20: 10. Determine the Degree, Closeness, and Betweenness measures of centrality for this network
Step21: 11. Based on the measures above, which actors have the greatest control over the flow of information? Why do some actors have betweenness measures of zero?
When considering control over flow of information, we look at the betweenness measure. In this case, actors 0, 1, and 2 have the greatest control over information flow. The actors with betweenness measures of zero have no geodesics through them connecting other pairs of nodes. For example, there is a path from 0 to 4 that goes through 9, but that has a length of 2 and 0 connects to 4 directly.
12. Create a copy of the last network and do the following
Step22: 13. Create a directed graph from edges_3.pkl and do the following | Python Code:
# instantiate a graph object
G = nx.Graph()
# add a single node
G.add_node(1)
# add multiple nodes from a list
G.add_nodes_from([2,3,5])
# return lists of nodes and edges in the graph
G.nodes(), G.edges()
Explanation: Working with Graphs using Networkx Intro
When creating a graph object, it can either be empty (default) or you can pass data as an argument. The data can take multiple forms:
an edge list
a numpy matrix or 2D ndarray
a Networkx graph object
a scipy sparse matrix
a PyGraphviz graph
Let's start by creating an empty graph, and then add nodes.
End of explanation
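For instance (a small sketch, assuming the numpy import from the notebook preamble is available as np), a graph can be built directly from a 2D adjacency array:
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
G_from_array = nx.Graph(A)
G_from_array.edges()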
# add a single edge between 3 and 5
G.add_edge(3,5)
# add multiple edges using list of tuples
edge_list = [(1,2),(2,3),(2,5)]
G.add_edges_from(edge_list)
G.edges()
Explanation: Notice that the edge list is empty, since we haven't added any edges yet. Also, because the number of edges in a graph can become very large, there is an iterator method for returning edges: edges_iter().
End of explanation
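A quick sketch of that iterator (edges_iter() is the NetworkX 1.x spelling); it yields edges one at a time instead of building the full list:
for edge in G.edges_iter():
    print(edge)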
# the asterisk indicates that the values should be extracted
G.add_edge(*(1,3))
G.edges()
Explanation: If we had a script that needs to add a single edge from a tuple, we would use * preceding the tuple or its assigned variable in the add_edge method:
End of explanation
# generate a graph of linearly connected nodes
# this is a graph of a single path with 5 nodes and 4 edges
H = nx.path_graph(5)
# a look at the nodes and edges produced
H.nodes(), H.edges()
# Create a graph using nbunch and ebunch from the graph H
G = nx.Graph()
G.add_nodes_from(H)
# we have to specify edges
G.add_edges_from(H.edges())
G.nodes(), G.edges()
# now add edges to a graph using an iterator instead of an iterable list
# this is another example of ebunch, and node iterators work too
G = nx.Graph()
G.add_nodes_from([1,2,3])
# create edge generator connecting all possible node pairs
from itertools import combinations
edge_generator = combinations([1,2,3], 2)
# to show this is a generator and not a list
print('not a list: ', edge_generator)
# now lets add the edges using the iterator
G.add_edges_from(edge_generator)
G.edges()
Explanation: We can also add nodes and edges using objects called nbunch and ebunch. These objects are any iterables or generators of nodes or edge tuples. We will do this below using a couple different methods.
End of explanation
H.nodes(), H.edges()
H.remove_nodes_from([0,4])
H.nodes(), H.edges()
Explanation: We can also remove nodes or edges using similar methods, just replacing 'add' with 'remove':
End of explanation
H = nx.path_graph(7)
# get the neighbors for node 5
H.neighbors(5)
Explanation: Notice that removing nodes automatically removed the related edges for us. One last basic inspection method is to get a list of neighbors (adjacent nodes) for a specific node in a graph.
End of explanation
G = nx.Graph()
# G.add_node([0,1]) <-- raises error
# G.add_node({0,1}) <-- raises error
G.add_node(frozenset([0,1])) # this works
G.nodes()
Explanation: While we have been using numbers to represent nodes, we can use any hashable object as a node. For example, this means that lists, sets and arrays can't be nodes, but frozensets can:
End of explanation
G = nx.Graph()
G.add_nodes_from([1,2,3])
G.add_weighted_edges_from([(1,2,3.14), (2,3,6.5)])
# calling edges() alone will not return weights
print(G.edges(), '\n')
# we need to use the data parameter to get triples
print(G.edges(data='weight'), '\n')
# we can also get data for individual edges
print(G.get_edge_data(1,2))
Explanation: Edge properties
We can add weights or other properties to edges in a graph in different ways. The first is to add properties at creation time by passing triples instead of doubles for each edge. The third value will be the edge property.
End of explanation
# get edge data for node 2
print(G[2], '\n')
# subscript further to get only the weight for edge between 2 and 3
print(G[2][3])
Explanation: Subscript notation for accessing edges
We can use subscript notiation on a graph object to easily get edge data. Access edge data for a node by entering that node as the subscript. This will return a dict with connected nodes as keys, and their respective edge weights as values.
End of explanation
G[2][3]['weight'] = 17
G[2][3]
Explanation: We can also modify specific edge attributes:
End of explanation
G[2][3]['attr'] = 'value'
G[2][3]
Explanation: And we can add other attributes
End of explanation
# manually
from itertools import combinations
complete_edges = combinations(range(7), 2)
G_complete = nx.Graph(complete_edges)
G_complete.edges()
# built-in method
G_complete = nx.complete_graph(7)
G_complete.edges()
Explanation: 1. Create a complete graph with 7 nodes and verify that it is complete by looking at the edges. Do this manually and using a built-in method.
Remember, a complete graph is an undirected graph that has every pair of nodes connected by a unique edge. In other words, every node is adjacent to every other.
End of explanation
# function to draw and label nodes in a graph
def draw(G, layout):
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
warnings.filterwarnings("ignore",category=UserWarning)
nx.draw(G, pos=layout(G))
nx.draw_networkx_labels(G, pos=layout(G));
draw(G_complete, nx.circular_layout)
Explanation: 2. Create a function that will draw a given graph that has a layout type parameter and labels the nodes. Now draw the graph created in the last problem using circular layout.
End of explanation
from networkx.drawing.nx_agraph import graphviz_layout
edges_1 = [(0,1), (0,2)]
edges_2 = [(3,4), (3,5), (4,5), (4,6)]
edges_3 = [(7,8), (7,9)]
G = nx.Graph(edges_1 + edges_2 + edges_3)
draw(G, graphviz_layout)
Explanation: 3. Create a graph with 10 nodes and 3 components. Draw this graph.
Recall that a component is a connected subgraph.
End of explanation
# create the directed edge list
out_edges = [(0,1), (0,2), (1,2), (2,3)]
# create the digraph from the edge list
G = nx.DiGraph(out_edges)
draw(G, graphviz_layout)
Explanation: 4. Create a simple connected digraph with 4 nodes and a diameter of 2.
End of explanation
out_edges = [(0,1,0.5), (0,2,0.5), (1,2,1), (2,3,0.7)]
G = nx.DiGraph()
G.add_weighted_edges_from(out_edges)
draw(G, graphviz_layout)
labels = nx.get_edge_attributes(G, 'weight')
nx.draw_networkx_edge_labels(G, pos=graphviz_layout(G), edge_labels=labels);
Explanation: 5. Create another 4 node digraph with weighted edges. Draw this graph with node and edge weight labels.
End of explanation
import pickle
with open('../edges_1.pkl', 'rb') as f:
edges = pickle.load(f)
G = nx.Graph(edges)
# lets see what it looks like
draw(G, graphviz_layout)
adj_matrix = nx.to_numpy_matrix(G)
DF(adj_matrix)
Explanation: 6. Create the adjacnency matrix from the edge data in edges_1.pkl
End of explanation
geodesics = nx.all_pairs_shortest_path_length(G)
# this gave us a dict of shortest path lengths between all connected pairs
geodesics
# we can easily convert this to a matrix using pandas
DF(geodesics)
Explanation: 7. Using networkx built-in functions, create the distance matrix for the same graph from the previous problem
End of explanation
# 4 is a cutpoint
G.remove_node(4)
draw(G, graphviz_layout)
Explanation: 8. Identify and remove a cutpoint from this graph and re-draw it
End of explanation
# G = nx.moebius_kantor_graph()
G = nx.dorogovtsev_goltsev_mendes_graph(3)
with open('../edges_2.pkl', 'rb') as f:
edges = pickle.load(f)
G = nx.Graph(edges)
# draw(G, graphviz_layout)
draw(G, nx.circular_layout)
list(nx.find_cliques(G))
Explanation: 9. Use edges_2 to create a graph. List any subgraphs that are maximal cliques
End of explanation
degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
betweenness = nx.betweenness_centrality(G)
Series(degree)
Series(closeness)
Series(betweenness)
Explanation: 10. Determine the Degree, Closeness, and Betweenness measures of centrality for this network
End of explanation
H = G.copy()
H.remove_node(0)
H.add_edge(10,14)
H.remove_edge(1,2)
# draw(H, graphviz_layout)
draw(H, nx.circular_layout)
# eccentricity of node 1
nx.eccentricity(H, 1)
# find cliques containing node 1
nx.cliques_containing_node(H, 1)
# density
nx.density(H)
# remove node 1
H.remove_node(1)
nx.density(H)
Explanation: 11. Based on the measures above, which actors have the greatest control over the flow of information? Why do some actors have betweenness measures of zero?
When considering control over flow of information, we look at the betweenness measure. In this case, actors 0, 1, and 2 have the greatest control over information flow. The actors with betweenness measures of zero have no geodesics through them connecting other pairs of nodes. For example, there is a path from 0 to 4 that goes through 9, but that has a length of 2 and 0 connects to 4 directly.
12. Create a copy of the last network and do the following:
remove node 0
add and edge between 10 and 14 and remove the edge between 1 and 2
draw the graph
determine the eccentricity of node 1
find all cliques containing node 1
compute the density of the graph
remove node 1 and compute the density of the resulting graph
End of explanation
with open('edges_3.pkl', 'rb') as f:
edges = pickle.load(f)
G = nx.DiGraph(edges)
draw(G, graphviz_layout)
# adjacency matrix
adj_matrix = nx.to_numpy_matrix(G)
DF(adj_matrix)
# indegree and outdegree
indegree = adj_matrix.sum(axis=0) / (len(adj_matrix)-1)
outdegree = adj_matrix.sum(axis=1) / (len(adj_matrix)-1)
in_method = nx.in_degree_centrality(G)
out_method = nx.out_degree_centrality(G)
# indegree comparison
(Series(np.array(indegree).flatten()) == Series(in_method)).all()
# outdegree comparison
(Series(np.array(outdegree).flatten()) == Series(out_method)).all()
Series(in_method).sort_values(ascending=False)
Explanation: 13. Create a directed graph from edges_3.pkl and do the following:
create the adjacency matrix
compute the indegree and outdegree centrality for all actors in this network using the adjacency matrix
compare your results to the in and out_degree_centrality methods
End of explanation |
7,156 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ATM 623
Step1: Contents
The ice ages
Introducing the astronomical theory of the ice ages
Ellipses and orbits
Past orbital variations
Using climlab to calculate insolation for arbitrary orbital parameters
Past changes in insolation
Step2: <a id='section1'></a>
1. The ice ages
Recent Earth history (past few million years) has been dominated by the repeated growth and retreat of large continental ice sheets, mostly over the land masses of the Northern Hemisphere.
Extent of glaciation
The images below show typical maximum extents of the ice sheets during recent glaciations (grey) compared with present-day ice sheets (black)
<img src='http
Step3: These asymmetries arise because of the detailed shape of the orbit of the Earth around the Sun and the tilt of the Earth's axis of rotation.
As these orbital details change over time, there are significant changes in the distribution of sunlight over the seasons and latitudes.
The Milankovitch hypothesis
Versions of the astronomical theory have been debated for at least 150 years.
The most popular flavor has been the so-called Milankovitch hypothesis
Step4: Make reference plots of the variation in the three orbital parameters over the last 1 million years
Step5: The object called
orb
is an xarray.Dataset which now holds 1 million years worth of orbital data, total of 1001 data points for each element
Step6: Timescales of orbital variation
Step7: Compare this with the same calculation for default (present-day) orbital parameters
Step8: Do you understand what's going on here?
<a id='section6'></a>
6. Past changes in insolation
Step9: Indeed, there was an increase of 60 W m$^{-2}$ over a 10 kyr interval following the LGM.
Why?
What orbital factors favor high insolation at 65ºN at summer solstice?
high obliquity
large, positive precessional parameter
Looking back at our plots of the orbital parameters, it turns out that both were optimal around 10,000 years ago.
Actually 10,000 years ago the climate was slightly warmer than today and the ice sheets had mostly disappeared already.
The LGM occurred near a minimum in summer insolation in the north – mostly due to obliquity reaching a minimum, since we have been in a period of weak precession due to the nearly circular orbit. So this is consistent with the orbital theory.
The hypothesis is incomplete, but compelling.
Comparing insolation at 10 kyr and 23 kyr
Step10: This figure shows that the insolation at summer solstice does not tell the whole story!
For example, the insolation in late summer / early fall apparently got weaker between 23 and 10 kyr (in the high northern latitudes).
The annual mean plot is perfectly symmetric about the equator.
This actually shows a classic obliquity signal
Step11: The difference is tiny (and due to very small changes in the eccentricity).
Ice ages are driven by seasonal and latitudinal redistributions of solar energy, NOT by changes in the total global amount of solar energy!
<a id='section7'></a>
7. Understanding the effects of orbital variations on insolation
We are going to create a figure showing past time variations in three quantities
Step12: We see that
Global annual mean insolation varies only with eccentricity (slow), and the variations are very small!
Annual mean insolation varies with obliquity (medium). Annual mean insolation does NOT depend on precession!
Summer solstice insolation at high northern latitudes is affected by both precession and obliquity. The variations are large.
<a id='section8'></a>
8. Summary
The annual, global mean insolation varies only as a result of eccentricity $e$. The changes are very small (about 0.1 % through a typical eccentricity cycle from more circular to more elliptical)
Obliquity controls the annual-mean equator-to-pole insolation gradient.
The precessional parameter $e \sin\Lambda$ controls the modulation in seasonal insolation due to eccentricity and longitude of perihelion $\Lambda$.
The combined effects can result in 15% changes in high-latitude summer insolation
Obliquity combined with eccentricity and longitude of perihelion control the amplitude of seasonal insolation variations at a point.
Combined effects of the three orbital parameters can cause variations in seasonal insolation as large as 30% in high latitudes.
These geometrical considerations tell us that seasonal variations in $Q$ can be rather large, and will surely impact the climate. But to go from there to understanding how large ice sheets come and go is a difficult step, and requires climate models!
One thing is clear | Python Code:
# Ensure compatibility with Python 2 and 3
from __future__ import print_function, division
Explanation: ATM 623: Climate Modeling
Brian E. J. Rose, University at Albany
Lecture 16: Orbital variations, insolation, and the ice ages
Warning: content out of date and not maintained
You really should be looking at The Climate Laboratory book by Brian Rose, where all the same content (and more!) is kept up to date.
Here you are likely to find broken links and broken code.
About these notes:
This document uses the interactive Jupyter notebook format. The notes can be accessed in several different ways:
The interactive notebooks are hosted on github at https://github.com/brian-rose/ClimateModeling_courseware
The latest versions can be viewed as static web pages rendered on nbviewer
A complete snapshot of the notes as of May 2017 (end of spring semester) are available on Brian's website.
Also here is a legacy version from 2015.
Many of these notes make use of the climlab package, available at https://github.com/brian-rose/climlab
End of explanation
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from climlab import constants as const
Explanation: Contents
The ice ages
Introducing the astronomical theory of the ice ages
Ellipses and orbits
Past orbital variations
Using climlab to calculate insolation for arbitrary orbital parameters
Past changes in insolation: investigating the Milankovitch hypothesis
Understanding the effects of orbital variations on insolation
Summary
End of explanation
from climlab.solar.insolation import daily_insolation
days = np.linspace(0, const.days_per_year, 365)
Qnorth = daily_insolation(90,days)
Qsouth = daily_insolation(-90,days)
print( 'Daily average insolation at summer solstice:')
print( 'North Pole: %0.2f W/m2.' %np.max(Qnorth))
print( 'South Pole: %0.2f W/m2.' %np.max(Qsouth))
Explanation: <a id='section1'></a>
1. The ice ages
Recent Earth history (past few million years) has been dominated by the repeated growth and retreat of large continental ice sheets, mostly over the land masses of the Northern Hemisphere.
Extent of glaciation
The images below show typical maximum extents of the ice sheets during recent glaciations (grey) compared with present-day ice sheets (black)
<img src='http://upload.wikimedia.org/wikipedia/commons/thumb/e/ef/Iceage_north-intergl_glac_hg.png/480px-Iceage_north-intergl_glac_hg.png'>
<img src='http://upload.wikimedia.org/wikipedia/commons/thumb/4/44/Iceage_south-intergl_glac_hg.png/480px-Iceage_south-intergl_glac_hg.png'>
Hannes Grobe/AWI, http://commons.wikimedia.org/wiki/File:Iceage_north-intergl_glac_hg.png
<a id='icevolumeseries'></a>
Pacing of ice ages: evidence from ocean sediments
The figure below shows a global record of oxygen isotopes recorded in the shells of marine organisms. This record tells us primarily about variations in global ice volume -- because the net evaporation of water from the oceans to build up the ice sheets leaves the oceans enriched in heavier isotopes.
The x axis is plotted in Thousands of years before present (present-day is at zero on the left).
<img src='../images/Lisiecki_Raymo_Fig.4top.png' width=800>
Lisiecki, L. E. and Raymo, M. E. (2005). A Pliocene-Pleistocene stack of 57 globally distributed benthic δ18O records. Paleoceanog., 20.
The ice ages (times of extensive glaciation and high ocean $\delta^{18}$O) do not seem to be random fluctations. They have come and gone (approximately) periodically, somewhat like the seasons.
Spectral analysis of such records reveals peaks at some special frequencies:
<img src='../images/ImbrieImbrie_Fig42.png' width=400>
Imbrie, J. and Imbrie, K. P. (1986). Ice Ages: Solving the Mystery. Harvard University Press, Cambridge, Massachusetts.
The peaks noted on this figure are special because they correspond to frequencies of variations in Earth's orbital parameters, as we will see.
These kind of results became available in the 1970’s for the first time, because ocean sediment cores allowed a sufficiently detailed look into the past to use time series analysis methods on them, e.g. to compute spectra.
The presence of peaks in the spectrum at orbital frequencies was seen as convincing evidence that the so-called astronomical theory of the ice ages was (at least partially) correct.
<a id='section2'></a>
2. Introducing the astronomical theory of the ice ages
The Astronomical Theory of climate and the ice ages looks to the regular, predictable variations in the Earth's orbit around the Sun as the driving force for the growth and melt of the great ice sheets. Such theories have been discussed since long before there was any evidence about the timing of past glaciations.
Last time we saw that insolation is NOT perfectly symmetrically distributed between the two hemispheres and seasons.
To refresh our memory, let's use
climlab.solar.insolation.daily_insolation()
to compare the maximum insolation received at the North Pole (at its summer solstice) and the South Pole (at its summer solstice).
End of explanation
from climlab.solar.orbital import OrbitalTable
OrbitalTable
Explanation: These asymmetries arise because of the detailed shape of the orbit of the Earth around the Sun and the tilt of the Earth's axis of rotation.
As these orbitals details change over time, there are significant changes in the distribution of sunlight over the seasons and latitudes.
The Milankovitch hypothesis
Version of the astronomical theory have been debated for at least 150 years.
The most popular flavor has been the so-called Milankovitch hypothesis:
Ice sheets grow during periods of weak summer insolation in the Northern high latitudes.
The idea is that for an ice sheet to grow, seasonal snow must survive through the summer. Milankovitch therefore focussed on the factors determining the climatic conditions during summer.
<a id='section3'></a>
3. Ellipses and orbits
First, watch this neat animation from Peter Huybers (Harvard University):
http://www.people.fas.harvard.edu/~phuybers/Inso/Orbit.mv4
Watch carefully and note the three ways that the orbit is varying simultaneously.
From Professor Huybers' web page:
A movie depicting Earth's changing orbit over the last 100Ky. The orientation is such that spring equinox (indicated by a vertical bar) is directly to the front with the sun behind it. Northern Hemisphere summer is to our right, and Northern Hemisphere winter is to the left. The apsidal (dashed) line connects perihelion (Earth's closest approach to the sun) to aphelion (the point when Earth is furthest from the sun). The rotaion of the apsidal line occurs because of the precession of the equinoxes and has a roughly twenty-two thousand year period. The semi-circle around the Earth indicates the location of the equator and the straight line is the polar axis. Obliquity is defined as the angle beetween the orbital and equatorial planes. The variations in Earth's obliquity and the eccentricity of Earth's orbit have both been increased in magnitude by a factor of ten. Also, the Earth's angular velocity has been decreased by a factor of five thousand. Note that Earth's angular velocity is slowest at aphelion and fastest at perihelion.
The Earth’s orbit around the Sun traces out an ellipse, with the Sun at one focal point.
<img src='../images/ImbrieImbrie_Fig14.png' width=600>
Imbrie, J. and Imbrie, K. P. (1986). Ice Ages: Solving the Mystery. Harvard University Press, Cambridge, Massachusetts.
How to draw an ellipse
Take any two points on a plane
Attach the two ends of a piece of string to the two points.
Pull the loose string out as far as it will go in any direction, and place a pencil mark at that point.
Do the same for every possible direction.
Congratulations, you have just drawn a perfect ellipse. The two points are called foci or focal points.
Keep this in mind, and you will always understand the mathematical definition of an ellipse:
An ellipse is a curve that is the locus of all points in the plane the sum of whose distances from two fixed points (the foci) is a positive constant.
In our case, the positive constant is the total length of the string.
Perihelion and Aphelion
The point in the orbit that is closest to the sun is called Perihelion.
The farthest point is called Aphelion.
Distances (present-day):
Perihelion, $ d_p = 1.47 \times 10^{11}$ m
Aphelion, $ d_a = 1.52 \times 10^{11}$ m
Eccentricity
The eccentricity of the orbit is defined as
$$ e = \frac{d_a-d_p}{d_a+d_p} $$
So for present-day values, $e = 0.017 = 1.7\%$
Earth’s orbit is nearly circular, but not quite!
(What value of $e$ would a purely circular orbit have?)
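A quick sanity check on these numbers (a small sketch, not in the original notebook): plug the quoted perihelion and aphelion distances into the definition of $e$.
d_p = 1.47e11   # perihelion distance (m)
d_a = 1.52e11   # aphelion distance (m)
e = (d_a - d_p) / (d_a + d_p)
print(e)        # about 0.017, i.e. roughly 1.7%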
As the Earth travels around its orbit, the distance to the sun varies. The energy flux (W m$^{-2}$) is larger when the Earth is closer to the sun (i.e. near perihelion).
At present, perihelion occurs on January 3. This is very close to the Northern Hemisphere winter solstice (Dec. 21).
The Earth actually receives MORE total sunlight during Northern Winter than during Northern Summer.
It is thus critical to understand the relative timing of our seasons (which are determined by the axial tilt or obliquity) and the perihelion.
Obliquity
The obliquity $\Phi$ is the tilt of the Earth’s axis of rotation with respect to a line normal to the plane of the Earth’s orbit around the Sun.
Currently $\Phi = 23.5^\circ$
Obliquity is the fundamental reason we have seasons; we would have seasons even with a perfectly circular orbit ($e=0$)
Higher obliquity means:
more summertime insolation at the poles
less wintertime insolation in mid-latitudes
Longitude of perihelion and precession of the equinoxes
The longitude of perihelion is defined as the angle $\Lambda$ between the Earth-Sun line at vernal equinox and the line from the Sun to perihelion (see sketch).
The current value is $\Lambda = 281^\circ$ (perihelion on January 3, shortly after NH winter solstice).
We call the gradual change over time of the longitude of perihelion the precession of the equinoxes (or just precession). It is the gradual change in the time of year at which the Earth is closest to the Sun.
Question
Can there be any precession for a planet with a perfectly circular orbit (zero eccentricity)?
It is important to understand that eccentricity modulates the precession. Highly eccentric orbits lead to larger differences in the seasonal distribution of insolation.
We quantify this with the precessional parameter
$$ e \sin \Lambda $$
Large positive precessional parameter = Excess insolation during summer in the northern hemisphere.
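To make the definition concrete, here is a one-line sketch using the present-day values quoted in this section (eccentricity of about 0.017 and longitude of perihelion of about 281º):
import numpy as np
e, Lambda = 0.017, 281.                   # present-day eccentricity and longitude of perihelion (degrees)
print(e * np.sin(np.deg2rad(Lambda)))     # about -0.017: negative because perihelion currently falls near NH winter solstice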
The three orbital parameters
We have just identified three parameters that control the seasonal and latitudinal distribution of insolation: $e, \Lambda, \Phi$
All three vary in predictable ways over time. They have been calculated very accurately from astronomical considerations (basically the gravity of the Earth, Sun, moon, and other solar system objects).
<a id='section4'></a>
4. Past orbital variations
There are tools in climlab to look up orbital parameters for Earth over the last 5 million years.
We will use the package
climlab.solar.orbital
End of explanation
kyears = np.arange( -1000., 1.)
orb = OrbitalTable.interp(kyear=kyears)
orb
Explanation: Make reference plots of the variation in the three orbital parameters over the last 1 million years
End of explanation
fig = plt.figure( figsize = (8,8) )
ax1 = fig.add_subplot(3,1,1)
ax1.plot( kyears, orb['ecc'] )
ax1.set_title('Eccentricity $e$', fontsize=18 )
ax2 = fig.add_subplot(3,1,3)
ax2.plot( kyears, orb['ecc'] * np.sin( np.deg2rad( orb['long_peri'] ) ) )
ax2.set_title('Precessional parameter $e \sin(\Lambda)$', fontsize=18 )
ax2.set_xlabel( 'Thousands of years before present', fontsize=14 )
ax3 = fig.add_subplot(3,1,2)
ax3.plot( kyears, orb['obliquity'] )
ax3.set_title('Obliquity (axial tilt) $\Phi$', fontsize=18 )
Explanation: The object called
orb
is an xarray.Dataset which now holds 1 million years' worth of orbital data, a total of 1001 data points for each element:
eccentricity ecc
obliquity angle obliquity
solar longitude of perihelion long_peri
End of explanation
from climlab.solar.insolation import daily_insolation
thisorb = {'ecc':0., 'obliquity':0., 'long_peri':0.}
days = np.linspace(1.,20.)/20 * const.days_per_year
daily_insolation(90, days, thisorb)
Explanation: Timescales of orbital variation:
Eccentricity varies slowly between nearly circular and slightly eccentric, with dominant periodicities of about 100 and 400 kyears. Current eccentricity is relatively small compared to the previous few million years
Longitude of perihelion has a period around 20 kyears, but effect is modulated by slow eccentricity variations. Precessional cycles are predicted to be small for the coming 50 kyears because of weak eccentricity.
Obliquity varies between about 22.5º and 24.5º over a period of 40 kyears. It is currently near the middle of its range.
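These quoted periods can be checked directly from the orbital table loaded above. A rough sketch (assuming the 1001-point, 1 kyr-spaced orb dataset built earlier) using a simple FFT of the obliquity record:
import numpy as np
obl = np.asarray(orb['obliquity'], dtype=float)
obl = obl - obl.mean()                    # remove the mean before estimating the spectrum
power = np.abs(np.fft.rfft(obl))**2
freqs = np.fft.rfftfreq(obl.size, d=1.)   # cycles per kyr, since the sampling interval is 1 kyr
peak = freqs[np.argmax(power[1:]) + 1]    # skip the zero-frequency bin
print('Dominant obliquity period: about {:.0f} kyr'.format(1./peak))   # expect roughly 41 kyr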
<a id='section5'></a>
5. Using climlab to calculate insolation for arbitrary orbital parameters
We can use the function
climlab.solar.insolation.daily_insolation()
to calculate insolation for any arbitrary orbital parameters.
We just need to pass a dictionary of orbital parameters. This works automatically when we slice or interpolate from the xarray object OrbitalTable.
An example: zero obliquity
Calculate the insolation at the North Pole for a planet with zero obliquity and zero eccentricity.
End of explanation
daily_insolation(90, days)
Explanation: Compare this with the same calculation for default (present-day) orbital parameters:
End of explanation
# Plot summer solstice insolation at 65ºN
years = np.linspace(-100, 0, 101) # last 100 kyr
thisorb = OrbitalTable.interp(kyear=years)
S65 = daily_insolation( 65, 172, thisorb )
fig, ax = plt.subplots()
ax.plot(years, S65)
ax.set_xlabel('Thousands of years before present')
ax.set_ylabel('W/m2')
ax.set_title('Summer solstice insolation at 65N')
ax.grid()
Explanation: Do you understand what's going on here?
<a id='section6'></a>
6. Past changes in insolation: investigating the Milankovitch hypothesis
The Last Glacial Maximum or "LGM" occurred around 23,000 years before present, when the ice sheets were at their greatest extent.
By 10,000 years ago, the ice sheets were mostly gone and the last ice age was over.
If the Milankovitch hypothesis is correct, we should see that summer insolation in the high northern latitudes increased substantially after the LGM.
The classical way to plot this is to look at insolation at summer solstice at 65ºN. Let's plot this for the last 100,000 years.
End of explanation
lat = np.linspace(-90, 90, 181)
days = np.linspace(1.,50.)/50 * const.days_per_year
orb_0 = OrbitalTable.interp(kyear=0) # present-day orbital parameters
orb_10 = OrbitalTable.interp(kyear=-10) # orbital parameters for 10 kyrs before present
orb_23 = OrbitalTable.interp(kyear=-23) # 23 kyrs before present
Q_0 = daily_insolation( lat, days, orb_0 )
Q_10 = daily_insolation( lat, days, orb_10 ) # insolation arrays for each of the three sets of orbital parameters
Q_23 = daily_insolation( lat, days, orb_23 )
fig = plt.figure( figsize=(12,6) )
ax1 = fig.add_subplot(1,2,1)
Qdiff = Q_10 - Q_23
CS1 = ax1.contour( days, lat, Qdiff, levels = np.arange(-100., 100., 10.) )
ax1.clabel(CS1, CS1.levels, inline=True, fmt='%r', fontsize=10)
ax1.contour( days, lat, Qdiff, levels = [0], colors='k' )
ax1.set_xlabel('Days since January 1', fontsize=16 )
ax1.set_ylabel('Latitude', fontsize=16 )
ax1.set_title('Insolation differences: 10 kyrs - 23 kyrs', fontsize=18 )
ax2 = fig.add_subplot(1,2,2)
ax2.plot( np.mean( Qdiff, axis=1 ), lat )
ax2.set_xlabel('W m$^{-2}$', fontsize=16 )
ax2.set_ylabel( 'Latitude', fontsize=16 )
ax2.set_title(' Annual mean differences', fontsize=18 )
ax2.set_ylim((-90,90))
ax2.grid()
Explanation: Indeed, there was an increase of 60 W m$^{-2}$ over a 10 kyr interval following the LGM.
Why?
What orbital factors favor high insolation at 65ºN at summer solstice?
high obliquity
large, positive precessional parameter
Looking back at our plots of the orbital parameters, it turns out that both were optimal around 10,000 years ago.
Actually 10,000 years ago the climate was slightly warmer than today and the ice sheets had mostly disappeared already.
The LGM occurred near a minimum in summer insolation in the north – mostly due to obliquity reaching a minimum, since we have been in a period of weak precession due to the nearly circular orbit. So this is consistent with the orbital theory.
The hypothesis is incomplete, but compelling.
Comparing insolation at 10 kyr and 23 kyr
End of explanation
print( np.average(np.mean(Qdiff,axis=1), weights=np.cos(np.deg2rad(lat))) )
Explanation: This figure shows that the insolation at summer solstice does not tell the whole story!
For example, the insolation in late summer / early fall apparently got weaker between 23 and 10 kyr (in the high northern latitudes).
The annual mean plot is perfectly symmetric about the equator.
This actually shows a classic obliquity signal: at 10 kyrs, the axis was close to its maximum tilt, around 24.2º. At 23 kyrs, the tilt was much weaker, only about 22.7º. In the annual mean, a stronger tilt means more sunlight to the poles and less to the equator. This is very helpful if you are trying to melt an ice sheet.
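To isolate the obliquity effect, here is a small sketch (reusing the daily_insolation function imported above; the fixed eccentricity and longitude of perihelion are just present-day values) comparing the annual-mean insolation at the pole and the equator for the two tilts mentioned:
days = np.linspace(1., 365., 365)
for tilt in (22.7, 24.2):
    thisorb = {'ecc': 0.017, 'obliquity': tilt, 'long_peri': 281.}
    Q_pole = float(np.mean(daily_insolation(90, days, thisorb)))
    Q_eq = float(np.mean(daily_insolation(0, days, thisorb)))
    print('tilt = {}: annual mean at 90N = {:.1f} W/m2, at the equator = {:.1f} W/m2'.format(tilt, Q_pole, Q_eq))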
Finally, take the global average of the difference:
End of explanation
lat = np.linspace(-90, 90, 91)
num = 365.
days = np.linspace(1.,num,365)/num * const.days_per_year
Q = daily_insolation(lat, days, orb)
print( Q.shape)
Qann = np.mean(Q, axis=1) # time average over the year
print( Qann.shape)
Qglobal = np.empty_like( kyears )
for n in range( kyears.size ): # global area-weighted average
Qglobal[n] = np.average( Qann[:,n], weights=np.cos(np.deg2rad(lat)))
print( Qglobal.shape)
fig = plt.figure(figsize = (16,10))
ax = []
for n in range(6):
ax.append(fig.add_subplot(3,2,n+1))
ax[0].plot( kyears, orb['ecc'] )
ax[0].set_title('Eccentricity $e$', fontsize=18 )
ax[2].plot( kyears, orb['obliquity'] )
ax[2].set_title('Obliquity (axial tilt) $\Phi$', fontsize=18 )
ax[4].plot( kyears, orb['ecc'] * np.sin( np.deg2rad( orb['long_peri'] ) ) )
ax[4].set_title('Precessional parameter $e \sin(\Lambda)$', fontsize=18 )
ax[1].plot( kyears, Qglobal )
ax[1].set_title('Global, annual mean insolation', fontsize=18 )
ax[1].ticklabel_format( useOffset=False )
ax[3].plot( kyears, Qann[80,:] )
ax[3].set_title('Annual mean insolation at 70N', fontsize=18 )
ax[5].plot( kyears, Q[80,170,:] )
ax[5].set_title('Summer solstice insolation at 70N', fontsize=18 )
for n in range(6):
ax[n].grid()
for n in [4,5]:
ax[n].set_xlabel( 'Thousands of years before present', fontsize=14 )
Explanation: The difference is tiny (and due to very small changes in the eccentricity).
Ice ages are driven by seasonal and latitudinal redistributions of solar energy, NOT by changes in the total global amount of solar energy!
<a id='section7'></a>
7. Understanding the effects of orbital variations on insolation
We are going to create a figure showing past time variations in three quantities:
Global, annual mean insolation
Annual mean insolation at high northern latitudes
Summer solstice insolation at high northern latitudes
which we will compare to the orbital variations we plotted earlier.
Create a large array of insolation over the whole globe, whole year, and for every set of orbital parameters.
End of explanation
%load_ext version_information
%version_information numpy, matplotlib, climlab
Explanation: We see that
Global annual mean insolation varies only with eccentricity (slow), and the variations are very small!
Annual mean insolation varies with obliquity (medium). Annual mean insolation does NOT depend on precession!
Summer solstice insolation at high northern latitudes is affected by both precession and obliquity. The variations are large.
<a id='section8'></a>
8. Summary
The annual, global mean insolation varies only as a result of eccentricity $e$. The changes are very small (about 0.1 % through a typical eccentricity cycle from more circular to more elliptical)
Obliquity controls the annual-mean equator-to-pole insolation gradient.
The precessional parameter $e \sin\Lambda$ controls the modulation in seasonal insolation due to eccentricity and longitude of perihelion $\Lambda$.
The combined effects can result in 15% changes in high-latitude summer insolation
Obliquity combined with eccentricity and longitude of perihelion control the amplitude of seasonal insolation variations at a point.
Combined effects of the three orbital parameters can cause variations in seasonal insolation as large as 30% in high latitudes.
These geometrical considerations tell us that seasonal variations in $Q$ can be rather large, and will surely impact the climate. But to go from there to understanding how large ice sheets come and go is a difficult step, and requires climate models!
One thing is clear: any serious astronomical theory of climate needs to take account of the climate response to seasonal variations in Q, because these are much larger than the variations in annual mean insolation.
<div class="alert alert-success">
[Back to ATM 623 notebook home](../index.ipynb)
</div>
Version information
End of explanation |
7,157 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random forest regression example
As an experiment, we'll look at a dataset uniquely well suited to modeling with random forest regression.
Step1: Generate fake data
Step2: The target variable is a little bit complicated. One of the categorical variables is used to select which continuous variable comes into play.
Step3: Best possible model
To calibrate our expectations, even if we have perfect insight, there's still some noise involved. How well could we do, in the best case?
Step4: Model with random forest regression
Train / test split
Step5: Train model
Step6: Cross-validate to select params max_depth, min_samples_leaf?
Predict on test set
Step7: Compare to linear regression
The idea was that random forest could capture the interaction with the categorical variable. If that's true, it should outperform standard linear regression. Let's fit a linear regression to the same data and see how that does.
Step8: Fit
Step9: Predict
Step10: Score and plot
Step11: Easy example | Python Code:
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
import matplotlib.pyplot as plt
Explanation: Random forest regression example
As an experiment, we'll look at a dataset uniquely well suited to modeling with random forest regression.
End of explanation
n = 2000
df = pd.DataFrame({
'a': np.random.normal(size=n),
'b': np.random.normal(size=n),
'c': np.random.normal(size=n),
'd': np.random.uniform(size=n),
'e': np.random.uniform(size=n),
'f': np.random.choice(list('abc'), size=n, replace=True),
'g': np.random.choice(list('efghij'), size=n, replace=True),
})
df = pd.get_dummies(df)
df.head()
Explanation: Generate fake data
End of explanation
y = df.a*df.f_a + df.b*df.f_b + df.c*df.f_c + 3*df.d + np.random.normal(scale=1/3, size=n)
Explanation: The target variable is a little bit complicated. One of the categorical variables is used to select which continuous variable comes into play.
End of explanation
y_pred = df.a*df.f_a + df.b*df.f_b + df.c*df.f_c + 3*df.d
r2_score(y, y_pred)
plt.scatter(y, y_pred)
Explanation: Best possible model
To calibrate our expectations, even if we have perfect insight, there's still some noise involved. How well could we do, in the best case?
End of explanation
i = np.random.choice((1,2,3), size=n, replace=True, p=(3/5,1/5,1/5))
# Note: the line below overwrites the random draw above with an exact-proportion split (3/5 train, 1/5 validation, 1/5 test)
i = np.array([1]*int(3/5*n) + [2]*int(1/5*n) + [3]*int(1/5*n))
np.random.shuffle(i)
len(i) == n
df_train = df[i==1]
df_val = df[i==2]
df_test = df[i==3]
y_train = y[i==1]
y_val = y[i==2]
y_test = y[i==3]
len(df_train), len(df_val), len(df_test)
Explanation: Model with random forest regression
Train / test split
End of explanation
regr = RandomForestRegressor(n_estimators=200, oob_score=True)
regr.fit(df_train, y_train)
print(regr.oob_score_)
Explanation: Train model
End of explanation
y_pred = regr.predict(df_test)
r2_score(y_test, y_pred)
plt.scatter(y_test, y_pred)
Explanation: Cross-validate to select params max_depth, min_samples_leaf?
Predict on test set
End of explanation
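One possible way to act on the cross-validation note above (a sketch, not part of the original experiment, and assuming a scikit-learn version that provides sklearn.model_selection) is a small grid search over those two parameters:
from sklearn.model_selection import GridSearchCV
param_grid = {'max_depth': [5, 10, None], 'min_samples_leaf': [1, 5, 20]}
search = GridSearchCV(RandomForestRegressor(n_estimators=100), param_grid, cv=3)
search.fit(df_train, y_train)
print(search.best_params_, search.best_score_)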
from sklearn.linear_model import LinearRegression
Explanation: Compare to linear regression
The idea was that random forest could capture the interaction with the categorical variable. If that's true, it should outperform standard linear regression. Let's fit a linear regression to the same data and see how that does.
End of explanation
lm = LinearRegression()
lm.fit(df_train, y_train)
Explanation: Fit
End of explanation
y_pred = lm.predict(df_test)
Explanation: Predict
End of explanation
r2_score(y_test, y_pred)
plt.scatter(y_test, y_pred)
Explanation: Score and plot
End of explanation
n = 2000
df = pd.DataFrame({
'a': np.random.normal(size=n),
'b': np.random.normal(size=n),
'c': np.random.choice(list('xyz'), size=n, replace=True),
})
df = pd.get_dummies(df)
df.head()
y = 0.234*df.a + -0.678*df.b + -0.456*df.c_x + 0.123*df.c_y + 0.811*df.c_z + np.random.normal(scale=1/3, size=n)
i = np.random.choice((1,2,3), size=n, replace=True, p=(3/5,1/5,1/5))
i = np.array([1]*int(3/5*n) + [2]*int(1/5*n) + [3]*int(1/5*n))
np.random.shuffle(i)
len(i) == n
df_train = df[i==1]
df_val = df[i==2]
df_test = df[i==3]
y_train = y[i==1]
y_val = y[i==2]
y_test = y[i==3]
len(df_train), len(df_val), len(df_test)
regr = RandomForestRegressor(n_estimators=10, oob_score=True)
regr.fit(df_train, y_train)
print(regr.oob_score_)
y_pred = regr.predict(df_test)
r2_score(y_test, y_pred)
plt.scatter(y_test, y_pred)
lm = LinearRegression()
lm.fit(df_train, y_train)
y_pred = lm.predict(df_test)
r2_score(y_test, y_pred)
plt.scatter(y_test, y_pred)
Explanation: Easy example
End of explanation |
7,158 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing
Step26: Test accuracy | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels_orig = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
import pickle
with open('../embeddings/word_embeddings.pkl', 'rb') as f:
[vocab_to_int, embed_mat] = pickle.load(f)
embed_mat.shape
bak = embed_mat[vocab_to_int['the'],:]
# Reorganize so that 0 can be the empty string.
embed_mat = np.concatenate((np.random.uniform(-1,1, (1,embed_mat.shape[1])),
embed_mat),
axis=0)
vocab_to_int = {k:v+1 for k,v in vocab_to_int.items()}
# embed_mat = embed_mat.copy()
# embed_mat.resize((embed_mat.shape[0]+1, embed_mat.shape[1]))
# embed_mat[-1,:] = embed_mat[0]
# embed_mat[0,:] = np.random.uniform(-1,1, (1,embed_mat.shape[1]))
# embed_mat.shape
vocab_to_int[''] = 0
assert(all(bak == embed_mat[vocab_to_int['the'],:]))
[k for k,v in vocab_to_int.items() if v == 0]
embed_mat[vocab_to_int['stupid'],:]
non_words = set(['','.','\n'])
extra_words = set([w for w in set(words) if w not in vocab_to_int and w not in non_words])
new_vocab = [(word, index) for index,word in enumerate(extra_words, len(vocab_to_int))]
embed_mat = np.concatenate(
(embed_mat,
np.random.uniform(-1,1, (len(extra_words), embed_mat.shape[1]))),
axis=0)
print("added {} extra words".format(len(extra_words)))
vocab_to_int.update(new_vocab)
del extra_words
del new_vocab
37807/63641
reviews_ints = [[vocab_to_int[word] for word in review.split(' ') if word not in non_words] for review in reviews]
set([word for word in set(words) if word not in vocab_to_int])
len(vocab_to_int)
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
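Note that the cell above takes vocab_to_int from a pre-built embedding pickle rather than building it from the review text. A minimal sketch of the from-scratch approach described in the exercise (count the words, then assign integers starting at 1 so 0 stays free for padding); the _scratch names are only used here to avoid clobbering the pickle-based variables:
from collections import Counter
word_counts = Counter(words)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int_scratch = {word: i for i, word in enumerate(sorted_vocab, 1)}   # integers start at 1
reviews_ints_scratch = [[vocab_to_int_scratch[word] for word in review.split()] for review in reviews]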
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = np.array([(0 if l == 'negative' else 1) for l in labels_orig.split('\n')])
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: If you built labels correctly, you should see the next output.
End of explanation
x = [1,2,3]
x[:10]
# Filter out that review with 0 length
new_values = [(review_ints[:200], label) for review_ints,label
in zip(reviews_ints, labels)
if len(review_ints) > 0]
reviews_ints, labels = zip(*new_values)
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
seq_len = 200
features = np.array([([0] * (seq_len-len(review))) + review for review in reviews_ints])
labels = np.array(labels)
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
review = reviews_ints[0]
len(review)
features[:10,:]
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
split_frac = 0.8
split_tv = int(features.shape[0] * split_frac)
split_vt = int(round(features.shape[0] * (1-split_frac) / 2)) + split_tv
train_x = features[:split_tv,:]
train_y = labels[:split_tv]
val_x = features[split_tv:split_vt,:]
val_y = labels[split_tv:split_vt]
test_x = features[split_vt:,:]
test_y = labels[split_vt: ]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
#run_number = 7
if 'run_number' in locals():
run_number += 1
else:
run_number = 1
run_number
lstm_size = 512
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, shape=(None,seq_len), name='inputs')
labels_ = tf.placeholder(tf.int32, shape=(None,1), name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
n_words
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
import os
os.makedirs('./logs/{}/val'.format(run_number), exist_ok=True)
word_order = {v:k for k,v in vocab_to_int.items()}
embedding_metadata_file = './logs/{}/val/metadata.tsv'.format(run_number)
with open(embedding_metadata_file, 'w') as f:
for i in range(len(word_order)):
f.write(word_order[i]+'\n')
projector_config = tf.contrib.tensorboard.plugins.projector.ProjectorConfig()
embedding_config = projector_config.embeddings.add()
embed_mat.dtype
# Size of the embedding vectors (number of units in the embedding layer)
if False:
embed_size = 300
with graph.as_default():
with tf.name_scope('embedding'):
embedding = tf.Variable(
tf.random_uniform((n_words,embed_size),
-1,1), name="word_embedding")
embedding_config.tensor_name = embedding.name
embedding_config.metadata_path = embedding_metadata_file
embed = tf.nn.embedding_lookup(embedding, inputs_)
tf.summary.histogram('embedding', embedding)
else:
embed_size = embed_mat.shape[1]
with graph.as_default():
with tf.name_scope('embedding'):
embedding = tf.Variable(embed_mat, name="word_embedding", dtype=tf.float32)
embedding_config.tensor_name = embedding.name
embedding_config.metadata_path = embedding_metadata_file
embed = tf.nn.embedding_lookup(embedding, inputs_)
tf.summary.histogram('embedding', embedding)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
with tf.name_scope('LSTM'):
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
with tf.name_scope('LSTM'):
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
with tf.name_scope('Prediction'):
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
with tf.name_scope('Loss'):
cost = tf.losses.mean_squared_error(labels_, predictions)
tf.summary.scalar('cost', cost)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
with tf.name_scope('Accuracy'):
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.summary.scalar('accuracy',accuracy)
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
train_y.mean()
epochs = 20
with graph.as_default():
merged = tf.summary.merge_all()
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter('./logs/{}/train'.format(run_number), sess.graph)
val_writer = tf.summary.FileWriter('./logs/{}/val'.format(run_number))
tf.contrib.tensorboard.plugins.projector.visualize_embeddings(val_writer, embedding_config)
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
summary, loss, state, _ = sess.run([merged, cost, final_state, optimizer],
feed_dict=feed)
train_writer.add_summary(summary, iteration)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
summary, batch_acc, val_state = sess.run([merged, accuracy, final_state],
feed_dict=feed)
val_acc.append(batch_acc)
val_writer.add_summary(summary, iteration)
saver.save(sess, './logs/{}/model.ckpt'.format(run_number), iteration)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
train_writer.flush()
val_writer.flush()
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation
print(features.shape)
def TestSomeText(text):
text = text.lower()
delete = ['.','!',',','"',"'",'\n']
for d in delete:
text = text.replace(d," ")
text_ints = [vocab_to_int[word] for word in text.split(' ') if word in vocab_to_int]
print(len(text_ints))
text_ints = text_ints[:seq_len]
#print(text_ints)
#text_features = np.zeros((batch_size,seq_len))
text_features = np.array([([0] * (seq_len-len(text_ints))) + text_ints] * batch_size)
#print(text_features)
#print(text_features.shape)
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for i in range(1):
feed = {inputs_: text_features,
labels_: [[0]]*batch_size,
keep_prob: 1,
initial_state: test_state}
pred, mycost, test_state = sess.run([predictions, accuracy, final_state], feed_dict=feed)
return pred[0,0]
TestSomeText("highly underrated movie")
#pred[0,0]
TestSomeText('overrated movie')
text = """I ve been looking forward to a viking film or TV series for many years
and when my wishes were finally granted, I was very worried that this production
was going to be total crap. After viewing the first two episodes I do not worry
about that anymore. Thank you, Odin
As a person of some historical knowledge of the viking era, I can point out numerous
flaws - but they don't ruin the story for me, so I will let them slip. Historical
accounts about those days are, after all, not entirely reliable.
Happy to see Travis Fimmel in a role that totally suits him. A physical and intense
character, with that spice of humor that is the viking trademark from the sagas.
Gabriel Byrne plays a stern leader, that made me think of him in "Prince of Jutland",
and Clive Standen seems like he's going to surprise us.
Been pondering the Game of Thrones comparison, since I love that show too, but in my
opinion Vikings has its own thing going on. Way fewer lead characters to begin with,
and also a more straight forward approach. Plenty of room for more series with this
high class!
Can I wish for more than the planned nine episodes, PLEASE!!!"""
TestSomeText(text)
TestSomeText("vikings")
delete = ['.','!',',','"',"'",'\n']
for d in delete:
text = text.replace(d," ")
text = """Pirates of the Caribbean has always been a franchise that makes no attempt for Oscar worthy material but in its own way is massively enjoyable.
Pirates of the Caribbean: Dead Men Tell No Tales certainly embraces the aspects of the original movie while also incorporating new plot lines that fit in well with plots from the original story. With the introduction of Henry and Karina there is a new love interest that is provided to the audience that rivals that of Will and Elizabeth Turner's.
Henry Turner is portrayed as an almost exact copy of his father except just a teensy bit worse at sword fighting while Karina differs from the usual women as she remains just as important, if not more, as Henry as she guides the course towards Posiedon's trident.
Jack Sparrow is entertaining as always with his usual drunk characteristics. For those of you who are tired of Sparrow acting this way Don't SEE THE MOVIE Jack sparrow isn't going to change because it doesn't make sense for his character to suddenly change.
All together the movie was expertly written and expertly performed by the entire cast even Kiera Knightely who didn't manage to get one word throughout the whole movie. I know as a major fan of the Pirates of the Caribbean I can't wait to see what happens for the future of the franchise.
"""
pred = TestSomeText(text)   # TestSomeText returns the positive-sentiment probability as a single float
pred
TestSomeText("""If your child is a fan of the Wimpy Kid series, they'll no doubt enjoy this one, it's entertaining and lowbrow enough to also appease the moodiest of teens and grumpiest adults.""")
text = """At first I thought the film was going to be just a normal thriller but it turned out to be a thousand times better than I expected. The film is truly original and was so dark & sinister that gives the tensive mood also it is emotionally & psychologically thrilling, the whole movie is charged with pulse pounding suspense and seems like it's really happening. It's amazing that how they managed to make an 80 minute movie with just a guy in a phone booth but the full credit goes to Colin Farrell and Larry Cohen the writer not Joel Schumacher because he is a crappy director. Joel Schumacher's films are rubbish especially The Number 23, Phone Booth was shot in 10 days with a budget of $10 million so it wasn't a hard job to make it, that's why Joel doesn't get any credit but the cast & crew did a fantastic job. I also really liked the raspberry coloured shirt Colin was wearing and it was an excellent choice of clothing because the viewers are going to watch him throughout the whole film. When I first saw the movie I fell in love with it and I bought it on DVD the next day and I've seen it about 20 times and I'm still not fed up with it. Phone Booth is and always will be Colin Farrell's best film! Overall it is simply one of my favourite films and I even argued over my friend because he didn't like it.
"""
delete = ['.','!',',','"',"'",'\n']
for d in delete:
text = text.replace(d," ")
text
pred = TestSomeText(text)
pred
text = """There are few quality movies or series about the Vikings, and this series is outstanding and well worth waiting for. Not only is Vikings a series that is a joy to watch, it is also a series that is easy to recommend. I personally feel that the creator and producers did a fine job of giving the viewer quality material. Now, there are a few inconsistencies with the series, most notably would be the idea that Vikings had very little knowledge of other European countries and were amazed by these people across the big waters. In reality Vikings engaged in somewhat normal commercial activities with other Anglo-Saxons, so the idea that Vikings were as amazed as they seemed when they realize that other people were out there is not that realistic. However, it is this small inconsistency that goes a long way in holding the premise together. I simply love the series and would recommend it to anyone wanting to watch a quality show."""
delete = ['.','!',',','"',"'",'\n']
for d in delete:
text = text.replace(d," ")
pred = TestSomeText(text)
pred
text = """This movie didn't feel any different from the other anime movies out there. Sure, the sibling dynamics were good, as well as the family values, the childhood memories and older brother anxiety. The main idea was interesting, with the new baby seeming rather like a boss sent into the family to spy on the parents and solve a big problem for his company. You can't help but identify with the older kid, especially if you have younger siblings. But eventually, the action was a bit main stream. The action scenes were not original and kind of boring. Other than that, the story became a little complicated when you start to think about what's real and what's not. The narration was good and the animation was nice, with the cute babies and puppies. So, 4 out of 10.
"""
delete = ['.','!',',','"',"'",'\n']
for d in delete:
text = text.replace(d," ")
text
pred = TestSomeText(text)
pred
TestSomeText('seriously awesome movie')
Explanation: Test accuracy: 0.748
Test accuracy: 0.784
End of explanation |
7,159 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression
cf. sklearn.linear_model.LogisticRegression documentation
Let's take a look at the examples in the LogisticRegression documentation of sklearn.
The Logistic Regression 3-class Classifier has been credited to
Code source
Step1: Loading files and dealing with local I/O
Step2: Let's load the data for Exercise 2 of Machine Learning, taught by Andrew Ng, of Coursera.
Step3: Get the probability estimates; say a student has an Exam 1 score of 45 and an Exam 2 score of 85.
Step4: Let's change the "regularization" with the C parameter/option for LogisticRegression. Call this logreg2
Step5: As one can see, the "dataset cannot be separated into positive and negative examples by a straight-line through the plot." cf. ex2.pdf
We're going to need polynomial terms to map onto.
Use this code | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model, datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # take the first two features. # EY : 20160503 type(X) is numpy.ndarray
Y = iris.target # EY : 20160503 type(Y) is numpy.ndarray
h = .02 # step size in the mesh
print "X shape: %s, Y shape: %s" % X.shape, Y.shape
logreg = linear_model.LogisticRegression(C=1e5)
# we create an instance of Neighbours Classifier and fit the data.
logreg.fit(X,Y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max]
x_min, x_max = X[:,0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:,1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1, figsize=(4,3))
plt.pcolormesh(xx,yy,Z, cmap=plt.cm.Paired)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, edgecolors='k', cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.show()
Explanation: Logistic Regression
cf. sklearn.linear_model.LogisticRegression documentation
Let's take a look at the examples in the LogisticRegression documentation of sklearn.
The Logistic Regression 3-class Classifier has been credited to
Code source: Gaël Varoquaux
Modified for documentation by Jaques Grobler
License: BSD 3 clause
End of explanation
import os
print os.getcwd()
print os.path.abspath("./") # find out "where you are" and "where Data folder is" with these commands
Explanation: Loading files and dealing with local I/O
End of explanation
ex2data1 = np.loadtxt("./Data/ex2data1.txt",delimiter=',') # you, the user, may have to change this, if the directory that you're running this from is somewhere else
ex2data2 = np.loadtxt("./Data/ex2data2.txt",delimiter=',')
X_ex2data1 = ex2data1[:,0:2]
Y_ex2data1 = ex2data1[:,2]
X_ex2data2 = ex2data2[:,:2]
Y_ex2data2 = ex2data2[:,2]
logreg.fit(X_ex2data1,Y_ex2data1)
def trainingdat2mesh(X,marginsize=.5, h=0.2):
rows, features = X.shape
ranges = []
for feature in range(features):
minrange = X[:,feature].min()-marginsize
maxrange = X[:,feature].max()+marginsize
ranges.append((minrange,maxrange))
if len(ranges) == 2:
xx, yy = np.meshgrid(np.arange(ranges[0][0], ranges[0][1], h), np.arange(ranges[1][0], ranges[1][1], h))
return xx, yy
else:
return ranges
xx_ex2data1, yy_ex2data1 = trainingdat2mesh(X_ex2data1,h=0.2)
Z_ex2data1 = logreg.predict(np.c_[xx_ex2data1.ravel(),yy_ex2data1.ravel()])
Z_ex2data1 = Z_ex2data1.reshape(xx_ex2data1.shape)
plt.figure(2)
plt.pcolormesh(xx_ex2data1,yy_ex2data1,Z_ex2data1)
plt.scatter(X_ex2data1[:, 0], X_ex2data1[:, 1], c=Y_ex2data1, edgecolors='k')
plt.show()
Explanation: Let's load the data for Exercise 2 of Machine Learning, taught by Andrew Ng, of Coursera.
End of explanation
logreg.predict_proba(np.array([[45,85]])).flatten()
print "The student has a probability of no admission of %s and probability of admission of %s" % tuple( logreg.predict_proba(np.array([[45,85]])).flatten() )
Explanation: Get the probability estimates; say a student has an Exam 1 score of 45 and an Exam 2 score of 85.
End of explanation
logreg2 = linear_model.LogisticRegression()
logreg2.fit(X_ex2data2,Y_ex2data2)
xx_ex2data2, yy_ex2data2 = trainingdat2mesh(X_ex2data2,h=0.02)
Z_ex2data2 = logreg.predict(np.c_[xx_ex2data2.ravel(),yy_ex2data2.ravel()])
Z_ex2data2 = Z_ex2data2.reshape(xx_ex2data2.shape)
plt.figure(3)
plt.pcolormesh(xx_ex2data2,yy_ex2data2,Z_ex2data2)
plt.scatter(X_ex2data2[:, 0], X_ex2data2[:, 1], c=Y_ex2data2, edgecolors='k')
plt.show()
Explanation: Let's change the "regularization" with the C parameter/option for LogisticRegression. Call this logreg2
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
polynomial_features = PolynomialFeatures(degree=6,include_bias=False)
pipeline = Pipeline([("polynomial_features", polynomial_features),("logistic_regression",logreg2)])
pipeline.fit(X_ex2data2,Y_ex2data2)
Z_ex2data2 = pipeline.predict(np.c_[xx_ex2data2.ravel(),yy_ex2data2.ravel()])
Z_ex2data2 = Z_ex2data2.reshape(xx_ex2data2.shape)
plt.figure(3)
plt.pcolormesh(xx_ex2data2,yy_ex2data2,Z_ex2data2)
plt.scatter(X_ex2data2[:, 0], X_ex2data2[:, 1], c=Y_ex2data2, edgecolors='k')
plt.show()
Explanation: As one can see, the "dataset cannot be separated into positive and negative examples by a straight-line through the plot." cf. ex2.pdf
We're going to need polynomial terms to map onto.
Use this code: cf. Underfitting vs. Overfitting
End of explanation |
7,160 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
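One possible normalize, shown only as a hedged sketch; it assumes the raw CIFAR-10 pixel values lie in 0-255, and the name normalize_example is hypothetical:

import numpy as np

def normalize_example(x):
    # Min-max scale 0-255 pixel values into [0, 1]; the array shape is unchanged
    return np.array(x) / 255.0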
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
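A minimal sketch of one way to do this with NumPy alone (the name one_hot_encode_example is hypothetical; 10 classes are assumed, as stated above):

import numpy as np

def one_hot_encode_example(x):
    # Row i of a 10x10 identity matrix is the one-hot vector for label i
    return np.eye(10)[np.array(x)]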
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return None
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return None
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
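For reference, a hedged sketch of what these three functions might look like with the TensorFlow 1.x API used in this notebook; the _example names are hypothetical:

import tensorflow as tf

def neural_net_image_input_example(image_shape):
    return tf.placeholder(tf.float32, shape=[None] + list(image_shape), name='x')

def neural_net_label_input_example(n_classes):
    return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')

def neural_net_keep_prob_input_example():
    return tf.placeholder(tf.float32, name='keep_prob')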
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
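One possible shape for this layer, given only as a hedged sketch (TensorFlow 1.x API; the truncated-normal initializer and 'SAME' padding are choices, not requirements):

import tensorflow as tf

def conv2d_maxpool_example(x_tensor, conv_num_outputs, conv_ksize, conv_strides,
                           pool_ksize, pool_strides):
    in_depth = x_tensor.get_shape().as_list()[-1]
    weight = tf.Variable(tf.truncated_normal(
        [conv_ksize[0], conv_ksize[1], in_depth, conv_num_outputs], stddev=0.05))
    bias = tf.Variable(tf.zeros(conv_num_outputs))
    conv = tf.nn.conv2d(x_tensor, weight,
                        strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
    conv = tf.nn.relu(tf.nn.bias_add(conv, bias))
    return tf.nn.max_pool(conv,
                          ksize=[1, pool_ksize[0], pool_ksize[1], 1],
                          strides=[1, pool_strides[0], pool_strides[1], 1],
                          padding='SAME')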
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
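A hedged sketch using only tf.reshape (flatten_example is a hypothetical name):

import tensorflow as tf

def flatten_example(x_tensor):
    shape = x_tensor.get_shape().as_list()    # [batch, height, width, depth]
    flat_dim = shape[1] * shape[2] * shape[3]
    return tf.reshape(x_tensor, [-1, flat_dim])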
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
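A minimal sketch of one possible fully connected layer with a ReLU nonlinearity (TensorFlow 1.x; the initializer is illustrative):

import tensorflow as tf

def fully_conn_example(x_tensor, num_outputs):
    n_inputs = x_tensor.get_shape().as_list()[1]
    weight = tf.Variable(tf.truncated_normal([n_inputs, num_outputs], stddev=0.05))
    bias = tf.Variable(tf.zeros(num_outputs))
    return tf.nn.relu(tf.add(tf.matmul(x_tensor, weight), bias))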
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
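The output layer is the same linear map but without a nonlinearity; a hedged sketch:

import tensorflow as tf

def output_example(x_tensor, num_outputs):
    n_inputs = x_tensor.get_shape().as_list()[1]
    weight = tf.Variable(tf.truncated_normal([n_inputs, num_outputs], stddev=0.05))
    bias = tf.Variable(tf.zeros(num_outputs))
    return tf.add(tf.matmul(x_tensor, weight), bias)  # logits only, no activation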
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
# TODO: return output
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
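One way the pieces could be wired together, sketched with the hypothetical _example helpers shown earlier; the layer widths (32, 64, 256) are arbitrary starting points, not recommendations:

import tensorflow as tf

def conv_net_example(x, keep_prob):
    conv = conv2d_maxpool_example(x, 32, (3, 3), (1, 1), (2, 2), (2, 2))
    conv = conv2d_maxpool_example(conv, 64, (3, 3), (1, 1), (2, 2), (2, 2))
    flat = flatten_example(conv)
    fc = fully_conn_example(flat, 256)
    fc = tf.nn.dropout(fc, keep_prob)   # dropout on the hidden layer
    return output_example(fc, 10)       # 10 CIFAR-10 classes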
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
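A hedged sketch of the single optimization step; it assumes the global placeholders x, y and keep_prob created when the graph was built above:

def train_neural_network_example(session, optimizer, keep_probability,
                                 feature_batch, label_batch):
    session.run(optimizer, feed_dict={x: feature_batch,
                                      y: label_batch,
                                      keep_prob: keep_probability})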
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
pass
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
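One possible print_stats, again only a sketch; it relies on the globals x, y, keep_prob, valid_features and valid_labels from earlier cells:

def print_stats_example(session, feature_batch, label_batch, cost, accuracy):
    loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels,
                                                 keep_prob: 1.0})
    print('Loss: {:>10.4f}  Validation Accuracy: {:.6f}'.format(loss, valid_acc))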
# TODO: Tune Parameters
epochs = None
batch_size = None
keep_probability = None
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
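Purely illustrative starting values, not tuned recommendations:

epochs = 10
batch_size = 128
keep_probability = 0.5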
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
7,161 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filter Design Using the Helper Modules
The Scipy package signal assists with the design of many digital filter types. As an alternative, here we explore the use of the filter design modules found in scikit-dsp-comm
(https://github.com/mwickert/scikit-dsp-comm).
Step1: There are 10 filter design functions and one plotting function available in fir_design_helper.py. Four functions for designing Kaiser window based FIR filters and four functions for designing equiripple based FIR filters. Of the eight just described, they all take in amplitude response requirements and return a coefficients array. Two of the 10 filter functions are simply wrappers around the scipy.signal function signal.firwin() for designing filters of a specific order when one (lowpass) or two (bandpass) critical frequencies are given. The wrapper functions fix the window type to the firwin default of hann (hanning). The remaining eight are described below in Table 1. The plotting function provides an easy means to compare the resulting frequency response of one or more designs on a single plot. Display modes allow gain in dB, phase in radians, group delay in samples, and group delay in seconds for a given sampling rate. This function, freq_resp_list(), works for both FIR and IIR designs. Table 1 provides the interface details to the eight design functions where d_stop and d_pass are positive dB values and the critical frequencies have the same unit as the sampling frequency $f_s$. These functions do not create perfect results so some tuning of the design parameters may be needed, in addition to bumping the filter order up or down via N_bump.
Step2: Design Examples
Example 1
Step3: A Design Example Useful for Interpolation or Decimation
Here we consider a lowpass design that needs to pass frequencies from [0, 4000] Hz with a sampling rate of 96000 Hz. This scenario arises when building an interpolator using the classes of the scikit-dsp-comm module multirate_helper.py to increase the sampling rate from 8000 Hz to 96000 Hz, or an interpolation factor of $L = 12$. Note at the top of this notebook we also have the import
python
import sk_dsp_comm.multirate_helper as mrh
so that some of the functionality can be accessed. For more details on the use of multirate_helper see.
Start with an equalripple design having transition band centered on 4000 Hz with passband ripple of 0.5 dB and stopband attenuation of 60 dB.
Step4: Consider the pole-zero configuration for this high-order filter
Step5: Check out the passband and stopband gains
Step6: See that the group delay is the expected value of $(N_\text{taps} - 1)/2 = 98$ samples
Step7: The object mr_up can now be used for interpolation or decimation with a rate change factor of 12.
Traditional IIR Filter Design using the Bilinear Transform
The scipy.signal package fully supports the design of IIR digital filters from analog prototypes. IIR filters like FIR filters, are typically designed with amplitude response requirements in mind. A collection of design functions are available directly from scipy.signal for this purpose, in particular the function scipy.signal.iirdesign(). To make the design of lowpass, highpass, bandpass, and bandstop filters consistent with the module fir_design_helper.py the module iir_design_helper.py was written. Figure 2, below, details how the amplitude response parameters are defined graphically.
Step8: Within iir_design_helper.py there are four filter design functions and a collection of supporting functions available. The four filter design functions are used for designing lowpass, highpass, bandpass, and bandstop filters, utilizing Butterworth, Chebyshev type 1, Chebyshev type 2, and elliptical filter prototypes. See Oppenheim2010 and ECE 5650 notes Chapter 9 for detailed design information. The function interfaces are described in Table 2.
Step9: The filter functions return the filter coefficients in two formats
Step10: Frequency Response Comparison
Here we compare the magnitude response in dB using the sos form of each filter as the input. The elliptic is the most efficient, and actually over achieves by reaching the stopband requirement at less than 8 kHz.
Step11: Next plot the pole-zero configuration of just the Butterworth design. Here we use a special version of ss.zplane that works with the sos 2D array.
Step12: Note the two plots above can also be obtained using the transfer function form via iir_d.freqz_resp_list([b],[a],'dB',fs=48) and ss.zplane(b,a), respectively. The sos form will yield more accurate results, as it is less sensitive to coefficient quantization. This is particularly true for the pole-zero plot, as rooting a 15th degree polynomial is far more subject to errors than rooting a simple quadratic.
For the 15th-order Butterworth the bilinear transformation maps the expected 15 s-domain zeros at infinity to $z=-1$. If you use sk_dsp_comm.sigsys.zplane() you will find that the 15 zeros are in a tight circle around $z=-1$, indicating polynomial rooting errors. Likewise the frequency response will be more accurate.
Signal filtering of ndarray x using the filter designs is done with functions from scipy.signal
Step13: Pass Gaussian white noise of variance $\sigma_x^2 = 1$ through the filter. Use a lot of samples so the spectral estimate can accurately form $S_y(f) = \sigma_x^2\cdot |H(e^{j2\pi f/f_s})|^2 = |H(e^{j2\pi f/f_s})|^2$.
Step14: Amplitude Response Bandpass Design
Here we consider FIR and IIR bandpass designs for use in an SSB demodulator to remove potential adjacent channel signals sitting either side of a frequency band running from 23 kHz to 24 kHz.
Step15: The group delay is flat (constant) by virtue of the design having linear phase.
Step16: Compare the FIR design with an elliptical design
Step17: This high order elliptic has a nice tight amplitude response for minimal coefficients, but the group delay is terrible | Python Code:
Image('300ppi/[email protected]',width='90%')
Explanation: Filter Design Using the Helper Modules
The Scipy package signal assists with the design of many digital filter types. As an alternative, here we explore the use of the filter design modules found in scikit-dsp-comm
(https://github.com/mwickert/scikit-dsp-comm).
In this note we briefly explore the use of sk_dsp_comm.fir_design_helper and sk_dsp_comm.iir_design_helper. In the examples that follow we assume the import of these modules is made as follows:
python
import sk_dsp_comm.fir_design_helper as fir_d
import sk_dsp_comm.iir_design_helper as iir_d
The functions in these modules provide an easier and more consistent interface for both finite impulse response (FIR) (linear phase) and infinite impulse response (IIR) classical designs. Functions inside these modules wrap scipy.signal functions and also incorporate new functionality.
Design From Amplitude Response Requirements
With both fir_design_helper and iir_design_helper a design starts with amplitude response requirements, that is the filter passband critical frequencies, stopband critical frequencies, passband ripple, and stopband attenuation. The number of taps/coefficients (FIR case) or the filter order (IIR case) needed to meet these requirements is then determined and the filter coefficients are returned as an ndarray b for FIR, and for IIR both b and a arrays, and a second-order sections sos 2D array, with the rows containing the corresponding cascade of second-order sections topology for IIR filters.
For the FIR case we have in the $z$-domain
$$
H_\text{FIR}(z) = \sum_{k=0}^N b_k z^{-k}
$$
with ndarray b = $[b_0, b_1, \ldots, b_N]$. For the IIR case we have in the $z$-domain
$$\begin{align}
H_\text{IIR}(z) &= \frac{\sum_{k=0}^M b_k z^{-k}}{1 + \sum_{k=1}^N a_k z^{-k}} \\
&= \prod_{k=0}^{N_s-1} \frac{b_{k0} + b_{k1} z^{-1} + b_{k2} z^{-2}}{1 + a_{k1} z^{-1} + a_{k2} z^{-2}} = \prod_{k=0}^{N_s-1} H_k(z)
\end{align}$$
where $N_s = \lfloor(N+1)/2\rfloor$. For the b/a form the coefficients are arranged as
python
b = [b0, b1, ..., bM-1], the numerator filter coefficients
a = [a0, a1, ..., aN-1], the denominator filter coefficients
For the sos form each row of the 2D sos array corresponds to the coefficients of $H_k(z)$, as follows:
python
SOS_mat = [[b00, b01, b02, 1, a01, a02], #biquad 0
[b10, b11, b12, 1, a11, a12], #biquad 1
.
.
[bNs-10, bNs-11, bNs-12, 1, aNs-11, aNs-12]] #biquad Ns-1
Linear Phase FIR Filter Design
The primary focus of this module is adding the ability to design linear phase FIR filters from user friendly amplitude response requirements.
Most digital filter design is motivated by the desire to approach an ideal filter. Recall an ideal filter will pass signals of a certain of frequencies and block others. For both analog and digital filters the designer can choose from a variety of approximation techniques. For digital filters the approximation techniques fall into the categories of IIR or FIR. In the design of FIR filters two popular techniques are truncating the ideal filter impulse response and applying a window, and optimum equiripple approximations Oppenheim2010. Frequency sampling based approaches are also popular, but will not be considered here, even though scipy.signal supports all three. Filter design generally begins with a specification of the desired frequency response. The filter frequency response may be stated in several ways, but amplitude response is the most common, e.g., state how $H_c(j\Omega)$ or $H(e^{j\omega}) = H(e^{j2\pi f/f_s})$ should behave. A completed design consists of the number of coefficients (taps) required and the coefficients themselves (double precision float or float64 in Numpy, and float64_t in C). Figure 1, below, shows amplitude response requirements in terms of filter gain and critical frequencies for lowpass, highpass, bandpass, and bandstop filters. The critical frequencies are given here in terms of analog requirements in Hz. The sampling frequency is assumed to be in Hz. The passband ripple and stopband attenuation values are in dB. Note in dB terms attenuation is the negative of gain, e.g., -60 of stopband gain is equivalent to 60 dB of stopband attenuation.
End of explanation
Image('300ppi/[email protected]',width='80%')
Explanation: There are 10 filter design functions and one plotting function available in fir_design_helper.py. Four functions for designing Kaiser window based FIR filters and four functions for designing equiripple based FIR filters. Of the eight just described, they all take in amplitude response requirements and return a coefficients array. Two of the 10 filter functions are simply wrappers around the scipy.signal function signal.firwin() for designing filters of a specific order when one (lowpass) or two (bandpass) critical frequencies are given. The wrapper functions fix the window type to the firwin default of hann (hanning). The remaining eight are described below in Table 1. The plotting function provides an easy means to compare the resulting frequency response of one or more designs on a single plot. Display modes allow gain in dB, phase in radians, group delay in samples, and group delay in seconds for a given sampling rate. This function, freq_resp_list(), works for both FIR and IIR designs. Table 1 provides the interface details to the eight design functions where d_stop and d_pass are positive dB values and the critical frequencies have the same unit as the sampling frequency $f_s$. These functions do not create perfect results so some tuning of the design parameters may be needed, in addition to bumping the filter order up or down via N_bump.
End of explanation
b_k = fir_d.firwin_kaiser_lpf(1/8,1/6,50,1.0)
b_r = fir_d.fir_remez_lpf(1/8,1/6,0.2,50,1.0)
fir_d.freqz_resp_list([b_k,b_r],[[1],[1]],'dB',fs=1)
ylim([-80,5])
title(r'Kaiser vs Equal Ripple Lowpass')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency in kHz')
legend((r'Kaiser: %d taps' % len(b_k),r'Remez: %d taps' % len(b_r)),loc='best')
grid();
b_k_hp = fir_d.firwin_kaiser_hpf(1/8,1/6,50,1.0)
b_r_hp = fir_d.fir_remez_hpf(1/8,1/6,0.2,50,1.0)
fir_d.freqz_resp_list([b_k_hp,b_r_hp],[[1],[1]],'dB',fs=1)
ylim([-80,5])
title(r'Kaiser vs Equal Ripple Highpass')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency in kHz')
legend((r'Kaiser: %d taps' % len(b_k_hp),r'Remez: %d taps' % len(b_r_hp)),loc='best')
grid();
b_k_bp = fir_d.firwin_kaiser_bpf(7000,8000,14000,15000,50,48000)
b_r_bp = fir_d.fir_remez_bpf(7000,8000,14000,15000,0.2,50,48000)
fir_d.freqz_resp_list([b_k_bp,b_r_bp],[[1],[1]],'dB',fs=48)
ylim([-80,5])
title(r'Kaiser vs Equal Ripple Bandpass')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency in kHz')
legend((r'Kaiser: %d taps' % len(b_k_bp),
r'Remez: %d taps' % len(b_r_bp)),
loc='lower right')
grid();
Explanation: Design Examples
Example 1: Lowpass with $f_s = 1$ Hz
For this 31 tap filter we choose the cutoff frequency to be $F_c = F_s/8$, or in normalized form $f_c = 1/8$.
End of explanation
b_up = fir_d.fir_remez_lpf(3300,4300,0.5,60,96000)
mr_up = mrh.multirate_FIR(b_up)
Explanation: A Design Example Useful for Interpolation or Decimation
Here we consider a lowpass design that needs to pass frequencies from [0, 4000] Hz with a sampling rate of 96000 Hz. This scenario arises when building an interpolator using the classes of the scikit-dsp-comm module multirate_helper.py to increase the sampling rate from 8000 Hz to 96000 Hz, or an interpolation factor of $L = 12$. Note at the top of this notebook we also have the import
python
import sk_dsp_comm.multirate_helper as mrh
so that some of the functionality can be accessed. For more details on the use of multirate_helper see.
Start with an equalripple design having transition band centered on 4000 Hz with passband ripple of 0.5 dB and stopband attenuation of 60 dB.
End of explanation
# Take a look at the pole-zero configuration of this very
# high-order (many taps) linear phase FIR
mr_up.zplane()
Explanation: Consider the pole-zero configuration for this high-order filter
End of explanation
# Verify the passband and stopband gains are as expected
mr_up.freq_resp('db',96000)
Explanation: Check out the passband and stopband gains
End of explanation
(len(b_up)-1)/2
# Verify that the FIR design has constant group delay (N_taps - 1)/2 samples
mr_up.freq_resp('groupdelay_s',96000,[0,100])
Explanation: See that the group delay is the expected value of $(N_\text{taps} - 1)/2 = 98$ samples
End of explanation
Image('300ppi/[email protected]',width='90%')
Explanation: The object mr_up can now be used for interpolation or decimation with a rate change factor of 12.
Traditional IIR Filter Design using the Bilinear Transform
The scipy.signal package fully supports the design of IIR digital filters from analog prototypes. IIR filters like FIR filters, are typically designed with amplitude response requirements in mind. A collection of design functions are available directly from scipy.signal for this purpose, in particular the function scipy.signal.iirdesign(). To make the design of lowpass, highpass, bandpass, and bandstop filters consistent with the module fir_design_helper.py the module iir_design_helper.py was written. Figure 2, below, details how the amplitude response parameters are defined graphically.
End of explanation
Image('300ppi/[email protected]',width='80%')
Explanation: Within iir_design_helper.py there are four filter design functions and a collection of supporting functions available. The four filter design functions are used for designing lowpass, highpass, bandpass, and bandstop filters, utilizing Butterworth, Chebyshev type 1, Chebyshev type 2, and elliptical filter prototypes. See Oppenheim2010 and ECE 5650 notes Chapter 9 for detailed design information. The function interfaces are described in Table 2.
End of explanation
fs = 48000
f_pass = 5000
f_stop = 8000
b_but,a_but,sos_but = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'butter')
b_cheb1,a_cheb1,sos_cheb1 = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'cheby1')
b_cheb2,a_cheb2,sos_cheb2 = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'cheby2')
b_elli,a_elli,sos_elli = iir_d.IIR_lpf(f_pass,f_stop,0.5,60,fs,'ellip')
Explanation: The filter functions return the filter coefficients in two formats:
Traditional transfer function form as numerator coefficients b and denominator a coefficients arrays, and
Cascade of biquadratic sections form using the previously introduced sos 2D array or matrix.
Both are provided to allow further analysis with either a direct form topology or the sos form. The underlying signal.iirdesign() function also provides a third option: a list of poles and zeros. The sos form is desirable for high precision filters, as it is more robust to coefficient quantization, in spite of using double precision coefficients in the b and a arrays.
Of the remaining support functions four are also described in Table 2, above. The most significant functions are freqz_resp_cas_list, available for graphically comparing the frequency response over several designs, and sos_zplane, a function for plotting the pole-zero pattern. Both operate using the sos matrix. A transfer function form (b/a) for frequency response plotting, freqz_resp_list, is also present in the module. This function was first introduced in the FIR design section. The frequency response function plotting offers modes for gain in dB, phase in radians, group delay in samples, and group delay in seconds, all for a given sampling rate in Hz. The pole-zero plotting function locates poles and zeros more accurately than sk_dsp_comm.sigsys.zplane, as the numpy function roots() is only solving quadratic polynomials. Also, repeated roots can be displayed as theoretically expected, and are so noted in the graphical display by superscripts next to the pole and zero markers.
IIR Design Based on the Bilinear Transformation
There are multiple ways of designing IIR filters based on amplitude response requirements. When the desire is to have the filter approximation follow an analog prototype such as Butterworth, Chebychev, etc., the standard approach is to use the bilinear transformation. The function signal.iirdesign() described above does exactly this.
In the example below we consider lowpass amplitude response requirements and see how the filter order changes when we choose different analog prototypes.
Example: Lowpass Design Comparison
The lowpass amplitude response requirements given $f_s = 48$ kHz are:
1. $f_\text{pass} = 5$ kHz
2. $f_\text{stop} = 8$ kHz
3. Passband ripple of 0.5 dB
4. Stopband attenuation of 60 dB
Design four filters to meet the same requirements: butter, cheby1, cheby2, and ellip:
End of explanation
iir_d.freqz_resp_cas_list([sos_but,sos_cheb1,sos_cheb2,sos_elli],'dB',fs=48)
ylim([-80,5])
title(r'IIR Lowpass Compare')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency in kHz')
legend((r'Butter order: %d' % (len(a_but)-1),
r'Cheby1 order: %d' % (len(a_cheb1)-1),
r'Cheby2 order: %d' % (len(a_cheb2)-1),
r'Elliptic order: %d' % (len(a_elli)-1)),loc='best')
grid();
Explanation: Frequency Response Comparison
Here we compare the magnitude response in dB using the sos form of each filter as the input. The elliptic is the most efficient, and actually over achieves by reaching the stopband requirement at less than 8 kHz.
End of explanation
iir_d.sos_zplane(sos_but)
Explanation: Next plot the pole-zero configuration of just the Butterworth design. Here we use a special version of ss.zplane that works with the sos 2D array.
End of explanation
# Elliptic IIR Lowpass
b_lp,a_lp,sos_lp = iir_d.IIR_lpf(1950,2050,0.5,80,8000.,'ellip')
mr_lp = mrh.multirate_IIR(sos_lp)
mr_lp.freq_resp('db',8000)
Explanation: Note the two plots above can also be obtained using the transfer function form via iir_d.freqz_resp_list([b],[a],'dB',fs=48) and ss.zplane(b,a), respectively. The sos form will yield more accurate results, as it is less sensitive to coefficient quantization. This is particularly true for the pole-zero plot, as rooting a 15th degree polynomial is far more subject to errors than rooting a simple quadratic.
For the 15th-order Butterworth the bilinear transformation maps the expected 15 s-domain zeros at infinity to $z=-1$. If you use sk_dsp_comm.sigsys.zplane() you will find that the 15 zeros are in a tight circle around $z=-1$, indicating polynomial rooting errors. Likewise the frequency response will be more accurate.
Signal filtering of ndarray x using the filter designs is done with functions from scipy.signal:
For transfer function form y = signal.lfilter(b,a,x)
For sos form y = signal.sosfilt(sos,x)
A Half-Band Filter Design to Pass up to $W/2$ when $f_s = 8$ kHz
Here we consider a lowpass design that needs to pass frequencies up to $f_s/4$. Specifically when $f_s = 8000$ Hz, the filter passband becomes [0, 2000] Hz. Once the coefficients are found a mrh.multirate object is created to allow further study of the filter, and ultimately implement filtering of a white noise signal.
Start with an elliptical design having transition band centered on 2000 Hz with passband ripple of 0.5 dB and stopband attenuation of 80 dB. The transition bandwidth is set to 100 Hz, with 50 Hz on either side of 2000 Hz.
End of explanation
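For completeness, a short hedged sketch of the two filtering calls mentioned above, applied to the b_lp, a_lp and sos_lp arrays designed in the previous cell; the white-noise test signal and variable names are illustrative only:

import numpy as np
from scipy import signal

x_test = np.random.randn(10000)             # any test signal
y_ba = signal.lfilter(b_lp, a_lp, x_test)   # transfer function (b/a) form
y_sos = signal.sosfilt(sos_lp, x_test)      # second-order sections form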
x = randn(1000000)
y = mr_lp.filter(x)
psd(x,2**10,8000);
psd(y,2**10,8000);
title(r'Filtering White Noise Having $\sigma_x^2 = 1$')
legend(('Input PSD','Output PSD'),loc='best')
ylim([-130,-30])
fs = 8000
print('Expected PSD of %2.3f dB/Hz' % (0-10*log10(fs),))
Explanation: Pass Gaussian white noise of variance $\sigma_x^2 = 1$ through the filter. Use a lot of samples so the spectral estimate can accurately form $S_y(f) = \sigma_x^2\cdot |H(e^{j2\pi f/f_s})|^2 = |H(e^{j2\pi f/f_s})|^2$.
End of explanation
b_rec_bpf1 = fir_d.fir_remez_bpf(23000,24000,28000,29000,0.5,70,96000,8)
fir_d.freqz_resp_list([b_rec_bpf1],[1],mode='dB',fs=96000)
ylim([-80, 5])
grid();
Explanation: Amplitude Response Bandpass Design
Here we consider FIR and IIR bandpass designs for use in an SSB demodulator to remove potential adjacent channel signals sitting either side of a frequency band running from 23 kHz to 24 kHz.
End of explanation
b_rec_bpf1 = fir_d.fir_remez_bpf(23000,24000,28000,29000,0.5,70,96000,8)
fir_d.freqz_resp_list([b_rec_bpf1],[1],mode='groupdelay_s',fs=96000)
grid();
Explanation: The group delay is flat (constant) by virtue of the design having linear phase.
End of explanation
b_rec_bpf2,a_rec_bpf2,sos_rec_bpf2 = iir_d.IIR_bpf(23000,24000,28000,29000,
0.5,70,96000,'ellip')
with np.errstate(divide='ignore'):
iir_d.freqz_resp_cas_list([sos_rec_bpf2],mode='dB',fs=96000)
ylim([-80, 5])
grid();
Explanation: Compare the FIR design with an elliptical design:
End of explanation
with np.errstate(divide='ignore', invalid='ignore'): #manage singularity warnings
iir_d.freqz_resp_cas_list([sos_rec_bpf2],mode='groupdelay_s',fs=96000)
#ylim([-80, 5])
grid();
Explanation: This high order elliptic has a nice tight amplitude response for minimal coefficients, but the group delay is terrible:
End of explanation |
7,162 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
When we cannot afford to sample the quantity of interest many times at every design within an optimization, we can use surrogate models instead. Here we will show you how to use third party surrogates as well as the polynomial chaos surrogate provided with horsetail matching.
For the third party surrogates, we will use the effective-quadratures package [Seshadri, P. and Parks, G. (2017) Effective-Quadratures (EQ): Polynomials for Computational Engineering Studies, The Open Journal, http://dx.doi.org/10.21105/joss.0016] (also see http://www.effective-quadratures.org/). We will also use pyKriging [pyKriging 0.5281/zenodo.593877] (also see http://pykriging.com/).
Step1: Let's start with the built-in polynomial chaos surrogate. This finds the coefficients of a polynomial expansion by evaluating the inner product of the qoi function with each polynomial using Gaussian quadrature.
The polynomial chaos expansion used by the PolySurrogate class uses specific quadrature points over the uncertainty space to perform efficient integration, and so we must tell the HorsetailMatching object that these are the points at which to evaluate the quantity of interest when making the surrogate. This is done with the surrogate_points argument.
Step2: Next we use the pyKriging samplingplan function to give us 25 points found via Latin hypercube sampling at which to evaluate the metric to create the surrogate. Then we create a function in the form required by horsetail matching called myKrigingSurrogate, and pass this as the surrogate argument when making the horsetail matching object, along with the LHS points as the surrogate_points argument. Here we modify the already created horsetail matching object instead of making a new one.
Step3: Now we do a similar thing with the effective quadrature toolbox to make a quadratic polynomial surrogate. | Python Code:
from horsetailmatching import HorsetailMatching, UniformParameter
from horsetailmatching.demoproblems import TP2
from horsetailmatching.surrogates import PolySurrogate
import numpy as np
uparams = [UniformParameter(), UniformParameter()]
Explanation: When we cannot afford to sample the quantity of interest many times at every design within an optimization, we can use surrogate models instead. Here we will show you how to use third party surrogates as well as the polynomial chaos surrogate provided with horsetail matching.
For the third party surrogates, we will use the effective-quadratures package [Seshadri, P. and Parks, G. (2017) Effective-Quadratures (EQ): Polynomials for Computational Engineering Studies, The Open Journal, http://dx.doi.org/10.21105/joss.0016], (also see http://www.effective-quadratures.org/). We will also use pyKriging [pyKriging 0.5281/zenodo.593877] (also see http://pykriging.com/).
The HorsetailMatching object can take a "surrogate" argument, which should be a function that takes an np.ndarray of values of the uncertain parameters of size (num_points, num_uncertainties) and an np.ndarray of the quantity of interest evaluated at these values of size (num_points), and returns a function that predicts the function output at any value of the uncertainties. num_points is the number of points at which the surrogate is to be evaluated, and num_uncertainties is the number of uncertain parameters. The object also takes a "surrogate_points" argument, which is a list of points (values of u) at which horsetail matching calls the qoi function in order to fit the surrogate.
The following examples should make this more clear.
End of explanation
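As a shape template for the interface just described, here is a hedged sketch of a user-supplied surrogate. The scikit-learn regressor and every name in it are illustrative assumptions, not part of horsetailmatching, and whether the returned predictor is called one point at a time or on an array depends on HorsetailMatching internals:

import numpy as np
from sklearn.linear_model import LinearRegression

def my_custom_surrogate(u_points, q_values):
    # u_points: (num_points, num_uncertainties), q_values: (num_points,)
    model = LinearRegression().fit(np.asarray(u_points), np.asarray(q_values))

    def predictor(u):
        # Predict the quantity of interest at a single value of the uncertainties
        return float(model.predict(np.atleast_2d(u))[0])

    return predictor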
thePoly = PolySurrogate(dimensions=len(uparams), order=4)
u_quadrature = thePoly.getQuadraturePoints()
def myPolynomialChaosSurrogate(u_quad, q_quad):
thePoly.train(q_quad)
return thePoly.predict
theHM = HorsetailMatching(TP2, uparams, surrogate=myPolynomialChaosSurrogate, surrogate_points=u_quadrature)
print('Metric evaluated with polynomial chaos surrogate: ', theHM.evalMetric([0, 1]))
theHM.surrogate = None
print('Metric evaluated with direct sampling: ', theHM.evalMetric([0, 1]))
Explanation: Let's start with the built-in polynomial chaos surrogate. This finds the coefficients of a polynomial expansion by evaluating the inner product of the qoi function with each polynomial using Gaussian quadrature.
The polynomial chaos expansion used by the PolySurrogate class uses specific quadrature points over the uncertainty space to perform efficient integration, and so we must tell the HorsetailMatching object that these are the points at which to evaluate the quantity of interest when making the surrogate. This is done with the surrogate_points argument.
End of explanation
from pyKriging.krige import kriging
from pyKriging.samplingplan import samplingplan
sp = samplingplan(2)
u_sampling = sp.optimallhc(25)
def myKrigingSurrogate(u_lhc, q_lhc):
krig = kriging(u_lhc, q_lhc)
krig.train()
return krig.predict
theHM.surrogate = myKrigingSurrogate
theHM.surrogate_points = u_sampling
print('Metric evaluated with kriging surrogate: ', theHM.evalMetric([0, 1]))
theHM.surrogate = None
print('Metric evaluated with direct sampling: ', theHM.evalMetric([0, 1]))
Explanation: Next we use the pyKriging samplingplan function to give us 25 points found via Latin hypercube sampling at which to evaluate the metric to create the surrogate. Then we create a function in the form required by horsetail matching called myKrigingSurrogate, and pass this as the surrogate argument when making the horsetail matching object, along with the LHS points as the surrogate_points argument. Here we modify the already created horsetail matching object instead of making a new one.
End of explanation
from equadratures import Polyreg
U1, U2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
u_tensor = np.vstack([U1.flatten(), U2.flatten()]).T
def myQuadraticSurrogate(u_tensor, q_tensor):
poly = Polyreg(np.mat(u_tensor), np.mat(q_tensor).T, 'quadratic')
def model(u):
return poly.testPolynomial(np.mat(u))
return model
theHM.surrogate = myQuadraticSurrogate
theHM.surrogate_points = u_tensor
print('Metric evaluated with quadratic surrogate: ', theHM.evalMetric([0, 1]))
theHM.surrogate = None
print('Metric evaluated with direct sampling: ', theHM.evalMetric([0, 1]))
Explanation: Now we do a similar thing with the effective quadrature toolbox to make a quadratic polynomial surrogate.
End of explanation |
7,163 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note that job 2 has not executed because it is waiting for job 4, which has not run yet
Step3: Here is how to make a dependency queue with pool
We have to sleep a lot in this script to allow the queue to update.
Step4: Note that job 2 has not executed because it is waiting for job 4, which has not run yet
Step7: Now job 2 and job 4 have both completed. Works perfectly.
Note
Step8: Note that the get command only fetches the first dictionary in the queue; it needs to be run twice if two commands are put.
Step9: Note that job 2 has not completed.
Step11: Now it has though, we needed to get it a second time, as one get only fetches one successful loop.
Here is an earlier Pool example that does not work, good example of issues with dictionary sharing
Step12: Unexpected behavior
That should have completed, but it didn't, let's try to get a second time to see if it worked yet.
Step13: Nope
Still isn't completing why?
Step14: It worked here once... but it isn't anymore. I am not sure why. Even after though, it was overwritten with the incomplete entry... maybe the dictionary copying is part of the issue?
Step15: Unexpected behavior
We have reverted back to the prior dictionary... the completed info for job 1 is gone.
Step16: No jobs have completed now... | Python Code:
queue.put([jon, 'done', None])
myjobs = update_jobs(myjobs, outqueue)
myjobs
runner.is_alive()
outqueue.empty()
myjobs = update_jobs(myjobs, outqueue)
myjobs
Explanation: Note that job 2 has not executed because it is waiting for job 4, which has not run yet
End of explanation
def job_runner(cores, jobqueue, outputs):
    """
    jobs: [(command, args)]
    outputs: {id: retval}
    """
import sys
def output(out):
        """Let's try and explicitly clear the dictionary before sending the output."""
lastout = outputs.get() if not outputs.empty() else ''
if out == lastout:
return
while not outputs.empty():
# Clear the output object
outputs.get()
outputs.put(out)
jobno = 0
jobs = {}
runners = {}
done = []
pool = mp.Pool(cores)
while True:
if not jobqueue.empty():
fun_args = jobqueue.get_nowait()
if len(fun_args) == 1:
function = fun_args[0]
args = None
depends = None
elif len(fun_args) == 2:
function, args = fun_args
depends = None
elif len(fun_args) == 3:
function, args, depends = fun_args
else:
continue
jobno += 1
jobs[jobno] = {'func': function, 'args': args, 'depends': depends, 'done': False, 'out': None,
'started': False}
output(jobs)
if jobs:
for jobno, job_info in jobs.items():
if job_info['done']:
continue
ready = True
if job_info['depends']:
for depend in job_info['depends']:
if not depend in done:
ready = False
if ready and not job_info['started']:
if job_info['args']:
runners[jobno] = pool.apply_async(job_info['func'], (job_info['args'],))
else:
runners[jobno] = pool.apply_async(job_info['func'])
job_info['started'] = True
output(jobs)
sleep(0.5) # Wait for a second to allow job to start
if job_info['started'] and not job_info['done'] and runners[jobno].ready():
job_info['out'] = runners[jobno].get()
job_info['done'] = True
done.append(jobno)
output(jobs)
#if job_info['depends']:
# outputs.put(jobs.copy())
sleep(0.5)
def update_jobs(jobdict, outqueue):
sleep(2)
while not outqueue.empty():
jobdict.update(outqueue.get_nowait())
return jobdict
def jon(string='bob'):
return 'hi ' + string
queue = mp.Queue()
outqueue = mp.Queue()
runner = mp.Process(target=job_runner, args=(3, queue, outqueue))
runner.start()
queue.empty()
outqueue.empty()
runner.is_alive()
myjobs = {}
queue.put([jon, 'fred', None])
myjobs = update_jobs(myjobs, outqueue)
myjobs
outqueue.empty()
queue.put([jon, 'bob', [4]])
queue.put([jon, 'joe', None])
outqueue.empty()
myjobs = update_jobs(myjobs, outqueue)
myjobs
Explanation: Here is how to make a dependency queue with pool
We have to sleep a lot in this script to allow the queue to update.
End of explanation
queue.put([jon, 'done', None])
myjobs = update_jobs(myjobs, outqueue)
myjobs
runner.is_alive()
outqueue.empty()
myjobs = update_jobs(myjobs, outqueue)
myjobs
Explanation: Note that job 2 has not executed because it is waiting for job 4, which has not run yet
End of explanation
def job_runner(cores, jobqueue, outputs):
    """jobs: [(command, args)]
    outputs: {id: retval}"""
import sys
def output(out):
        """Let's try and explicitly clear the dictionary before sending the output."""
lastout = outputs.get() if not outputs.empty() else ''
if out == lastout:
return
sleep(0.5)
while not outputs.empty():
# Clear the output object
outputs.get()
outputs.put(out)
jobno = 0
jobs = {}
done = []
while True:
try:
fun_args = jobqueue.get_nowait()
if len(fun_args) == 1:
function = fun_args[0]
args = None
depends = None
elif len(fun_args) == 2:
function, args = fun_args
depends = None
elif len(fun_args) == 3:
function, args, depends = fun_args
else:
continue
jobno += 1
jobs[jobno] = {'func': function, 'args': args, 'depends': depends, 'done': False, 'out': None,
'started': False}
output(jobs)
except Empty:
pass
if jobs:
for jobno, job_info in jobs.items():
if job_info['done']:
continue
ready = True
if job_info['depends']:
for depend in job_info['depends']:
if not depend in done:
ready = False
if ready:
if job_info['args']:
job_info['out'] = job_info['func'](job_info['args'])
else:
job_info['out'] = job_info['func']()
job_info['started'] = True
job_info['done'] = True
done.append(jobno)
if job_info['depends']:
output(jobs)
sleep(1)
queue = mp.Queue()
outqueue = mp.Queue()
runner = mp.Process(target=job_runner, args=(3, queue, outqueue))
runner.start()
queue.empty()
outqueue.empty()
runner.is_alive()
myjobs = {}
queue.put([jon, 'fred', None])
myjobs = update_jobs(myjobs, outqueue)
myjobs
myjobs = update_jobs(myjobs, outqueue)
myjobs
queue.put([jon, 'bob', [4]])
queue.put([jon, 'joe', None])
myjobs = update_jobs(myjobs, outqueue)
myjobs
Explanation: Now job 2 and job 4 have both completed. Works perfectly.
Note: It takes a long time for the function to run jobs, so a lot of sleeping is required.
Here is the same idea, but with no pool, it is simpler
End of explanation
myjobs = update_jobs(myjobs, outqueue)
myjobs
queue.put([jon, 'done', None])
myjobs = update_jobs(myjobs, outqueue)
myjobs
Explanation: Note that the get command only gets the first dictionary in the stack, it needs to be run twice if two commands are put.
End of explanation
myjobs = update_jobs(myjobs, outqueue)
myjobs
Explanation: Note that job 2 has not completed.
End of explanation
def job_runner(cores, jobqueue, outputs):
    """jobs: [(command, args)]
    outputs: {id: retval}"""
import sys
jobno = 0
jobs = {}
runners = {}
done = []
pool = mp.Pool()
while True:
try:
fun_args = jobqueue.get_nowait()
if len(fun_args) == 1:
function = fun_args[0]
args = None
depends = None
elif len(fun_args) == 2:
function, args = fun_args
depends = None
elif len(fun_args) == 3:
function, args, depends = fun_args
else:
continue
jobno += 1
jobs[jobno] = {'func': function, 'args': args, 'depends': depends, 'done': False, 'out': None,
'started': False}
outputs.put(jobs.copy())
except Empty:
pass
if jobs:
for jobno, job_info in jobs.items():
if job_info['done']:
continue
ready = True
if job_info['depends']:
for depend in job_info['depends']:
if not depend in done:
ready = False
if ready:
if job_info['args']:
runners[jobno] = pool.apply_async(job_info['func'], (job_info['args'],))
else:
runners[jobno] = pool.apply_async(job_info['func'])
job_info['started'] = True
if job_info['started'] and runners[jobno].ready():
job_info['out'] = runners[jobno].join()
job_info['done'] = True
done.append(jobno)
outputs.put(jobs.copy())
#if job_info['depends']:
# outputs.put(jobs.copy())
sleep(1)
queue = mp.Queue()
outqueue = mp.Queue()
runner = mp.Process(target=job_runner, args=(3, queue, outqueue))
runner.start()
queue.empty()
outqueue.empty()
runner.is_alive()
queue.put([jon, 'fred', None])
outdict = {}
sleep(2)
try:
j = outqueue.get_nowait()
except Empty as e:
print(e)
j
if j:
outdict.update(j)
outdict
Explanation: Now it has though, we needed to get it a second time, as one get only fetches one successful loop.
Here is an earlier Pool example that does not work, good example of issues with dictionary sharing
End of explanation
sleep(2)
try:
j = outqueue.get_nowait()
except Empty as e:
print(e)
j
Explanation: Unexpected behavior
That should have completed, but it didn't, let's try to get a second time to see if it worked yet.
End of explanation
queue.put([jon, 'bob', [4]])
queue.put([jon, 'joe', None])
sleep(1)
j = outqueue.get_nowait()
if j:
outdict.update(j)
outdict
Explanation: Nope
Still isn't completing why?
End of explanation
sleep(2)
j = outqueue.get_nowait()
if j:
outdict.update(j)
outdict
Explanation: It worked here once... but it isn't anymore. I am not sure why. Even after though, it was overwritten with the incomplete entry... maybe the dictionary copying is part of the issue?
End of explanation
queue.put([jon, 'done', None])
sleep(2)
j = outqueue.get_nowait()
if j:
outdict.update(j)
outdict
Explanation: Unexpected behavior
We have reverted back to the prior dictionary... the completed info for job 1 is gone.
End of explanation
sleep(2)
try:
j = outqueue.get_nowait()
except Empty as e:
print(repr(e))
j
outqueue.qsize()
Explanation: No jobs have completed now...
End of explanation |
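# Editor's note: a minimal, hedged sketch (not from the original notebook) of one way to avoid
# the dictionary-copying problems seen in the failing Pool example above: share state through a
# multiprocessing.Manager dict instead of shuttling whole dict copies through a Queue.
# The names demo_worker and shared_jobs are illustrative only.
def demo_worker(shared_jobs, jobno):
    # Writes to the managed dict are visible to the parent process.
    shared_jobs[jobno] = {'done': True, 'out': 'hi bob'}

manager = mp.Manager()
shared_jobs = manager.dict()
p = mp.Process(target=demo_worker, args=(shared_jobs, 1))
p.start()
p.join()
print(dict(shared_jobs))  # expected: {1: {'done': True, 'out': 'hi bob'}}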
7,164 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming in Python
The Zen of Python
Step1: Beautiful is better than ugly.<br>
Explicit is better than implicit.<br>
Simple is better than complex.<br>
Complex is better than complicated.<br>
Flat is better than nested.<br>
Sparse is better than dense.<br>
Readability counts.<br>
Special cases aren't special enough to break the rules.<br>
Although practicality beats purity.<br>
Errors should never pass silently.<br>
Unless explicitly silenced.<br>
In the face of ambiguity, refuse the temptation to guess.<br>
There should be one -- and preferably only one -- obvious way to do it.<br>
Although that way may not be obvious at first unless you're Dutch.<br>
Now is better than never.<br>
Although never is often better than right now.<br>
If the implementation is hard to explain, it's a bad idea.<br>
If the implementation is easy to explain, it may be a good idea.<br>
Namespaces are one honking great idea -- let's do more of those!
Step2: Data types in np.array
Signed integers
* np.int8
* np.int16
* np.int32
* np.int64
Unsigned integers
* np.uint8
* np.uint16
* np.uint32
* np.uint64
Floating point
* np.float16
* np.float32
* np.float64
Step3: Creating arrays in numpy
Range of values
Step4: Filling an array
Step5: Random values
Step6: $$f(x) = kx+b$$
$$f(x) = X*W$$
Step7: $$MSE(X,\omega, y) = \frac{1}{N} \sum_i (f(x_i, \omega) - y_i)^2$$
$$\frac{dMSE}{dk} = \frac{2}{N} \sum_i (f(x_i, \omega) - y_i)*x_i$$
$$\frac{dMSE}{db} = \frac{2}{N} \sum_i (f(x_i, \omega) - y_i)$$
$$MSE(X,\omega, y) = \frac{1}{N}(XW - y)^T(XW - y)$$
$$\frac{dMSE}{dk} = \frac{2}{N}(XW - y)x$$
$$\frac{dMSE}{db} = \frac{2}{N} \sum (XW - y)$$
$$dMSE = \frac{2}{N} * (X * W-y)*X$$
$$W_{i+1} = W_{i}-\alpha*dMSE(X, W_i, y)$$ | Python Code:
%pylab inline
import this
Explanation: Programming in Python
The Zen of Python
End of explanation
import numpy as np
np.array([1,2,3])
a = np.array([[1,2,3], [4,5,6]])
a = np.array([1,2,3])
b = np.array([4,5,6])
a+b
a*b
a/b
a**b
Explanation: Beautiful is better than ugly.<br>
Explicit is better than implicit.<br>
Simple is better than complex.<br>
Complex is better than complicated.<br>
Flat is better than nested.<br>
Sparse is better than dense.<br>
Readability counts.<br>
Special cases aren't special enough to break the rules.<br>
Although practicality beats purity.<br>
Errors should never pass silently.<br>
Unless explicitly silenced.<br>
In the face of ambiguity, refuse the temptation to guess.<br>
There should be one -- and preferably only one -- obvious way to do it.<br>
Although that way may not be obvious at first unless you're Dutch.<br>
Now is better than never.<br>
Although never is often better than right now.<br>
If the implementation is hard to explain, it's a bad idea.<br>
If the implementation is easy to explain, it may be a good idea.<br>
Namespaces are one honking great idea -- let's do more of those!
End of explanation
np.array([1, 2, 4], dtype=np.float32)
a = np.array([1,2,3])
print(a.dtype)
print(a.astype(np.float64).dtype)
Explanation: Data types in np.array
Signed integers
* np.int8
* np.int16
* np.int32
* np.int64
Unsigned integers
* np.uint8
* np.uint16
* np.uint32
* np.uint64
Floating point
* np.float16
* np.float32
* np.float64
End of explanation
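# Editor's addition (a hedged sketch, not part of the original lecture): np.iinfo and np.finfo
# report the representable range and precision of the integer and floating point dtypes listed above.
print(np.iinfo(np.int8))     # min = -128, max = 127
print(np.iinfo(np.uint8))    # min = 0, max = 255
print(np.finfo(np.float32))  # machine epsilon and range of float32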
np.arange(2, 10, 3, dtype=np.float32)
np.linspace(1,10,10000)
Explanation: Creating arrays in numpy
Range of values
End of explanation
np.zeros((3,1),dtype=np.float16)
np.ones((5,3),dtype=np.float16)
Explanation: Filling an array
End of explanation
np.random.random((4,2,3))
np.random.randint(1,10,(5,3))
np.random.normal(5, 6, (4,2))
np.random.seed(42)
a = np.zeros((3,2))
b = np.ones((3,2))
np.hstack([a,b])
np.vstack([a, b])
a
a.shape
b = np.array([[1,2],[3,4],[5,6]])
b.T
a.dot(b)
X = np.arange(1,11).reshape((-1,1))
y = np.arange(2,12)+np.random.normal(size=(10))
y = y.reshape((-1,1))
W = np.random.random((2,1))
Explanation: Random values
End of explanation
X = np.hstack([X, np.ones((10,1))])
f(X)
Explanation: $$f(x) = kx+b$$
$$f(x) = X*W$$
End of explanation
def f(X, W):
return X.dot(W)
def MSE(X, W, y):
return (X.dot(W)-y).T.dot(X.dot(W)-y)/X.shape[0]
def dMSE(X, W, y):
return 2/X.shape[0]*X.T.dot((X.dot(W)-y))
def optimize(W,X,y,a):
for i in range(1000):
W = W - a*dMSE(X,W,y)
MSE(X, W, y)
dMSE(X,W,y)
def optimize(W,X,y,a):
global coef, mses
coef = []
mses = []
for i in range(1000):
coef.append(W)
mses.append(MSE(X,W,y)[0,0])
W = W - a*dMSE(X,W,y)
# print(MSE(X,W,y))
return W
W = np.random.random((2,1))
P = optimize(W, X, y, 0.02)
coef = np.array(coef)
ylabel("k")
xlabel("b")
plot(coef[:,0,0], coef[:,1,0]);
ylabel("MSE")
xlabel("iteration")
plot(mses);
scatter(X[:,0],y.reshape(-1))
plot(X[:,0], f(X, W))
plot(X[:,0], f(X, P))
Explanation: $$MSE(X,\omega, y) = \frac{1}{N} \sum_i (f(x_i, \omega) - y_i)^2$$
$$\frac{dMSE}{dk} = \frac{2}{N} \sum_i (f(x_i, \omega) - y_i)*x_i$$
$$\frac{dMSE}{db} = \frac{2}{N} \sum_i (f(x_i, \omega) - y_i)$$
$$MSE(X,\omega, y) = \frac{1}{N}(XW - y)^T(XW - y)$$
$$\frac{dMSE}{dk} = \frac{2}{N}(XW - y)x$$
$$\frac{dMSE}{db} = \frac{2}{N} \sum (XW - y)$$
$$dMSE = \frac{2}{N} * (X * W-y)*X$$
$$W_{i+1} = W_{i}-\alpha*dMSE(X, W_i, y)$$
End of explanation |
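# Editor's addition: an illustrative finite-difference check (not part of the original lecture)
# that the analytic gradient dMSE agrees with a numerical derivative of MSE.
# It reuses X, y, P and the MSE/dMSE functions defined in the cells above.
eps = 1e-6
num_grad = np.zeros_like(P)
for i in range(P.shape[0]):
    W_plus = P.copy()
    W_minus = P.copy()
    W_plus[i, 0] += eps
    W_minus[i, 0] -= eps
    num_grad[i, 0] = (MSE(X, W_plus, y) - MSE(X, W_minus, y))[0, 0] / (2 * eps)
print(num_grad)       # numerical gradient ...
print(dMSE(X, P, y))  # ... should closely match the analytic one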
7,165 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text Analysis with NLTK
Author
Step1: 1. Corpus acquisition.
In these notebooks we will explore some tools for text analysis and two topic modeling algorithms available from Python toolboxes.
To do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes the capture of content from wikimedia sites very easy.
(As a side note, there are many other available text collections to work with. In particular, the NLTK library has many examples, that you can explore using the nltk.download() tool.
import nltk
nltk.download()
for instance, you can take the gutemberg dataset
Mycorpus = nltk.corpus.gutenberg
text_name = Mycorpus.fileids()[0]
raw = Mycorpus.raw(text_name)
Words = Mycorpus.words(text_name)
Also, tools like Gensim or Sci-kit learn include text databases to work with).
In order to use Wikipedia data, we will select a single category of articles
Step2: You can try with any other categories, but take into account that some categories may contain very few articles. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https
Step3: Now, we have stored the whole text collection in two lists
Step4: 2. Corpus Processing
Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.
Thus, we will proceed with the following steps
Step5: Task
Step6: 2.2. Homogeneization
By looking at the tokenized corpus you may verify that there are many tokens that correspond to punktuation signs and other symbols that are not relevant to analyze the semantic content. They can be removed using the stemming tool from nltk.
The homogeneization process will consist of
Step7: 2.2.2. Stemming vs Lemmatization
At this point, we can choose between applying a simple stemming or ussing lemmatization. We will try both to test their differences.
Task
Step8: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk
Step9: Task
Step10: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results and lemmatization.
However, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why "is" or "are" are preserved and not replaced by infinitive "be".
As an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the words in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is, pos='v').
2.3. Cleaning
The third step consists of removing those words that are very common in language and do not carry out usefull semantic content (articles, pronouns, etc).
Once again, we might need to load the stopword files using the download tools from nltk
Step11: Task
Step12: 2.4. Vectorization
Up to this point, we have transformed the raw text collection of articles in a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library.
As a first step, we create a dictionary containing all tokens in our text corpus, and assigning an integer identifier to each one of them.
Step13: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Task
Step14: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assign an integer identifier to each token in the corpus.
After that, we have transformed each article (in corpus_clean) in a list tuples (id, n).
Step15: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
[(0, 1), (3, 3), (5,2)]
for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero.
[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]
These sparse vectors will be the inputs to the topic modeling algorithms.
Note that, at this point, we have built a Dictionary containing
Step16: and a bow representation of a corpus with
Step17: Before starting with the semantic analyisis, it is interesting to observe the token distribution for the given corpus.
Step18: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
Step19: which appears
Step20: In the following we plot the most frequent terms in the corpus.
Step21: Exercise
Step22: Exercise
Step23: Exercise
Step24: Exercise (All in one)
Step25: Exercise (Visualizing categories)
Step26: Exercise (bigrams)
Step27: 2.4. Saving results
The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms analyzed in the following notebook. Save them to be ready to use them during the next session. | Python Code:
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# import pylab
# Required imports
from wikitools import wiki
from wikitools import category
import nltk
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from time import time
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from test_helper import Test
import gensim
Explanation: Text Analysis with NLTK
Author: Jesús Cid-Sueiro
Date: 2016/04/03
Last review: 2017/04/21
End of explanation
site = wiki.Wiki("https://en.wikipedia.org/w/api.php")
# Select a category with a reasonable number of articles (>100)
# cat = "Economics"
cat = "Pseudoscience"
print cat
Explanation: 1. Corpus acquisition.
In these notebooks we will explore some tools for text analysis and two topic modeling algorithms available from Python toolboxes.
To do so, we will explore and analyze collections of Wikipedia articles from a given category, using wikitools, that makes the capture of content from wikimedia sites very easy.
(As a side note, there are many other available text collections to work with. In particular, the NLTK library has many examples, that you can explore using the nltk.download() tool.
import nltk
nltk.download()
for instance, you can take the gutenberg dataset
Mycorpus = nltk.corpus.gutenberg
text_name = Mycorpus.fileids()[0]
raw = Mycorpus.raw(text_name)
Words = Mycorpus.words(text_name)
Also, tools like Gensim or Sci-kit learn include text databases to work with).
In order to use Wikipedia data, we will select a single category of articles:
End of explanation
# Loading category data. This may take a while
print "Loading category data. This may take a while..."
cat_data = category.Category(site, cat)
corpus_titles = []
corpus_text = []
for n, page in enumerate(cat_data.getAllMembersGen()):
print "\r Loading article {0}".format(n + 1),
corpus_titles.append(page.title)
corpus_text.append(page.getWikiText())
n_art = len(corpus_titles)
print "\nLoaded " + str(n_art) + " articles from category " + cat
Explanation: You can try with any other categories, but take into account that some categories may contain very few articles. Select a category with at least 100 articles. You can browse the wikipedia category tree here, https://en.wikipedia.org/wiki/Category:Contents, for instance, and select the appropriate one.
We start downloading the text collection.
End of explanation
# n = 5
# print corpus_titles[n]
# print corpus_text[n]
Explanation: Now, we have stored the whole text collection in two lists:
corpus_titles, which contains the titles of the selected articles
corpus_text, with the text content of the selected wikipedia articles
You can browse the content of the wikipedia articles to get some intuition about the kind of documents that will be processed.
End of explanation
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "punkt"
# nltk.download("punkt")
Explanation: 2. Corpus Processing
Topic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.
Thus, we will proceed with the following steps:
Tokenization
Homogeneization
Cleaning
Vectorization
2.1. Tokenization
For the first steps, we will use some of the powerful methods available from the Natural Language Toolkit. In order to use the word_tokenize method from nltk, you might need to get the appropriate libraries using nltk.download(). You must select option "d) Download", and identifier "punkt"
End of explanation
corpus_tokens = []
for n, art in enumerate(corpus_text):
print "\rTokenizing article {0} out of {1}".format(n + 1, n_art),
# This is to make sure that all characters have the appropriate encoding.
art = art.decode('utf-8')
# Tokenize each text entry.
# scode: tokens = <FILL IN>
# Add the new token list as a new element to corpus_tokens (that will be a list of lists)
# scode: <FILL IN>
print "\n The corpus has been tokenized. Let's check some portion of the first article:"
print corpus_tokens[0][0:30]
Test.assertEquals(len(corpus_tokens), n_art, "The number of articles has changed unexpectedly")
Test.assertTrue(len(corpus_tokens) >= 100,
"Your corpus_tokens has less than 100 articles. Consider using a larger dataset")
Explanation: Task: Insert the appropriate call to word_tokenize in the code below, in order to get the tokens list corresponding to each Wikipedia article:
End of explanation
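# Editor's hint (not the official solution): what the <FILL IN> lines above are expected to do,
# illustrated on a short string.
# tokens = word_tokenize(art)
# corpus_tokens.append(tokens)
print word_tokenize(u"Topic models need tokenized text.")  # -> ['Topic', 'models', 'need', 'tokenized', 'text', '.']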
corpus_filtered = []
for n, token_list in enumerate(corpus_tokens):
print "\rFiltering article {0} out of {1}".format(n + 1, n_art),
# Convert all tokens in token_list to lowercase, remove non alfanumeric tokens and stem.
# Store the result in a new token list, clean_tokens.
# scode: filtered_tokens = <FILL IN>
# Add art to corpus_filtered
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after filtering:"
print corpus_filtered[0][0:30]
Test.assertTrue(all([c==c.lower() for c in corpus_filtered[23]]), 'Capital letters have not been removed')
Test.assertTrue(all([c.isalnum() for c in corpus_filtered[13]]), 'Non alphanumeric characters have not been removed')
Explanation: 2.2. Homogenization
By looking at the tokenized corpus you may verify that there are many tokens that correspond to punctuation signs and other symbols that are not relevant to analyze the semantic content. They can be removed using the stemming tool from nltk.
The homogenization process will consist of:
Removing capitalization: capital alphabetic characters will be transformed to their corresponding lowercase characters.
Removing non alphanumeric tokens (e.g. punctuation signs)
Stemming/Lemmatization: removing word terminations to preserve the root of the words and ignore grammatical information.
2.2.1. Filtering
Let us proceed with the filtering steps 1 and 2 (removing capitalization and non-alphanumeric tokens).
Task: Convert all tokens in corpus_tokens to lowercase (using .lower() method) and remove non alphanumeric tokens (that you can detect with .isalnum() method). You can do it in a single line of code...
End of explanation
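# Editor's sketch of one possible single-line answer for the filtering task above
# (offered as a hint, not as the official solution):
# filtered_tokens = [token.lower() for token in token_list if token.isalnum()]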
# Select stemmer.
stemmer = nltk.stem.SnowballStemmer('english')
corpus_stemmed = []
for n, token_list in enumerate(corpus_filtered):
print "\rStemming article {0} out of {1}".format(n + 1, n_art),
# Apply stemming to all tokens in token_list and save them in stemmed_tokens
# scode: stemmed_tokens = <FILL IN>
# Add stemmed_tokens to the stemmed corpus
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after stemming:"
print corpus_stemmed[0][0:30]
Test.assertTrue((len([c for c in corpus_stemmed[0] if c!=stemmer.stem(c)]) < 0.1*len(corpus_stemmed[0])),
'It seems that stemming has not been applied properly')
Explanation: 2.2.2. Stemming vs Lemmatization
At this point, we can choose between applying a simple stemming or using lemmatization. We will try both to test their differences.
Task: Apply the .stem() method, from the stemmer object created in the first line, to corpus_filtered.
End of explanation
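# Editor's hint (one possible completion of the stemming task above, not the official answer):
# stemmed_tokens = [stemmer.stem(token) for token in token_list]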
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "wordnet"
# nltk.download()
Explanation: Alternatively, we can apply lemmatization. For english texts, we can use the lemmatizer from NLTK, which is based on WordNet. If you have not used wordnet before, you will likely need to download it from nltk
End of explanation
wnl = WordNetLemmatizer()
# Select stemmer.
corpus_lemmat = []
for n, token_list in enumerate(corpus_filtered):
print "\rLemmatizing article {0} out of {1}".format(n + 1, n_art),
# scode: lemmat_tokens = <FILL IN>
# Add art to the stemmed corpus
# scode: <FILL IN>
print "\nLet's check the first tokens from document 0 after lemmatization:"
print corpus_lemmat[0][0:30]
Explanation: Task: Apply the .lemmatize() method, from the WordNetLemmatizer object created in the first line, to corpus_filtered.
End of explanation
# You can comment this if the package is already available.
# Select option "d) Download", and identifier "stopwords"
# nltk.download()
Explanation: One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is more advisable for the presentation of text processing results.
However, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why "is" or "are" are preserved and not replaced by infinitive "be".
As an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the words in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is, pos='v').
2.3. Cleaning
The third step consists of removing those words that are very common in language and do not carry out usefull semantic content (articles, pronouns, etc).
Once again, we might need to load the stopword files using the download tools from nltk
End of explanation
corpus_clean = []
stopwords_en = stopwords.words('english')
n = 0
for token_list in corpus_stemmed:
n += 1
print "\rRemoving stopwords from article {0} out of {1}".format(n, n_art),
# Remove all tokens in the stopwords list and append the result to corpus_clean
# scode: clean_tokens = <FILL IN>
# scode: <FILL IN>
print "\n Let's check tokens after cleaning:"
print corpus_clean[0][0:30]
Test.assertTrue(len(corpus_clean) == n_art, 'List corpus_clean does not contain the expected number of articles')
Test.assertTrue(len([c for c in corpus_clean[0] if c in stopwords_en])==0, 'Stopwords have not been removed')
Explanation: Task: In the second line below we read a list of common english stopwords. Clean corpus_stemmed by removing all tokens in the stopword list.
End of explanation
# Create dictionary of tokens
D = gensim.corpora.Dictionary(corpus_clean)
n_tokens = len(D)
print "The dictionary contains {0} tokens".format(n_tokens)
print "First tokens in the dictionary: "
for n in range(10):
print str(n) + ": " + D[n]
Explanation: 2.4. Vectorization
Up to this point, we have transformed the raw text collection of articles in a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library.
As a first step, we create a dictionary containing all tokens in our text corpus, and assigning an integer identifier to each one of them.
End of explanation
# Transform token lists into sparse vectors on the D-space
# scode: corpus_bow = <FILL IN>
Test.assertTrue(len(corpus_bow)==n_art, 'corpus_bow has not the appropriate size')
Explanation: In the second step, let us create a numerical version of our corpus using the doc2bow method. In general, D.doc2bow(token_list) transform any list of tokens into a list of tuples (token_id, n), one per each token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.
Task: Apply the doc2bow method from gensim dictionary D, to all tokens in every article in corpus_clean. The result must be a new list named corpus_bow where each element is a list of tuples (token_id, number_of_occurrences).
End of explanation
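# Editor's hint (a possible completion of the Task above; not the notebook's official solution):
# corpus_bow = [D.doc2bow(token_list) for token_list in corpus_clean]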
print "Original article (after cleaning): "
print corpus_clean[0][0:30]
print "Sparse vector representation (first 30 components):"
print corpus_bow[0][0:30]
print "The first component, {0} from document 0, states that token 0 ({1}) appears {2} times".format(
corpus_bow[0][0], D[0], corpus_bow[0][0][1])
Explanation: At this point, it is good to make sure to understand what has happened. In corpus_clean we had a list of token lists. With it, we have constructed a Dictionary, D, which assigns an integer identifier to each token in the corpus.
After that, we have transformed each article (in corpus_clean) into a list of tuples (id, n).
End of explanation
print "{0} tokens".format(len(D))
Explanation: Note that we can interpret each element of corpus_bow as a sparse_vector. For example, a list of tuples
[(0, 1), (3, 3), (5,2)]
for a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of positions must be zero.
[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]
These sparse vectors will be the inputs to the topic modeling algorithms.
Note that, at this point, we have built a Dictionary containing
End of explanation
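# Editor's addition (illustrative sketch only): expanding the example sparse vector above
# into its dense form by hand.
sparse_vec = [(0, 1), (3, 3), (5, 2)]
dense_vec = [0] * 10
for token_id, count in sparse_vec:
    dense_vec[token_id] = count
print dense_vec  # [1, 0, 0, 3, 0, 2, 0, 0, 0, 0]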
print "{0} Wikipedia articles".format(len(corpus_bow))
Explanation: and a bow representation of a corpus with
End of explanation
# SORTED TOKEN FREQUENCIES (I):
# Create a "flat" corpus with all tuples in a single list
corpus_bow_flat = [item for sublist in corpus_bow for item in sublist]
# Initialize a numpy array that we will use to cont tokens.
# token_count[n] should store the number of ocurrences of the n-th token, D[n]
token_count = np.zeros(n_tokens)
# Count the number of occurrences of each token.
for x in corpus_bow_flat:
# Update the proper element in token_count
# scode: <FILL IN>
# Sort by decreasing number of occurrences
ids_sorted = np.argsort(- token_count)
tf_sorted = token_count[ids_sorted]
Explanation: Before starting with the semantic analysis, it is interesting to observe the token distribution for the given corpus.
End of explanation
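# Editor's hint (one way to fill in the counting step above, not the official solution):
# for x in corpus_bow_flat:
#     token_count[x[0]] += x[1]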
print D[ids_sorted[0]]
Explanation: ids_sorted is a list of all token ids, sorted by decreasing number of occurrences in the whole corpus. For instance, the most frequent term is
End of explanation
print "{0} times in the whole corpus".format(tf_sorted[0])
Explanation: which appears
End of explanation
# SORTED TOKEN FREQUENCIES (II):
plt.rcdefaults()
# Example data
n_bins = 25
hot_tokens = [D[i] for i in ids_sorted[n_bins-1::-1]]
y_pos = np.arange(len(hot_tokens))
z = tf_sorted[n_bins-1::-1]/n_art
plt.barh(y_pos, z, align='center', alpha=0.4)
plt.yticks(y_pos, hot_tokens)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
display()
# SORTED TOKEN FREQUENCIES:
# Example data
plt.semilogy(tf_sorted)
plt.xlabel('Average number of occurrences per article')
plt.title('Token distribution')
plt.show()
display()
Explanation: In the following we plot the most frequent terms in the corpus.
End of explanation
# scode: cold_tokens = <FILL IN>
print "There are {0} cold tokens, which represent {1}% of the total number of tokens in the dictionary".format(
len(cold_tokens), float(len(cold_tokens))/n_tokens*100)
Explanation: Exercise: There are usually many tokens that appear with very low frequency in the corpus. Count the number of tokens appearing only once, and what is the proportion of them in the token list.
End of explanation
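# Editor's hint (a possible approach for the exercise above; names follow the cells above):
# cold_tokens = [D[i] for i in range(n_tokens) if token_count[i] == 1]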
# scode: <WRITE YOUR CODE HERE>
Explanation: Exercise: Represent graphically those 20 tokens that appear in the highest number of articles. Note that you can use the code above (headed by # SORTED TOKEN FREQUENCIES) with a very minor modification.
End of explanation
# scode: <WRITE YOUR CODE HERE>
Explanation: Exercise: Count the number of tokens appearing only in a single article.
End of explanation
# scode: <WRITE YOUR CODE HERE>
Explanation: Exercise (All in one): Note that, for pedagogical reasons, we have used a different for loop for each text processing step creating a new corpus_xxx variable after each step. For very large corpus, this could cause memory problems.
As a summary exercise, repeat the whole text processing, starting from corpus_text up to computing the bow, with the following modifications:
Use a single for loop, avoiding the creation of any intermediate corpus variables.
Use lemmatization instead of stemming.
Remove all tokens appearing in only one document and less than 2 times.
Save the result in a new variable corpus_bow1.
End of explanation
# scode: <WRITE YOUR CODE HERE>
Explanation: Exercise (Visualizing categories): Repeat the previous exercise with a second wikipedia category. For instance, you can take "communication".
Save the result in variable corpus_bow2.
Determine the most frequent terms in corpus_bow1 (term1) and corpus_bow2 (term2).
Transform each article in corpus_bow1 and corpus_bow2 into a 2 dimensional vector, where the first component is the frecuency of term1 and the second component is the frequency of term2
Draw a dispersion plot of all 2 dimensional points, using a different marker for each corpus. Could you differentiate both corpora using the selected terms only? What if the 2nd most frequent term is used?
End of explanation
# scode: <WRITE YOUR CODE HERE>
# Check the code below to see how ngrams works, and adapt it to solve the exercise.
# from nltk.util import ngrams
# sentence = 'this is a foo bar sentences and i want to ngramize it'
# sixgrams = ngrams(sentence.split(), 2)
# for grams in sixgrams:
# print grams
Explanation: Exercise (bigrams): nltk provides a utility to compute n-grams from a list of tokens, in nltk.util.ngrams. Join all tokens in corpus_clean in a single list and compute the bigrams. Plot the 20 most frequent bigrams in the corpus.
End of explanation
import pickle
data = {}
data['D'] = D
data['corpus_bow'] = corpus_bow
pickle.dump(data, open("wikiresults.p", "wb"))
Explanation: 2.5. Saving results
The dictionary D and the Bag of Words in corpus_bow are the key inputs to the topic model algorithms analyzed in the following notebook. Save them to be ready to use them during the next session.
End of explanation |
7,166 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Review from the previous lecture
In yesterday's Lecture 2, you learned how to use the numpy module, how to make your own functions, and how to import and export data. Below is a quick review before we move on to Lecture 3.
Remember, to use the numpy module, first it must be imported
Step1: You can do a lot with the numpy module. Below is an example to jog your memory
Step2: Do you remember loops? Let's use a while loop to make an array of 10 numbers. Let's have each element be increased by 2 compared with the previous element. Let's also have the first element of the array be 1.
Step3: There's your quick review of numpy and a while loop. Now we can move on to the content of Lecture 3.
Lecture 3 - Distributions, Histograms, and Curve Fitting
In the previous lecture, you learned how to import the module numpy and how to use many of its associated functions. As you've seen, numpy gives us the ability to generate arrays of numbers using commands such as np.linspace and others.
In addition to these commands, you can also use numpy to generate distributions of numbers. The two most frequently used distributions are the following
Step4: Let's generate a numpy array of length 5 populated with uniformly distributed random numbers. The function np.random.rand takes the array output size as an argument (in this case, 5).
Step5: Additionally, you are not limited to one-dimensional arrays! Let's make a 5x5, two-dimensional array
Step6: Great, so now you have a handle on generating uniform distributions. Let's quickly look at one more type of distribution.
The normal distribution (randn) selects numbers from a Gaussian curve, sometimes called a bell curve, centered at 0 with standard deviation 1 (so, unlike rand, its values are not confined to [0,1)).
The equation for a Gaussian curve is the following
Step7: So these numbers probably don't mean that much to you. Don't worry; they don't mean much to me either!
Instead of trying to derive meaning from a list of numbers, let's actually plot these outputs and see what they look like. This will allow us to determine whether or not these distributions actually look like what we are expecting. How do we do that? The answer is with histograms!
B. Plotting distributions
Histogram documentation
Step8: Now, let's plot a uniform distribution and take a look.
Use what you learned above to define your variable X as a uniformly distributed array with 5000 elements.
Step9: Now, let's use plt.hist to see what X looks like. First, run the cell below. Then, vary bins -- doing so will either increase or decrease the apparent effect of noise in your distribution.
Step10: Nice job! Do you see why the "uniform distribution" is referred to as such?
Next, let's take a look at the Gaussian distribution using histograms.
In the cell below, generate a vector of length 5000, called X, from the normal (Gaussian) distribution and plot a histogram with 50 bins.
HINT
Step11: Nice job! You just plotted a Gaussian distribution with mean of 0 and a standard deviation of 1.
As a reminder, this is considered the "standard" normal distribution, and it's not particularly interesting. We can transform the distribution given by np.random.randn (and make it more interesting!) using simple arithmetic.
Run the cell below to see. How is the code below different from the code you've already written?
Step12: Before moving onto the next section, vary the values of mu and sigma in the above code to see how your histogram changes. You should find that changing mu (the mean) affects the center of the distribution while changing sigma (the standard deviation) affects the width of the distribution.
Take a look at the histograms you have generated and compare them. Do the histograms of the uniform and normal (Gaussian) distributions look different? If so, how? Describe your observations in the cell below.
Step13: For simplicity's sake, we've used plt.hist without generating any return variables. Remember that plt.hist takes in your data (X) and the number of bins, and it makes histograms from it. In the process, plt.hist generates variables that you can store; we just haven't thus far. Run the cell below to see -- it should replot the Gaussian from above while also generating the output variables.
Step14: Something that might be useful to you is that you can make use of variables outputted by plt.hist -- particularly bins and N.
The bins array returned by plt.hist is longer (by one element) than the actual number of bins. Why? Because the bins array contains all the edges of the bins. For example, if you have 2 bins, you will have 3 edges. Does this make sense?
So you can generate these outputs, but what can you do with them? You can average consecutive elements from the bins output to get, in a sense, a location of the center of a bin. Let's call it bin_avg. Then you can plot the number of observations in that bin (N) against the bin location (bin_avg).
Step15: The plot above (red stars) should look like it overlays the histogram plot above it. If that's what you see, nice job! If not, let your instructor and/or TAs know before moving onto the next section.
C. Checking your distributions with statistics
If you ever want to check that your distributions are giving you what you expect, you can use numpy to calculate the mean and standard deviation of your distribution. Let's do this for X, our Gaussian distribution, and print the results.
Run the cell below. Are your mean and standard deviation what you expect them to be?
Step16: So you've learned how to generate distributions of numbers, plot them, and generate statistics on them. This is a great starting point, but let's try working with some real data!
D. Visualizing and understanding real data
Hope you're excited -- we're about to get our hands on some real data! Let's import a list of fluorescence lifetimes in nanoseconds from Nitrogen-Vacancy defects in diamond.
(While it is not at all necessary to understand the physics behind this, know that this is indeed real data! You can read more about it at http
Step17: Next, plot a histogram of this data set (play around with the number of bins, too).
Step18: Now, calculate and print the mean and standard deviation of this distribution.
Step19: Nice job! Now that you're used to working with real data, we're going to try to fit some more real data to known functions to gain a better understanding of that data.
E. Basic curve fitting
In this section, we're going to introduce you to the Python module known as scipy (short for Scientific Python).
scipy allows you to perform a range of functions such as numerical integration and optimization. In particular, it's useful for data analysis, which we shall see shortly. In particular, we will do curve fitting using curve_fit from scipy.optimize.
Curve fitting documentation
Step20: We will show you an example, and then you get to try it out for yourself!
We start by creating an equally-spaced numpy array x_vals consisting of 100 numbers from -5 to 5. Try it out yourself below.
Step21: Next, we will define a function $f(x) = \frac 1 3x^2+3$ that will square the elements in x and add an offset. Call this function f_scalar, and implement it (for scalar values) below.
Step22: We will create a new variable, y, that will call the function f_scalar with x_vals as the input. Note that we are using two separate variable names, x and x_vals, so we don't confuse them! This is good programming practice; you should try not to use the same variable names unless you are intentionally overriding something.
Step23: Now we will add some noise to the array y using the np.random.rand() function and store it in a new variable called y_noisy.
Important question
Step24: Let's see what the y_noisy values look like now
Step25: It seems like there's still a rough parabolic shape, so let's see if we can recover the original y values without any noise.
We can treat this y_noisy as data values that we want to fit with a parabolic funciton. To do this, we first need to define the general form of a quadratic function
Step26: Then, we want to find the optimal values of a, b, and c that will give a function that fits best to y_noisy.
We do this using the curve_fit function in the following way
Step27: Now that we have the fitted parameters, let's use quadratic to plot the fitted parabola alongside the noisy y values.
Step28: And we can also compare y_fitted to the original y values without any noise
Step29: Not a bad job for your first fit function!
F. More advanced curve fitting
In this section, you will visualize real data and plot a best-fit function to model the underlying physics.
You just used curve_fit above to fit simulated data to a linear function. Using that code as your guide, combined with the steps below, you will use curve_fit to fit your real data to a non-linear function that you define. This exercise will combine most of what you've learned so far!
Steps for using curve_fit
Here is the basic outline on how to use curve_fit. As this is the last section, you will mostly be on your own. Try your best with new skills you have learned here and feel free to ask for help!
1) Load in your x and y data. You will be using "photopeak.txt", which is in the folder Data.
HINT 1
Step30: So you've imported your data and plotted it. It should look similar to the figure below. Run the next cell to see.
Step31: What type of function would you say this is? Think back to the distributions we've learned about today. Any ideas?
Based on what you think, define your function below. | Python Code:
import numpy as np
Explanation: Review from the previous lecture
In yesterday's Lecture 2, you learned how to use the numpy module, how to make your own functions, and how to import and export data. Below is a quick review before we move on to Lecture 3.
Remember, to use the numpy module, first it must be imported:
End of explanation
np.linspace(0,10,11)
Explanation: You can do a lot with the numpy module. Below is an example to jog your memory:
End of explanation
#your code here
# start by defining the length of the array
arrayLength = 10
# make a numpy array of 10 zeros
# let's define the first element of the array
# make your while loop (don't forget to initialize i!) and print out the new array
Explanation: Do you remember loops? Let's use a while loop to make an array of 10 numbers. Let's have each element be increased by 2 compared with the previous element. Let's also have the first element of the array be 1.
End of explanation
import numpy as np
Explanation: There's your quick review of numpy and a while loop. Now we can move on to the content of Lecture 3.
Lecture 3 - Distributions, Histograms, and Curve Fitting
In the previous lecture, you learned how to import the module numpy and how to use many of its associated functions. As you've seen, numpy gives us the ability to generate arrays of numbers using commands such as np.linspace and others.
In addition to these commands, you can also use numpy to generate distributions of numbers. The two most frequently used distributions are the following:
the uniform distribution: np.random.rand
the normal (Gaussian) distribution: np.random.randn
(notice the "n" that distinguishes the functions for generating normal vs. uniform distributions)
A. Generating distributions
Let's start with the uniform distribution (rand), which gives numbers uniformly distributed over the interval [0,1).
If you haven't already, import the numpy module.
End of explanation
np.random.rand(5)
Explanation: Let's generate a numpy array of length 5 populated with uniformly distributed random numbers. The function np.random.rand takes the array output size as an argument (in this case, 5).
End of explanation
np.random.rand(5,5)
Explanation: Additionally, you are not limited to one-dimensional arrays! Let's make a 5x5, two-dimensional array:
End of explanation
np.random.randn(5)
Explanation: Great, so now you have a handle on generating uniform distributions. Let's quickly look at one more type of distribution.
The normal distribution (randn) selects numbers from a Gaussian curve, sometimes called a bell curve, centered at 0 with standard deviation 1 (so, unlike rand, its values are not confined to [0,1)).
The equation for a Gaussian curve is the following:
$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(x-\mu)^2}{2\sigma^2}}$
where $\mu$ is the mean and $\sigma$ is the standard deviation.
Don't worry about memorizing this equation, but do know that it exists and that numbers can be randomly drawn from it.
In python, the command np.random.randn selects numbers from the "standard" normal distribution.
All this means is that, in the equation above, $\mu$ (mean) = 0 and $\sigma$ (standard deviation) = 1. randn takes the size of the output as an argument just like rand does.
Try running the cell below to see the numbers you get from a normal distribution.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: So these numbers probably don't mean that much to you. Don't worry; they don't mean much to me either!
Instead of trying to derive meaning from a list of numbers, let's actually plot these outputs and see what they look like. This will allow us to determine whether or not these distributions actually look like what we are expecting. How do we do that? The answer is with histograms!
B. Plotting distributions
Histogram documentation: http://matplotlib.org/1.2.1/api/pyplot_api.html?highlight=hist#matplotlib.pyplot.hist
Understanding distributions is perhaps best done by plotting them in a histogram. Lucky for us, matplotlib makes that very simple for us.
To make a histogram, we use the command plt.hist, which takes -- at minimum -- a vector of values that we want to plot as a histogram. We can also specify the number of bins.
First things first: let's import matplotlib:
End of explanation
#your code here
Explanation: Now, let's plot a uniform distribution and take a look.
Use what you learned above to define your variable X as a uniformly distributed array with 5000 elements.
End of explanation
plt.hist(X, bins=20)
Explanation: Now, let's use plt.hist to see what X looks like. First, run the cell below. Then, vary bins -- doing so will either increase or decrease the apparent effect of noise in your distribution.
End of explanation
#your code here
Explanation: Nice job! Do you see why the "uniform distribution" is referred to as such?
Next, let's take a look at the Gaussian distribution using histograms.
In the cell below, generate a vector of length 5000, called X, from the normal (Gaussian) distribution and plot a histogram with 50 bins.
HINT: You will use a similar format as above when you defined and plotted a uniform distribution.
End of explanation
mu = 5 #the mean of the distribution
sigma = 3 #the standard deviation
X = sigma * np.random.randn(5000) + mu
plt.hist(X,bins=50)
Explanation: Nice job! You just plotted a Gaussian distribution with mean of 0 and a standard deviation of 1.
As a reminder, this is considered the "standard" normal distribution, and it's not particularly interesting. We can transform the distribution given by np.random.randn (and make it more interesting!) using simple arithmetic.
Run the cell below to see. How is the code below different from the code you've already written?
End of explanation
#write your observations here
Explanation: Before moving onto the next section, vary the values of mu and sigma in the above code to see how your histogram changes. You should find that changing mu (the mean) affects the center of the distribution while changing sigma (the standard deviation) affects the width of the distribution.
Take a look at the histograms you have generated and compare them. Do the histograms of the uniform and normal (Gaussian) distributions look different? If so, how? Describe your observations in the cell below.
End of explanation
N,bins,patches = plt.hist(X, bins=50)
Explanation: For simplicity's sake, we've used plt.hist without generating any return variables. Remember that plt.hist takes in your data (X) and the number of bins, and it makes histograms from it. In the process, plt.hist generates variables that you can store; we just haven't thus far. Run the cell below to see -- it should replot the Gaussian from above while also generating the output variables.
End of explanation
bin_avg = (bins[1:]+bins[:-1])/2
plt.plot(bin_avg, N, 'r*')
plt.show()
Explanation: Something that might be useful to you is that you can make use of variables outputted by plt.hist -- particularly bins and N.
The bins array returned by plt.hist is longer (by one element) than the actual number of bins. Why? Because the bins array contains all the edges of the bins. For example, if you have 2 bins, you will have 3 edges. Does this make sense?
So you can generate these outputs, but what can you do with them? You can average consecutive elements from the bins output to get, in a sense, a location of the center of a bin. Let's call it bin_avg. Then you can plot the number of observations in that bin (N) against the bin location (bin_avg).
End of explanation
mean = np.mean(X)
std = np.std(X)
print('mean: '+ repr(mean) )
print('standard deviation: ' + repr(std))
Explanation: The plot above (red stars) should look like it overlays the histogram plot above it. If that's what you see, nice job! If not, let your instructor and/or TAs know before moving onto the next section.
C. Checking your distributions with statistics
If you ever want to check that your distributions are giving you what you expect, you can use numpy to calculate the mean and standard deviation of your distribution. Let's do this for X, our Gaussian distribution, and print the results.
Run the cell below. Are your mean and standard deviation what you expect them to be?
End of explanation
lifetimes = np.loadtxt('Data/LifetimeData.txt')
Explanation: So you've learned how to generate distributions of numbers, plot them, and generate statistics on them. This is a great starting point, but let's try working with some real data!
D. Visualizing and understanding real data
Hope you're excited -- we're about to get our hands on some real data! Let's import a list of fluorescence lifetimes in nanoseconds from Nitrogen-Vacancy defects in diamond.
(While it is not at all necessary to understand the physics behind this, know that this is indeed real data! You can read more about it at http://www.nature.com/articles/ncomms11820 if you are so inclined. This data is from Fig. 6a).
Do you remember learning how to import data in yesterday's Lecture 2? The command you want to use is np.loadtxt. The data we'll be working with is called LifetimeData.txt, and it's located in the Data folder.
End of explanation
#your code here
Explanation: Next, plot a histogram of this data set (play around with the number of bins, too).
End of explanation
#your code here
Explanation: Now, calculate and print the mean and standard deviation of this distribution.
End of explanation
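# Editor's sketch of possible answers for the two "#your code here" cells above,
# using the lifetimes array loaded earlier (a hint, not the official solution).
plt.hist(lifetimes, bins=50)
print('mean: ' + repr(np.mean(lifetimes)))
print('standard deviation: ' + repr(np.std(lifetimes)))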
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Nice job! Now that you're used to working with real data, we're going to try to fit some more real data to known functions to gain a better understanding of that data.
E. Basic curve fitting
In this section, we're going to introduce you to the Python module known as scipy (short for Scientific Python).
scipy allows you to perform a range of functions such as numerical integration and optimization. In particular, it's useful for data analysis, which we shall see shortly. In particular, we will do curve fitting using curve_fit from scipy.optimize.
Curve fitting documentation: https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.curve_fit.html
In this section, you will learn how to use curve fitting on simulated data. Next will be real data!
First, let's load the modules.
End of explanation
#your code here
Explanation: We will show you an example, and then you get to try it out for yourself!
We start by creating an equally-spaced numpy array x_vals consisting of 100 numbers from -5 to 5. Try it out yourself below.
End of explanation
#your code here
Explanation: Next, we will define a function $f(x) = \frac 1 3x^2+3$ that will square the elements in x and add an offset. Call this function f_scalar, and implement it (for scalar values) below.
End of explanation
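# Editor's sketch of one possible answer for the two cells above (a hint, not the official solution).
x_vals = np.linspace(-5, 5, 100)

def f_scalar(x):
    # f(x) = (1/3) * x**2 + 3; works on scalars and, thanks to numpy, on arrays too.
    return x**2 / 3.0 + 3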
y = f_scalar(x_vals)
Explanation: We will create a new variable, y, that will call the function f_scalar with x_vals as the input. Note that we are using two separate variable names, x and x_vals, so we don't confuse them! This is good programming practice; you should try not to use the same variable names unless you are intentionally overriding something.
End of explanation
#your code here
Explanation: Now we will add some noise to the array y using the np.random.rand() function and store it in a new variable called y_noisy.
Important question: What value for the array size should we pass into this function?
End of explanation
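# Editor's hint: the noise array must have the same length as y (100 elements here), for example
# y_noisy = y + np.random.rand(len(x_vals))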
plt.plot(x_vals,y_noisy)
Explanation: Let's see what the y_noisy values look like now
End of explanation
def quadratic(x,a,b,c):
return a*x**2 + b*x + c
Explanation: It seems like there's still a rough parabolic shape, so let's see if we can recover the original y values without any noise.
We can treat this y_noisy as data values that we want to fit with a parabolic function. To do this, we first need to define the general form of a quadratic function:
End of explanation
optimal_values, _ = curve_fit(quadratic,x_vals,y_noisy)
a = optimal_values[0]
b = optimal_values[1]
c = optimal_values[2]
print(a, b, c)
Explanation: Then, we want to find the optimal values of a, b, and c that will give a function that fits best to y_noisy.
We do this using the curve_fit function in the following way:
curve_fit(f,xdata,ydata)
where f is the model we're fitting to (quadratic in this case).
This function will return the optimal values for a, b, and c in a list. Try it out!
End of explanation
y_fitted = quadratic(x_vals,a,b,c)
plt.plot(x_vals,y_fitted)
plt.plot(x_vals,y_noisy)
Explanation: Now that we have the fitted parameters, let's use quadratic to plot the fitted parabola alongside the noisy y values.
End of explanation
plt.plot(x_vals,y_fitted)
plt.plot(x_vals,y)
Explanation: And we can also compare y_fitted to the original y values without any noise:
End of explanation
# let's get you started by importing the right libraries
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
%matplotlib inline
# Step 1: Import the data
xData, yData = np.loadtxt('Data/photopeak.txt', usecols=(0,1), unpack=True)
print(xData,yData)
# Step 2: Plot the data to see what it looks like
plt.plot(xData,yData,'*')
Explanation: Not a bad job for your first fit function!
F. More advanced curve fitting
In this section, you will visualize real data and plot a best-fit function to model the underlying physics.
You just used curve_fit above to fit simulated data to a quadratic function. Using that code as your guide, combined with the steps below, you will use curve_fit to fit your real data to a non-linear function that you define. This exercise will combine most of what you've learned so far!
Steps for using curve_fit
Here is the basic outline on how to use curve_fit. As this is the last section, you will mostly be on your own. Try your best with new skills you have learned here and feel free to ask for help!
1) Load in your x and y data. You will be using "photopeak.txt", which is in the folder Data.
HINT 1: When you load your data, I recommend making use of the usecols and unpack arguments.
HINT 2: Make sure the arrays are the same length!
2) Plot this data to see what it looks like. Determine the function your data most resembles.
3) Define the function to which your data will be fit.
4) PART A: Use curve_fit and point the output to popt and pcov. These are the fitted parameters (popt) and their estimated errors (pcov).
4) PART B - OPTIONAL (only do this if you get through all the other steps): Input a guess (p0) and bounds (bounds) into curve_fit. For p0, I would suggest [0.5, 0.1, 5].
5) Pass the popt parameters into the function you've defined to create the model fit.
6) Plot your data and your fitted function.
7) Pat yourself on the back!
End of explanation
from IPython.display import display, Image
display(Image(filename='Data/photopeak.png'))
Explanation: So you've imported your data and plotted it. It should look similar to the figure below. Run the next cell to see.
End of explanation
# Step 3: Define your function here
# Step 3.5: SANITY CHECK! Use this step as a way to check that the function you defined above is mathematically correct.
# Step 4: Use curve_fit to generate your output parameters
# Step 5: Generate your model fit
# Step 6: Plot the best fit function and the scatter plot of data
Explanation: What type of function would you say this is? Think back to the distributions we've learned about today. Any ideas?
Based on what you think, define your function below.
End of explanation |
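If you decide the photopeak looks like a Gaussian (a bell-shaped peak), one possible sketch of steps 3 through 6 is shown below. The Gaussian model and its parameter names are assumptions made for illustration; they are not given in the original exercise.
def gaussian(x, a, x0, sigma):
    # a = peak height, x0 = peak position, sigma = peak width
    return a * np.exp(-(x - x0)**2 / (2 * sigma**2))
popt, pcov = curve_fit(gaussian, xData, yData, p0=[0.5, 0.1, 5])
yFit = gaussian(xData, *popt)
plt.plot(xData, yData, '*')
plt.plot(xData, yFit)
plt.show()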
7,167 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License")
Step1: Retrain a classification model for Edge TPU using post-training quantization (with TF2)
In this tutorial, we'll use TensorFlow 2 to create an image classification model, train it with a flowers dataset, and convert it to TensorFlow Lite using post-training quantization. Finally, we compile it for compatibility with the Edge TPU (available in Coral devices).
The model is based on a pre-trained version of MobileNet V2. We'll start by retraining only the classification layers, reusing MobileNet's pre-trained feature extractor layers. Then we'll fine-tune the model by updating weights in some of the feature extractor layers. This type of transfer learning is much faster than training the entire model from scratch.
Once it's trained, we'll use post-training quantization to convert all parameters to int8 format, which reduces the model size and increases inferencing speed. This format is also required for compatibility on the Edge TPU.
For more information about how to create a model compatible with the Edge TPU, see the documentation at coral.ai.
Note
Step2: Prepare the training data
First let's download and organize the flowers dataset we'll use to retrain the model (it contains 5 flower classes).
Pay attention to this part so you can reproduce it with your own images dataset. In particular, notice that the "flower_photos" directory contains an appropriately-named directory for each class. The following cells then split the photos into training and validation sets (via a validation split) and generate a labels file based on the photo folder names.
Step3: Next, we use ImageDataGenerator to rescale the image data into float values (divide by 255 so the tensor values are between 0 and 1), and call flow_from_directory() to create two generators
Step4: On each iteration, these generators provide a batch of images by reading images from disk and processing them to the proper tensor size (224 x 224). The output is a tuple of (images, labels). For example, you can see the shapes here
Step5: Now save the class labels to a text file
Step6: Build the model
Now we'll create a model that's capable of transfer learning on just the last fully-connected layer.
We'll start with MobileNet V2 from Keras as the base model, which is pre-trained with the ImageNet dataset (trained to recognize 1,000 classes). This provides us a great feature extractor for image classification and we can then train a new classification layer with our flowers dataset.
Create the base model
When instantiating the MobileNetV2 model, we specify the include_top=False argument in order to load the network without the classification layers at the top. Then we set trainable false to freeze all the weights in the base model. This effectively converts the model into a feature extractor because all the pre-trained weights and biases are preserved in the lower layers when we begin training for our classification head.
Step7: Add a classification head
Now we create a new Sequential model and pass the frozen MobileNet model as the base of the graph, and append new classification layers so we can set the final output dimension to match the number of classes in our dataset (5 types of flowers).
Step8: Configure the model
Although this method is called compile(), it's basically a configuration step that's required before we can start training.
Step9: You can see a string summary of the final network with the summary() method
Step10: And because the majority of the model graph is frozen in the base model, weights from only the last convolution and dense layers are trainable
Step11: Train the model
Now we can train the model using data provided by the train_generator and val_generator that we created at the beginning.
This should take less than 10 minutes.
Step12: Review the learning curves
Step13: Fine tune the base model
So far, we've only trained the classification layers—the weights of the pre-trained network were not changed.
One way we can increase the accuracy is to train (or "fine-tune") more layers from the pre-trained model. That is, we'll un-freeze some layers from the base model and adjust those weights (which were originally trained with 1,000 ImageNet classes) so they're better tuned for features found in our flowers dataset.
Un-freeze more layers
So instead of freezing the entire base model, we'll freeze individual layers.
First, let's see how many layers are in the base model
Step14: Let's try freezing just the bottom 100 layers.
Step15: Reconfigure the model
Now configure the model again, but this time with a lower learning rate (the default is 0.001).
Step16: Continue training
Now let's fine-tune all trainable layers. This starts with the weights we already trained in the classification layers, so we don't need as many epochs.
Step17: Review the new learning curves
Step18: This is better, but it's not ideal.
The validation loss is still higher than the training loss, so there could be some overfitting during training. The overfitting might also be because the new training set is relatively small with less intra-class variance, compared to the original ImageNet dataset used to train MobileNet V2.
So this model isn't trained to an accuracy that's production ready, but it works well enough as a demonstration.
Let's move on and convert the model to TensorFlow Lite.
Convert to TFLite
Ordinarily, creating a TensorFlow Lite model is just a few lines of code with TFLiteConverter. For example, this creates a basic (un-quantized) TensorFlow Lite model
Step19: However, this .tflite file still uses floating-point values for the parameter data, and we need to fully quantize the model to int8 format.
To fully quantize the model, we need to perform post-training quantization with a representative dataset, which requires a few more arguments for the TFLiteConverter, and a function that builds a dataset that's representative of the training dataset.
So let's convert the model again with post-training quantization
Step20: Compare the accuracy
So now we have a fully quantized TensorFlow Lite model. To be sure the conversion went well, let's evaluate both the raw model and the TensorFlow Lite model.
First check the accuracy of the raw model
Step21: Now let's check the accuracy of the .tflite file, using the same dataset.
However, there's no convenient API to evaluate the accuracy of a TensorFlow Lite model, so this code runs several inferences and compares the predictions against ground truth
Step22: You might see some, but hopefully not very much accuracy drop between the raw model and the TensorFlow Lite model. But again, these results are not suitable for production deployment.
Compile for the Edge TPU
Finally, we're ready to compile the model for the Edge TPU.
First download the Edge TPU Compiler
Step23: Then compile the model
Step24: That's it.
The compiled model uses the same filename but with "_edgetpu" appended at the end.
Download the model
You can download the converted model and labels file from Colab like this | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License")
End of explanation
import tensorflow as tf
assert float(tf.__version__[:3]) >= 2.3
import os
import numpy as np
import matplotlib.pyplot as plt
Explanation: Retrain a classification model for Edge TPU using post-training quantization (with TF2)
In this tutorial, we'll use TensorFlow 2 to create an image classification model, train it with a flowers dataset, and convert it to TensorFlow Lite using post-training quantization. Finally, we compile it for compatibility with the Edge TPU (available in Coral devices).
The model is based on a pre-trained version of MobileNet V2. We'll start by retraining only the classification layers, reusing MobileNet's pre-trained feature extractor layers. Then we'll fine-tune the model by updating weights in some of the feature extractor layers. This type of transfer learning is much faster than training the entire model from scratch.
Once it's trained, we'll use post-training quantization to convert all parameters to int8 format, which reduces the model size and increases inferencing speed. This format is also required for compatibility on the Edge TPU.
For more information about how to create a model compatible with the Edge TPU, see the documentation at coral.ai.
Note: This tutorial requires TensorFlow 2.3+ for full quantization, which currently does not work for all types of models. In particular, this tutorial expects a Keras-built model and this conversion strategy currently doesn't work with models imported from a frozen graph. (If you're using TF 1.x, see the 1.x version of this tutorial.)
<a href="https://colab.research.google.com/github/google-coral/tutorials/blob/master/retrain_classification_ptq_tf2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"></a>
<a href="https://github.com/google-coral/tutorials/blob/master/retrain_classification_ptq_tf2.ipynb" target="_parent"><img src="https://img.shields.io/static/v1?logo=GitHub&label=&color=333333&style=flat&message=View%20on%20GitHub" alt="View in GitHub"></a>
To start running all the code in this tutorial, select Runtime > Run all in the Colab toolbar.
Import the required libraries
In order to quantize both the input and output tensors, we need TFLiteConverter APIs that are available in TensorFlow r2.3 or higher:
End of explanation
_URL = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
zip_file = tf.keras.utils.get_file(origin=_URL,
fname="flower_photos.tgz",
extract=True)
flowers_dir = os.path.join(os.path.dirname(zip_file), 'flower_photos')
Explanation: Prepare the training data
First let's download and organize the flowers dataset we'll use to retrain the model (it contains 5 flower classes).
Pay attention to this part so you can reproduce it with your own images dataset. In particular, notice that the "flower_photos" directory contains an appropriately-named directory for each class. The following cells then split the photos into training and validation sets (via a validation split) and generate a labels file based on the photo folder names.
End of explanation
IMAGE_SIZE = 224
BATCH_SIZE = 64
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rescale=1./255,
validation_split=0.2)
train_generator = datagen.flow_from_directory(
flowers_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
subset='training')
val_generator = datagen.flow_from_directory(
flowers_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
subset='validation')
Explanation: Next, we use ImageDataGenerator to rescale the image data into float values (divide by 255 so the tensor values are between 0 and 1), and call flow_from_directory() to create two generators: one for the training dataset and one for the validation dataset.
End of explanation
image_batch, label_batch = next(val_generator)
image_batch.shape, label_batch.shape
Explanation: On each iteration, these generators provide a batch of images by reading images from disk and processing them to the proper tensor size (224 x 224). The output is a tuple of (images, labels). For example, you can see the shapes here:
End of explanation
print (train_generator.class_indices)
labels = '\n'.join(sorted(train_generator.class_indices.keys()))
with open('flower_labels.txt', 'w') as f:
f.write(labels)
!cat flower_labels.txt
Explanation: Now save the class labels to a text file:
End of explanation
IMG_SHAPE = (IMAGE_SIZE, IMAGE_SIZE, 3)
# Create the base model from the pre-trained MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
base_model.trainable = False
Explanation: Build the model
Now we'll create a model that's capable of transfer learning on just the last fully-connected layer.
We'll start with MobileNet V2 from Keras as the base model, which is pre-trained with the ImageNet dataset (trained to recognize 1,000 classes). This provides us a great feature extractor for image classification and we can then train a new classification layer with our flowers dataset.
Create the base model
When instantiating the MobileNetV2 model, we specify the include_top=False argument in order to load the network without the classification layers at the top. Then we set trainable false to freeze all the weights in the base model. This effectively converts the model into a feature extractor because all the pre-trained weights and biases are preserved in the lower layers when we begin training for our classification head.
End of explanation
model = tf.keras.Sequential([
base_model,
tf.keras.layers.Conv2D(filters=32, kernel_size=3, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(units=5, activation='softmax')
])
Explanation: Add a classification head
Now we create a new Sequential model and pass the frozen MobileNet model as the base of the graph, and append new classification layers so we can set the final output dimension to match the number of classes in our dataset (5 types of flowers).
End of explanation
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
Explanation: Configure the model
Although this method is called compile(), it's basically a configuration step that's required before we can start training.
End of explanation
model.summary()
Explanation: You can see a string summary of the final network with the summary() method:
End of explanation
print('Number of trainable weights = {}'.format(len(model.trainable_weights)))
Explanation: And because the majority of the model graph is frozen in the base model, weights from only the last convolution and dense layers are trainable:
End of explanation
history = model.fit(train_generator,
steps_per_epoch=len(train_generator),
epochs=10,
validation_data=val_generator,
validation_steps=len(val_generator))
Explanation: Train the model
Now we can train the model using data provided by the train_generator and val_generator that we created at the beginning.
This should take less than 10 minutes.
End of explanation
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
Explanation: Review the learning curves
End of explanation
print("Number of layers in the base model: ", len(base_model.layers))
Explanation: Fine tune the base model
So far, we've only trained the classification layers—the weights of the pre-trained network were not changed.
One way we can increase the accuracy is to train (or "fine-tune") more layers from the pre-trained model. That is, we'll un-freeze some layers from the base model and adjust those weights (which were originally trained with 1,000 ImageNet classes) so they're better tuned for features found in our flowers dataset.
Un-freeze more layers
So instead of freezing the entire base model, we'll freeze individual layers.
First, let's see how many layers are in the base model:
End of explanation
base_model.trainable = True
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
Explanation: Let's try freezing just the bottom 100 layers.
End of explanation
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
print('Number of trainable weights = {}'.format(len(model.trainable_weights)))
Explanation: Reconfigure the model
Now configure the model again, but this time with a lower learning rate (the default is 0.001).
End of explanation
history_fine = model.fit(train_generator,
steps_per_epoch=len(train_generator),
epochs=5,
validation_data=val_generator,
validation_steps=len(val_generator))
Explanation: Continue training
Now let's fine-tune all trainable layers. This starts with the weights we already trained in the classification layers, so we don't need as many epochs.
End of explanation
acc = history_fine.history['accuracy']
val_acc = history_fine.history['val_accuracy']
loss = history_fine.history['loss']
val_loss = history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,1.0])
plt.title('Training and Validation Loss')
plt.xlabel('epoch')
plt.show()
Explanation: Review the new learning curves
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('mobilenet_v2_1.0_224.tflite', 'wb') as f:
f.write(tflite_model)
Explanation: This is better, but it's not ideal.
The validation loss is still higher than the training loss, so there could be some overfitting during training. The overfitting might also be because the new training set is relatively small with less intra-class variance, compared to the original ImageNet dataset used to train MobileNet V2.
So this model isn't trained to an accuracy that's production ready, but it works well enough as a demonstration.
Let's move on and convert the model to TensorFlow Lite.
Convert to TFLite
Ordinarily, creating a TensorFlow Lite model is just a few lines of code with TFLiteConverter. For example, this creates a basic (un-quantized) TensorFlow Lite model:
End of explanation
# A generator that provides a representative dataset
def representative_data_gen():
dataset_list = tf.data.Dataset.list_files(flowers_dir + '/*/*')
for i in range(100):
image = next(iter(dataset_list))
image = tf.io.read_file(image)
image = tf.io.decode_jpeg(image, channels=3)
image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
image = tf.cast(image / 255., tf.float32)
image = tf.expand_dims(image, 0)
yield [image]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# This enables quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# This sets the representative dataset for quantization
converter.representative_dataset = representative_data_gen
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# For full integer quantization, supported_types defaults to int8 only, but we declare it explicitly for clarity.
converter.target_spec.supported_types = [tf.int8]
# These set the input and output tensors to uint8 (added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
with open('mobilenet_v2_1.0_224_quant.tflite', 'wb') as f:
f.write(tflite_model)
Explanation: However, this .tflite file still uses floating-point values for the parameter data, and we need to fully quantize the model to int8 format.
To fully quantize the model, we need to perform post-training quantization with a representative dataset, which requires a few more arguments for the TFLiteConverter, and a function that builds a dataset that's representative of the training dataset.
So let's convert the model again with post-training quantization:
End of explanation
batch_images, batch_labels = next(val_generator)
logits = model(batch_images)
prediction = np.argmax(logits, axis=1)
truth = np.argmax(batch_labels, axis=1)
keras_accuracy = tf.keras.metrics.Accuracy()
keras_accuracy(prediction, truth)
print("Raw model accuracy: {:.3%}".format(keras_accuracy.result()))
Explanation: Compare the accuracy
So now we have a fully quantized TensorFlow Lite model. To be sure the conversion went well, let's evaluate both the raw model and the TensorFlow Lite model.
First check the accuracy of the raw model:
End of explanation
def set_input_tensor(interpreter, input):
input_details = interpreter.get_input_details()[0]
tensor_index = input_details['index']
input_tensor = interpreter.tensor(tensor_index)()[0]
# Inputs for the TFLite model must be uint8, so we quantize our input data.
# NOTE: This step is necessary only because we're receiving input data from
# ImageDataGenerator, which rescaled all image data to float [0,1]. When using
# bitmap inputs, they're already uint8 [0,255] so this can be replaced with:
# input_tensor[:, :] = input
scale, zero_point = input_details['quantization']
input_tensor[:, :] = np.uint8(input / scale + zero_point)
def classify_image(interpreter, input):
set_input_tensor(interpreter, input)
interpreter.invoke()
output_details = interpreter.get_output_details()[0]
output = interpreter.get_tensor(output_details['index'])
# Outputs from the TFLite model are uint8, so we dequantize the results:
scale, zero_point = output_details['quantization']
output = scale * (output - zero_point)
top_1 = np.argmax(output)
return top_1
interpreter = tf.lite.Interpreter('mobilenet_v2_1.0_224_quant.tflite')
interpreter.allocate_tensors()
# Collect all inference predictions in a list
batch_prediction = []
batch_truth = np.argmax(batch_labels, axis=1)
for i in range(len(batch_images)):
prediction = classify_image(interpreter, batch_images[i])
batch_prediction.append(prediction)
# Compare all predictions to the ground truth
tflite_accuracy = tf.keras.metrics.Accuracy()
tflite_accuracy(batch_prediction, batch_truth)
print("Quant TF Lite accuracy: {:.3%}".format(tflite_accuracy.result()))
Explanation: Now let's check the accuracy of the .tflite file, using the same dataset.
However, there's no convenient API to evaluate the accuracy of a TensorFlow Lite model, so this code runs several inferences and compares the predictions against ground truth:
End of explanation
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
Explanation: You might see some, but hopefully not very much accuracy drop between the raw model and the TensorFlow Lite model. But again, these results are not suitable for production deployment.
Compile for the Edge TPU
Finally, we're ready to compile the model for the Edge TPU.
First download the Edge TPU Compiler:
End of explanation
! edgetpu_compiler mobilenet_v2_1.0_224_quant.tflite
Explanation: Then compile the model:
End of explanation
from google.colab import files
files.download('mobilenet_v2_1.0_224_quant_edgetpu.tflite')
files.download('flower_labels.txt')
Explanation: That's it.
The compiled model uses the same filename but with "_edgetpu" appended at the end.
Download the model
You can download the converted model and labels file from Colab like this:
End of explanation |
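As a rough sketch of what on-device inference with the compiled model could look like, the snippet below uses the tflite_runtime interpreter with the Edge TPU delegate. It is illustrative only: it assumes a Coral device with the Edge TPU runtime installed and is not meant to run in this Colab.
# Illustrative sketch; run on the Coral device, not in Colab.
import tflite_runtime.interpreter as tflite
interpreter = tflite.Interpreter(
    model_path='mobilenet_v2_1.0_224_quant_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()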
7,168 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Python basics
This chapter only gives a short introduction to Python to make the explanations in the following chapters more understandable. A detailed description would be too extensive and would go beyond the scope of this tutorial. Take a look at https
Step1: This is the easiest way to use print. In order to produce a prettier output of the variable contents format specifications can be used. But we will come to this later.
1.2 Data types
Numeric
integer
float
complex
Boolean
True or False
Text
string
and many others
We use the built-in function <font color='blue'><b>type</b></font> to retrieve the type of a variable.
Example
Step2: Let's see what type the variable really has. You can use the function type as argument to the function print.
Step3: Change the value of variable x to a floating point value of 5.0.
Step4: Get the type of the changed variable x.
Step5: Define variables of different types.
Step6: 1.3 Lists
A list is a compound data type, used to group different values which can have different data types. Lists are written as a list of comma-separated values (items) between square brackets.
Step7: To select single or multiple elements of a list you can use indexing. A negative value takes the element from the end of the list.
Step8: To select a subset of a list you can use indices, slicing, [start_index<font color='red'><b>
Step9: What will be returned when doing the following?
Step10: The slicing with [
Step11: Well, how do we do an insertion of an element right after the first element?
Step12: If you want to add more than one element to a list use extend.
Step13: If you decide to remove an element use remove.
Step14: With pop you can remove an element, too. Remove the last element of the list.
Step15: Remove an element by its index.
Step16: Use reverse to - yupp - reverse the list.
Step17: 1.4 Tuples
A tuple is like a list, but it's unchangeable (it's also called immutable). Once a tuple is created, you cannot change its values. To change a tuple you have to convert it to a list, change the content, and convert it back to a tuple.
Define the variable tup as tuple.
Step18: Sometimes it is increasingly necessary to make multiple variable assignments which is very tedious. But it is very easy with the tuple value packaging method. Here are some examples how to use tuples.
Standard definition of variable of type integer.
Step19: Tuple packaging
Step20: You can use tuple packaging to assign the values to a single variable, too.
Step21: Tuple packaging makes an exchange of the content of variables much easier.
Step22: Ok, now we've learned a lot about tuples, but not all. There is a very helpful way to unpack a tuple.
Unpacking example with a tuple of integers.
Step23: Unpacking example with a tuple of strings.
Step24: 1.5 Computations
To do computations you can use the algebraic operators on numeric values and lists.
Step25: The built-in functions max(), min(), and sum() for instance can be used to do computations for us.
Step26: To do computations with lists is not that simple.
Multiply the content of the list values by 10.
Step27: Yeah, that is not what you have expected, isn't it?
To multiply a list by a value means to repeat the list 10-times to the new list. We have to go through the list and multiply each single element by 10. There is a long and a short way to do it.
The long way
Step28: The more efficient way is to use Python's list comprehension
Step29: To notice would be the inplace operators += and *=.
Step30: 1.6 Statements
Like other programming languages, Python uses similar flow control statements.
1.6.1 if statement
The most used statement is the if statement which allows us to control if a condition is True or False. It can contain optional parts like elif and else.
Step31: 1.6.2 while statement
The lines in a while loop are executed repeatedly until the condition becomes False.
Step32: 1.6.3 for statement
The use of the for statement differs to other programming languages because it iterates over the items of any sequence, e.g. a list or a string, in the order that they appear in the sequence.
Step33: 1.7 Import Python modules
Usually you need to load some additional Python packages, so called modules, in your program in order to use their functionality. This can be done with the command import, whose usage may look different.
python
import module_name
import module_name as short_name
from module_name import module_part
1.7.1 Module os
We start with a simple example. To get access to the operating system outside our program we have to import the module os.
Step34: Take a look at the module.
Step35: Ok, let's see in which directory we are.
Step36: Go to the parent directory and let us see where we are then.
Step37: Go back to the directory where we started (that's why we wrote the name of the directory to the variable pwd ;)).
Step38: To retrieve the content of an environment variable the module os provides os.environment.get function.
Step39: Concatenate path names with os.path.join.
Step40: Now, we want to see if the directory really exist.
Step41: Modify the datadir variable, run the cells and see what happens.
But how do we check whether a file exists? Well, there is a function os.path.isfile, who would have thought!
Step42: Add a cell and play with the os functions in your environment.
1.7.2 Module glob
In the last case we already know the name of the file we are looking for but in most cases we don't know what is in a directory.
To get the file names from a directory the glob module is very helpful.
For example, after importing the glob module, the glob function of glob (weird, isn't it?) will return a list of all netCDF files in the subdirectory data.
Step43: Now, we can select a file, for instance the second one, of fnames.
Step44: But how can we get rid of the leading path? And yes, the os module can help us again with its path.basename function.
Step45: 1.7.2 Module sys
The module sys provides access to system variables and functions. The Module includes functions to read from stdin, write to stdout and stderr, and others.
Here we will give a closer look into the part sys.path of the module, which among other things allows us to extend the search path for loaded modules.
In the subdirectory lib is a file containing user defined python functions called dkrz_utils.py. To load the file like a module we have to extend the system path before calling any function from it. | Python Code:
print('Hello World')
Explanation: 1. Python basics
This chapter only gives a short introduction to Python to make the explanations in the following chapters more understandable. A detailed description would be too extensive and would go beyond the scope of this tutorial. Take a look at https://docs.python.org/tutorial/.
Now let's take our first steps into the Python world.
1.1 Functions
You can see a function as a small subprogram that does a special job. Depending on what should be done, the code is executed and/or something will be returned to the calling part of the script. There is a set of built-in functions available but you can create your own functions, too.
Each function name in Python 3.x is followed by parentheses () and any input parameter or arguments are placed within it. Functions can perform calculations or actions and return None, a value or multiple values.
For example
function_name()
function_name(parameter1, parameter2,...)
ret = function_name(variable, axis=0)
1.1.1 Print
In our notebooks we use the function <font color='blue'><b>print</b></font> to print contents when running the cells. This gives us the possibility to export the notebooks as Python scripts, which can be run directly in a terminal.
Let's print the string <font color='red'>Hello World</font>. A string can be written in different ways, enclosed in single or double quotes.
End of explanation
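The cell above only calls a built-in function; for completeness, defining your own function is just as easy. A tiny illustrative example (not part of the original notebook):
def add(a, b):
    # return the sum of the two arguments
    return a + b
print(add(2, 3))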
x = 5
print(x)
Explanation: This is the easiest way to use print. In order to produce a prettier output of the variable contents format specifications can be used. But we will come to this later.
1.2 Data types
Numeric
integer
float
complex
Boolean
True or False
Text
string
and many others
We use the built-in function <font color='blue'><b>type</b></font> to retrieve the type of a variable.
Example: Define a variable x with value 5 and print the content of x.
End of explanation
print(type(x))
Explanation: Let's see what type the variable really has. You can use the function type as argument to the function print.
End of explanation
x = 5.0
print(x)
Explanation: Change the value of variable x to a floating point value of 5.0.
End of explanation
print(type(x))
Explanation: Get the type of the changed variable x.
End of explanation
x = 1
y = 7.3
is_red = False
title = 'Just a string'
print(type(x), type(y), type(is_red), type(title))
Explanation: Define variables of different types.
End of explanation
names = ['Hugo', 'Charles','Janine']
ages = [72, 33, 16]
print(type(names), type(ages))
print(names)
print(ages)
Explanation: 1.3 Lists
A list is a compound data type, used to group different values which can have different data types. Lists are written as a list of comma-separated values (items) between square brackets.
End of explanation
first_name = names[0]
last_name = names[-1]
print('First name: %-10s' % first_name)
print('Last name: %-10s' % last_name)
print(type(names[0]))
Explanation: To select single or multiple elements of a list you can use indexing. A negative value takes the element from the end of the list.
End of explanation
print(names[0:2])
Explanation: To select a subset of a list you can use indices, slicing, [start_index<font color='red'><b>:</b></font>end_index[<font color='red'><b>:</b></font>step]], where the selected part of the list includes the element at start_index and all following elements up to, but not including, the element at end_index.
The next example will return the first two elements (index 0 and 1) and not the last element.
End of explanation
print(names[0:3:2])
print(names[1:2])
print(names[1:3])
print(names[::-1])
Explanation: What will be returned when doing the following?
End of explanation
names_ln = names
names_cp = names[:]
names[0] = 'Ben'
print(names_ln)
print(names_cp)
names.append('Paul')
print(names)
names += ['Liz']
print(names)
Explanation: The slicing with [::-1] reverses the order of the list.
Using only the colon without any indices for slicing means to create a shallow copy of the list. Working with the new list will not affect the original list.
End of explanation
names.insert(1,'Merle')
print(names)
Explanation: Well, how do we do an insertion of an element right after the first element?
End of explanation
names.extend(['Sophie','Sebastian','James'])
print(names)
Explanation: If you want to add more than one element to a list use extend.
End of explanation
names.remove('Janine')
print(names)
Explanation: If you decide to remove an element use remove.
End of explanation
names.pop()
print(names)
Explanation: With pop you can remove an element, too. Remove the last element of the list.
End of explanation
names.pop(2)
print(names)
Explanation: Remove an element by its index.
End of explanation
names.reverse()
print(names)
Explanation: Use reverse to - yupp - reverse the list.
End of explanation
tup = (0, 1, 1, 5, 3, 8, 5)
print(type(tup))
Explanation: 1.4 Tuples
A tuple is like a list, but it's unchangeable (it's also called immutable). Once a tuple is created, you cannot change its values. To change a tuple you have to convert it to a list, change the content, and convert it back to a tuple.
Define the variable tup as tuple.
End of explanation
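A small sketch of the list round-trip mentioned above for changing a tuple (illustrative, not part of the original notebook):
tmp = list(tup)      # convert the tuple to a list
tmp[0] = 99          # change the content
tup = tuple(tmp)     # convert it back to a tuple
print(tup)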
td = 15
tm = 12
ty = 2018
print(td,tm,ty)
Explanation: Sometimes it is necessary to make multiple variable assignments, which can be tedious. The tuple packaging method makes this very easy. Here are some examples of how to use tuples.
Standard definition of variable of type integer.
End of explanation
td,tm,ty = 15,12,2018
print(td,tm,ty)
print(type(td))
Explanation: Tuple packaging
End of explanation
date = 31,12,2018
print(date)
print(type(date))
(day, month, year) = date
print(year, month, day)
Explanation: You can use tuple packaging to assign the values to a single variable, too.
End of explanation
x,y = 47,11
x,y = y,x
print(x,y)
Explanation: Tuple packaging makes an exchange of the content of variables much easier.
End of explanation
tup = (123,34,79,133)
X,*Y = tup
print(X)
print(Y)
X,*Y,Z = tup
print(X)
print(Y)
print(Z)
X,Y,*Z = tup
print(X)
print(Y)
print(Z)
Explanation: Ok, now we've learned a lot about tuples, but not all. There is a very helpful way to unpack a tuple.
Unpacking example with a tuple of integers.
End of explanation
Name = 'Elizabeth'
A,*B,C = Name
print(A)
print(B)
print(C)
A,B,*C = Name
print(A)
print(B)
print(C)
Explanation: Unpacking example with a tuple of strings.
End of explanation
m = 12
d = 8.1
s = m + d
print(s)
print(type(s))
Explanation: 1.5 Computations
To do computations you can use the algebraic operators on numeric values and lists.
End of explanation
data = [12.2, 16.7, 22.0, 9.3, 13.1, 18.1, 15.0, 6.8]
data_min = min(data)
data_max = max(data)
data_sum = sum(data)
print('Minimum %6.1f' % data_min)
print('Maximum %6.1f' % data_max)
print('Sum %6.1f' % data_sum)
Explanation: The built-in functions max(), min(), and sum() for instance can be used to do computations for us.
End of explanation
values = [1,2,3,4,5,6,7,8,9,10]
values10 = values*10
print(values10)
Explanation: Doing computations with lists is not that simple.
Multiply the content of the list values by 10.
End of explanation
values10 = values[:]
for i in range(0,len(values)):
values10[i] = values[i]*10
print(values10)
Explanation: Yeah, that is not what you expected, is it?
Multiplying a list by an integer repeats the list that many times rather than multiplying its elements. To multiply each value by 10 we have to go through the list and multiply each single element by 10. There is a long and a short way to do it.
The long way:
End of explanation
values10 = [i * 10 for i in values]
print(values10)
# just to be sure that the original values list is not overwritten.
print(values)
Explanation: The more efficient way is to use Python's list comprehension:
End of explanation
ix = 1
print(ix)
ix += 3 # same as x = x + 3
print(ix)
ix *= 2 # same as x = x * 2
print(ix)
Explanation: Also worth noting are the in-place operators += and *=.
End of explanation
x = 0
if(x>0):
print('x is greater than 0')
elif(x<0):
print('x is less than 0')
elif(x==0):
print('x is equal 0')
user = 'George'
if(user):
print('user is set')
if(user=='Dennis'):
print('--> it is Dennis')
else:
print('--> but it is not Dennis')
Explanation: 1.6 Statements
Like other programming languages, Python uses similar flow control statements.
1.6.1 if statement
The most used statement is the if statement, which allows us to check whether a condition is True or False. It can contain optional parts like elif and else.
End of explanation
a = 0
b = 10
while(a < b):
print('a =',a)
a = a + 1
Explanation: 1.6.2 while statement
The lines in a while loop are executed repeatedly until the condition becomes False.
End of explanation
s = 0
for x in [1,2,3,4]:
s = s + x
print('sum = ', s)
# Now, let us find the shortest name of the list names.
# Oh, by the way this is a comment line :), which will not be executed.
index = -99
length = 50
i = 0
for name in names:
if(len(name)<length):
length = len(name)
index = i
i+=1
print('--> shortest name in list names is', names[index])
Explanation: 1.6.3 for statement
The use of the for statement differs from other programming languages because it iterates over the items of any sequence, e.g. a list or a string, in the order that they appear in the sequence.
End of explanation
import os
Explanation: 1.7 Import Python modules
Usually you need to load some additional Python packages, so called modules, in your program in order to use their functionality. This can be done with the command import, whose usage may look different.
python
import module_name
import module_name as short_name
from module_name import module_part
1.7.1 Module os
We start with a simple example. To get access to the operating system outside our program we have to import the module os.
End of explanation
print(help(os))
Explanation: Take a look at the module.
End of explanation
pwd = os.getcwd()
print(pwd)
Explanation: Ok, let's see in which directory we are.
End of explanation
os.chdir('..')
print("Directory changed: ", os.getcwd())
Explanation: Go to the parent directory and let us see where we are then.
End of explanation
os.chdir(pwd)
print("Directory changed: ", os.getcwd())
Explanation: Go back to the directory where we started (that's why we wrote the name of the directory to the variable pwd ;)).
End of explanation
HOME = os.environ.get('HOME')
print('My HOME environment variable is set to ', HOME)
Explanation: To retrieve the content of an environment variable the module os provides the os.environ.get function.
End of explanation
datadir = 'data'
newpath = os.path.join(HOME,datadir)
print(newpath)
Explanation: Concatenate path names with os.path.join.
End of explanation
if os.path.isdir(newpath):
print('--> directory %s exists' % newpath)
else:
print('--> directory %s does not exist' % newpath)
Explanation: Now, we want to see if the directory really exists.
End of explanation
input_file = os.path.join('data','precip.nc')
if os.path.isfile(input_file):
print('--> file %s exists' % input_file)
else:
print('--> file %s does not exist' % input_file)
Explanation: Modify the datadir variable, run the cells and see what happens.
But how do we check whether a file exists? Well, there is a function os.path.isfile, who would have thought!
End of explanation
import glob
fnames = sorted(glob.glob('./data/*.nc'))
print(fnames)
Explanation: Add a cell and play with the os functions in your environment.
1.7.2 Module glob
In the last case we already know the name of the file we are looking for but in most cases we don't know what is in a directory.
To get the file names from a directory the glob module is very helpful.
For example, after importing the glob module, the glob function of glob (weird, isn't it?) will return a list of all netCDF files in the subdirectory data.
End of explanation
print(fnames[1])
Explanation: Now, we can select a file, for instance the second one, of fnames.
End of explanation
print(os.path.basename(fnames[1]))
Explanation: But how can we get rid of the leading path? And yes, the os module can help us again with its path.basename function.
End of explanation
import sys
sys.path.append('./lib/')
import dkrz_utils
tempK = 286.5 #-- units Kelvin
print('Convert: %6.2f degK == %6.2f degC' % (tempK, (dkrz_utils.conv_K2C(tempK))))
Explanation: 1.7.3 Module sys
The module sys provides access to system variables and functions. The module includes functions to read from stdin, write to stdout and stderr, and others.
Here we will give a closer look into the part sys.path of the module, which among other things allows us to extend the search path for loaded modules.
In the subdirectory lib is a file containing user defined python functions called dkrz_utils.py. To load the file like a module we have to extend the system path before calling any function from it.
End of explanation |
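The file dkrz_utils.py itself is not shown in this tutorial; a minimal, hypothetical version of the conversion function used above could look like this:
# lib/dkrz_utils.py (hypothetical minimal content)
def conv_K2C(temp_kelvin):
    # convert a temperature from Kelvin to degrees Celsius
    return temp_kelvin - 273.15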
7,169 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
=====================================================================
Spectro-temporal receptive field (STRF) estimation on continuous data
=====================================================================
This demonstrates how an encoding model can be fit with multiple continuous
inputs. In this case, we simulate the model behind a spectro-temporal receptive
field (or STRF). First, we create a linear filter that maps patterns in
spectro-temporal space onto an output, representing neural activity. We fit
a receptive field model that attempts to recover the original linear filter
that was used to create this data.
References
Estimation of spectro-temporal and spatio-temporal receptive fields using
modeling with continuous inputs is described in
Step1: Load audio data
We'll read in the audio data from [3]_ in order to simulate a response.
In addition, we'll downsample the data along the time dimension in order to
speed up computation. Note that depending on the input values, this may
not be desired. For example if your input stimulus varies more quickly than
1/2 the sampling rate to which we are downsampling.
Step2: Create a receptive field
We'll simulate a linear receptive field for a theoretical neural signal. This
defines how the signal will respond to power in this receptive field space.
Step3: Simulate a neural response
Using this receptive field, we'll create an artificial neural response to
a stimulus.
To do this, we'll create a time-delayed version of the receptive field, and
then calculate the dot product between this and the stimulus. Note that this
is effectively doing a convolution between the stimulus and the receptive
field. See here <https
Step4: Fit a model to recover this receptive field
Finally, we'll use the
Step5: Visualize the effects of regularization
Above we fit a
Step6: Using different regularization types
In addition to the standard ridge regularization, the
Step7: Compare model performance
Below we visualize the model performance of each regularization method
(ridge vs. Laplacian) for different levels of alpha. As you can see, the
Laplacian method performs better in general, because it imposes a smoothness
constraint along the time and feature dimensions of the coefficients.
This matches the "true" receptive field structure and results in a better
model fit.
Step8: Plot the original STRF, and the one that we recovered with modeling. | Python Code:
# Authors: Chris Holdgraf <[email protected]>
# Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.decoding import ReceptiveField, TimeDelayingRidge
from scipy.stats import multivariate_normal
from scipy.io import loadmat
from sklearn.preprocessing import scale
rng = np.random.RandomState(1337) # To make this example reproducible
Explanation: =====================================================================
Spectro-temporal receptive field (STRF) estimation on continuous data
=====================================================================
This demonstrates how an encoding model can be fit with multiple continuous
inputs. In this case, we simulate the model behind a spectro-temporal receptive
field (or STRF). First, we create a linear filter that maps patterns in
spectro-temporal space onto an output, representing neural activity. We fit
a receptive field model that attempts to recover the original linear filter
that was used to create this data.
References
Estimation of spectro-temporal and spatio-temporal receptive fields using
modeling with continuous inputs is described in:
.. [1] Theunissen, F. E. et al. Estimating spatio-temporal receptive
fields of auditory and visual neurons from their responses to
natural stimuli. Network 12, 289-316 (2001).
.. [2] Willmore, B. & Smyth, D. Methods for first-order kernel
estimation: simple-cell receptive fields from responses to
natural scenes. Network 14, 553-77 (2003).
.. [3] Crosse, M. J., Di Liberto, G. M., Bednar, A. & Lalor, E. C. (2016).
The Multivariate Temporal Response Function (mTRF) Toolbox:
A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli.
Frontiers in Human Neuroscience 10, 604.
doi:10.3389/fnhum.2016.00604
.. [4] Holdgraf, C. R. et al. Rapid tuning shifts in human auditory cortex
enhance speech intelligibility. Nature Communications, 7, 13654 (2016).
doi:10.1038/ncomms13654
End of explanation
# Read in audio that's been recorded in epochs.
path_audio = mne.datasets.mtrf.data_path()
data = loadmat(path_audio + '/speech_data.mat')
audio = data['spectrogram'].T
sfreq = float(data['Fs'][0, 0])
n_decim = 2
audio = mne.filter.resample(audio, down=n_decim, npad='auto')
sfreq /= n_decim
Explanation: Load audio data
We'll read in the audio data from [3]_ in order to simulate a response.
In addition, we'll downsample the data along the time dimension in order to
speed up computation. Note that depending on the input values, this may
not be desired. For example if your input stimulus varies more quickly than
1/2 the sampling rate to which we are downsampling.
End of explanation
n_freqs = 20
tmin, tmax = -0.1, 0.4
# To simulate the data we'll create explicit delays here
delays_samp = np.arange(np.round(tmin * sfreq),
np.round(tmax * sfreq) + 1).astype(int)
delays_sec = delays_samp / sfreq
freqs = np.linspace(50, 5000, n_freqs)
grid = np.array(np.meshgrid(delays_sec, freqs))
# We need data to be shaped as n_epochs, n_features, n_times, so swap axes here
grid = grid.swapaxes(0, -1).swapaxes(0, 1)
# Simulate a temporal receptive field with a Gabor filter
means_high = [.1, 500]
means_low = [.2, 2500]
cov = [[.001, 0], [0, 500000]]
gauss_high = multivariate_normal.pdf(grid, means_high, cov)
gauss_low = -1 * multivariate_normal.pdf(grid, means_low, cov)
weights = gauss_high + gauss_low # Combine to create the "true" STRF
kwargs = dict(vmax=np.abs(weights).max(), vmin=-np.abs(weights).max(),
cmap='RdBu_r', shading='gouraud')
fig, ax = plt.subplots()
ax.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax.set(title='Simulated STRF', xlabel='Time Lags (s)', ylabel='Frequency (Hz)')
plt.setp(ax.get_xticklabels(), rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()
Explanation: Create a receptive field
We'll simulate a linear receptive field for a theoretical neural signal. This
defines how the signal will respond to power in this receptive field space.
End of explanation
# Reshape audio to split into epochs, then make epochs the first dimension.
n_epochs, n_seconds = 16, 5
audio = audio[:, :int(n_seconds * sfreq * n_epochs)]
X = audio.reshape([n_freqs, n_epochs, -1]).swapaxes(0, 1)
n_times = X.shape[-1]
# Delay the spectrogram according to delays so it can be combined w/ the STRF
# Lags will now be in axis 1, then we reshape to vectorize
delays = np.arange(np.round(tmin * sfreq),
np.round(tmax * sfreq) + 1).astype(int)
# Iterate through indices and append
X_del = np.zeros((len(delays),) + X.shape)
for ii, ix_delay in enumerate(delays):
# These arrays will take/put particular indices in the data
take = [slice(None)] * X.ndim
put = [slice(None)] * X.ndim
if ix_delay > 0:
take[-1] = slice(None, -ix_delay)
put[-1] = slice(ix_delay, None)
elif ix_delay < 0:
take[-1] = slice(-ix_delay, None)
put[-1] = slice(None, ix_delay)
X_del[ii][tuple(put)] = X[tuple(take)]
# Now set the delayed axis to the 2nd dimension
X_del = np.rollaxis(X_del, 0, 3)
X_del = X_del.reshape([n_epochs, -1, n_times])
n_features = X_del.shape[1]
weights_sim = weights.ravel()
# Simulate a neural response to the sound, given this STRF
y = np.zeros((n_epochs, n_times))
for ii, iep in enumerate(X_del):
# Simulate this epoch and add random noise
noise_amp = .002
y[ii] = np.dot(weights_sim, iep) + noise_amp * rng.randn(n_times)
# Plot the first 2 trials of audio and the simulated electrode activity
X_plt = scale(np.hstack(X[:2]).T).T
y_plt = scale(np.hstack(y[:2]))
time = np.arange(X_plt.shape[-1]) / sfreq
_, (ax1, ax2) = plt.subplots(2, 1, figsize=(6, 6), sharex=True)
ax1.pcolormesh(time, freqs, X_plt, vmin=0, vmax=4, cmap='Reds')
ax1.set_title('Input auditory features')
ax1.set(ylim=[freqs.min(), freqs.max()], ylabel='Frequency (Hz)')
ax2.plot(time, y_plt)
ax2.set(xlim=[time.min(), time.max()], title='Simulated response',
xlabel='Time (s)', ylabel='Activity (a.u.)')
mne.viz.tight_layout()
Explanation: Simulate a neural response
Using this receptive field, we'll create an artificial neural response to
a stimulus.
To do this, we'll create a time-delayed version of the receptive field, and
then calculate the dot product between this and the stimulus. Note that this
is effectively doing a convolution between the stimulus and the receptive
field. See here <https://en.wikipedia.org/wiki/Convolution>_ for more
information.
End of explanation
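As a minimal, stand-alone illustration of the convolution view (a 1-D toy example added for clarity; it is not part of the original pipeline):
w = np.array([0.5, 1.0, 0.5])                 # a tiny 1-D 'receptive field'
stim = rng.randn(20)                          # a random 1-D stimulus
toy_resp = np.convolve(stim, w, mode='same')  # response as stimulus convolved with the filter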
# Create training and testing data
train, test = np.arange(n_epochs - 1), n_epochs - 1
X_train, X_test, y_train, y_test = X[train], X[test], y[train], y[test]
X_train, X_test, y_train, y_test = [np.rollaxis(ii, -1, 0) for ii in
(X_train, X_test, y_train, y_test)]
# Model the simulated data as a function of the spectrogram input
alphas = np.logspace(-3, 3, 7)
scores = np.zeros_like(alphas)
models = []
for ii, alpha in enumerate(alphas):
rf = ReceptiveField(tmin, tmax, sfreq, freqs, estimator=alpha)
rf.fit(X_train, y_train)
# Now make predictions about the model output, given input stimuli.
scores[ii] = rf.score(X_test, y_test)
models.append(rf)
times = rf.delays_ / float(rf.sfreq)
# Choose the model that performed best on the held out data
ix_best_alpha = np.argmax(scores)
best_mod = models[ix_best_alpha]
coefs = best_mod.coef_[0]
best_pred = best_mod.predict(X_test)[:, 0]
# Plot the original STRF, and the one that we recovered with modeling.
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(6, 3), sharey=True, sharex=True)
ax1.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax2.pcolormesh(times, rf.feature_names, coefs, **kwargs)
ax1.set_title('Original STRF')
ax2.set_title('Best Reconstructed STRF')
plt.setp([iax.get_xticklabels() for iax in [ax1, ax2]], rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()
# Plot the actual response and the predicted response on a held out stimulus
time_pred = np.arange(best_pred.shape[0]) / sfreq
fig, ax = plt.subplots()
ax.plot(time_pred, y_test, color='k', alpha=.2, lw=4)
ax.plot(time_pred, best_pred, color='r', lw=1)
ax.set(title='Original and predicted activity', xlabel='Time (s)')
ax.legend(['Original', 'Predicted'])
plt.autoscale(tight=True)
mne.viz.tight_layout()
Explanation: Fit a model to recover this receptive field
Finally, we'll use the :class:mne.decoding.ReceptiveField class to recover
the linear receptive field of this signal. Note that properties of the
receptive field (e.g. smoothness) will depend on the autocorrelation in the
inputs and outputs.
End of explanation
# Plot model score for each ridge parameter
fig = plt.figure(figsize=(10, 4))
ax = plt.subplot2grid([2, len(alphas)], [1, 0], 1, len(alphas))
ax.plot(np.arange(len(alphas)), scores, marker='o', color='r')
ax.annotate('Best parameter', (ix_best_alpha, scores[ix_best_alpha]),
(ix_best_alpha, scores[ix_best_alpha] - .1),
arrowprops={'arrowstyle': '->'})
plt.xticks(np.arange(len(alphas)), ["%.0e" % ii for ii in alphas])
ax.set(xlabel="Ridge regularization value", ylabel="Score ($R^2$)",
xlim=[-.4, len(alphas) - .6])
mne.viz.tight_layout()
# Plot the STRF of each ridge parameter
for ii, (rf, i_alpha) in enumerate(zip(models, alphas)):
ax = plt.subplot2grid([2, len(alphas)], [0, ii], 1, 1)
ax.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
plt.xticks([], [])
plt.yticks([], [])
plt.autoscale(tight=True)
fig.suptitle('Model coefficients / scores for many ridge parameters', y=1)
mne.viz.tight_layout()
Explanation: Visualize the effects of regularization
Above we fit a :class:mne.decoding.ReceptiveField model for one of many
values for the ridge regularization parameter. Here we will plot the model
score as well as the model coefficients for each value, in order to
visualize how coefficients change with different levels of regularization.
These issues as well as the STRF pipeline are described in detail
in [1], [2], and [4]_.
End of explanation
scores_lap = np.zeros_like(alphas)
models_lap = []
for ii, alpha in enumerate(alphas):
estimator = TimeDelayingRidge(tmin, tmax, sfreq, reg_type='laplacian',
alpha=alpha)
rf = ReceptiveField(tmin, tmax, sfreq, freqs, estimator=estimator)
rf.fit(X_train, y_train)
# Now make predictions about the model output, given input stimuli.
scores_lap[ii] = rf.score(X_test, y_test)
models_lap.append(rf)
ix_best_alpha_lap = np.argmax(scores_lap)
Explanation: Using different regularization types
In addition to the standard ridge regularization, the
:class:mne.decoding.TimeDelayingRidge class also exposes
Laplacian <https://en.wikipedia.org/wiki/Laplacian_matrix>_ regularization
term as:
\begin{align}\left[\begin{matrix}
1 & -1 & & & & \\
-1 & 2 & -1 & & & \\
& -1 & 2 & -1 & & \\
& & \ddots & \ddots & \ddots & \\
& & & -1 & 2 & -1 \\
& & & & -1 & 1\end{matrix}\right]\end{align}
This imposes a smoothness constraint across nearby time samples and/or features.
Quoting [3]_:
Tikhonov [identity] regularization (Equation 5) reduces overfitting by
smoothing the TRF estimate in a way that is insensitive to
the amplitude of the signal of interest. However, the Laplacian
approach (Equation 6) reduces off-sample error whilst preserving
signal amplitude (Lalor et al., 2006). As a result, this approach
usually leads to an improved estimate of the system’s response (as
indexed by MSE) compared to Tikhonov regularization.
End of explanation
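# Added illustration (not in the original example): the matrix above is just a
# second-difference ("graph Laplacian") operator; `_laplacian_demo` is our own
# helper name, shown only to make the formula concrete.
import numpy as np
def _laplacian_demo(n):
    lap = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    lap[0, 0] = lap[-1, -1] = 1  # boundary samples have a single neighbour
    return lap
print(_laplacian_demo(5))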
fig = plt.figure(figsize=(10, 6))
ax = plt.subplot2grid([3, len(alphas)], [2, 0], 1, len(alphas))
ax.plot(np.arange(len(alphas)), scores_lap, marker='o', color='r')
ax.plot(np.arange(len(alphas)), scores, marker='o', color='0.5', ls=':')
ax.annotate('Best Laplacian', (ix_best_alpha_lap,
scores_lap[ix_best_alpha_lap]),
(ix_best_alpha_lap, scores_lap[ix_best_alpha_lap] - .1),
arrowprops={'arrowstyle': '->'})
ax.annotate('Best Ridge', (ix_best_alpha, scores[ix_best_alpha]),
(ix_best_alpha, scores[ix_best_alpha] - .1),
arrowprops={'arrowstyle': '->'})
plt.xticks(np.arange(len(alphas)), ["%.0e" % ii for ii in alphas])
ax.set(xlabel="Laplacian regularization value", ylabel="Score ($R^2$)",
xlim=[-.4, len(alphas) - .6])
mne.viz.tight_layout()
# Plot the STRF of each ridge parameter
xlim = times[[0, -1]]
for ii, (rf_lap, rf, i_alpha) in enumerate(zip(models_lap, models, alphas)):
ax = plt.subplot2grid([3, len(alphas)], [0, ii], 1, 1)
ax.pcolormesh(times, rf_lap.feature_names, rf_lap.coef_[0], **kwargs)
ax.set(xticks=[], yticks=[], xlim=xlim)
if ii == 0:
ax.set(ylabel='Laplacian')
ax = plt.subplot2grid([3, len(alphas)], [1, ii], 1, 1)
ax.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
ax.set(xticks=[], yticks=[], xlim=xlim)
if ii == 0:
ax.set(ylabel='Ridge')
fig.suptitle('Model coefficients / scores for laplacian regularization', y=1)
mne.viz.tight_layout()
Explanation: Compare model performance
Below we visualize the model performance of each regularization method
(ridge vs. Laplacian) for different levels of alpha. As you can see, the
Laplacian method performs better in general, because it imposes a smoothness
constraint along the time and feature dimensions of the coefficients.
This matches the "true" receptive field structure and results in a better
model fit.
End of explanation
rf = models[ix_best_alpha]
rf_lap = models_lap[ix_best_alpha_lap]
_, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(9, 3),
sharey=True, sharex=True)
ax1.pcolormesh(delays_sec, freqs, weights, **kwargs)
ax2.pcolormesh(times, rf.feature_names, rf.coef_[0], **kwargs)
ax3.pcolormesh(times, rf_lap.feature_names, rf_lap.coef_[0], **kwargs)
ax1.set_title('Original STRF')
ax2.set_title('Best Ridge STRF')
ax3.set_title('Best Laplacian STRF')
plt.setp([iax.get_xticklabels() for iax in [ax1, ax2, ax3]], rotation=45)
plt.autoscale(tight=True)
mne.viz.tight_layout()
Explanation: Plot the original STRF, and the one that we recovered with modeling.
End of explanation |
7,170 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>CAR Classic predictor - Regressor</h1>
<hr style="border
Step1: <span>
Build a processor.
</span>
<br>
<span>
This is required by the regressor in order to parse the input raw data.<br>
An ATTMatrixHitProcessor is needed here.
</span>
Step2: <span>
We build the regressor now, injecting the processor
</span>
Step3: <span>
We define the training data source file
</span>
Step4: <span>
And load the dataset
</span>
Step5: <span>
Now, train
</span>
Step6: <span>
And finally test
</span> | Python Code:
import sys
#sys.path.insert(0, 'I:/git/att/src/python/')
sys.path.insert(0, 'i:/dev/workspaces/python/att-workspace/att/src/python/')
Explanation: <h1>CAR Classic predictor - Regressor</h1>
<hr style="border: 1px solid #000;">
<span>
<h2>ATT hit predictor.</h2>
</span>
<br>
<span>
This notebook shows how the hit predictor works.<br>
The Hit predictor's aim is to guess (x,y) coordinates from serial port readings.
There are two steps: Train and Predict.
</span>
<span>
Set modules path first:
</span>
End of explanation
from hit.process.processor import ATTMatrixHitProcessor
matProcessor = ATTMatrixHitProcessor()
Explanation: <span>
Build a processor.
</span>
<br>
<span>
This is required by the regressor in order to parse the input raw data.<br>
An ATTMatrixHitProcessor is needed here.
</span>
End of explanation
from hit.train.regressor import ATTClassicHitRegressor
regressor = ATTClassicHitRegressor(matProcessor)
Explanation: <span>
We build the regressor now, injecting the processor
</span>
End of explanation
TRAIN_VALUES_FILE_LEFT = "train_data/train_points_20160129_left.txt"
Explanation: <span>
We define the training data source file
</span>
End of explanation
import numpy as np
(training_values, Y) = regressor.collect_train_hits_from_file(TRAIN_VALUES_FILE_LEFT)
print "Train Values: ", np.shape(training_values), np.shape(Y)
Explanation: <span>
And load the dataset
</span>
End of explanation
regressor.train(training_values, Y)
Explanation: <span>
Now, train
</span>
End of explanation
hit = "hit: {1568:6 1416:5 3230:6 787:8 2757:4 0:13 980:4 3116:4 l}"
print '(6,30)'
print regressor.predict(hit)
Explanation: <span>
And finally test
</span>
End of explanation |
7,171 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vision data
Helper functions to get data in a DataLoaders in the vision application and higher class ImageDataLoaders
The main classes defined in this module are ImageDataLoaders and SegmentationDataLoaders, so you probably want to jump to their definitions. They provide factory methods that are a great way to quickly get your data ready for training, see the vision tutorial for examples.
Helper functions
Step1: This is used by the type-dispatched versions of show_batch and show_results for the vision application. The default figsize is (cols*imsize, rows*imsize+0.6). imsize is passed down to subplots. suptitle, sharex, sharey, squeeze, subplot_kw and gridspec_kw are all passed down to plt.subplots. If return_fig is True, returns fig,axs, otherwise just axs.
Step2: This is used in bb_pad
Step3: This is used in BBoxBlock
Step4: Show methods -
Step5: TransformBlocks for vision
These are the blocks the vision application provides for the data block API.
Step6: If add_na is True, a new category is added for NaN (that will represent the background class).
ImageDataLoaders -
Step7: This class should not be used directly, one of the factory methods should be preferred instead. All those factory methods accept as arguments
Step8: If valid_pct is provided, a random split is performed (with an optional seed) by setting aside that percentage of the data for the validation set (instead of looking at the grandparents folder). If a vocab is passed, only the folders with names in vocab are kept.
Here is an example loading a subsample of MNIST
Step9: Passing valid_pct will ignore the valid/train folders and do a new random split
Step10: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility.
Here is how to create the same DataLoaders on the MNIST dataset as the previous example with a label_func
Step11: Here is another example on the pets dataset. Here filenames are all in an "images" folder and their names have the form class_name_123.jpg. One way to properly label them is thus to throw away everything after the last _
Step12: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility.
Here is how to create the same DataLoaders on the MNIST dataset as the previous example (you will need to change the initial two / by a \ on Windows)
Step13: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. This method does the same as ImageDataLoaders.from_path_func except label_func is applied to the name of each filenames, and not the full path.
Step14: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. This method does the same as ImageDataLoaders.from_path_re except pat is applied to the name of each filenames, and not the full path.
Step15: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. Alternatively, if your df contains a valid_col, give its name or its index to that argument (the column should have True for the elements going to the validation set).
You can add an additional folder to the filenames in df if they should not be concatenated directly to path. If they do not contain the proper extensions, you can add suff. If your label column contains multiple labels on each row, you can use label_delim to warn the library you have a multi-label problem.
y_block should be passed when the task automatically picked by the library is wrong, you should then give CategoryBlock, MultiCategoryBlock or RegressionBlock. For more advanced uses, you should use the data block API.
The tiny mnist example from before also contains a version in a dataframe
Step16: Here is how to load it using ImageDataLoaders.from_df
Step17: Here is another example with a multi-label problem
Step18: Note that you can also pass 2 to valid_col (the index, starting with 0).
Step19: Same as ImageDataLoaders.from_df after loading the file with header and delimiter.
Here is how to load the same dataset as before with this method
Step20: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. y_block can be passed to specify the type of the targets.
Step21: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. codes contain the mapping index to label.
Step22: Export - | Python Code:
#|export
@delegates(subplots)
def get_grid(
n:int, # Number of axes in the returned grid
nrows:int=None, # Number of rows in the returned grid, defaulting to `int(math.sqrt(n))`
ncols:int=None, # Number of columns in the returned grid, defaulting to `ceil(n/rows)`
figsize:tuple=None, # Width, height in inches of the returned figure
double:bool=False, # Whether to double the number of columns and `n`
title:str=None, # If passed, title set to the figure
return_fig:bool=False, # Whether to return the figure created by `subplots`
flatten:bool=True, # Whether to flatten the matplot axes such that they can be iterated over with a single loop
**kwargs,
) -> (plt.Figure, plt.Axes): # Returns just `axs` by default, and (`fig`, `axs`) if `return_fig` is set to True
"Return a grid of `n` axes, `rows` by `cols`"
if nrows:
ncols = ncols or int(np.ceil(n/nrows))
elif ncols:
nrows = nrows or int(np.ceil(n/ncols))
else:
nrows = int(math.sqrt(n))
ncols = int(np.ceil(n/nrows))
if double: ncols*=2 ; n*=2
fig,axs = subplots(nrows, ncols, figsize=figsize, **kwargs)
if flatten: axs = [ax if i<n else ax.set_axis_off() for i, ax in enumerate(axs.flatten())][:n]
if title is not None: fig.suptitle(title, weight='bold', size=14)
return (fig,axs) if return_fig else axs
Explanation: Vision data
Helper functions to get data in a DataLoaders in the vision application and higher class ImageDataLoaders
The main classes defined in this module are ImageDataLoaders and SegmentationDataLoaders, so you probably want to jump to their definitions. They provide factory methods that are a great way to quickly get your data ready for training, see the vision tutorial for examples.
Helper functions
End of explanation
#|export
def clip_remove_empty(
bbox:TensorBBox, # Coordinates of bounding boxes
label:TensorMultiCategory # Labels of the bounding boxes
):
"Clip bounding boxes with image border and remove empty boxes along with corresponding labels"
bbox = torch.clamp(bbox, -1, 1)
empty = ((bbox[...,2] - bbox[...,0])*(bbox[...,3] - bbox[...,1]) <= 0.)
return (bbox[~empty], label[TensorBase(~empty)])
Explanation: This is used by the type-dispatched versions of show_batch and show_results for the vision application. The default figsize is (cols*imsize, rows*imsize+0.6). imsize is passed down to subplots. suptitle, sharex, sharey, squeeze, subplot_kw and gridspec_kw are all passed down to plt.subplots. If return_fig is True, returns fig,axs, otherwise just axs.
End of explanation
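# Added quick check (not in the original notebook): get_grid returns a flat
# list of `n` axes laid out on a roughly square grid.
axs = get_grid(9, figsize=(6, 6), title='get_grid demo')
test_eq(len(axs), 9)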
bb = TensorBBox([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5], [-2, -0.5, -1.5, 0.5]])
bb,lbl = clip_remove_empty(bb, TensorMultiCategory([1,2,3,2,5]))
test_eq(bb, TensorBBox([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(lbl, TensorMultiCategory([1,2,2]))
#|export
def bb_pad(
samples:list, # List of 3-tuples like (image, bounding_boxes, labels)
pad_idx:int=0 # Label that will be used to pad each list of labels
):
"Function that collects `samples` of labelled bboxes and adds padding with `pad_idx`."
samples = [(s[0], *clip_remove_empty(*s[1:])) for s in samples]
max_len = max([len(s[2]) for s in samples])
def _f(img,bbox,lbl):
bbox = torch.cat([bbox,bbox.new_zeros(max_len-bbox.shape[0], 4)])
lbl = torch.cat([lbl, lbl .new_zeros(max_len-lbl .shape[0])+pad_idx])
return img,bbox,lbl
return [_f(*s) for s in samples]
Explanation: This is used in bb_pad
End of explanation
img1,img2 = TensorImage(torch.randn(16,16,3)),TensorImage(torch.randn(16,16,3))
bb1 = tensor([[-2,-0.5,0.5,1.5], [-0.5,-0.5,0.5,0.5], [1,0.5,0.5,0.75], [-0.5,-0.5,0.5,0.5]])
lbl1 = tensor([1, 2, 3, 2])
bb2 = tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]])
lbl2 = tensor([2, 2])
samples = [(img1, bb1, lbl1), (img2, bb2, lbl2)]
res = bb_pad(samples)
non_empty = tensor([True,True,False,True])
test_eq(res[0][0], img1)
test_eq(res[0][1], tensor([[-1,-0.5,0.5,1.], [-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5]]))
test_eq(res[0][2], tensor([1,2,2]))
test_eq(res[1][0], img2)
test_eq(res[1][1], tensor([[-0.5,-0.5,0.5,0.5], [-0.5,-0.5,0.5,0.5], [0,0,0,0]]))
test_eq(res[1][2], tensor([2,2,0]))
Explanation: This is used in BBoxBlock
End of explanation
#|export
@typedispatch
def show_batch(x:TensorImage, y, samples, ctxs=None, max_n=10, nrows=None, ncols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), nrows=nrows, ncols=ncols, figsize=figsize)
ctxs = show_batch[object](x, y, samples, ctxs=ctxs, max_n=max_n, **kwargs)
return ctxs
#|export
@typedispatch
def show_batch(x:TensorImage, y:TensorImage, samples, ctxs=None, max_n=10, nrows=None, ncols=None, figsize=None, **kwargs):
if ctxs is None: ctxs = get_grid(min(len(samples), max_n), nrows=nrows, ncols=ncols, figsize=figsize, double=True)
for i in range(2):
ctxs[i::2] = [b.show(ctx=c, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs[i::2],range(max_n))]
return ctxs
Explanation: Show methods -
End of explanation
#|export
def ImageBlock(cls:PILBase=PILImage):
"A `TransformBlock` for images of `cls`"
return TransformBlock(type_tfms=cls.create, batch_tfms=IntToFloatTensor)
#|export
def MaskBlock(
codes:list=None # Vocab labels for segmentation masks
):
"A `TransformBlock` for segmentation masks, potentially with `codes`"
return TransformBlock(type_tfms=PILMask.create, item_tfms=AddMaskCodes(codes=codes), batch_tfms=IntToFloatTensor)
#|export
PointBlock = TransformBlock(type_tfms=TensorPoint.create, item_tfms=PointScaler)
BBoxBlock = TransformBlock(type_tfms=TensorBBox.create, item_tfms=PointScaler, dls_kwargs = {'before_batch': bb_pad})
PointBlock.__doc__ = "A `TransformBlock` for points in an image"
BBoxBlock.__doc__ = "A `TransformBlock` for bounding boxes in an image"
show_doc(PointBlock, name='PointBlock')
show_doc(BBoxBlock, name='BBoxBlock')
#|export
def BBoxLblBlock(
vocab:list=None, # Vocab labels for bounding boxes
add_na:bool=True # Add NaN as a background class
):
"A `TransformBlock` for labeled bounding boxes, potentially with `vocab`"
return TransformBlock(type_tfms=MultiCategorize(vocab=vocab, add_na=add_na), item_tfms=BBoxLabeler)
Explanation: TransformBlocks for vision
These are the blocks the vision application provides for the data block API.
End of explanation
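# Added sketch (not from the original notebook): these vision blocks plug
# straight into the mid-level DataBlock API; e.g. for object detection data
# one would typically combine them like this (construction only, no data is
# loaded here, and `dblock_demo` is our own name):
dblock_demo = DataBlock(blocks=(ImageBlock, BBoxBlock, BBoxLblBlock),
                        get_items=get_image_files,
                        n_inp=1)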
#|export
class ImageDataLoaders(DataLoaders):
"Basic wrapper around several `DataLoader`s with factory methods for computer vision problems"
@classmethod
@delegates(DataLoaders.from_dblock)
def from_folder(cls, path, train='train', valid='valid', valid_pct=None, seed=None, vocab=None, item_tfms=None,
batch_tfms=None, **kwargs):
"Create from imagenet style dataset in `path` with `train` and `valid` subfolders (or provide `valid_pct`)"
splitter = GrandparentSplitter(train_name=train, valid_name=valid) if valid_pct is None else RandomSplitter(valid_pct, seed=seed)
get_items = get_image_files if valid_pct else partial(get_image_files, folders=[train, valid])
dblock = DataBlock(blocks=(ImageBlock, CategoryBlock(vocab=vocab)),
get_items=get_items,
splitter=splitter,
get_y=parent_label,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return cls.from_dblock(dblock, path, path=path, **kwargs)
@classmethod
@delegates(DataLoaders.from_dblock)
def from_path_func(cls, path, fnames, label_func, valid_pct=0.2, seed=None, item_tfms=None, batch_tfms=None, **kwargs):
"Create from list of `fnames` in `path`s with `label_func`"
dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
splitter=RandomSplitter(valid_pct, seed=seed),
get_y=label_func,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return cls.from_dblock(dblock, fnames, path=path, **kwargs)
@classmethod
def from_name_func(cls,
path:(str, Path), # Set the default path to a directory that a `Learner` can use to save files like models
fnames:list, # A list of `os.Pathlike`'s to individual image files
label_func:callable, # A function that receives a string (the file name) and outputs a label
**kwargs
) -> DataLoaders:
"Create from the name attrs of `fnames` in `path`s with `label_func`"
if sys.platform == 'win32' and isinstance(label_func, types.LambdaType) and label_func.__name__ == '<lambda>':
# https://medium.com/@jwnx/multiprocessing-serialization-in-python-with-pickle-9844f6fa1812
raise ValueError("label_func couldn't be lambda function on Windows")
f = using_attr(label_func, 'name')
return cls.from_path_func(path, fnames, f, **kwargs)
@classmethod
def from_path_re(cls, path, fnames, pat, **kwargs):
"Create from list of `fnames` in `path`s with re expression `pat`"
return cls.from_path_func(path, fnames, RegexLabeller(pat), **kwargs)
@classmethod
@delegates(DataLoaders.from_dblock)
def from_name_re(cls, path, fnames, pat, **kwargs):
"Create from the name attrs of `fnames` in `path`s with re expression `pat`"
return cls.from_name_func(path, fnames, RegexLabeller(pat), **kwargs)
@classmethod
@delegates(DataLoaders.from_dblock)
def from_df(cls, df, path='.', valid_pct=0.2, seed=None, fn_col=0, folder=None, suff='', label_col=1, label_delim=None,
y_block=None, valid_col=None, item_tfms=None, batch_tfms=None, **kwargs):
"Create from `df` using `fn_col` and `label_col`"
pref = f'{Path(path) if folder is None else Path(path)/folder}{os.path.sep}'
if y_block is None:
is_multi = (is_listy(label_col) and len(label_col) > 1) or label_delim is not None
y_block = MultiCategoryBlock if is_multi else CategoryBlock
splitter = RandomSplitter(valid_pct, seed=seed) if valid_col is None else ColSplitter(valid_col)
dblock = DataBlock(blocks=(ImageBlock, y_block),
get_x=ColReader(fn_col, pref=pref, suff=suff),
get_y=ColReader(label_col, label_delim=label_delim),
splitter=splitter,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return cls.from_dblock(dblock, df, path=path, **kwargs)
@classmethod
def from_csv(cls, path, csv_fname='labels.csv', header='infer', delimiter=None, **kwargs):
"Create from `path/csv_fname` using `fn_col` and `label_col`"
df = pd.read_csv(Path(path)/csv_fname, header=header, delimiter=delimiter)
return cls.from_df(df, path=path, **kwargs)
@classmethod
@delegates(DataLoaders.from_dblock)
def from_lists(cls, path, fnames, labels, valid_pct=0.2, seed:int=None, y_block=None, item_tfms=None, batch_tfms=None,
**kwargs):
"Create from list of `fnames` and `labels` in `path`"
if y_block is None:
y_block = MultiCategoryBlock if is_listy(labels[0]) and len(labels[0]) > 1 else (
RegressionBlock if isinstance(labels[0], float) else CategoryBlock)
dblock = DataBlock.from_columns(blocks=(ImageBlock, y_block),
splitter=RandomSplitter(valid_pct, seed=seed),
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return cls.from_dblock(dblock, (fnames, labels), path=path, **kwargs)
ImageDataLoaders.from_csv = delegates(to=ImageDataLoaders.from_df)(ImageDataLoaders.from_csv)
ImageDataLoaders.from_name_func = delegates(to=ImageDataLoaders.from_path_func)(ImageDataLoaders.from_name_func)
ImageDataLoaders.from_path_re = delegates(to=ImageDataLoaders.from_path_func)(ImageDataLoaders.from_path_re)
ImageDataLoaders.from_name_re = delegates(to=ImageDataLoaders.from_name_func)(ImageDataLoaders.from_name_re)
Explanation: If add_na is True, a new category is added for NaN (that will represent the background class).
ImageDataLoaders -
End of explanation
show_doc(ImageDataLoaders.from_folder)
Explanation: This class should not be used directly, one of the factory methods should be preferred instead. All those factory methods accept as arguments:
item_tfms: one or several transforms applied to the items before batching them
batch_tfms: one or several transforms applied to the batches once they are formed
bs: the batch size
val_bs: the batch size for the validation DataLoader (defaults to bs)
shuffle_train: if we shuffle the training DataLoader or not
device: the PyTorch device to use (defaults to default_device())
End of explanation
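# Added illustration: the common arguments listed above can be passed to any
# of the factory methods, e.g. custom train/valid batch sizes. `path_demo` and
# `dls_demo` are our own names, reusing the same MNIST sample as below.
path_demo = untar_data(URLs.MNIST_TINY)
dls_demo = ImageDataLoaders.from_folder(path_demo, bs=16, val_bs=32)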
path = untar_data(URLs.MNIST_TINY)
dls = ImageDataLoaders.from_folder(path)
Explanation: If valid_pct is provided, a random split is performed (with an optional seed) by setting aside that percentage of the data for the validation set (instead of looking at the grandparents folder). If a vocab is passed, only the folders with names in vocab are kept.
Here is an example loading a subsample of MNIST:
End of explanation
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2)
dls.valid_ds.items[:3]
show_doc(ImageDataLoaders.from_path_func)
Explanation: Passing valid_pct will ignore the valid/train folders and do a new random split:
End of explanation
fnames = get_image_files(path)
def label_func(x): return x.parent.name
dls = ImageDataLoaders.from_path_func(path, fnames, label_func)
Explanation: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility.
Here is how to create the same DataLoaders on the MNIST dataset as the previous example with a label_func:
End of explanation
show_doc(ImageDataLoaders.from_path_re)
Explanation: Here is another example on the pets dataset. Here filenames are all in an "images" folder and their names have the form class_name_123.jpg. One way to properly label them is thus to throw away everything after the last _:
End of explanation
pat = r'/([^/]*)/\d+.png$'
dls = ImageDataLoaders.from_path_re(path, fnames, pat)
show_doc(ImageDataLoaders.from_name_func)
Explanation: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility.
Here is how to create the same DataLoaders on the MNIST dataset as the previous example (you will need to change the initial two / by a \ on Windows):
End of explanation
show_doc(ImageDataLoaders.from_name_re)
Explanation: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. This method does the same as ImageDataLoaders.from_path_func except label_func is applied to the name of each filenames, and not the full path.
End of explanation
show_doc(ImageDataLoaders.from_df)
Explanation: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. This method does the same as ImageDataLoaders.from_path_re except pat is applied to the name of each filenames, and not the full path.
End of explanation
path = untar_data(URLs.MNIST_TINY)
df = pd.read_csv(path/'labels.csv')
df.head()
Explanation: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. Alternatively, if your df contains a valid_col, give its name or its index to that argument (the column should have True for the elements going to the validation set).
You can add an additional folder to the filenames in df if they should not be concatenated directly to path. If they do not contain the proper extensions, you can add suff. If your label column contains multiple labels on each row, you can use label_delim to warn the library you have a multi-label problem.
y_block should be passed when the task automatically picked by the library is wrong, you should then give CategoryBlock, MultiCategoryBlock or RegressionBlock. For more advanced uses, you should use the data block API.
The tiny mnist example from before also contains a version in a dataframe:
End of explanation
dls = ImageDataLoaders.from_df(df, path)
Explanation: Here is how to load it using ImageDataLoaders.from_df:
End of explanation
path = untar_data(URLs.PASCAL_2007)
df = pd.read_csv(path/'train.csv')
df.head()
dls = ImageDataLoaders.from_df(df, path, folder='train', valid_col='is_valid')
Explanation: Here is another example with a multi-label problem:
End of explanation
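# Added sketch: if the `labels` column holds several space-separated tags per
# row, passing `label_delim` tells from_df to treat it as a true multi-label
# problem (it then picks MultiCategoryBlock, per the code above); `dls_multi`
# is our own name.
dls_multi = ImageDataLoaders.from_df(df, path, folder='train', valid_col='is_valid',
                                     label_delim=' ')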
show_doc(ImageDataLoaders.from_csv)
Explanation: Note that you can also pass 2 to valid_col (the index, starting with 0).
End of explanation
dls = ImageDataLoaders.from_csv(path, 'train.csv', folder='train', valid_col='is_valid')
show_doc(ImageDataLoaders.from_lists)
Explanation: Same as ImageDataLoaders.from_df after loading the file with header and delimiter.
Here is how to load the same dataset as before with this method:
End of explanation
path = untar_data(URLs.PETS)
fnames = get_image_files(path/"images")
labels = ['_'.join(x.name.split('_')[:-1]) for x in fnames]
dls = ImageDataLoaders.from_lists(path, fnames, labels)
#|export
class SegmentationDataLoaders(DataLoaders):
"Basic wrapper around several `DataLoader`s with factory methods for segmentation problems"
@classmethod
@delegates(DataLoaders.from_dblock)
def from_label_func(cls, path, fnames, label_func, valid_pct=0.2, seed=None, codes=None, item_tfms=None, batch_tfms=None, **kwargs):
"Create from list of `fnames` in `path`s with `label_func`."
dblock = DataBlock(blocks=(ImageBlock, MaskBlock(codes=codes)),
splitter=RandomSplitter(valid_pct, seed=seed),
get_y=label_func,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
res = cls.from_dblock(dblock, fnames, path=path, **kwargs)
return res
show_doc(SegmentationDataLoaders.from_label_func)
Explanation: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. y_block can be passed to specify the type of the targets.
End of explanation
path = untar_data(URLs.CAMVID_TINY)
fnames = get_image_files(path/'images')
def label_func(x): return path/'labels'/f'{x.stem}_P{x.suffix}'
codes = np.loadtxt(path/'codes.txt', dtype=str)
dls = SegmentationDataLoaders.from_label_func(path, fnames, label_func, codes=codes)
Explanation: The validation set is a random subset of valid_pct, optionally created with seed for reproducibility. codes contain the mapping index to label.
End of explanation
#|hide
from nbdev.export import notebook2script
notebook2script()
Explanation: Export -
End of explanation |
7,172 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 8: Strings
Step1: Why was an a returned instead of b?
Computer scientists (and much of Europe) count starting with 0
These sequences are referred to as zero-based
The bracket notation only accepts integers (or expressions that result in an integer)
The len operator
len is a built-in function that returns the number of characters in a string
Step2: Note that the length is one-based, just like normal counting
What happens when you try to access a value that is out of range?
Step3: Remember that indices are zero-based, while counting is one-based
There are multiple ways to get the last letter
Step4: You can also use other negative integers to index characters
Traversal with a for loop
There is frequently a need to process each item in a sequence one at a time
This is done by traversing the sequence and processing each item in turn
One way to do this is using a while loop
Step5: A for loop can also be used
We can either use the for loop to keep track of the index or have Python automatically access each value in the sequence
Step6: The second approach can be simpler, but you don't know the index of the value
Loops can also be used to build strings
Step7: String slices
A segment of a string is called a slice
The method of selecting a slice is similar to accessing a single character
Step8: The operator [n:m] returns a portion of the string
Step9: If the first index is greater than or equal to the second, an empty string is returned
Strings are immutable
Strings are immutable
This means you can't change a string once it has been built
Step10: In the error, the object is the string and the item is the character
Don't worry about the idea of an object right now
We will discuss it later
For now, think of it as a variable's value
Since strings are immutable, the only way to "change" one is to create a new one with the changes you want
Step11: Note that the original string is unchanged
Searching
In many situations, you need to search for a particular value in a sequence of values
The easiest way to do it is using a sequential search
Step12: Notice that if the letter isn't found in the word, the function returns -1
It is common to return a special value indicating failure of some kind
However, it needs to be noted in the comments so it is actually useful
String methods
A method is similar to a function
It takes arguments and can return a value, but the syntax to use it is different
It operates on a variable that is an object (like a string)
All the TurtleWorld code used methods on a turtle
Step13: In this case, the method upper is called on the string word
It returns a new string with all the letters now in uppercase
Calling a method like this is called an invocation
Strings in Python have a number of methods already built in
One of them is find
It finds the index of a specified substring (not just a character)
Python's documentation lists them all <br>
https://docs.python.org/3/library/stdtypes.html#string-methods
Step14: String comparison
The relational equality operator == works on strings
The other relational operators work as well
Step15: As we have previously discussed, uppercase letters come before lowercase letters with respect to encoding
This means that uppercase are considered less than lowercase letters
If you need to compare strings, it is common to convert them to all lowercase (or uppercase) before comparing unless you specifically want to compare using case
Debugging
When you traverse a sequence of items, it is sometimes challenging to get the beginning and ending indices correct
These are frequently referred to as off-by-one errors since you stop one early or one late
To aid in debugging, it is helpful to display not only the index, but the value at the index as well
Step16: Exercises
Write a function is_palindrome that takes a string phrase as an argument and returns a boolean indicating whether or not the string is a palindrome.
Step17: As discussed above, Python has an operator called in that determines if a character or a substring is contained in a larger string. It returns True if the character or substring is in the larger string, otherwise, it returns False. Write a function called in_string that implements the same functionality.
Step18: Write a function that prompts a user to enter their first and last names. The function should then print a computer username consisting of the first letter of their first name and the first seven letters of their last name. | Python Code:
fruit = 'banana'
letter = fruit[1]
print( letter )
Explanation: Chapter 8: Strings
Contents
- A string is a sequence
- The len operator
- Traversal with a for loop
- String slices
- Strings are immutable
- Searching
- String methods
- The in operator
- String comparison
- Debugging
- Exercises
This notebook is based on "Think Python, 2Ed" by Allen B. Downey <br>
https://greenteapress.com/wp/think-python-2e/
A string is a sequence
A string is a sequence of characters
You can access individual characters in the sequence using the bracket operator
End of explanation
len( fruit )
Explanation: Why was an a returned instead of b?
Computer scientists (and much of Europe) count starting with 0
These sequences are referred to as zero-based
The bracket notation only accepts integers (or expressions that result in an integer)
The len operator
len is a built-in function that returns the number of characters in a string
End of explanation
length = len( fruit )
# Uncomment to see what happens
#last = fruit[ length ]
Explanation: Note that the length is one-based, just like normal counting
What happens when you try to access a value that is out of range?
End of explanation
last = fruit[ length - 1 ]
last = fruit[ -1 ]
Explanation: Remember that indices are zero-based, while counting is one-based
There are multiple ways to get the last letter
End of explanation
index = 0
while( index < len( fruit ) ):
letter = fruit[ index ]
print( letter )
index = index + 1
Explanation: You can also use other negative integers to index characters
Traversal with a for loop
There is frequently a need to process each item in a sequence one at a time
This is done by traversing the sequence and processing each item in turn
One way to do this is using a while loop
End of explanation
# Use the for loop to keep track of the index
for index in range( len( fruit ) ):
print( fruit[ index ] )
# Have Python keep track of things for us
for char in fruit:
print( char )
Explanation: A for loop can also be used
We can either use the for loop to keep track of the index or have Python automatically access each value in the sequence
End of explanation
prefixes = 'JKLMNOPQ'
suffix = 'ack'
for letter in prefixes:
print( letter + suffix )
Explanation: The second approach can be simpler, but you don't know the index of the value
Loops can also be used to build strings
End of explanation
s = 'Monty Python'
print( s[0:5] )
print( s[6:12] )
Explanation: String slices
A segment of a string is called a slice
The method of selecting a slice is similar to accessing a single character
End of explanation
fruit = 'banana'
print( fruit[:3] )
print( fruit[3:] )
Explanation: The operator [n:m] returns a portion of the string
The n-th item is included, but the m-th item is not
TODO insert image
If you omit an index from the slice, it either uses the beginning or the end of the string depending on which index is omitted
End of explanation
greeting = 'Hello world!'
# Uncomment to see the error generated
# greeting[0] = 'J'
Explanation: If the first index is greater than or equal to the second, an empty string is returned
Strings are immutable
Strings are immutable
This means you can't change a string once it has been built
End of explanation
greeting = 'Hello world!'
new_greeting = 'J' + greeting[1:]
print( new_greeting )
print( greeting )
Explanation: In the error, the object is the string and the item is the character
Don't worry about the idea of an object right now
We will discuss it later
For now, think of it as a variable's value
Since strings are immutable, the only way to "change" one is to create a new one with the changes you want
End of explanation
def find( word, letter ):
    # Sequential search: return the index of the first occurrence of letter,
    # or -1 if letter does not appear in word
    letter_index = -1
    current_index = 0
    while( current_index < len( word ) ):
        if( word[current_index] == letter ):
            letter_index = current_index
            break
        current_index = current_index + 1
    return letter_index
Explanation: Note that the original string is unchanged
Searching
In many situations, you need to search for a particular value in a sequence of values
The easiest way to do it is using a sequential search
End of explanation
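# Added check of the find function defined above
print( find( 'banana', 'n' ) )    # 2, the first 'n' in 'banana'
print( find( 'banana', 'z' ) )    # -1, 'z' does not appear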
word = 'banana'
new_word = word.upper()
print( new_word )
Explanation: Notice that if the letter isn't found in the word, the function returns -1
It is common to return a special value indicating failure of some kind
However, it needs to be noted in the comments so it is actually useful
String methods
A method is similar to a function
It takes arguments and can return a value, but the syntax to use it is different
It operates on a variable that is an object (like a string)
All the TurtleWorld code used methods on a turtle
End of explanation
print( 'an' in 'banana' )
print( 'seed' in 'banana' )
Explanation: In this case, the method upper is called on the string word
It returns a new string with all the letters now in uppercase
Calling a method like this is called an invocation
Strings in Python have a number of methods already built in
One of them is find
It finds the index of a specified substring (not just a character)
Python's documentation lists them all <br>
https://docs.python.org/3/library/stdtypes.html#string-methods
The in operator
Strings have an in operator that returns True if the first string is a substring of the second
End of explanation
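# Added illustration of the built-in find method mentioned above
word = 'banana'
print( word.find( 'na' ) )     # 2, index of the first occurrence
print( word.find( 'xyz' ) )    # -1 when the substring is not found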
word = 'apple'
if( word == 'banana' ):
print( 'All right, bananas.' )
if( word < 'banana' ):
print( 'Your word, ' + word + ', comes before banana.' )
elif( word > 'banana' ):
print( 'Your word, ' + word + ', comes after banana.' )
else:
print( 'All right, bananas.' )
Explanation: String comparison
The relational equality operator == works on strings
The other relational operators work as well
End of explanation
word = 'banana'
index = 2
print( 'index=[', index, '] value=[', word[ index ], ']' )
Explanation: As we have previously discussed, uppercase letters come before lowercase letters with respect to encoding
This means that uppercase are considered less than lowercase letters
If you need to compare strings, it is common to convert them to all lowercase (or uppercase) before comparing unless you specifically want to compare using case
Debugging
When you traverse a sequence of items, it is sometimes challenging to get the beginning and ending indices correct
These are frequently referred to as off-by-one errors since you stop one early or one late
To aid in debugging, it is helpful to display not only the index, but the value at the index as well
End of explanation
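# Added illustration: lowercase both strings first to compare them
# alphabetically while ignoring case
print( 'Pineapple' < 'banana' )                    # True, 'P' sorts before 'b' in the encoding
print( 'Pineapple'.lower() < 'banana'.lower() )    # False, alphabetically 'pineapple' comes after 'banana'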
def is_palindrome( phrase ):
# YOUR CODE GOES HERE
return False
Explanation: Exercises
Write a function is_palindrome that takes a string phrase as an argument and returns a boolean indicating whether or not the string is a palindrome.
End of explanation
def in_string( substring, larger_string ):
# YOUR CODE GOES HERE
return False
Explanation: As discussed above, Python has an operator called in that determines if a character or a substring is contained in a larger string. It returns True if the character or substring is in the larger string, otherwise, it returns False. Write a function called in_string that implements the same functionality.
End of explanation
def create_username():
username = ''
# YOUR CODE HERE
print( username )
Explanation: Write a function that prompts a user to enter their first and last names. The function should then print a computer username consisting of the first letter of their first name and the first seven letters of their last name.
End of explanation |
7,173 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using lambdify for plotting expressions
The synthetic isotope Technetium-99m is used in medical diagnostics (scintigraphy)
Step1: now we need to determine the integration constants from the initial conditions
Step2: Exercise
Step3: Use either the %exercise or %load magic to get the exercise / solution respecitvely | Python Code:
import sympy as sym
sym.init_printing()
symbs = t, l1, l2, x0, y0, z0 = sym.symbols('t lambda_1 lambda_2 x0 y0 z0', real=True, nonnegative=True)
funcs = x, y, z = [sym.Function(s)(t) for s in 'xyz']
inits = [f.subs(t, 0) for f in funcs]
diffs = [f.diff(t) for f in funcs]
exprs = -l1*x, l1*x - l2*y, l2*y
eqs = [sym.Eq(diff, expr) for diff, expr in zip(diffs, exprs)]
eqs
solutions = sym.dsolve(eqs)
solutions
Explanation: Using lambdify for plotting expressions
The synthetic isotope Technetium-99m is used in medical diagnostics (scintigraphy):
$$
^{99m}Tc \overset{\lambda_1}{\longrightarrow} \,^{99}Tc \overset{\lambda_2}{\longrightarrow} \,^{99}Ru \
\lambda_1 = 3.2\cdot 10^{-5}\,s^{-1} \
\lambda_2 = 1.04 \cdot 10^{-13}\,s^{-1} \
$$
SymPy can solve the differential equations describing the amounts versus time analytically.
Let's denote the concentrations of each isotope $x(t),\ y(t)\ \&\ z(t)$ respectively.
End of explanation
integration_constants = set.union(*[sol.free_symbols for sol in solutions]) - set(symbs)
integration_constants
initial_values = [sol.subs(t, 0) for sol in solutions]
initial_values
const_exprs = sym.solve(initial_values, integration_constants)
const_exprs
analytic = [sol.subs(const_exprs) for sol in solutions]
analytic
Explanation: now we need to determine the integration constants from the initial conditions:
End of explanation
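# Added sanity check (not in the original notebook): at t = 0 each solution
# should reduce to its initial amount x0, y0, z0
[sym.simplify(sol.rhs.subs(t, 0)) for sol in analytic]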
from math import log10
import numpy as np
year_s = 365.25*24*3600
tout = np.logspace(0, log10(3e6*year_s), 500) # 1 s to 3 million years
%load_ext scipy2017codegen.exercise
Explanation: Exercise: Create a function from a symbolic expression
We want to plot the time evolution of x, y & z from the above analytic expression (called analytic above):
End of explanation
# %exercise exercise_Tc99.py
xyz_num = sym.lambdify([t, l1, l2, *inits], [eq.rhs for eq in analytic])
yout = xyz_num(tout, 3.2e-5, 1.04e-13, 1, 0, 0)
import matplotlib.pyplot as plt
%matplotlib inline
fig, ax = plt.subplots(1, 1, figsize=(14, 4))
ax.loglog(tout.reshape((tout.size, 1)), np.array(yout).T)
ax.legend(['$^{99m}Tc$', '$^{99}Tc$', '$^{99}Ru$'])
ax.set_xlabel('Time / s')
ax.set_ylabel('Concentration / a.u.')
_ = ax.set_ylim([1e-11, 2])
Explanation: Use either the %exercise or %load magic to get the exercise / solution respectively:
Replace ??? so that f(t) evaluates $x(t),\ y(t)\ \&\ z(t)$. Hint: use the right hand side of the equations in analytic (use the attribute rhs of the items in anayltic):
End of explanation |
7,174 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Following the tutorial at
Step1: Accessing Data
You can access DataFrame data using familiar Python dict/list operations
Step2: Manipulating Data
You may apply Python's basic arithmetic operations to Series. For example
Step3: Exercise #1
Modify the cities table by adding a new boolean column that is True if and only if both of the following are True
Step4: Indexes
Both Series and DataFrame objects also define an index property that assigns an identifier value to each Series item or DataFrame row.
By default, at construction, pandas assigns index values that reflect the ordering of the source data. Once created, the index values are stable; that is, they do not change when data is reordered.
Step5: Reindexing is a great way to shuffle (randomize) a DataFrame. In the example below, we take the index, which is array-like, and pass it to NumPy's random.permutation function, which shuffles its values in place. Calling reindex with this shuffled array causes the DataFrame rows to be shuffled in the same way.
Step6: Exercise #2
The reindex method allows index values that are not in the original DataFrame's index values. Try it and see what happens if you use such values! Why do you think this is allowed? | Python Code:
import pandas as pd
# There are two data structures in pandas, Series and DataFrames
city_names = pd.Series(['San Francisco', 'San Jose', 'Sacramento'])
population = pd.Series([852469, 1015785, 485199])
pd.DataFrame({"City Name": city_names, "Population": population})
# importing an existing csv file into DataFrame
california_housing_dataframe = pd.read_csv(
"https://storage.googleapis.com/mledu-datasets/california_housing_train.csv",
sep=","
)
california_housing_dataframe.shape
california_housing_dataframe.head()
california_housing_dataframe.hist('housing_median_age')
Explanation: Following the tutorial at: https://colab.research.google.com/notebooks/mlcc/intro_to_pandas.ipynb?hl=tr#scrollTo=U5ouUp1cU6pC
End of explanation
cities = pd.DataFrame({'City Name': city_names, 'Population': population})
print(type(cities['City Name']))
cities['City Name']
print(type(cities["City Name"][1]))
cities["City Name"][1]
print(type(cities[0:2]))
cities[0:2]
Explanation: Accessing Data
You can access DataFrame data using familiar Python dict/list operations:
End of explanation
population / 1000
import numpy as np
np.log(population)
cities['Area square miles'] = pd.Series([46.87, 176.53, 97.92])
cities['Population density'] = cities['Population'] / cities['Area square miles']
cities
population.apply(lambda val: val > 1000000)
Explanation: Manipulating Data
You may apply Python's basic arithmetic operations to Series. For example:
End of explanation
cities['is saint and wide'] = (cities['Area square miles'] > 50) & (cities['City Name'].apply(lambda name: name.startswith("San")))
cities
Explanation: Exercise #1
Modify the cities table by adding a new boolean column that is True if and only if both of the following are True:
The city is named after a saint.
The city has an area greater than 50 square miles.
Note: Boolean Series are combined using the bitwise, rather than the traditional boolean, operators. For example, when performing logical and, use & instead of and.
Hint: "San" in Spanish means "saint."
End of explanation
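# Added illustration of the note above: element-wise logic on boolean Series
# uses the bitwise operators & (and), | (or) and ~ (not)
(population > 500000) & (population < 1000000)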
city_names.index
cities.index
cities.reindex([2, 0, 1])
Explanation: Indexes
Both Series and DataFrame objects also define an index property that assigns an identifier value to each Series item or DataFrame row.
By default, at construction, pandas assigns index values that reflect the ordering of the source data. Once created, the index values are stable; that is, they do not change when data is reordered.
End of explanation
cities.reindex(np.random.permutation(cities.index))
Explanation: Reindexing is a great way to shuffle (randomize) a DataFrame. In the example below, we take the index, which is array-like, and pass it to NumPy's random.permutation function, which shuffles its values in place. Calling reindex with this shuffled array causes the DataFrame rows to be shuffled in the same way.
End of explanation
cities.reindex([4, 2, 1, 3, 0])
Explanation: Exercise #2
The reindex method allows index values that are not in the original DataFrame's index values. Try it and see what happens if you use such values! Why do you think this is allowed?
End of explanation |
7,175 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Google form analysis tests
Table of Contents
'Google form analysis' functions checks
Google form loading
Selection of a question
Selection of a user's answers
checking answers
comparison of checkpoints completion and answers
answers submitted through time
merge English and French answers
Step1: 'Google form analysis' functions checks
<a id=funcchecks />
copy-paste for unit tests
(userIDThatDidNotAnswer)
(userID1AnswerEN)
(userIDAnswersEN)
(userID1ScoreEN)
(userIDScoresEN)
(userID1AnswerFR)
(userIDAnswersFR)
(userID1ScoreFR)
(userIDScoresFR)
(userIDAnswersENFR)
getAllResponders
Step2: hasAnswered
Step3: getAnswers
Step4: getCorrections
Step5: getScore
Step6: getValidatedCheckpoints
Step7: getNonValidated
Step8: getNonValidatedCheckpoints
Step9: getValidatedCheckpointsCounts
Step10: getNonValidatedCheckpointsCounts
Step11: getAllAnswerRows
Step12: getPercentCorrectPerColumn
tested through getPercentCorrectKnowingAnswer
getPercentCorrectKnowingAnswer
Step13: Google form loading
<a id=gformload />
Step14: Selection of a question
<a id=selquest />
Step15: Selection of a user's answers
<a id=selusans />
userIDThatDidNotAnswer
userID1AnswerEN
userIDAnswersEN
userID1AnswerFR
userIDAnswersFR
userIDAnswersENFR
getUniqueUserCount tinkering
Step16: getAllRespondersGFormGUID tinkering
Step17: getRandomGFormGUID tinkering
Step18: getAnswers tinkering
Step19: answer selection
Step20: checking answers
<a id=checkans />
Step21: getTemporality tinkering
Step22: getTemporality tinkering
Step23: getTestAnswers tinkering
Step24: getCorrections tinkering
Step25: getCorrections extensions tinkering
Step26: getBinarizedCorrections tinkering
Step27: getBinarized tinkering
Step28: getAllBinarized tinkering
Step29: plotCorrelationMatrix tinkering
Step30: data = transposed[[0,1]]
data.corr(method = 'spearman')
Step31: getCrossCorrectAnswers tinkering
Before
Step32: after
Step33: getScore tinkering
Step34: comparison of checkpoints completion and answers
<a id=compcheckans />
Theoretically, they should match. Whoever understood an item should beat the matching challenge. The discrepancies are due to game design or level design.
getValidatedCheckpoints tinkering
Step35: getNonValidated tinkering
Step36: getAllAnswerRows tinkering
Step37: getPercentCorrectPerColumn tinkering
Step38: getPercentCorrectKnowingAnswer tinkering
Step39: tests on all user Ids, including those who answered more than once
Step40: answers submitted through time
<a id=ansthrutime />
merging answers in English and French
<a id=mergelang />
tests
Step41: add language column
Scores will be evaluated per language
Step42: concatenate | Python Code:
%run "../Functions/1. Google form analysis.ipynb"
# Localplayerguids of users who answered the questionnaire (see below).
# French
#localplayerguid = 'a4d4b030-9117-4331-ba48-90dc05a7e65a'
#localplayerguid = 'd6826fd9-a6fc-4046-b974-68e50576183f'
#localplayerguid = 'deb089c0-9be3-4b75-9b27-28963c77b10c'
#localplayerguid = '75e264d6-af94-4975-bb18-50cac09894c4'
#localplayerguid = '3d733347-0313-441a-b77c-3e4046042a53'
# English
localplayerguid = '8d352896-a3f1-471c-8439-0f426df901c1'
#localplayerguid = '7037c5b2-c286-498e-9784-9a061c778609'
#localplayerguid = '5c4939b5-425b-4d19-b5d2-0384a515539e'
#localplayerguid = '7825d421-d668-4481-898a-46b51efe40f0'
#localplayerguid = 'acb9c989-b4a6-4c4d-81cc-6b5783ec71d8'
#localplayerguid = devPCID5
Explanation: Google form analysis tests
Table of Contents
'Google form analysis' functions checks
Google form loading
Selection of a question
Selection of a user's answers
checking answers
comparison of checkpoints completion and answers
answers submitted through time
merge English and French answers
End of explanation
len(getAllResponders())
Explanation: 'Google form analysis' functions checks
<a id=funcchecks />
copy-paste for unit tests
(userIDThatDidNotAnswer)
(userID1AnswerEN)
(userIDAnswersEN)
(userID1ScoreEN)
(userIDScoresEN)
(userID1AnswerFR)
(userIDAnswersFR)
(userID1ScoreFR)
(userIDScoresFR)
(userIDAnswersENFR)
getAllResponders
End of explanation
assert(not hasAnswered( userIDThatDidNotAnswer )), "User has NOT answered"
assert(hasAnswered( userID1AnswerEN )), "User HAS answered"
assert(hasAnswered( userIDAnswersEN )), "User HAS answered"
assert(hasAnswered( userID1AnswerFR )), "User HAS answered"
assert(hasAnswered( userIDAnswersFR )), "User HAS answered"
assert(hasAnswered( userIDAnswersENFR )), "User HAS answered"
Explanation: hasAnswered
End of explanation
assert (len(getAnswers( userIDThatDidNotAnswer ).columns) == 0),"Too many answers"
assert (len(getAnswers( userID1AnswerEN ).columns) == 1),"Too many answers"
assert (len(getAnswers( userIDAnswersEN ).columns) >= 2),"Not enough answers"
assert (len(getAnswers( userID1AnswerFR ).columns) == 1),"Not enough columns"
assert (len(getAnswers( userIDAnswersFR ).columns) >= 2),"Not enough answers"
assert (len(getAnswers( userIDAnswersENFR ).columns) >= 2),"Not enough answers"
Explanation: getAnswers
End of explanation
assert (len(getCorrections( userIDThatDidNotAnswer ).columns) == 0),"Too many answers"
assert (len(getCorrections( userID1AnswerEN ).columns) == 2),"Too many answers"
assert (len(getCorrections( userIDAnswersEN ).columns) >= 4),"Not enough answers"
assert (len(getCorrections( userID1AnswerFR ).columns) == 2),"Too many answers"
assert (len(getCorrections( userIDAnswersFR ).columns) >= 4),"Not enough answers"
assert (len(getCorrections( userIDAnswersENFR ).columns) >= 4),"Not enough answers"
Explanation: getCorrections
End of explanation
assert (len(pd.DataFrame(getScore( userIDThatDidNotAnswer ).values.flatten().tolist()).values.flatten().tolist()) == 0),"Too many answers"
score = getScore( userID1AnswerEN )
#print(score)
assert (
(len(score.values.flatten()) == 3)
and
score['before'][0][0] == 23
),"Incorrect score"
score = getScore( userIDAnswersEN )
#print(score)
assert (
(len(score.values.flatten()) == 3)
and
score['before'][0][0] == 5
and
score['after'][0][0] == 25
),"Incorrect score"
score = getScore( userID1AnswerFR )
#print(score)
assert (
(len(score.values.flatten()) == 3)
and
score['before'][0][0] == 23
),"Incorrect score"
score = getScore( userIDAnswersFR )
#print(score)
assert (
(len(score.values.flatten()) == 3)
and
score['before'][0][0] == 15
and
score['after'][0][0] == 26
),"Incorrect score"
score = getScore( userIDAnswersENFR )
#print(score)
assert (
(len(score.values.flatten()) == 3)
and
score['before'][0][0] == 4
and
score['after'][0][0] == 13
),"Incorrect score"
Explanation: getScore
End of explanation
objective = 0
assert (len(getValidatedCheckpoints( userIDThatDidNotAnswer )) == objective),"Incorrect number of answers"
objective = 1
assert (len(getValidatedCheckpoints( userID1AnswerEN )) == objective),"Incorrect number of answers"
assert (getValidatedCheckpoints( userID1AnswerEN )[0].equals(validableCheckpoints)) \
, "User has validated everything"
objective = 2
assert (len(getValidatedCheckpoints( userIDAnswersEN )) == objective),"Incorrect number of answers"
objective = 3
assert (len(getValidatedCheckpoints( userIDAnswersEN )[0]) == objective) \
, "User has validated " + str(objective) + " chapters on first try"
objective = 1
assert (len(getValidatedCheckpoints( userID1AnswerFR )) == objective),"Incorrect number of answers"
assert (getValidatedCheckpoints( userID1AnswerFR )[0].equals(validableCheckpoints)) \
, "User has validated everything"
objective = 2
assert (len(getValidatedCheckpoints( userIDAnswersFR )) == objective),"Incorrect number of answers"
objective = 5
assert (len(getValidatedCheckpoints( userIDAnswersFR )[1]) == objective) \
, "User has validated " + str(objective) + " chapters on second try"
objective = 2
assert (len(getValidatedCheckpoints( userIDAnswersENFR )) == objective),"Incorrect number of answers"
objective = 5
assert (len(getValidatedCheckpoints( userIDAnswersENFR )[1]) == objective) \
, "User has validated " + str(objective) + " chapters on second try"
Explanation: getValidatedCheckpoints
End of explanation
getValidatedCheckpoints( userIDThatDidNotAnswer )
pd.Series(getValidatedCheckpoints( userIDThatDidNotAnswer ))
type(getNonValidated(pd.Series(getValidatedCheckpoints( userIDThatDidNotAnswer ))))
validableCheckpoints
assert(getNonValidated(getValidatedCheckpoints( userIDThatDidNotAnswer ))).equals(validableCheckpoints), \
"incorrect validated checkpoints: should contain all checkpoints that can be validated"
testSeries = pd.Series(
[
'', # 7
'', # 8
'', # 9
'', # 10
'tutorial1.Checkpoint00', # 11
'tutorial1.Checkpoint00', # 12
'tutorial1.Checkpoint00', # 13
'tutorial1.Checkpoint00', # 14
'tutorial1.Checkpoint02', # 15
'tutorial1.Checkpoint01', # 16
'tutorial1.Checkpoint05'
]
)
assert(getNonValidated(pd.Series([testSeries]))[0][0] == 'tutorial1.Checkpoint13'), "Incorrect non validated checkpoint"
Explanation: getNonValidated
End of explanation
getNonValidatedCheckpoints( userIDThatDidNotAnswer )
getNonValidatedCheckpoints( userID1AnswerEN )
getNonValidatedCheckpoints( userIDAnswersEN )
getNonValidatedCheckpoints( userID1AnswerFR )
getNonValidatedCheckpoints( userIDAnswersFR )
getNonValidatedCheckpoints( userIDAnswersENFR )
Explanation: getNonValidatedCheckpoints
End of explanation
getValidatedCheckpointsCounts(userIDThatDidNotAnswer)
getValidatedCheckpointsCounts(userID1AnswerEN)
getValidatedCheckpointsCounts(userIDAnswersEN)
getValidatedCheckpointsCounts(userID1ScoreEN)
getValidatedCheckpointsCounts(userIDScoresEN)
getValidatedCheckpointsCounts(userID1AnswerFR)
getValidatedCheckpointsCounts(userIDAnswersFR)
getValidatedCheckpointsCounts(userID1ScoreFR)
getValidatedCheckpointsCounts(userIDScoresFR)
getValidatedCheckpointsCounts(userIDAnswersENFR)
Explanation: getValidatedCheckpointsCounts
End of explanation
getNonValidatedCheckpointsCounts(userIDThatDidNotAnswer)
getNonValidatedCheckpointsCounts(userID1AnswerEN)
getNonValidatedCheckpointsCounts(userIDAnswersEN)
getNonValidatedCheckpointsCounts(userID1ScoreEN)
getNonValidatedCheckpointsCounts(userIDScoresEN)
getNonValidatedCheckpointsCounts(userID1AnswerFR)
getNonValidatedCheckpointsCounts(userIDAnswersFR)
getNonValidatedCheckpointsCounts(userID1ScoreFR)
getNonValidatedCheckpointsCounts(userIDScoresFR)
getNonValidatedCheckpointsCounts(userIDAnswersENFR)
Explanation: getNonValidatedCheckpointsCounts
End of explanation
aYes = ["Yes", "Oui"]
aNo = ["No", "Non"]
aNoIDK = ["No", "Non", "I don't know", "Je ne sais pas"]
# How long have you studied biology?
qBiologyEducationLevelIndex = 5
aBiologyEducationLevelHigh = ["Until bachelor's degree", "Jusqu'à la license"]
aBiologyEducationLevelLow = ['Until the end of high school', 'Until the end of middle school', 'Not even in middle school',\
"Jusqu'au bac", "Jusqu'au brevet", 'Jamais']
# Have you ever heard about BioBricks?
qHeardBioBricksIndex = 8
# Have you played the current version of Hero.Coli?
qPlayedHerocoliIndex = 10
qPlayedHerocoliYes = ['Yes', 'Once', 'Multiple times', 'Oui',
'De nombreuses fois', 'Quelques fois', 'Une fois']
qPlayedHerocoliNo = ['No', 'Non',]
gform['How long have you studied biology?'].unique()
gform['Before playing Hero.Coli, had you ever heard about BioBricks?'].unique()
gform['Have you played the current version of Hero.Coli?'].unique()
getAllAnswerRows(qBiologyEducationLevelIndex, aBiologyEducationLevelHigh)
assert(len(getAllAnswerRows(qBiologyEducationLevelIndex, aBiologyEducationLevelHigh)) != 0)
assert(len(getAllAnswerRows(qBiologyEducationLevelIndex, aBiologyEducationLevelLow)) != 0)
assert(len(getAllAnswerRows(qHeardBioBricksIndex, aYes)) != 0)
assert(len(getAllAnswerRows(qHeardBioBricksIndex, aNoIDK)) != 0)
assert(len(getAllAnswerRows(qPlayedHerocoliIndex, qPlayedHerocoliYes)) != 0)
assert(len(getAllAnswerRows(qPlayedHerocoliIndex, qPlayedHerocoliNo)) != 0)
Explanation: getAllAnswerRows
End of explanation
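getAllAnswerRows is defined in the helper scripts; a sketch consistent with the "getAllAnswerRows tinkering" cell near the end of this notebook:
def getAllAnswerRows(questionIndex, choice, _form = gform):
    # rows whose answer to question number 'questionIndex' is one of the values in 'choice'
    return _form[_form.iloc[:, questionIndex].isin(choice)]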
questionIndex = 15
gform.iloc[:, questionIndex].head()
(qBiologyEducationLevelIndex, aBiologyEducationLevelHigh)
getAllAnswerRows(qBiologyEducationLevelIndex, aBiologyEducationLevelHigh)
getPercentCorrectKnowingAnswer(qBiologyEducationLevelIndex, aBiologyEducationLevelHigh)
getPercentCorrectKnowingAnswer(qBiologyEducationLevelIndex, aBiologyEducationLevelLow)
getPercentCorrectKnowingAnswer(qHeardBioBricksIndex, aYes)
getPercentCorrectKnowingAnswer(qHeardBioBricksIndex, aNoIDK)
playedHerocoliIndexYes = getPercentCorrectKnowingAnswer(qPlayedHerocoliIndex, qPlayedHerocoliYes)
playedHerocoliIndexYes
playedHerocoliIndexNo = getPercentCorrectKnowingAnswer(qPlayedHerocoliIndex, qPlayedHerocoliNo)
playedHerocoliIndexNo
playedHerocoliIndexYes - playedHerocoliIndexNo
(playedHerocoliIndexYes - playedHerocoliIndexNo) / (1 - playedHerocoliIndexNo)
Explanation: getPercentCorrectPerColumn
tested through getPercentCorrectKnowingAnswer
getPercentCorrectKnowingAnswer
End of explanation
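The two functions checked below chain together as in the tinkering cells near the end of this notebook; a sketch (the real definitions live in the shared helper scripts):
def getPercentCorrectKnowingAnswer(questionIndex, choice, _form = gform):
    # per-question share of correct answers among users who picked 'choice' at 'questionIndex'
    _answerRows = getAllAnswerRows(questionIndex, choice, _form = _form)
    return getPercentCorrectPerColumn(_answerRows)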
#gform = gformEN
transposed = gform.T
#answers = transposed[transposed[]]
transposed
type(gform)
Explanation: Google form loading
<a id=gformload />
End of explanation
gform.columns
gform.columns.get_loc('Do not edit - pre-filled anonymous ID')
localplayerguidkey
# Using the whole question:
gform[localplayerguidkey]
# Get index from question
localplayerguidindex
# Using the index of the question:
gform.iloc[:, localplayerguidindex]
Explanation: Selection of a question
<a id=selquest />
End of explanation
sample = gform
#def getUniqueUserCount(sample):
sample[localplayerguidkey].nunique()
Explanation: Selection of a user's answers
<a id=selusans />
userIDThatDidNotAnswer
userID1AnswerEN
userIDAnswersEN
userID1AnswerFR
userIDAnswersFR
userIDAnswersENFR
getUniqueUserCount tinkering
End of explanation
userIds = gform[localplayerguidkey].unique()
len(userIds)
Explanation: getAllRespondersGFormGUID tinkering
End of explanation
allResponders = getAllResponders()
uniqueUsers = np.unique(allResponders)
print(len(allResponders))
print(len(uniqueUsers))
for guid in uniqueUsers:
if(not isGUIDFormat(guid)):
print('incorrect guid: ' + str(guid))
uniqueUsers = getAllResponders()
userCount = len(uniqueUsers)
guid = '0'
while (not isGUIDFormat(guid)):
userIndex = randint(0,userCount-1)
guid = uniqueUsers[userIndex]
guid
Explanation: getRandomGFormGUID tinkering
End of explanation
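Wrapped as a function, the loop above amounts to the sketch below (getAllResponders, isGUIDFormat and randint are assumed to be available from the helper scripts and imports already loaded):
def getRandomGFormGUID():
    # draw random responders until one has a well-formed GUID
    uniqueUsers = getAllResponders()
    userCount = len(uniqueUsers)
    guid = '0'
    while not isGUIDFormat(guid):
        userIndex = randint(0, userCount - 1)
        guid = uniqueUsers[userIndex]
    return guid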
#userId = userIDThatDidNotAnswer
#userId = userID1AnswerEN
userId = userIDAnswersEN
_form = gform
#def getAnswers( userId, _form = gform ):
answers = _form[_form[localplayerguidkey]==userId]
_columnAnswers = answers.T
if 0 != len(answers):
_newColumns = []
for column in _columnAnswers.columns:
_newColumns.append(answersColumnNameStem + str(column))
_columnAnswers.columns = _newColumns
else:
# user has never answered
print("user " + str(userId) + " has never answered")
_columnAnswers
Explanation: getAnswers tinkering
End of explanation
answers
# Selection of a specific answer
answers.iloc[:,localplayerguidindex]
answers.iloc[:,localplayerguidindex].iloc[0]
type(answers.iloc[0,:])
answers.iloc[0,:].values
Explanation: answer selection
End of explanation
#### Question that has a correct answer:
questionIndex = 15
answers.iloc[:,questionIndex].iloc[0]
correctAnswers.iloc[questionIndex][0]
answers.iloc[:,questionIndex].iloc[0].startswith(correctAnswers.iloc[questionIndex][0])
#### Question that has no correct answer:
questionIndex = 0
#answers.iloc[:,questionIndex].iloc[0].startswith(correctAnswers.iloc[questionIndex].iloc[0])
#### Batch check:
columnAnswers = getAnswers( userId )
columnAnswers.values[2,0]
columnAnswers[columnAnswers.columns[0]][2]
correctAnswers
type(columnAnswers)
indexOfFirstEvaluationQuestion = 13
columnAnswers.index[indexOfFirstEvaluationQuestion]
Explanation: checking answers
<a id=checkans />
End of explanation
gform.tail(50)
gform[gform[localplayerguidkey] == 'ba202bbc-af77-42e8-85ff-e25b871717d5']
gformRealBefore = gform.loc[88, 'Timestamp']
gformRealBefore
gformRealAfter = gform.loc[107, 'Timestamp']
gformRealAfter
RMRealFirstEvent = getFirstEventDate(gform.loc[88,localplayerguidkey])
RMRealFirstEvent
Explanation: getTemporality tinkering
End of explanation
tzAnswerDate = gformRealBefore
gameEventDate = RMRealFirstEvent
#def getTemporality( answerDate, gameEventDate ):
result = answerTemporalities[2]
if(gameEventDate != pd.Timestamp.max.tz_localize('utc')):
if(answerDate <= gameEventDate):
result = answerTemporalities[0]
elif (answerDate > gameEventDate):
result = answerTemporalities[1]
result, tzAnswerDate, gameEventDate
firstEventDate = getFirstEventDate(gform.loc[userIndex,localplayerguidkey])
firstEventDate
gformTestBefore = pd.Timestamp('2018-01-16 14:28:20.998000+0000', tz='UTC')
getTemporality(gformTestBefore,firstEventDate)
gformTestWhile = pd.Timestamp('2018-01-16 14:28:23.998000+0000', tz='UTC')
getTemporality(gformTestWhile,firstEventDate)
gformTestAfter = pd.Timestamp('2018-01-16 14:28:24.998000+0000', tz='UTC')
getTemporality(gformTestAfter,firstEventDate)
Explanation: getTemporality tinkering
End of explanation
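Rolled back into the commented-out signature, the logic explored above reads as follows (a sketch; answerTemporalities is assumed to hold the 'before' / 'after' / 'undefined' labels used by the helper scripts):
def getTemporality(answerDate, gameEventDate):
    # classify an answer as submitted before or after the user's first game event
    result = answerTemporalities[2]
    if gameEventDate != pd.Timestamp.max.tz_localize('utc'):
        if answerDate <= gameEventDate:
            result = answerTemporalities[0]
        elif answerDate > gameEventDate:
            result = answerTemporalities[1]
    return result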
_form = gform
_rmDF = rmdf152
_rmTestDF = normalizedRMDFTest
includeAndroid = True
#def getTestAnswers( _form = gform, _rmDF = rmdf152, _rmTestDF = normalizedRMDFTest, includeAndroid = True):
_form[_form[localplayerguidkey].isin(testUsers)]
_form[localplayerguidkey]
testUsers
len(getTestAnswers()[localplayerguidkey])
rmdf152['customData.platform'].unique()
rmdf152[rmdf152['customData.platform'].apply(lambda s: str(s).endswith('editor'))]
rmdf152[rmdf152['userId'].isin(getTestAnswers()[localplayerguidkey])][['userTime','customData.platform','userId']].dropna()
Explanation: getTestAnswers tinkering
End of explanation
columnAnswers
#testUserId = userID1AnswerEN
testUserId = '8d352896-a3f1-471c-8439-0f426df901c1'
getCorrections(testUserId)
testUserId = '8d352896-a3f1-471c-8439-0f426df901c1'
source = correctAnswers
#def getCorrections( _userId, _source = correctAnswers, _form = gform ):
columnAnswers = getAnswers( testUserId )
if 0 != len(columnAnswers.columns):
questionsCount = len(columnAnswers.values)
for columnName in columnAnswers.columns:
if answersColumnNameStem in columnName:
answerNumber = columnName.replace(answersColumnNameStem,"")
newCorrectionsColumnName = correctionsColumnNameStem + answerNumber
columnAnswers[newCorrectionsColumnName] = columnAnswers[columnName]
columnAnswers[newCorrectionsColumnName] = pd.Series(np.full(questionsCount, np.nan))
for question in columnAnswers[columnName].index:
#print()
#print(question)
__correctAnswers = source.loc[question]
if(len(__correctAnswers) > 0):
columnAnswers.loc[question,newCorrectionsColumnName] = False
for correctAnswer in __correctAnswers:
#print("-> " + correctAnswer)
if str(columnAnswers.loc[question,columnName])\
.startswith(str(correctAnswer)):
columnAnswers.loc[question,newCorrectionsColumnName] = True
break
else:
# user has never answered
print("can't give correct answers")
columnAnswers
question = 'How old are you?'
columnName = ''
for column in columnAnswers.columns:
if str.startswith(column, 'answers'):
columnName = column
break
type(columnAnswers.loc[question,columnName])
getCorrections(localplayerguid)
gform.columns[20]
columnAnswers.loc[gform.columns[20],columnAnswers.columns[1]]
columnAnswers[columnAnswers.columns[1]][gform.columns[13]]
columnAnswers.loc[gform.columns[13],columnAnswers.columns[1]]
columnAnswers.iloc[20,1]
questionsCount
np.full(3, np.nan)
pd.Series(np.full(questionsCount, np.nan))
columnAnswers.loc[question,newCorrectionsColumnName]
question
correctAnswers[question]
getCorrections('8d352896-a3f1-471c-8439-0f426df901c1')
Explanation: getCorrections tinkering
End of explanation
correctAnswersEN
#demographicAnswersEN
type([])
mergedCorrectAnswersEN = correctAnswersEN.copy()
for index in mergedCorrectAnswersEN.index:
#print(str(mergedCorrectAnswersEN.loc[index,column]))
mergedCorrectAnswersEN.loc[index] =\
demographicAnswersEN.loc[index] + mergedCorrectAnswersEN.loc[index]
mergedCorrectAnswersEN
correctAnswersEN + demographicAnswersEN
correctAnswers + demographicAnswers
Explanation: getCorrections extensions tinkering
End of explanation
corrections = getCorrections(userIDAnswersENFR)
#corrections
for columnName in corrections.columns:
if correctionsColumnNameStem in columnName:
for index in corrections[columnName].index:
if(True==corrections.loc[index,columnName]):
corrections.loc[index,columnName] = 1
elif (False==corrections.loc[index,columnName]):
corrections.loc[index,columnName] = 0
corrections
binarized = getBinarizedCorrections(corrections)
binarized
slicedBinarized = binarized[13:40]
slicedBinarized
slicedBinarized =\
binarized[13:40][binarized.columns[\
binarized.columns.to_series().str.contains(correctionsColumnNameStem)\
]]
slicedBinarized
Explanation: getBinarizedCorrections tinkering
End of explanation
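The helper tested above converts boolean corrections into 0/1 values; a sketch consistent with the loop at the start of this cell (the real definition lives in the helper scripts):
def getBinarizedCorrections(corrections):
    # map True/False correction cells to 1/0, leaving other cells untouched
    result = corrections.copy()
    for columnName in result.columns:
        if correctionsColumnNameStem in columnName:
            for index in result[columnName].index:
                if True == result.loc[index, columnName]:
                    result.loc[index, columnName] = 1
                elif False == result.loc[index, columnName]:
                    result.loc[index, columnName] = 0
    return result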
_source = correctAnswers
_userId = getRandomGFormGUID()
getCorrections(_userId, _source=_source, _form = gform)
_userId = '5e978fb3-316a-42ba-bb58-00856353838d'
gform[gform[localplayerguidkey] == _userId].iloc[0].index
_gformLine = gform[gform[localplayerguidkey] == _userId].iloc[0]
_gformLine.loc['Before playing Hero.Coli, had you ever heard about synthetic biology?']
_gformLine = gform[gform[localplayerguidkey] == _userId].iloc[0]
# only for one user
# def getBinarized(_gformLine, _source = correctAnswers):
_notEmptyIndexes = []
for _index in _source.index:
if(len(_source.loc[_index]) > 0):
_notEmptyIndexes.append(_index)
_binarized = pd.Series(np.full(len(_gformLine.index), np.nan), index = _gformLine.index)
for question in _gformLine.index:
_correctAnswers = _source.loc[question]
if(len(_correctAnswers) > 0):
_binarized[question] = 0
for _correctAnswer in _correctAnswers:
if str(_gformLine.loc[question])\
.startswith(str(_correctAnswer)):
_binarized.loc[question] = 1
break
_slicedBinarized = _binarized.loc[_notEmptyIndexes]
_slicedBinarized
_slicedBinarized.loc['What are BioBricks and devices?']
Explanation: getBinarized tinkering
End of explanation
allBinarized = getAllBinarized()
plotCorrelationMatrix(allBinarized)
source
source = correctAnswers + demographicAnswers
notEmptyIndexes = []
for eltIndex in source.index:
#print(eltIndex)
if(len(source.loc[eltIndex]) > 0):
notEmptyIndexes.append(eltIndex)
len(source)-len(notEmptyIndexes)
emptyForm = gform[gform[localplayerguidkey] == 'incorrectGUID']
emptyForm
_source = correctAnswers + demographicAnswers
_form = gform #emptyForm
#def getAllBinarized(_source = correctAnswers, _form = gform ):
_notEmptyIndexes = []
for _index in _source.index:
if(len(_source.loc[_index]) > 0):
_notEmptyIndexes.append(_index)
_result = pd.DataFrame(index = _notEmptyIndexes)
for _userId in getAllResponders( _form = _form ):
_corrections = getCorrections(_userId, _source=_source, _form = _form)
_binarized = getBinarizedCorrections(_corrections)
_slicedBinarized =\
_binarized.loc[_notEmptyIndexes][_binarized.columns[\
_binarized.columns.to_series().str.contains(correctionsColumnNameStem)\
]]
_result = pd.concat([_result, _slicedBinarized], axis=1)
_result = _result.T
#_result
if(_result.shape[0] > 0 and _result.shape[1] > 0):
correlation = _result.astype(float).corr()
#plt.matshow(correlation)
sns.clustermap(correlation,cmap=plt.cm.jet,square=True,figsize=(10,10))
#ax = sns.clustermap(correlation,cmap=plt.cm.jet,square=True,figsize=(10,10),cbar_kws={\
#"orientation":"vertical"})
correlation_pearson = _result.T.astype(float).corr(methods[0])
correlation_kendall = _result.T.astype(float).corr(methods[1])
correlation_spearman = _result.T.astype(float).corr(methods[2])
print(correlation_pearson.equals(correlation_kendall))
print(correlation_kendall.equals(correlation_spearman))
diff = (correlation_pearson - correlation_kendall)
flattened = diff[diff > 0.1].values.flatten()
flattened[~np.isnan(flattened)]
correlation
Explanation: getAllBinarized tinkering
End of explanation
scientificQuestionsLabels = gform.columns[13:40]
scientificQuestionsLabels = [
'In order to modify the abilities of the bacterium, you have to... #1',
'What are BioBricks and devices? #2',
'What is the name of this BioBrick? #3',
'What is the name of this BioBrick?.1 #4',
'What is the name of this BioBrick?.2 #5',
'What is the name of this BioBrick?.3 #6',
'What does this BioBrick do? #7',
'What does this BioBrick do?.1 #8',
'What does this BioBrick do?.2 #9',
'What does this BioBrick do?.3 #10',
'Pick the case where the BioBricks are well-ordered: #11',
'When does green fluorescence happen? #12',
'What happens when you unequip the movement device? #13',
'What is this? #14',
'What does this device do? #15',
'What does this device do?.1 #16',
'What does this device do?.2 #17',
'What does this device do?.3 #18',
'What does this device do?.4 #19',
'What does this device do?.5 #20',
'What does this device do?.6 #21',
'What does this device do?.7 #22',
'Guess: what would a device producing l-arabinose do, if it started with a l-arabinose-induced promoter? #23',
'Guess: the bacterium would glow yellow... #24',
'What is the species of the bacterium of the game? #25',
'What is the scientific name of the tails of the bacterium? #26',
'Find the antibiotic: #27',
]
scientificQuestionsLabelsX = [
'#1 In order to modify the abilities of the bacterium, you have to...',
'#2 What are BioBricks and devices?',
'#3 What is the name of this BioBrick?',
'#4 What is the name of this BioBrick?.1',
'#5 What is the name of this BioBrick?.2',
'#6 What is the name of this BioBrick?.3',
'#7 What does this BioBrick do?',
'#8 What does this BioBrick do?.1',
'#9 What does this BioBrick do?.2',
'#10 What does this BioBrick do?.3',
'#11 Pick the case where the BioBricks are well-ordered:',
'#12 When does green fluorescence happen?',
'#13 What happens when you unequip the movement device?',
'#14 What is this?',
'#15 What does this device do?',
'#16 What does this device do?.1',
'#17 What does this device do?.2',
'#18 What does this device do?.3',
'#19 What does this device do?.4',
'#20 What does this device do?.5',
'#21 What does this device do?.6',
'#22 What does this device do?.7',
'#23 Guess: what would a device producing l-arabinose do, if it started with a l-arabinose-induced promoter?',
'#24 Guess: the bacterium would glow yellow...',
'#25 What is the species of the bacterium of the game?',
'#26 What is the scientific name of the tails of the bacterium?',
'#27 Find the antibiotic:',
]
questionsLabels = scientificQuestionsLabels
questionsLabelsX = scientificQuestionsLabelsX
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
ax.set_yticklabels(['']+questionsLabels)
ax.set_xticklabels(['']+questionsLabelsX, rotation='vertical')
ax.matshow(correlation)
ax.set_xticks(np.arange(-1,len(questionsLabels),1.));
ax.set_yticks(np.arange(-1,len(questionsLabels),1.));
questionsLabels = correlation.columns.copy()
newLabels = []
for index in range(0, len(questionsLabels)):
newLabels.append(questionsLabels[index] + ' #' + str(index + 1))
correlationRenamed = correlation.copy()
correlationRenamed.columns = newLabels
correlationRenamed.index = newLabels
correlationRenamed
correlationRenamed = correlation.copy()
correlationRenamed.columns = pd.Series(correlation.columns).apply(lambda x: x + ' #' + str(correlation.columns.get_loc(x) + 1))
correlationRenamed.index = correlationRenamed.columns
correlationRenamed
correlation.shape
fig = plt.figure(figsize=(10,10))
ax12 = plt.subplot(111)
ax12.set_title('Heatmap')
sns.heatmap(correlation,ax=ax12,cmap=plt.cm.jet,square=True)
ax = sns.clustermap(correlation,cmap=plt.cm.jet,square=True,figsize=(10,10),cbar_kws={\
"orientation":"vertical"})
questionsLabels = pd.Series(correlation.columns).apply(lambda x: x + ' #' + str(correlation.columns.get_loc(x) + 1))
fig = plt.figure(figsize=(10,10))
ax = plt.subplot(111)
cmap=plt.cm.jet
#cmap=plt.cm.ocean
cax = ax.imshow(correlation, interpolation='nearest', cmap=cmap,
# extent=(0.5,np.shape(correlation)[0]+0.5,0.5,np.shape(correlation)[1]+0.5)
)
#ax.grid(True)
plt.title('Questions\' Correlations')
ax.set_yticklabels(questionsLabels)
ax.set_xticklabels(questionsLabels, rotation='vertical')
ax.set_xticks(np.arange(len(questionsLabels)));
ax.set_yticks(np.arange(len(questionsLabels)));
#ax.set_xticks(np.arange(-1,len(questionsLabels),1.));
#ax.set_yticks(np.arange(-1,len(questionsLabels),1.));
fig.colorbar(cax)
plt.show()
ax.get_xticks()
transposed = _result.T.astype(float)
transposed.head()
transposed.corr()
transposed.columns = range(0,len(transposed.columns))
transposed.index = range(0,len(transposed.index))
transposed.head()
transposed = transposed.iloc[0:10,0:3]
transposed
transposed = transposed.astype(float)
type(transposed[0][0])
transposed.columns = list('ABC')
transposed
transposed.loc[0, 'A'] = 0
transposed
transposed.corr()
Explanation: plotCorrelationMatrix tinkering
End of explanation
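Gathering the pieces explored in this cell, the plotting helper presumably looks like the sketch below. The _abs, _clustered and _questionNumbers keywords mirror the arguments used elsewhere in this notebook; the exact styling of the real helper may differ, and pandas / seaborn / matplotlib are assumed to be imported as above.
def plotCorrelationMatrix(_binarized, _abs = True, _clustered = True, _questionNumbers = False):
    # correlation matrix of the binarized answers, optionally in absolute value
    _correlation = _binarized.astype(float).corr()
    if _abs:
        _correlation = _correlation.abs()
    if _questionNumbers:
        # append ' #N' to each question label, as in the renaming experiments above
        _correlation.columns = pd.Series(_correlation.columns).apply(
            lambda x: x + ' #' + str(_correlation.columns.get_loc(x) + 1))
        _correlation.index = _correlation.columns
    if _clustered:
        sns.clustermap(_correlation, cmap=plt.cm.jet, square=True, figsize=(10,10))
    else:
        _fig = plt.figure(figsize=(10,10))
        _ax = plt.subplot(111)
        sns.heatmap(_correlation, ax=_ax, cmap=plt.cm.jet, square=True)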
round(7.64684)
df = pd.DataFrame(10*np.random.randint(2, size=[20,2]),index=range(0,20),columns=list('AB'))
#df.columns = range(0,len(df.columns))
df.head()
#type(df[0][0])
type(df.columns)
df.corr()
#corr = pd.Series({}, index = methods)
for meth in methods:
#corr[meth] = result.corr(method = meth)
print(meth + ":\n" + str(transposed.corr(method = meth)) + "\n\n")
Explanation: data = transposed[[0,1]]
data.corr(method = 'spearman')
End of explanation
befores = gform.copy()
befores = befores[befores['Temporality'] == 'before']
print(len(befores))
allBeforesBinarized = getAllBinarized( _source = correctAnswers + demographicAnswers, _form = befores)
np.unique(allBeforesBinarized.values.flatten())
allBeforesBinarized.columns[20]
allBeforesBinarized.T.dot(allBeforesBinarized)
np.unique(allBeforesBinarized.iloc[:,20].values)
plotCorrelationMatrix( allBeforesBinarized, _abs=False,\
_clustered=False, _questionNumbers=True )
_correlation = allBeforesBinarized.astype(float).corr()
overlay = allBeforesBinarized.T.dot(allBeforesBinarized).astype(int)
_correlation.columns = pd.Series(_correlation.columns).apply(\
lambda x: x + ' #' + str(_correlation.columns.get_loc(x) + 1))
_correlation.index = _correlation.columns
_correlation = _correlation.abs()
_fig = plt.figure(figsize=(20,20))
_ax = plt.subplot(111)
#sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True,annot=overlay,fmt='d')
sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True,annot=True)
Explanation: getCrossCorrectAnswers tinkering
Before
End of explanation
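The equality check in the cells below compares getCrossCorrectAnswers against the matrix product of the binarized answers with themselves, which suggests the helper reduces to something like this sketch (an assumption, not the verified definition):
def getCrossCorrectAnswers(_binarized):
    # (i, j) = number of respondents who answered both question i and question j correctly
    return _binarized.T.dot(_binarized).astype(int)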
afters = gform.copy()
afters = afters[afters['Temporality'] == 'after']
print(len(afters))
allAftersBinarized = getAllBinarized( _source = correctAnswers + demographicAnswers, _form = afters)
np.unique(allAftersBinarized.values.flatten())
plotCorrelationMatrix( allAftersBinarized, _abs=False,\
_clustered=False, _questionNumbers=True )
#for answerIndex in range(0,len(allAftersBinarized)):
# print(str(answerIndex) + " " + str(allAftersBinarized.iloc[answerIndex,0]))
allAftersBinarized.iloc[28,0]
len(allAftersBinarized)
len(allAftersBinarized.index)
_correlation = allAftersBinarized.astype(float).corr()
overlay = allAftersBinarized.T.dot(allAftersBinarized).astype(int)
_correlation.columns = pd.Series(_correlation.columns).apply(\
lambda x: x + ' #' + str(_correlation.columns.get_loc(x) + 1))
_correlation.index = _correlation.columns
_fig = plt.figure(figsize=(10,10))
_ax = plt.subplot(111)
#sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True,annot=overlay,fmt='d')
sns.heatmap(_correlation,ax=_ax,cmap=plt.cm.jet,square=True)
crossCorrect = getCrossCorrectAnswers(allAftersBinarized)
pd.Series((overlay == crossCorrect).values.flatten()).unique()
allAftersBinarized.shape
cross = allAftersBinarized.T.dot(allAftersBinarized)
cross.shape
equal = (cross == crossCorrect)
type(equal)
pd.Series(equal.values.flatten()).unique()
Explanation: after
End of explanation
testUser = userIDAnswersFR
gform[gform[localplayerguidkey] == testUser].T
getScore(testUser)
print("draft test")
testUserId = "3ef14300-4987-4b54-a56c-5b6d1f8a24a1"
testUserId = userIDAnswersEN
#def getScore( _userId, _form = gform ):
score = pd.DataFrame({}, columns = answerTemporalities)
score.loc['score',:] = np.nan
for column in score.columns:
score.loc['score', column] = []
if hasAnswered( testUserId ):
columnAnswers = getCorrections(testUserId)
for columnName in columnAnswers.columns:
# only work on corrected columns
if correctionsColumnNameStem in columnName:
answerColumnName = columnName.replace(correctionsColumnNameStem,\
answersColumnNameStem)
temporality = columnAnswers.loc['Temporality',answerColumnName]
counts = (columnAnswers[columnName]).value_counts()
thisScore = 0
if(True in counts):
thisScore = counts[True]
score.loc['score',temporality].append(thisScore)
else:
print("user " + str(testUserId) + " has never answered")
#expectedScore = 18
#if (expectedScore != score[0]):
# print("ERROR incorrect score: expected "+ str(expectedScore) +", got "+ str(score))
score
score = pd.DataFrame({}, columns = answerTemporalities)
score.loc['score',:] = np.nan
for column in score.columns:
score.loc['score', column] = []
score
#score.loc['user0',:] = [1,2,3]
#score
#type(score)
#type(score[0])
#for i,v in score[0].iteritems():
# print(v)
#score[0]['undefined']
#columnAnswers.loc['Temporality','answers0']
False in (columnAnswers[columnName]).value_counts()
getScore("3ef14300-4987-4b54-a56c-5b6d1f8a24a1")
#gform[gform[localplayerguidkey]=="3ef14300-4987-4b54-a56c-5b6d1f8a24a1"].T
correctAnswers
Explanation: getScore tinkering
End of explanation
#questionnaireValidatedCheckpointsPerQuestion = pd.Series(np.nan, index=range(35))
questionnaireValidatedCheckpointsPerQuestion = pd.Series(np.nan, index=range(len(checkpointQuestionMatching)))
questionnaireValidatedCheckpointsPerQuestion.head()
checkpointQuestionMatching['checkpoint'][19]
userId = localplayerguid
_form = gform
#function that returns the list of checkpoints from user id
#def getValidatedCheckpoints( userId, _form = gform ):
_validatedCheckpoints = []
if hasAnswered( userId, _form = _form ):
_columnAnswers = getCorrections( userId, _form = _form)
for _columnName in _columnAnswers.columns:
# only work on corrected columns
if correctionsColumnNameStem in _columnName:
_questionnaireValidatedCheckpointsPerQuestion = pd.Series(np.nan, index=range(len(checkpointQuestionMatching)))
for _index in range(0, len(_questionnaireValidatedCheckpointsPerQuestion)):
if _columnAnswers[_columnName][_index]==True:
_questionnaireValidatedCheckpointsPerQuestion[_index] = checkpointQuestionMatching['checkpoint'][_index]
else:
_questionnaireValidatedCheckpointsPerQuestion[_index] = ''
_questionnaireValidatedCheckpoints = _questionnaireValidatedCheckpointsPerQuestion.unique()
_questionnaireValidatedCheckpoints = _questionnaireValidatedCheckpoints[_questionnaireValidatedCheckpoints!='']
_questionnaireValidatedCheckpoints = pd.Series(_questionnaireValidatedCheckpoints)
_questionnaireValidatedCheckpoints = _questionnaireValidatedCheckpoints.sort_values()
_questionnaireValidatedCheckpoints.index = range(0, len(_questionnaireValidatedCheckpoints))
_validatedCheckpoints.append(_questionnaireValidatedCheckpoints)
else:
print("user " + str(userId) + " has never answered")
result = pd.Series(data=_validatedCheckpoints)
result
type(result[0])
Explanation: comparison of checkpoints completion and answers
<a id=compcheckans />
Theoretically, the two should match: a player who understood an item should also be able to beat the corresponding in-game challenge. Any discrepancies point to game design or level design issues.
getValidatedCheckpoints tinkering
End of explanation
testSeries1 = pd.Series(
[
'tutorial1.Checkpoint00',
'tutorial1.Checkpoint01',
'tutorial1.Checkpoint02',
'tutorial1.Checkpoint05'
]
)
testSeries2 = pd.Series(
[
'tutorial1.Checkpoint01',
'tutorial1.Checkpoint05'
]
)
np.setdiff1d(testSeries1, testSeries2)
np.setdiff1d(testSeries1.values, testSeries2.values)
getAnswers(localplayerguid).head(2)
getCorrections(localplayerguid).head(2)
getScore(localplayerguid)
getValidatedCheckpoints(localplayerguid)
getNonValidatedCheckpoints(localplayerguid)
Explanation: getNonValidated tinkering
End of explanation
qPlayedHerocoliIndex = 10
qPlayedHerocoliYes = ['Yes', 'Once', 'Multiple times', 'Oui',
'De nombreuses fois', 'Quelques fois', 'Une fois']
questionIndex = qPlayedHerocoliIndex
choice = qPlayedHerocoliYes
_form = gform
# returns all rows of Google form's answers that contain an element
# of the array 'choice' for question number 'questionIndex'
#def getAllAnswerRows(questionIndex, choice, _form = gform ):
_form[_form.iloc[:, questionIndex].isin(choice)]
Explanation: getAllAnswerRows tinkering
End of explanation
_df = getAllAnswerRows(qPlayedHerocoliIndex, qPlayedHerocoliYes, _form = gform )
#def getPercentCorrectPerColumn(_df):
_count = len(_df)
_percents = pd.Series(np.full(len(_df.columns), np.nan), index=_df.columns)
for _rowIndex in _df.index:
for _columnName in _df.columns:
_columnIndex = _df.columns.get_loc(_columnName)
if ((_columnIndex >= firstEvaluationQuestionIndex) \
and (_columnIndex < len(_df.columns)-3)):
if(str(_df[_columnName][_rowIndex]).startswith(str(correctAnswers[_columnIndex]))):
if (np.isnan(_percents[_columnName])):
_percents[_columnName] = 1;
else:
_percents[_columnName] = _percents[_columnName]+1
else:
if (np.isnan(_percents[_columnName])):
_percents[_columnName] = 0;
_percents = _percents/_count
_percents['Count'] = _count
_percents
print('\n\n\npercents=\n' + str(_percents))
Explanation: getPercentCorrectPerColumn tinkering
End of explanation
questionIndex = qPlayedHerocoliIndex
choice = qPlayedHerocoliYes
_form = gform
#def getPercentCorrectKnowingAnswer(questionIndex, choice, _form = gform):
_answerRows = getAllAnswerRows(questionIndex, choice, _form = _form);
getPercentCorrectPerColumn(_answerRows)
Explanation: getPercentCorrectKnowingAnswer tinkering
End of explanation
#localplayerguid = '8d352896-a3f1-471c-8439-0f426df901c1'
#localplayerguid = '7037c5b2-c286-498e-9784-9a061c778609'
#localplayerguid = '5c4939b5-425b-4d19-b5d2-0384a515539e'
#localplayerguid = '7825d421-d668-4481-898a-46b51efe40f0'
#localplayerguid = 'acb9c989-b4a6-4c4d-81cc-6b5783ec71d8'
for id in getAllResponders():
print("===========================================")
print("id=" + str(id))
print("-------------------------------------------")
print(getAnswers(id).head(2))
print("-------------------------------------------")
print(getCorrections(id).head(2))
print("-------------------------------------------")
print("scores=" + str(getScore(id)))
print("#ValidatedCheckpoints=" + str(getValidatedCheckpointsCounts(id)))
print("#NonValidatedCheckpoints=" + str(getNonValidatedCheckpointsCounts(id)))
print("===========================================")
gform[localplayerguidkey]
hasAnswered( '8d352896-a3f1-471c-8439-0f426df901c1' )
'8d352896-a3f1-471c-8439-0f426df901c1' in gform[localplayerguidkey].values
apostropheTestString = 'it\'s a test'
apostropheTestString
Explanation: tests on all user Ids, including those who answered more than once
End of explanation
#gformEN.head(2)
#gformFR.head(2)
Explanation: answers submitted through time
<a id=ansthrutime />
merging answers in English and French
<a id=mergelang />
tests
End of explanation
#gformEN['Language'] = pd.Series('en', index=gformEN.index)
#gformFR['Language'] = pd.Series('fr', index=gformFR.index)
#gformFR.head(2)
Explanation: add language column
Scores will be evaluated per language
End of explanation
# rename columns
#gformFR.columns = gformEN.columns
#gformFR.head(2)
#gformTestMerge = pd.concat([gformEN, gformFR])
#gformTestMerge.head(2)
#gformTestMerge.tail(2)
gform
localplayerguid
someAnswers = getAnswers( '8ca16c7a-70a6-4723-bd72-65b8485a2e86' )
someAnswers
testQuestionIndex = 24
thisUsersFirstEvaluationQuestion = str(someAnswers[someAnswers.columns[0]][testQuestionIndex])
thisUsersFirstEvaluationQuestion
someAnswers[someAnswers.columns[0]]['Language']
firstEvaluationQuestionCorrectAnswer = str(correctAnswers[testQuestionIndex])
firstEvaluationQuestionCorrectAnswer
thisUsersFirstEvaluationQuestion.startswith(firstEvaluationQuestionCorrectAnswer)
Explanation: concatenate
End of explanation |
7,176 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting sentiment from product reviews
Fire up GraphLab Create
Step1: Read some product review data
Loading reviews for a set of baby products.
Step2: Let's explore this data together
Data includes the product name, the review text and the rating of the review.
Step3: Build the word count vector for each review
Step4: Examining the reviews for most-sold product
Step5: Build a sentiment classifier
Step6: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.
Step7: Let's train the sentiment classifier
Step8: Evaluate the sentiment model
Step9: Applying the learned model to understand sentiment for Giraffe
Step10: Sort the reviews based on the predicted sentiment and explore
Step11: Most positive reviews for the giraffe
Step12: Show most negative reviews for giraffe | Python Code:
import graphlab
Explanation: Predicting sentiment from product reviews
Fire up GraphLab Create
End of explanation
products = graphlab.SFrame('amazon_baby.gl/')
Explanation: Read some product review data
Loading reviews for a set of baby products.
End of explanation
products.head()
Explanation: Let's explore this data together
Data includes the product name, the review text and the rating of the review.
End of explanation
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
products.head()
graphlab.canvas.set_target('ipynb')
products['name'].show()
Explanation: Build the word count vector for each review
End of explanation
giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
giraffe_reviews['rating'].show(view='Categorical')
Explanation: Examining the reviews for most-sold product: 'Vulli Sophie the Giraffe Teether'
End of explanation
products['rating'].show(view='Categorical')
Explanation: Build a sentiment classifier
End of explanation
#ignore all 3* reviews
products = products[products['rating'] != 3]
#positive sentiment = 4* or 5* reviews
products['sentiment'] = products['rating'] >=4
products.head()
len(products)
print products['sentiment'].sum()
Explanation: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.
End of explanation
train_data,test_data = products.random_split(.8, seed=0)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
Explanation: Let's train the sentiment classifier
End of explanation
sentiment_model.evaluate(test_data, metric='roc_curve')
graphlab.canvas.set_target('browser')
sentiment_model.show(view='Evaluation')
Explanation: Evaluate the sentiment model
End of explanation
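For context when reading the evaluation above, the base rate of positive reviews in the test split equals the accuracy of a classifier that always predicts "positive"; a quick check, assuming the same train/test split as above:
print test_data['sentiment'].mean()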
giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')
giraffe_reviews.head()
Explanation: Applying the learned model to understand sentiment for Giraffe
End of explanation
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)
giraffe_reviews.head()
Explanation: Sort the reviews based on the predicted sentiment and explore
End of explanation
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
Explanation: Most positive reviews for the giraffe
End of explanation
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
def awesome_count(word_count_dict):
    if 'awesome' in word_count_dict:
        return word_count_dict.get('awesome')
    else:
        return 0
products['awesome'] = products['word_count'].apply(awesome_count)
def selected_word_count(word_count_dict,word):
if word in word_count_dict:
return word_count_dict.get(word)
else:
return 0
for word in selected_words:
products[word] = products['word_count'].apply(lambda x : selected_word_count(x,word))
products.head(4)
for word in selected_words:
print word, products[word].sum()
train_data,test_data = products.random_split(.8, seed=0)
selected_words__model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=selected_words,
validation_set=test_data)
selected_words__model['coefficients'].sort('value')
selected_words__model.evaluate(test_data,metric='roc_curve')
diaper_champ_reviews = products[products['name']=='Baby Trend Diaper Champ']
diaper_champ_reviews
diaper_champ_reviews['predicted_sentiment'] = sentiment_model.predict(diaper_champ_reviews, output_type='probability')
diaper_champ_reviews = diaper_champ_reviews.sort('predicted_sentiment', ascending=False)
diaper_champ_reviews
diaper_champ_reviews[0]
top_pred= selected_words__model.predict(diaper_champ_reviews, output_type='probability')
top_pred[0]
diaper_champ_reviews[0]['review']
Explanation: Show most negative reviews for giraffe
End of explanation |
7,177 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Training Keras model on Cloud AI Platform</h1>
This notebook illustrates distributed training and hyperparameter tuning on Cloud AI Platform (formerly known as Cloud ML Engine). This uses Keras and requires TensorFlow 2.0
Step1: Now that we have the Keras wide-and-deep code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
<p>
<h2> Train on Cloud AI Platform</h2>
<p>
Training on Cloud AI Platform requires
Step2: Lab Task 2
The following code edits babyweight_tf2/trainer/model.py.
Step3: Lab Task 3
After moving the code to a package, make sure it works standalone. (Note the --pattern and --train_examples lines so that I am not trying to boil the ocean on my laptop). Even then, this takes about <b>3 minutes</b> in which you won't see any output ...
Step4: Lab Task 4
Since we are using TensorFlow 2.0 preview, we will use a container image to run the code on AI Platform.
Once TensorFlow 2.0 is released, you will be able to simply do (without having to build a container)
<pre>
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs
Step5: Note
Step6: Lab Task 5
Once the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> two hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.
Step7: When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was
Step8: <h2> Repeat training </h2>
<p>
This time with tuned parameters (note last line) | Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '2.0' # not used in this notebook
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/babyweight/preproc; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical set of preprocessed files if you didn't do previous notebook
gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET}
fi
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
Explanation: <h1>Training Keras model on Cloud AI Platform</h1>
This notebook illustrates distributed training and hyperparameter tuning on Cloud AI Platform (formerly known as Cloud ML Engine). This uses Keras and requires TensorFlow 2.0
End of explanation
!mkdir -p babyweight_tf2/trainer
%%writefile babyweight_tf2/trainer/task.py
import argparse
import json
import os
from . import model
import tensorflow as tf
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--bucket',
help = 'GCS path to data. We assume that data is in gs://BUCKET/babyweight/preproc/',
required = True
)
parser.add_argument(
'--output_dir',
help = 'GCS location to write checkpoints and export models',
required = True
)
parser.add_argument(
'--batch_size',
help = 'Number of examples to compute gradient over.',
type = int,
default = 512
)
parser.add_argument(
'--job-dir',
help = 'this model ignores this field, but it is required by gcloud',
default = 'junk'
)
parser.add_argument(
'--nnsize',
help = 'Hidden layer sizes to use for DNN feature columns -- provide space-separated layers',
nargs = '+',
type = int,
default=[128, 32, 4]
)
parser.add_argument(
'--nembeds',
help = 'Embedding size of a cross of n key real-valued parameters',
type = int,
default = 3
)
## TODO 1: add the new arguments here
parser.add_argument(
'--train_examples',
help = 'Number of examples (in thousands) to run the training job over. If this is more than actual # of examples available, it cycles through them. So specifying 1000 here when you have only 100k examples makes this 10 epochs.',
type = int,
default = 5000
)
parser.add_argument(
'--pattern',
help = 'Specify a pattern that has to be in input files. For example 00001-of will process only one shard',
default = 'of'
)
parser.add_argument(
'--eval_steps',
help = 'Positive number of steps for which to evaluate model. Default to None, which means to evaluate until input_fn raises an end-of-input exception',
type = int,
default = None
)
## parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# unused args provided by service
arguments.pop('job_dir', None)
arguments.pop('job-dir', None)
## assign the arguments to the model variables
output_dir = arguments.pop('output_dir')
model.BUCKET = arguments.pop('bucket')
model.BATCH_SIZE = arguments.pop('batch_size')
model.TRAIN_EXAMPLES = arguments.pop('train_examples') * 1000
model.EVAL_STEPS = arguments.pop('eval_steps')
print ("Will train on {} examples using batch_size={}".format(model.TRAIN_EXAMPLES, model.BATCH_SIZE))
model.PATTERN = arguments.pop('pattern')
model.NEMBEDS= arguments.pop('nembeds')
model.NNSIZE = arguments.pop('nnsize')
print ("Will use DNN size of {}".format(model.NNSIZE))
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
output_dir = os.path.join(
output_dir,
json.loads(
os.environ.get('TF_CONFIG', '{}')
).get('task', {}).get('trial', '')
)
# Run the training job
model.train_and_evaluate(output_dir)
Explanation: Now that we have the Keras wide-and-deep code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
<p>
<h2> Train on Cloud AI Platform</h2>
<p>
Training on Cloud AI Platform requires:
<ol>
<li> Making the code a Python package
<li> Using gcloud to submit the training code to Cloud AI Platform
</ol>
Ensure that the AI Platform API is enabled by going to this [link](https://console.developers.google.com/apis/library/ml.googleapis.com).
## Lab Task 1
The following code edits babyweight_tf2/trainer/task.py.
End of explanation
%%writefile babyweight_tf2/trainer/model.py
import shutil, os, datetime
import numpy as np
import tensorflow as tf
BUCKET = None # set from task.py
PATTERN = 'of' # gets all files
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
# Define some hyperparameters
TRAIN_EXAMPLES = 1000 * 1000
EVAL_STEPS = None
NUM_EVALS = 10
BATCH_SIZE = 512
NEMBEDS = 3
NNSIZE = [64, 16, 4]
# Create an input function reading a file using the Dataset API
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
## Build a Keras wide-and-deep model using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Helper function to handle categorical columns
def categorical_fc(name, values):
orig = tf.feature_column.categorical_column_with_vocabulary_list(name, values)
wrapped = tf.feature_column.indicator_column(orig)
return orig, wrapped
def build_wd_model(dnn_hidden_units = [64, 32], nembeds = 3):
# input layer
deep_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in ['mother_age', 'gestation_weeks']
}
wide_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in ['is_male', 'plurality']
}
inputs = {**wide_inputs, **deep_inputs}
# feature columns from inputs
deep_fc = {
colname : tf.feature_column.numeric_column(colname)
for colname in ['mother_age', 'gestation_weeks']
}
wide_fc = {}
is_male, wide_fc['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])
plurality, wide_fc['plurality'] = categorical_fc('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)'])
# bucketize the float fields. This makes them wide
age_buckets = tf.feature_column.bucketized_column(deep_fc['mother_age'],
boundaries=np.arange(15,45,1).tolist())
wide_fc['age_buckets'] = tf.feature_column.indicator_column(age_buckets)
gestation_buckets = tf.feature_column.bucketized_column(deep_fc['gestation_weeks'],
boundaries=np.arange(17,47,1).tolist())
wide_fc['gestation_buckets'] = tf.feature_column.indicator_column(gestation_buckets)
# cross all the wide columns. We have to do the crossing before we one-hot encode
crossed = tf.feature_column.crossed_column(
[is_male, plurality, age_buckets, gestation_buckets], hash_bucket_size=20000)
deep_fc['crossed_embeds'] = tf.feature_column.embedding_column(crossed, nembeds)
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
wide_inputs = tf.keras.layers.DenseFeatures(wide_fc.values(), name='wide_inputs')(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(deep_fc.values(), name='deep_inputs')(inputs)
# hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(numnodes, activation='relu', name='dnn_{}'.format(layerno+1))(deep)
deep_out = deep
# linear model for the wide side
wide_out = tf.keras.layers.Dense(10, activation='relu', name='linear')(wide_inputs)
# concatenate the two sides
both = tf.keras.layers.concatenate([deep_out, wide_out], name='both')
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='weight')(both)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
# The main function
def train_and_evaluate(output_dir):
model = build_wd_model(NNSIZE, NEMBEDS)
print("Here is our Wide-and-Deep architecture so far:\n")
print(model.summary())
train_file_path = 'gs://{}/babyweight/preproc/{}*{}*'.format(BUCKET, 'train', PATTERN)
eval_file_path = 'gs://{}/babyweight/preproc/{}*{}*'.format(BUCKET, 'eval', PATTERN)
    trainds = load_dataset(train_file_path, BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
    evalds = load_dataset(eval_file_path, 1000, tf.estimator.ModeKeys.EVAL)
if EVAL_STEPS:
evalds = evalds.take(EVAL_STEPS)
steps_per_epoch = TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)
checkpoint_path = os.path.join(output_dir, 'checkpoints/babyweight')
cp_callback = tf.keras.callbacks.ModelCheckpoint(checkpoint_path, save_weights_only=True, verbose=1)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch,
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[cp_callback])
EXPORT_PATH = os.path.join(output_dir, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
Explanation: Lab Task 2
The following code edits babyweight_tf2/trainer/model.py.
End of explanation
%%bash
echo "bucket=${BUCKET}"
rm -rf babyweight_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight_tf2
python3 -m trainer.task \
--bucket=${BUCKET} \
--output_dir=babyweight_trained \
--job-dir=./tmp \
--pattern="00000-of-" --train_examples=1 --eval_steps=1 --batch_size=10
Explanation: Lab Task 3
After moving the code to a package, make sure it works standalone. (Note the --pattern and --train_examples lines so that I am not trying to boil the ocean on my laptop). Even then, this takes about <b>3 minutes</b> in which you won't see any output ...
End of explanation
%%writefile babyweight_tf2/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY trainer /babyweight_tf2/trainer
RUN apt update && \
apt install --yes python3-pip && \
pip3 install --upgrade --quiet tf-nightly-2.0-preview
ENV PYTHONPATH ${PYTHONPATH}:/babyweight_tf2
CMD ["python3", "-m", "trainer.task"]
%%writefile babyweight_tf2/push_docker.sh
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=babyweight_training_container
#export IMAGE_TAG=$(date +%Y%m%d_%H%M%S)
#export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME:$IMAGE_TAG
export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME
echo "Building $IMAGE_URI"
docker build -f Dockerfile -t $IMAGE_URI ./
echo "Pushing $IMAGE_URI"
docker push $IMAGE_URI
Explanation: Lab Task 4
Since we are using TensorFlow 2.0 preview, we will use a container image to run the code on AI Platform.
Once TensorFlow 2.0 is released, you will be able to simply do (without having to build a container)
<pre>
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=200000
</pre>
End of explanation
%%bash
cd babyweight_tf2
bash push_docker.sh
Explanation: Note: If you get a permissions/stat error when running push_docker.sh from Notebooks, do it from CloudShell:
Open CloudShell on the GCP Console
* git clone https://github.com/GoogleCloudPlatform/training-data-analyst
* cd training-data-analyst/courses/machine_learning/deepdive/06_structured/babyweight_tf2/containers
* bash push_docker.sh
This step takes 5-10 minutes to run
End of explanation
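Once the push finishes, you can optionally confirm the image is visible in the project's registry before submitting the training job (a sanity check; the repository name must match the one set in push_docker.sh):
%%bash
gcloud container images list --repository=gcr.io/${PROJECT}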
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBID
gsutil -m rm -rf $OUTDIR
#IMAGE=gcr.io/deeplearning-platform-release/tf2-cpu
IMAGE=gcr.io/$PROJECT/babyweight_training_container
gcloud beta ai-platform jobs submit training $JOBID \
--staging-bucket=gs://$BUCKET --region=$REGION \
--master-image-uri=$IMAGE \
--master-machine-type=n1-standard-4 --scale-tier=CUSTOM \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=200000
Explanation: Lab Task 5
Once the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> two hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.
End of explanation
%%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: rmse
goal: MINIMIZE
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterName: nembeds
type: INTEGER
minValue: 3
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
- parameterName: nnsize
type: INTEGER
minValue: 64
maxValue: 512
scaleType: UNIT_LOG_SCALE
%%bash
OUTDIR=gs://${BUCKET}/babyweight/hyperparam
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
  --package-path=$(pwd)/babyweight_tf2/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--config=hyperparam.yaml \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--eval_steps=10 \
--train_examples=20000
Explanation: When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was:
<pre>
Saving dict for global step 5714290: average_loss = 1.06473, global_step = 5714290, loss = 34882.4, rmse = 1.03186
</pre>
The final RMSE was 1.03 pounds.
<h2> Hyperparameter tuning </h2>
<p>
All of these are command-line parameters to my program. To do hyperparameter tuning, create hyperparam.yaml and pass it in via --config.
This step will take <b>up to 2 hours</b> -- you can increase maxParallelTrials or reduce maxTrials to get it done faster. Since maxParallelTrials is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.
End of explanation
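When the tuning job completes, you can pull the trial results back without leaving the notebook: running gcloud ai-platform jobs describe on the job name echoed above prints a trainingOutput section with each trial's hyperparameters and final objective value. A minimal check of recent jobs (runnable as-is):
%%bash
gcloud ai-platform jobs list --limit=5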
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
  --package-path=$(pwd)/babyweight_tf2/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=20000 --batch_size=35 --nembeds=16 --nnsize=281
Explanation: <h2> Repeat training </h2>
<p>
This time with tuned parameters (note last line)
End of explanation |
7,178 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. | Python Code:
import sys
print(sys.version)
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
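As a reference for the vectorized versions exercised below, here is a minimal standalone sketch (plain NumPy, independent of the KNearestNeighbor class, not the assignment solution file itself) of the expansion ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 applied over all (test, train) pairs at once:
def pairwise_l2_sketch(A, B):
    # A: (num_test, D) test data, B: (num_train, D) train data -> (num_test, num_train) distances.
    sq_A = np.sum(A ** 2, axis=1, keepdims=True)   # (num_test, 1)
    sq_B = np.sum(B ** 2, axis=1)                  # (num_train,)
    cross = A.dot(B.T)                             # (num_test, num_train)
    return np.sqrt(np.maximum(sq_A - 2 * cross + sq_B, 0))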
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
Some test samples are significantly far away from all other data because
they are very dark or bright, while normal data are just normally bright.
Some training samples are significantly far away from all other data because
same as 1
There might be no test sample in the same class as the training sample?
End of explanation
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5:
End of explanation
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
dists.shape
dists_two.shape
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
X_train_folds = np.array_split(X_train, num_folds, axis=0)
y_train_folds = np.array_split(y_train, num_folds, axis=0)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
for k in k_choices:
k_to_accuracies[k] = []
for n in xrange(num_folds):
classifier.train(np.concatenate(X_train_folds[:n] + X_train_folds[n+1:]), np.concatenate(y_train_folds[:n] + y_train_folds[n+1:]))
y_test_pred = classifier.predict(X_train_folds[n], k=k, num_loops=0)
num_correct = np.sum(y_test_pred == y_train_folds[n])
k_to_accuracies[k].append(float(num_correct) * num_folds / X_train.shape[0])
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation |
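A small helper (not part of the original assignment) that picks k with the highest mean cross-validation accuracy instead of reading it off the plot:
best_k = max(k_to_accuracies, key=lambda k: np.mean(k_to_accuracies[k]))
print 'Best k according to mean cross-validation accuracy:', best_k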
7,179 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Particle Swarm Optimization Algorithm (in Python!)
[SPOILER] We will be using the Particle Swarm Optimization algorithm to obtain the minimum of a custom objective function
First of all, let's import the libraries we'll need (remember we are using Python 3)
Step1: We can define and plot the function we want to optimize
Step3: PSO Algorithm
Step5: Animation
Step6: Note | Python Code:
import numpy as np
import matplotlib.pyplot as plt
# import scipy as sp
# import time
%matplotlib inline
plt.style.use('bmh')
Explanation: Particle Swarm Optimization Algorithm (in Python!)
[SPOILER] We will be using the Particle Swarm Optimization algorithm to obtain the minimum of a custom objective function
First of all, let's import the libraries we'll need (remember we are using Python 3)
End of explanation
x_lo = 0
x_up = 14
n_points = 1000
x = np.linspace(x_lo, x_up, n_points)
def f(x):
return x*np.sin(x) + x*np.cos(x)
y = f(x)
plt.plot(x,y)
plt.ylabel('$f(x) = x\sin(x)+x\cos(x)$')
plt.xlabel('$x$')
plt.title('Function to be optimized')
Explanation: We can define and plot the function we want to optimize:
End of explanation
n_iterations = 50
def run_PSO(n_particles=5, omega=0.5, phi_p=0.5, phi_g=0.7):
    """PSO algorithm applied to the function already defined.
    Params:
        omega = 0.5  # particle weight (inertial)
        phi_p = 0.1  # particle best weight
        phi_g = 0.1  # global best weight
    """
global x_best_p_global, x_particles, y_particles, u_particles, v_particles
# Note: we are using global variables to ease the use of interactive widgets
# This code will work fine without the global (and actually it will be safer)
## Initialization:
x_particles = np.zeros((n_particles, n_iterations))
x_particles[:, 0] = np.random.uniform(x_lo, x_up, size=n_particles)
x_best_particles = np.copy(x_particles[:, 0])
y_particles = f(x_particles[:, 0])
y_best_global = np.min(y_particles[:])
index_best_global = np.argmin(y_particles[:])
x_best_p_global = np.copy(x_particles[index_best_global,0])
# velocity units are [Length/iteration]
velocity_lo = x_lo-x_up
velocity_up = x_up-x_lo
u_particles = np.zeros((n_particles, n_iterations))
u_particles[:, 0] = 0.1*np.random.uniform(velocity_lo, velocity_up, size=n_particles)
v_particles = np.zeros((n_particles, n_iterations)) # Needed for plotting the velocity as vectors
# PSO STARTS
iteration = 1
while iteration <= n_iterations-1:
for i in range(n_particles):
x_p = x_particles[i, iteration-1]
u_p = u_particles[i, iteration-1]
x_best_p = x_best_particles[i]
r_p = np.random.uniform(0, 1)
r_g = np.random.uniform(0, 1)
u_p_new = omega*u_p+ \
phi_p*r_p*(x_best_p-x_p) + \
phi_g*r_g*(x_best_p_global-x_p)
x_p_new = x_p + u_p_new
if not x_lo <= x_p_new <= x_up:
x_p_new = x_p # ignore new position, it's out of the domain
u_p_new = 0
x_particles[i, iteration] = np.copy(x_p_new)
u_particles[i, iteration] = np.copy(u_p_new)
y_p_best = f(x_best_p)
y_p_new = f(x_p_new)
if y_p_new < y_p_best:
x_best_particles[i] = np.copy(x_p_new)
y_p_best_global = f(x_best_p_global)
if y_p_new < y_p_best_global:
x_best_p_global = x_p_new
iteration = iteration + 1
# Plotting convergence
y_particles = f(x_particles)
y_particles_best_hist = np.min(y_particles, axis=0)
y_particles_worst_hist = np.max(y_particles, axis=0)
y_best_global = np.min(y_particles[:])
index_best_global = np.argmin(y_particles[:])
fig, ax1 = plt.subplots(nrows=1, ncols=1, figsize=(10, 2))
# Limits of the function being plotted
ax1.plot((0,n_iterations),(np.min(y),np.min(y)), '--g', label="min$f(x)$")
ax1.plot((0,n_iterations),(np.max(y),np.max(y)),'--r', label="max$f(x)$")
# Convergence of the best particle and worst particle value
ax1.plot(np.arange(n_iterations),y_particles_best_hist,'b', label="$p_{best}$")
ax1.plot(np.arange(n_iterations),y_particles_worst_hist,'k', label="$p_{worst}$")
ax1.set_xlim((0,n_iterations))
ax1.set_ylabel('$f(x)$')
ax1.set_xlabel('$i$ (iteration)')
ax1.set_title('Convergence')
ax1.legend()
return
run_PSO()
Explanation: PSO Algorithm
End of explanation
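For example, the convergence behaviour can be explored by calling run_PSO directly with other hyperparameters (an illustrative call; any values within the widget ranges used below are reasonable):
# e.g. a larger swarm with a stronger social (global best) attraction
run_PSO(n_particles=20, omega=0.4, phi_p=0.3, phi_g=0.9)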
from __future__ import print_function
import ipywidgets as widgets
from IPython.display import display, HTML
def plotPSO(i=0): #iteration
    """Visualization of particles and the objective function."""
fig, ax1 = plt.subplots(nrows=1, ncols=1, figsize=(10, 3))
ax1.plot(x,y)
ax1.set_xlim((x_lo,x_up))
ax1.set_ylabel('$f(x)$')
ax1.set_xlabel('$x$')
ax1.set_title('Function to be optimized')
#from IPython.core.debugger import Tracer
#Tracer()() #this one triggers the debugger
y_particles = f(x_particles)
ax1.plot(x_particles[:,i],y_particles[:,i], "ro")
ax1.quiver(x_particles[:,i],y_particles[:,i],u_particles[:,i],v_particles[:,i],
angles='xy', scale_units='xy', scale=1)
n_particles, iterations = x_particles.shape
tag_particles = range(n_particles)
for j, txt in enumerate(tag_particles):
ax1.annotate(txt, (x_particles[j,i],y_particles[j,i]))
w_arg_PSO = widgets.interact_manual(run_PSO,
n_particles=(2,50),
omega=(0,1,0.001),
phi_p=(0,1,0.001),
phi_g=(0,1,0.001))
w_viz_PSO = widgets.interact(plotPSO, i=(0,n_iterations-1))
Explanation: Animation
End of explanation
# More examples in https://github.com/ipython/ipywidgets/tree/master/docs/source/examples
Explanation: Note:
<div class="alert alert-success">
As of ipywidgets 5.0, only static images of the widgets in this notebook will show on http://nbviewer.ipython.org. To view the live widgets and interact with them, you will need to download this notebook and run it with a Jupyter Notebook server.
</div>
End of explanation |
7,180 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Quantum Approximate Optimization Algorithm for MAX-CUT
The following is a step-by-step guide to running QAOA on the MaxCut problem. In the debut paper on QAOA (arXiv
Step1: The cost Hamiltonian and driver Hamiltonian corresponding to the barbell graph are stored in QAOA object fields in the form of lists of PauliSums.
Step2: The identity term above is not necessary to the computation since global phase rotations on the wavefunction do not change the expectation value. We include it here purely as a demonstration. The cost function printed above is the negative of the traditional maximum cut cost operator. This is because QAOA is formulated as the maximization of the cost operator but the VQE algorithm in the pyQuil library performs a minimization.
QAOA requires the construction of a state parameterized by $\beta$ and $\gamma$ rotation angles
$$\begin{align}
\mid \beta, \gamma \rangle = \prod_{p=0}^{\mathrm{steps}}\left( U(\hat{H}_{\mathrm{drive}}, \beta_{p})U(\hat{H}_{\mathrm{MAXCUT}}, \gamma_{p}) \right)^{\mathrm{steps}} (\mid +\rangle_{N-1}\otimes\mid + \rangle_{N-2}...\otimes\mid + \rangle_{0}).
\end{align}$$
The unitaries $U(\hat{H}_{\mathrm{drive}}, \beta_{p})$ and $U(\hat{H}_{\mathrm{MAXCUT}}, \gamma_{p})$ are the exponentiation of the driver Hamiltonian and the cost Hamiltonian, respectively.
$$
\begin{align}
U(\hat{H}_{\mathrm{ref}}, \beta_{p}) = e^{-i \beta_{p} \hat{H}_{\mathrm{drive}}} \\
U(\hat{H}_{\mathrm{MAXCUT}}, \gamma_{p}) = e^{-i \gamma_{p} \hat{H}_{\mathrm{MAXCUT}}}
\end{align}
$$
The QAOA algorithm relies on many constructions of a wavefunction via parameterized Quil and measurements on all qubits to evaluate an expectation value. In order to avoid needless classical computation, QAOA constructs this parametric program once at the beginning of the calculation and then uses this same program object throughout the computation. This is accomplished using the ParametricProgram object from pyQuil that allows us to slot in a symbolic value for a parameterized gate.
The parameterized program object can be accessed through the QAOA method get_parameterized_program(). Calling this method on an instantiated QAOA object returns a closure with a precomputed set of Quil programs. Calling this closure with the parameters $\beta$ and $\gamma$ returns the circuit that has the rotations parameterized.
Step3: The printed program above is a Quil program that can be executed on a QVM. QAOA has two modes of operation
Step4: get_angles() returns the optimal $\beta$ and $\gamma$ angles. To view the probabilities of the state you can call QAOA.probabilities(t) where t is a concatenation of the $\beta$ and $\gamma$ angles, in that order. The probabilities(t) routine takes the $\beta$ and $\gamma$ parameters, reconstructs the wave function and returns its coefficients. A modified version can be used to print off the probabilities
Step5: As expected the bipartitioning of a graph with a single edge connecting two nodes corresponds to the state ${|01\rangle, |10\rangle }$. In this trivial example the QAOA finds angles that construct a distribution peaked around the two degenerate solutions.
MAXCUT on larger graphs and alternative optimizers.
Larger graph instances and different classical optimizers can be used with the QAOA. Here we consider a 6-node ring of disagrees. For an even-numbered ring graph, the ring of disagrees corresponds to the antiferromagnet ground state--i.e. alternating spin-up spin-down.
Step6: This graph could be passed to the maxcut_qaoa method and a QAOA instance with the correct driver and cost Hamiltonian could be generated as before. In order to demonstrate the more general approach, along with some VQE options, we will construct the cost and driver Hamiltonians directly with PauliSum and PauliTerm objects. To do this we parse the edges and nodes of the graph to construct the relevant operators.
$$
\begin{align}
\hat{H}_{\mathrm{cost}} = \sum_{\langle i, j\rangle \in E}\frac{\sigma_{i}^{z}\sigma_{j}^{z} - 1}{2} \\
\hat{H}_{\mathrm{drive}} = \sum_{i}^{n}-\sigma_{i}^{x}
\end{align}
$$
where $\langle i, j\rangle \in E$ refers to the pairs of nodes that form the edges of the graph.
Step7: We will also construct the initial state and pass this to the QAOA object. By default QAOA uses the $|+\rangle$ tensor product state. In other notebooks we will demonstrate that you can use the driver_ref optional argument to pass a different starting state for QAOA.
Step8: Now we are ready to instantiate the QAOA object!
Step9: We are interested in the bit strings returned from the QAOA algorthm. The get_angles() routine calls the VQE algorithm to find the best angles. We can then manually query the bit strings by rerunning the program and sampling many outputs.
Step10: We can see that the first two most frequently sampled bit strings are the alternating solutions to the ring graph. Since we have access to the wave function we can go one step further and view the probability distribution over the bit strings produced by our $p = 1$ circuit.
Step11: For larger graphs the probability of sampling the correct string could be significantly smaller, though still peaked around the solution. Therefore, we would want to increase the probability of sampling the solution relative to any other string. To do this we simply increase the number of steps $p$ in the algorithm. We might want to bootstrap the algorithm with angles from a lower number of steps. We can pass initial angles to the solver as optional arguments.
Step12: We could also change the optimizer which is passed down to VQE through the QAOA interface. Let's say I want to use BFGS or another optimizer that can be wrapped in Python. Simply pass it to QAOA through the minimizer, minimizer_args, and minimizer_kwargs keywords | Python Code:
import pyquil.forest as qvm_module
import numpy as np
from grove.pyqaoa.maxcut_qaoa import maxcut_qaoa
barbell = [(0, 1)]  # graph is defined by a list of edges. Edge weights are assumed to be 1.0
steps = 1 # evolution path length between the ref hamiltonian and cost hamiltonian
inst = maxcut_qaoa(barbell, steps=steps) # initializing problem instance
Explanation: The Quantum Approximate Optimization Algorithm for MAX-CUT
The following is a step-by-step guide to running QAOA on the MaxCut problem. In the debut paper on QAOA (arXiv: 1411.4028), Farhi, Goldstone, and Gutmann demonstrate that the lowest order approximation of the algorithm produced an approximation ratio of 0.6946 for the MaxCut problem on three-regular graphs. You can use this notebook to set up an arbitrary graph for MaxCut and solve it with the QAOA algorithm on the Rigetti Forest service.
pyQAOA is a python library that implements the QAOA. It uses the PauliTerm and PauliSum objects from the pyQuil library for expressing the cost Hamiltonian and driver Hamiltonian. These operators are used to create a parametric pyQuil program and passed to the variational quantum eigensolver (VQE) solver in Grove. VQE calls the Rigetti Forest QVM to execute the Quil program that prepares the angle-parameterized state. There are multiple ways to construct the MAX-CUT problem for the QAOA library. We include a method that accepts a graph and returns a QAOA instance where the cost and driver Hamiltonians have been constructed. The graph is either an undirected Networkx graph or a list of tuples where each tuple represents an edge between a pair of nodes.
We start by demonstrating the QAOA algorithm with the simplest instance of MAX-CUT--partitioning the nodes on a barbell graph. The barbell graph corresponds to a single edge connecting two nodes. The solution is the partitioning of the nodes into different sets ${0, 1}$.
End of explanation
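As noted above, the graph argument can also be an undirected NetworkX graph; an equivalent construction of the same barbell instance would look like this (a sketch, interchangeable with the list-of-edges form above):
import networkx as nx
barbell_nx = nx.Graph()
barbell_nx.add_edge(0, 1)                      # same single edge, weight defaults to 1.0
# inst = maxcut_qaoa(barbell_nx, steps=steps)  # equivalent to maxcut_qaoa(barbell, steps=steps)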
cost_list, ref_list = inst.cost_ham, inst.ref_ham
cost_ham = reduce(lambda x,y: x + y, cost_list)
ref_ham = reduce(lambda x,y: x + y, ref_list)
print cost_ham
print ref_ham
Explanation: The cost Hamiltonian and driver Hamiltonian corresponding to the barbell graph are stored in QAOA object fields in the form of lists of PauliSums.
End of explanation
param_prog = inst.get_parameterized_program()
prog = param_prog([1.2, 4.2])
print prog
Explanation: The identity term above is not necessary to the computation since global phase rotations on the wavefunction do not change the expectation value. We include it here purely as a demonstration. The cost function printed above is the negative of the traditional maximum cut cost operator. This is because QAOA is formulated as the maximization of the cost operator but the VQE algorithm in the pyQuil library performs a minimization.
QAOA requires the construction of a state parameterized by $\beta$ and $\gamma$ rotation angles
$$\begin{align}
\mid \beta, \gamma \rangle = \prod_{p=0}^{\mathrm{steps}}\left( U(\hat{H}_{\mathrm{drive}}, \beta_{p})U(\hat{H}_{\mathrm{MAXCUT}}, \gamma_{p}) \right)^{\mathrm{steps}} (\mid +\rangle_{N-1}\otimes\mid + \rangle_{N-2}...\otimes\mid + \rangle_{0}).
\end{align}$$
The unitaries $U(\hat{H}_{\mathrm{drive}}, \beta_{p})$ and $U(\hat{H}_{\mathrm{MAXCUT}}, \gamma_{p})$ are the exponentiation of the driver Hamiltonian and the cost Hamiltonian, respectively.
$$
\begin{align}
U(\hat{H}_{\mathrm{ref}}, \beta_{p}) = e^{-i \beta_{p} \hat{H}_{\mathrm{drive}}} \\
U(\hat{H}_{\mathrm{MAXCUT}}, \gamma_{p}) = e^{-i \gamma_{p} \hat{H}_{\mathrm{MAXCUT}}}
\end{align}
$$
The QAOA algorithm relies on many constructions of a wavefunction via parameterized Quil and measurements on all qubits to evaluate an expectation value. In order to avoid needless classical computation, QAOA constructs this parametric program once at the beginning of the calculation and then uses this same program object throughout the computation. This is accomplished using the ParametricProgram object from pyQuil that allows us to slot in a symbolic value for a parameterized gate.
The parameterized program object can be accessed through the QAOA method get_parameterized_program(). Calling this method on an instantiated QAOA object returns a closure with a precomputed set of Quil programs. Calling this closure with the parameters $\beta$ and $\gamma$ returns the circuit that has the rotations parameterized.
End of explanation
betas, gammas = inst.get_angles()
print betas, gammas
Explanation: The printed program above is a Quil program that can be executed on a QVM. QAOA has two modes of operation: 1) pre-computing the angles of rotation classically and using the quantum computer to measure expectation values through repeated experiments and 2) installing a classical optimization loop on top of step 1 to optimally determine the angles. Operation mode 2 is known as the variational quantum eigensolver algorithm. The QAOA object wraps the instantiation of the VQE algorithm with the get_angles() method.
End of explanation
param_prog = inst.get_parameterized_program()
t = np.hstack((betas, gammas))
prog = param_prog(t)
wf, _ = inst.qvm.wavefunction(prog)
wf = wf.amplitudes
for ii in range(2**inst.n_qubits):
print inst.states[ii], np.conj(wf[ii])*wf[ii]
Explanation: get_angles() returns the optimal $\beta$ and $\gamma$ angles. To view the probabilities of the state you can call QAOA.probabilities(t) where t is a concatenation of the $\beta$ and $\gamma$ angles, in that order. The probabilities(t) routine takes the $\beta$ and $\gamma$ parameters, reconstructs the wave function and returns its coefficients. A modified version can be used to print off the probabilities
End of explanation
%matplotlib inline
from grove.pyqaoa.qaoa import QAOA
import networkx as nx
import matplotlib.pyplot as plt
from pyquil.paulis import PauliSum, PauliTerm
import pyquil.quil as pq
from pyquil.gates import H
import pyquil.forest as qvm_module
CXN = qvm_module.Connection()
# define a 6-qubit ring
ring_size = 6
graph = nx.Graph()
for i in range(ring_size):
graph.add_edge(i, (i + 1) % ring_size)
nx.draw_circular(graph, node_color="#6CAFB7")
Explanation: As expected the bipartitioning of a graph with a single edge connecting two nodes corresponds to the state ${|01\rangle, |10\rangle }$. In this trivial example the QAOA finds angles that construct a distribution peaked around the two degenerate solutions.
MAXCUT on larger graphs and alternative optimizers.
Larger graph instances and different classical optimizers can be used with the QAOA. Here we consider a 6-node ring of disagrees. For an even-numbered ring graph, the ring of disagrees corresponds to the antiferromagnet ground state--i.e. alternating spin-up spin-down.
End of explanation
cost_operators = []
driver_operators = []
for i, j in graph.edges():
cost_operators.append(PauliTerm("Z", i, 0.5)*PauliTerm("Z", j) + PauliTerm("I", 0, -0.5))
for i in graph.nodes():
driver_operators.append(PauliSum([PauliTerm("X", i, 1.0)]))
Explanation: This graph could be passed to the maxcut_qaoa method and a QAOA instance with the correct driver and cost Hamiltonian could be generated as before. In order to demonstrate the more general approach, along with some VQE options, we will construct the cost and driver Hamiltonians directly with PauliSum and PauliTerm objects. To do this we parse the edges and nodes of the graph to construct the relevant operators.
$$
\begin{align}
\hat{H}_{\mathrm{cost}} = \sum_{\langle i, j\rangle \in E}\frac{\sigma_{i}^{z}\sigma_{j}^{z} - 1}{2} \\
\hat{H}_{\mathrm{drive}} = \sum_{i}^{n}-\sigma_{i}^{x}
\end{align}
$$
where $\langle i, j\rangle \in E$ refers to the pairs of nodes that form the edges of the graph.
End of explanation
prog = pq.Program()
for i in graph.nodes():
prog.inst(H(i))
Explanation: We will also construct the initial state and pass this to the QAOA object. By default QAOA uses the $|+\rangle$ tensor product state. In other notebooks we will demonstrate that you can use the driver_ref optional argument to pass a different starting state for QAOA.
End of explanation
ring_cut_inst = QAOA(CXN, len(graph.nodes()), steps=1, ref_ham=driver_operators, cost_ham=cost_operators,
driver_ref=prog, store_basis=True, rand_seed=42)
betas, gammas = ring_cut_inst.get_angles()
Explanation: Now we are ready to instantiate the QAOA object!
End of explanation
from collections import Counter
# get the parameterized program
param_prog = ring_cut_inst.get_parameterized_program()
sampling_prog = param_prog(np.hstack((betas, gammas)))
# use the run_and_measure QVM API to prepare a circuit and then measure on the qubits
bitstring_samples = CXN.run_and_measure(sampling_prog, range(len(graph.nodes())), 1000)
bitstring_tuples = map(tuple, bitstring_samples)
# aggregate the statistics
freq = Counter(bitstring_tuples)
most_frequent_bit_string = max(freq, key=lambda x: freq[x])
print freq
print "The most frequently sampled string is ", most_frequent_bit_string
Explanation: We are interested in the bit strings returned from the QAOA algorithm. The get_angles() routine calls the VQE algorithm to find the best angles. We can then manually query the bit strings by rerunning the program and sampling many outputs.
End of explanation
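As a quick sanity check (a small helper that is not part of the original notebook), the cut value of any sampled bit string can be computed directly from the graph; for the 6-node ring of disagrees the alternating strings cut all 6 edges:
def cut_size(bitstring, cut_graph):
    # number of edges whose endpoints land in different partitions
    return sum(1 for i, j in cut_graph.edges() if bitstring[i] != bitstring[j])
print "Cut size of the most frequent string:", cut_size(most_frequent_bit_string, graph)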
# plotting strings!
n_qubits = len(graph.nodes())
def plot(inst, probs):
probs = probs.real
states = inst.states
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel("state",fontsize=20)
ax.set_ylabel("Probability",fontsize=20)
ax.set_xlim([0, 2**n_qubits ])
rec = ax.bar(range(2**n_qubits), probs, )
num_states = [0, int("".join(str(x) for x in [0,1] * (n_qubits/2)), 2),
int("".join(str(x) for x in [1,0] * (n_qubits/2)), 2), 2**n_qubits - 1 ]
ax.set_xticks(num_states)
ax.set_xticklabels(map(lambda x: inst.states[x], num_states), rotation=90)
plt.grid(True)
plt.tight_layout()
plt.show()
t = np.hstack((betas, gammas))
probs = ring_cut_inst.probabilities(t)
plot(ring_cut_inst, probs)
Explanation: We can see that the first two most frequently sampled bit strings are the alternating solutions to the ring graph. Since we have access to the wave function we can go one step further and view the probability distribution over the bit strings produced by our $p = 1$ circuit.
End of explanation
# get the angles from the last run
beta = ring_cut_inst.betas
gamma = ring_cut_inst.gammas
# form new beta/gamma angles from the old angles
betas = np.hstack((beta[0]/3, beta[0]*2/3))
gammas = np.hstack((gamma[0]/3, gamma[0]*2/3))
# set up a new QAOA instance.
ring_cut_inst_2 = QAOA(CXN, len(graph.nodes()), steps=2, ref_ham=driver_operators, cost_ham=cost_operators,
driver_ref=prog, store_basis=True, init_betas=betas, init_gammas=gammas)
# run VQE to determine the optimal angles
betas, gammas = ring_cut_inst_2.get_angles()
t = np.hstack((betas, gammas))
probs = ring_cut_inst_2.probabilities(t)
plot(ring_cut_inst_2, probs)
Explanation: For larger graphs the probability of sampling the correct string could be significantly smaller, though still peaked around the solution. Therefore, we would want to increase the probability of sampling the solution relative to any other string. To do this we simply increase the number of steps $p$ in the algorithm. We might want to bootstrap the algorithm with angles from a lower number of steps. We can pass initial angles to the solver as optional arguments.
End of explanation
from scipy.optimize import fmin_bfgs
ring_cut_inst_3 = QAOA(CXN, len(graph.nodes()), steps=3, ref_ham=driver_operators, cost_ham=cost_operators,
driver_ref=prog, store_basis=True, minimizer=fmin_bfgs, minimizer_kwargs={'gtol':1.0e-3},
rand_seed=42)
betas, gammas = ring_cut_inst_3.get_angles()
t = np.hstack((betas, gammas))
probs = ring_cut_inst_3.probabilities(t)
plot(ring_cut_inst_3, probs)
Explanation: We could also change the optimizer which is passed down to VQE through the QAOA interface. Let's say I want to use BFGS or another optimizer that can be wrapped in Python. Simply pass it to QAOA through the minimizer, minimizer_args, and minimizer_kwargs keywords
End of explanation |
7,181 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Distributed Deep Learning with Apache Spark and Keras
Joeri Hermans (Technical Student, IT-DB-SAS, CERN)
Department of Knowledge Engineering
Maastricht University, The Netherlands
Step1: This presentation will give the reader an introduction to the topic of distributed deep learning (DDL) and to the issues which need to be taken into consideration when applying this technique. We will also introduce a DDL framework based on a fast and general engine for large-scale data processing called Apache Spark and the neural network library Keras.
The project was initiated by the CMS experiment. CMS is exploring the possibility to use a deep learning model for the high level trigger in order to be able to handle the data rates for LHC run 3 and up. Furthermore, they would like to be able to train their models faster using distributed algorithms, which allows them to tune their models with an increased frequency. Another requirement was that those models should be trained on their complete dataset, which is on the order of a TB. At this point, production use-cases for ATLAS are also being evaluated. These focus more on the serving of models to classify instances.
Contents
Introduction and problem statement
Model parallelism
Data parallelism
Usage
Acknowledgments
Distributed Deep Learning, an introduction.
Unsupervised feature learning and deep learning have shown that being able to train large models can dramatically improve performance. However, consider the problem of training a deep network with billions of parameters. How do we achieve this without waiting for days, or even weeks, and thus leaving more time to tune the model? Dean et al. [1] proposed a training paradigm which allows us to train a model on multiple physical machines. The authors describe two methods to achieve this, i.e., data parallelism and model parallelism<sup>1</sup>.
Model parallelism<sup><span style="font-size
Step2: Preparing a Spark Context
In order to read our (big) dataset into our Spark Cluster, we first need a Spark Context. However, since Spark 2.0 there are some changes regarding the initialization of a Spark Context. For example, SQLContext and HiveContext do not have to be initialized separately anymore.
Step3: Dataset preprocessing and normalization
Since Spark's MLlib has some nice features for distributed preprocessing, we made sure we comply with the DataFrame API in order to ensure compatibility. What it basically boils down to is that all the features (which can have different types) will be aggregated into a single column. More information on Spark MLlib (and other APIs) can be found here
Step4: Warning
Step5: Model construction
We will now construct a relatively simple Keras model (without any modifications) which, hopefully, will be able to classify the dataset.
Step6: Worker Optimizer and Loss
In order to evaluate the gradient on the model replicas, we have to specify an optimizer and a loss method. For this, we just follow the Keras API as defined in the documentation
Step7: Training
In the following cells we will train and evaluate the model using different distributed trainers, however, we will as well provide a baseline metric using a SingleTrainer, which is basically an instance of the Adagrad optimizer running on Spark.
Furthermore, we will also evaluate every training using Spark's MulticlassClassificationEvaluator https
Step8: But first, we will allocate a simple datastructure which will hold the results.
Step9: SingleTrainer
A SingleTrainer is used as a benchmarking trainer to compare the distributed trainers against. However, one could also use this trainer if the dataset is too big to fit in memory.
Step10: Asynchronous EASGD
EASGD based methods, proposed by Zhang et al., transmit the complete parametrization instead of the gradient. These methods will then "average" the difference of the center variable and the backpropagated worker variable. This is used to compute a new master variable, on which the worker nodes will base their backpropagation in the next iteration.
Asynchronous EASGD will do this in an asynchronous fashion, meaning, whenever a worker node is done processing its mini-batch after a certain amount of iterations (communication window), then the computed parameter will be communicated with the parameter server, which will update the center (master) variable immediately without waiting for other workers.
Step11: Asynchronous EAMSGD
The only difference between asynchronous EAMSGD and asynchronous EASGD is the possibility of specifying an explicit momentum term.
Step12: DOWNPOUR SGD | Python Code:
!(date +%d\ %B\ %G)
Explanation: Distributed Deep Learning with Apache Spark and Keras
Joeri Hermans (Technical Student, IT-DB-SAS, CERN)
Department of Knowledge Engineering
Maastricht University, The Netherlands
End of explanation
import numpy as np
import time
import requests
from keras.optimizers import *
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from pyspark import SparkContext
from pyspark import SparkConf
from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import StringIndexer
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.mllib.evaluation import BinaryClassificationMetrics
from distkeras.trainers import *
from distkeras.predictors import *
from distkeras.transformers import *
from distkeras.evaluators import *
from distkeras.utils import *
# Modify these variables according to your needs.
application_name = "Distributed Keras Notebook"
using_spark_2 = False
local = False
if local:
# Tell master to use local resources.
master = "local[*]"
num_cores = 3
num_executors = 1
else:
# Tell master to use YARN.
master = "yarn-client"
num_executors = 6
num_cores = 2
# This variable is derived from the number of cores and executors, and will be used to assign the number of model trainers.
num_workers = num_executors * num_cores
print("Number of desired executors: " + `num_executors`)
print("Number of desired cores / executor: " + `num_cores`)
print("Total number of workers: " + `num_workers`)
import os
# Use the DataBricks CSV reader, this has some nice functionality regarding invalid values.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-csv_2.10:1.4.0 pyspark-shell'
Explanation: This presentation will give the reader an introduction to the topic of distributed deep learning (DDL) and to the issues which need to be taken into consideration when applying this technique. We will also introduce a DDL framework based on a fast and general engine for large-scale data processing called Apache Spark and the neural network library Keras.
The project was initiated by the CMS experiment. CMS is exploring the possibility to use a deep learning model for the high level trigger in order to be able to handle the data rates for LHC run 3 and up. Furthermore, they would like to be able to train their models faster using distributed algorithms, which allows them to tune their models with an increased frequency. Another requirement was that those models should be trained on their complete dataset, which is on the order of a TB. At this point, production use-cases for ATLAS are also being evaluated. These focus more on the serving of models to classify instances.
Contents
Introduction and problem statement
Model parallelism
Data parallelism
Usage
Acknowledgments
Distributed Deep Learning, an introduction.
Unsupervised feature learning and deep learning have shown that being able to train large models can dramatically improve performance. However, consider the problem of training a deep network with billions of parameters. How do we achieve this without waiting for days, or even weeks, and thus leaving more time to tune the model? Dean et al. [1] proposed a training paradigm which allows us to train a model on multiple physical machines. The authors describe two methods to achieve this, i.e., data parallelism and model parallelism<sup>1</sup>.
Model parallelism<sup><span style="font-size: 0.75em">2</span></sup>
In model parallelism a single model is distributed over multiple machines. The performance benefits of distributing a deep network across multiple machines depend mainly on the structure of the model. Models with a large number of parameters typically benefit from access to more CPUs and memory, up to the point where communication costs, i.e., propagation of the weight updates and synchronization mechanisms, dominate [1].
<img src="https://github.com/JoeriHermans/dist-keras/blob/master/resources/model_parallelism.png?raw=true" alt="Model Parallelism" width="300px" />
Data parallelism
As stated in the introduction, in order to train a large network in a reasonable amount of time, we need to parallelize the optimization process (which is the learning of the model). In this setting, we take several model replicas, and distribute them over multiple machines. Of course, it would also be possible to combine this with the model parallelism approach. However, for the sake of simplicity, let us assume that a model (or several models) can be contained on a single machine. In order to parallelize the training, and to improve the usage of the resources of the cluster, we distribute the models over several machines.
In order to build a distributed learning scheme using data parallelism, you would in the simplest case need at least 1 parameter server. A parameter server is basically a thread (or a collection of threads) which aggregates the incoming gradient updates of the workers into a so-called center variable, which acts as a global consensus variable. Finally, the weights which are stored in the center variable will eventually be used by the produced model.
<img src="https://github.com/JoeriHermans/dist-keras/blob/master/resources/data_parallelism.png?raw=true" alt="Data Parallelism" width="400px" />
There are two general approaches to data parallelism. The most straightforward is a synchronous method. In short, a synchronous data parallel method will wait for all workers to finish the current mini-batch or stochastic sample before continuing to the next iteration. Synchronous methods have the advantage that all workers will use the most recent center variable, i.e., a worker knows that all other workers will use the same center variable. However, the main disadvantage of this method is the synchronization itself. A synchronous method will never be truly synchronous. This is due to the many, and possibly different, machines. Furthermore, every machine could have a different workload, which would influence the training speed of a worker. As a result, synchronous methods need additional waiting mechanisms to synchronize all workers. These locking mechanisms will make sure that all workers compute the next gradient based on the same center variable. However, locking mechanisms induce a significant wait which will significantly influence the training speed. For example, imagine a cluster node with an unusually high load. This high load will, due to CPU sharing, cause the training procedure to slow down. This, in turn, will cause the other workers to wait for this single node. Of course, this is just a simple and possibly extreme example, but this example shows how a single worker could significantly influence the training time of all workers.
<img src="https://github.com/JoeriHermans/dist-keras/blob/master/resources/synchronous_method.png?raw=true" alt="Data Parallelism" width="500px" />
A very simple, but radical "solution" for this synchronization problem is to not synchronize the workers :) Workers simply fetch the center variable and update the parameter server with the computed gradient whenever a worker is ready. This approach is called an asynchronous data parallel method. Asynchronous methods, compared to synchronous methods, will have a different set of problems. One of these is a so-called stale gradient. This is a gradient based on an older version of the center variable while the current center variable is a center variable which has already been updated by other workers. One approach to solve this is to apply an exponential decay factor to the gradient updates. However, this would waste computational resources, but of course, one could just get the most recent weights from the parameter server and then start again. However, as we will show later, it is actually stale gradients (result of asynchrony) that induce implicit momentum to the learning process [2].
<img src="https://github.com/JoeriHermans/dist-keras/blob/master/resources/asynchronous_method.png?raw=true" alt="Data Parallelism" width="500px" />
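To make the asynchronous update loop described above concrete, here is a toy sketch of the protocol (an illustrative pseudo-implementation, not the actual Distributed Keras code; all names here are made up):
class ToyParameterServer(object):
    def __init__(self, weights):
        self.center = weights                    # the center (consensus) variable
    def pull(self):
        return self.center
    def push(self, delta):
        self.center = self.center + delta        # applied immediately, no locking
def toy_async_worker(server, compute_gradient, minibatches, learning_rate=0.01):
    for batch in minibatches:
        center = server.pull()                   # may already be stale when we use it
        gradient = compute_gradient(center, batch)
        server.push(-learning_rate * gradient)   # other workers may have pushed in between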
At this point you probably ask the question: why does this actually work? A lot of people suggest this is due to the sparsity of the gradients. Intuitively, imagine having multiple workers processing different data (since every worker has its own data partition), chances are the weight updates will be totally dissimilar since we are training a large network with a lot of tunable parameters. Furthermore, techniques such as dropout (if they are applied differently among the replicas) only increase the sparsity of the updates<sup>3</sup>.
Formalization
We would also like to inform the reader that the general problem to be solved is the so-called global consensus optimization problem. A popular approach towards solving this is using the Alternating Direction Method of Multipliers (ADMM) [3] [4]. However, since this is outside the scope of this notebook we will not review this in-depth. But, we would like to note that the Elastic Averaging methods [5] by Zhang et al., which we included in Distributed Keras are based on ADMM.
<sub>1: Hybrids are possible as well.</sub>
<sub>2: This is mainly used for the computation of the network outputs [1].</sub>
<sub>3: A way to check the sparsity between 2 gradients is to put all the weights into a 1-dimensional vector, and then compute the cosine similarity.</sub>
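A minimal sketch of the check mentioned in footnote 3 (assuming each gradient is given as a list of NumPy weight arrays):
def gradient_cosine_similarity(grad_a, grad_b):
    a = np.concatenate([w.flatten() for w in grad_a])
    b = np.concatenate([w.flatten() for w in grad_b])
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))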
Distributed Keras
Distributed Keras is a framework which uses Apache Spark and Keras. We chose Spark because of the distributed environment. This allows us to preprocess the data in a distributed manner, and train our deep learning models on the same architecture, while still having the modeling simplicity of Keras.
Architecture
Our architecture is very similar to the architecture discussed in [1]. However, we employ Apache Spark for data parallel reading and handling larger than memory datasets. The parameter server will always be created in the Spark Driver. This is the program which creates the Spark Context. For example, if the Jupyter installation of this notebook is running on the Spark cluster, then a cluster node will host the parameter server. However, if you run a Python script, which connects to a remote Spark cluster, then your computer will run the Spark Driver, and as a result will run the parameter server. In that case, be sure your network connection is able to handle the load, else your computer will be the bottleneck in the learning process.
<img src="https://github.com/JoeriHermans/dist-keras/blob/master/resources/distkeras_architecture.png?raw=true" alt="Model Parallelism" width="500px" />
Implementation of a custom distributed optimizer
In order to implement your own optimizer you need 2 classes. First, define your optimizer using the Trainer interface. We already supplied an AsynchronousDistributedTrainer, and a SynchronousDistributedTrainer. However, if you require another procedure, please feel free to do so. Finally, you need a worker class. This class must have a train method with the required arguments, as specified by Apache Spark.
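Schematically, such a pair of classes could look as follows (a sketch only -- apart from the Trainer base class and the train() entry points mentioned above, all names and signatures here are illustrative, not the actual Distributed Keras API):
class MyWorker(object):
    def __init__(self, keras_model, optimizer, loss):
        self.model = keras_model
        self.optimizer = optimizer
        self.loss = loss
    def train(self, index, iterator):
        # Signature required by Spark's mapPartitionsWithIndex: partition index + row iterator.
        # Compute gradients on the rows of this partition and exchange them with the parameter server here.
        return iter([])
class MyTrainer(Trainer):
    def train(self, dataframe):
        # Start the parameter server on the driver, then run the workers over the partitions, e.g.:
        # dataframe.rdd.mapPartitionsWithIndex(MyWorker(...).train).collect()
        pass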
Usage
In the following sections, we will give you an example of how a complete workflow looks. This includes setting up a Spark context, reading, preprocessing, and normalizing the data. Finally, we create a relatively simple model (feel free to adjust the parameters) with Keras and optimize it using the different distributed optimizers which are included by default.
Dataset
We are using the ATLAS Higgs dataset constructed for the Kaggle machine learning challenge. This dataset is quite limited; it contains only 250000 instances, 40% of which we will be using as a test set. For future experiments, it would be useful to integrate well-understood datasets such as CIFAR or MNIST to evaluate against other optimizers. However, it would be nice to have a "well understood" HEP (High Energy Physics) dataset for this task :)
End of explanation
conf = SparkConf()
conf.set("spark.app.name", application_name)
conf.set("spark.master", master)
conf.set("spark.executor.cores", `num_cores`)
conf.set("spark.executor.instances", `num_executors`)
conf.set("spark.locality.wait", "0")
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
# Check if the user is running Spark 2.0 +
if using_spark_2:
sc = SparkSession.builder.config(conf=conf) \
.appName(application_name) \
.getOrCreate()
else:
# Create the Spark context.
sc = SparkContext(conf=conf)
# Add the missing imports
from pyspark import SQLContext
sqlContext = SQLContext(sc)
# Check if we are using Spark 2.0
if using_spark_2:
reader = sc
else:
reader = sqlContext
# Read the dataset.
raw_dataset = reader.read.format('com.databricks.spark.csv') \
.options(header='true', inferSchema='true').load("data/atlas_higgs.csv")
# Double-check the inferred schema, and fetch a row to show what the dataset looks like.
raw_dataset.printSchema()
Explanation: Preparing a Spark Context
In order to read our (big) dataset into our Spark Cluster, we first need a Spark Context. However, since Spark 2.0 there are some changes regarding the initialization of a Spark Context. For example, SQLContext and HiveContext do not have to be initialized separately anymore.
End of explanation
# First, we would like to extract the desired features from the raw dataset.
# We do this by constructing a list with all desired columns.
features = raw_dataset.columns
features.remove('EventId')
features.remove('Weight')
features.remove('Label')
# Next, we use Spark's VectorAssembler to "assemble" (create) a vector of all desired features.
# http://spark.apache.org/docs/latest/ml-features.html#vectorassembler
vector_assembler = VectorAssembler(inputCols=features, outputCol="features")
# This transformer will take all columns specified in features, and create an additional column "features" which will contain all the desired features aggregated into a single vector.
dataset = vector_assembler.transform(raw_dataset)
# Show what happened after applying the vector assembler.
# Note: "features" column got appended to the end.
dataset.select("features").take(1)
# Apply feature normalization with standard scaling. This will transform a feature to have mean 0, and std 1.
# http://spark.apache.org/docs/latest/ml-features.html#standardscaler
standard_scaler = StandardScaler(inputCol="features", outputCol="features_normalized", withStd=True, withMean=True)
standard_scaler_model = standard_scaler.fit(dataset)
dataset = standard_scaler_model.transform(dataset)
# If we look at the dataset, the Label column consists of 2 entries, i.e., b (background), and s (signal).
# Our neural network will not be able to handle these characters, so instead, we convert it to an index so we can indicate that output neuron with index 0 is background, and 1 is signal.
# http://spark.apache.org/docs/latest/ml-features.html#stringindexer
label_indexer = StringIndexer(inputCol="Label", outputCol="label_index").fit(dataset)
dataset = label_indexer.transform(dataset)
# Show the result of the label transformation.
dataset.select("Label", "label_index").take(5)
# Define some properties of the neural network for later use.
nb_classes = 2 # Number of output classes (signal and background)
nb_features = len(features)
# We observe that Keras is not able to work with these indexes.
# What it actually expects is a vector with an identical size to the output layer.
# Our framework provides functionality to do this with ease.
# What it basically does, given an expected vector dimension,
# it prepares zero vector with the specified dimensionality, and will set the neuron
# with a specific label index to one. (One-Hot encoding)
# For example:
# 1. Assume we have a label index: 3
# 2. Output dimensionality: 5
# With these parameters, we obtain the following vector in the DataFrame column: [0,0,0,1,0]
transformer = OneHotTransformer(output_dim=nb_classes, input_col="label_index", output_col="label")
dataset = transformer.transform(dataset)
# Only select the columns we need (less data shuffling) while training.
dataset = dataset.select("features_normalized", "label_index", "label")
# Show the expected output vectors of the neural network.
dataset.select("label_index", "label").take(1)
Explanation: Dataset preprocessing and normalization
Since Spark's MLlib has some nice features for distributed preprocessing, we made sure we comply with the DataFrame API in order to ensure compatibility. What it basically boils down to is that all the features (which can have different types) will be aggregated into a single column. More information on Spark MLlib (and other APIs) can be found here: http://spark.apache.org/docs/latest/ml-guide.html
In the following steps we will show you how to extract the desired columns from the dataset and prepare the for further processing.
End of explanation
# Shuffle the dataset.
dataset = shuffle(dataset)
# Note: we also support shuffling in the trainers by default.
# However, since this would require a shuffle for every training we will only do it once here.
# If you want, you can enable the training shuffling by specifying shuffle=True in the train() function.
# Finally, we create a trainingset and a testset.
(training_set, test_set) = dataset.randomSplit([0.6, 0.4])
training_set.cache()
test_set.cache()
Explanation: Warning: shuffling on a large dataset will take some time.
We recommend users to first preprocess and shuffle their data, as is described in the data preprocessing notebook.
End of explanation
model = Sequential()
model.add(Dense(500, input_shape=(nb_features,)))
model.add(Activation('relu'))
model.add(Dropout(0.4))
model.add(Dense(500))
model.add(Activation('relu'))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.summary()
Explanation: Model construction
We will now construct a relatively simple Keras model (without any modifications) which, hopefully, will be able to classify the dataset.
End of explanation
optimizer = 'adagrad'
loss = 'categorical_crossentropy'
Explanation: Worker Optimizer and Loss
In order to evaluate the gradient on the model replicas, we have to specify an optimizer and a loss method. For this, we just follow the Keras API as defined in the documentation: https://keras.io/optimizers/ and https://keras.io/objectives/.
End of explanation
def evaluate_accuracy(model):
global test_set
# Allocate a Distributed Keras Accuracy evaluator.
evaluator = AccuracyEvaluator(prediction_col="prediction_index", label_col="label_index")
# Clear the prediction column from the testset.
test_set = test_set.select("features_normalized", "label_index", "label")
# Apply a prediction from a trained model.
predictor = ModelPredictor(keras_model=trained_model, features_col="features_normalized")
test_set = predictor.predict(test_set)
# Allocate an index transformer.
index_transformer = LabelIndexTransformer(output_dim=nb_classes)
# Transform the prediction vector to an indexed label.
test_set = index_transformer.transform(test_set)
# Fetch the score.
score = evaluator.evaluate(test_set)
return score
def add_result(trainer, accuracy, dt):
global results;
# Store the metrics.
results[trainer] = {}
results[trainer]['accuracy'] = accuracy;
results[trainer]['time_spent'] = dt
# Display the metrics.
print("Trainer: " + str(trainer))
print(" - Accuracy: " + str(accuracy))
print(" - Training time: " + str(dt))
Explanation: Training
In the following cells we will train and evaluate the model using different distributed trainers, however, we will as well provide a baseline metric using a SingleTrainer, which is basically an instance of the Adagrad optimizer running on Spark.
Furthermore, we will also evaluate every training using Spark's MulticlassClassificationEvaluator https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.evaluation.MulticlassClassificationEvaluator.
Evaluation
We will evaluate all algorithms using the F1 https://en.wikipedia.org/wiki/F1_score and accuracy metric.
End of explanation
results = {}
Explanation: But first, we will allocate a simple datastructure which will hold the results.
End of explanation
trainer = SingleTrainer(keras_model=model, worker_optimizer=optimizer,
loss=loss, features_col="features_normalized",
label_col="label", num_epoch=1, batch_size=32)
trained_model = trainer.train(training_set)
# Fetch the evaluation metrics.
accuracy = evaluate_accuracy(trained_model)
dt = trainer.get_training_time()
# Add the metrics to the results.
add_result('single', accuracy, dt)
Explanation: SingleTrainer
A SingleTrainer is used as a benchmarking trainer to compare to distributed trainer. However, one could also use this trainer if the dataset is too big to fit in memory.
End of explanation
trainer = AEASGD(keras_model=model, worker_optimizer=optimizer, loss=loss, num_workers=num_workers,
batch_size=32, features_col="features_normalized", label_col="label", num_epoch=1,
communication_window=32, rho=5.0, learning_rate=0.1)
trainer.set_parallelism_factor(1)
trained_model = trainer.train(training_set)
# Fetch the evaluation metrics.
accuracy = evaluate_accuracy(trained_model)
dt = trainer.get_training_time()
# Add the metrics to the results.
add_result('aeasgd', accuracy, dt)
Explanation: Asynchronous EASGD
EASGD based methods, proposed by Zhang et al., transmit the complete parametrization instead of the gradient. These methods will then "average" the difference of the center variable and the backpropagated worker variable. This is used to compute a new master variable, on which the worker nodes will base their backpropagation in the next iteration on.
Asynchronous EASGD will do this in an asynchronous fashion, meaning, whenever a worker node is done processing its mini-batch after a certain amount of iterations (communication window), then the computed parameter will be communicated with the parameter server, which will update the center (master) variable immediately without waiting for other workers.
End of explanation
trainer = EAMSGD(keras_model=model, worker_optimizer=optimizer, loss=loss, num_workers=num_workers,
batch_size=32, features_col="features_normalized", label_col="label", num_epoch=1,
communication_window=32, rho=5.0, learning_rate=0.1, momentum=0.6)
trainer.set_parallelism_factor(1)
trained_model = trainer.train(training_set)
# Fetch the evaluation metrics.
accuracy = evaluate_accuracy(trained_model)
dt = trainer.get_training_time()
# Add the metrics to the results.
add_result('eamsgd', accuracy, dt)
Explanation: Asynchronous EAMSGD
The only difference between asynchronous EAMSGD and asynchronous EASGD is the possibility of specifying an explicit momentum term.
End of explanation
trainer = DOWNPOUR(keras_model=model, worker_optimizer=optimizer, loss=loss, num_workers=num_workers,
batch_size=32, communication_window=5, learning_rate=0.05, num_epoch=1,
features_col="features_normalized", label_col="label")
trainer.set_parallelism_factor(1)
trained_model = trainer.train(training_set)
# Fetch the evaluation metrics.
accuracy = evaluate_accuracy(trained_model)
dt = trainer.get_training_time()
# Add the metrics to the results.
add_result('downpour', accuracy, dt)
Explanation: DOWNPOUR SGD
End of explanation |
7,182 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Question 1
Step1: Motivations
The problem we try to solve in the question 1 is evaluating the average causal effect of the "treatment" represented by the job training program.
A naive analysis would only compare the difference in mean between the two groups (with and without treatment). By doing so, this only reflect both the average causal effect (ACE) and the selection bias (SB). The latter might drastically change the two averages we are comparing and could lead to a wrong conclusion.
In order to minimize the role of the selection bias, we use the propensity score matching (PSM) technique. The idea is to compare the difference in mean between subsets of the two groups that are similar.
1.1) A naive analysis
Using Kernel Density Estimations (KDE) plots
Step2: A naive analysis would claim that there are no clear differences between the two groups and thus would conclude that the "Job Training Program" (JTP) is useless. And if a difference exists, people in the treatment groups have a smaller revenue by 10%, hence the treatment would be worst than no treatment at all.
1.2) A closer look at the data
Step3: Obervations
1
Step4: As we can see there is 2.3 times as many sample for the control group. And because of this, we can be picky and select only a part of the samples in the control group that correspond to the samples in the treatment group. To do so, we will match two samples together from each groups corresponding to their propensity score and then only keep and compare the samples matched.
1.3) A propensity score model
Let's calculate the propensity score
Step5: 1.4) Balancing the dataset via matching
Step6: Now, lets compare the difference in the distribution for each feature in the two groups as done earlier in part 1.2
Step7: Observations
1
Step8: We can now see that the mean in the treatment group is slightly higher than in the control group, where it was slightly below before. Also the maximum, median and quartiles are all bigger than their counterpart in the control group. This is a complete different information from what we had before, but let's improve it even more.
1.5) Balancing the groups further
The main difference in the two groups resides in the proportion of hispanic and black people
Step9: 1.6) A less naive analysis
Step10: Observations
The proportion of hispanic people in the two groups is now the same and the only feature that is now unbalanced is the proportion of black people.
Step11: The difference in the salaries we perceived in part 1.4 increased, but not significantly.
Based on this difference, we could say that the "Job Training Program" (JTP) is slightly useful and has a positive effect on average on the salary of the people who took the program. We still have a selection bias by having way more black people in the treatment group and hence any conclusion drawn from these data will be biased. Shrinking the number of samples taken in each group so that we only match hispan with hispan and black with black in each group would result in such a small set that it would not be possible to draw any conclusion.
However it is good to point how far we are from the naive analysis realised in point 1. We had that the mean of the treatment group real earnings in 1978 was 10% lower than the one of the control group. However after we refined the way to analyse the data using propensity score and then late one with matching hispan people with hispan people only, we see that the mean of the treatment group real earnings in 1978 is 10% higher than the one of the control group. This example perfectly shows how a naive analyse could show wrong result. Indeed we go from "the treatment is worst" to "The treatment is worth"
Below you can find a barplot summary of the ratio of the means
Step12: Question 2
Step13: 2.1) Loading, TF-IDF and Spliting
Data fetching
Step17: Utility functions
For the following part of the exercise, we created some utility functions that we use here and could be reused for other tasks.
Step18: Data splitting
Step19: 2.2) Random Forest
Grid search for parameters tuning
Step20: After having computed an estimation of our model with many different parameters we choose the best parameters (comparing their mean score and std)
Step21: Let's save the parameters which give the best result inside a variable
Step22: Random forest classification
We reuse the optimal parameters computed above to produce prediction with a random forest classifier
Step23: As one can see, neither precision, recall or f1_score are adding information. This is because there are quite many classes (20) which are uniformly distributed
Step25: The plot above shows that every class is well represented in the test, training and validation sets.
Confusion matrix
Let's show the confusion matrix
Step26: What the confusion matrices show is that we did a pretty good joob at assignating the categories except we categorized quite a lot of things in religion.christian instead of religion.misc which is understandable because both categories are closely related. Also atheism is closlely related to religion hence the above average value for ths category but it is still a small value. The last part where we could have done better is with every topics about technology (pc.hardware, mac.hardware, etc.) which is again topics that are very closely related. But overall our classifier can categorize correctly a news and if not it classifies it in a category closely related to the correct one.
feature_importances_ attribute
Let's see what information the feature_importances_ attribute can provide us | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy import optimize
from scipy import spatial
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
sns.set(rc={"figure.figsize": (15, 6)})
sns.set_palette(sns.color_palette("Set2", 10))
lalonde_data = pd.read_csv('lalonde.csv')
Explanation: Question 1:
End of explanation
#Function that plots a boxplot for re78
def compare_groups(data):
plt.figure(figsize=(10,10))
sns.boxplot(x='treat', y='re78', data=data, showfliers=False, showmeans=True, meanline=True, meanprops=dict(color='r'))
plt.xticks(range(2), ["Control Group", "Treatment Group"])
plt.show()
compare_groups(lalonde_data)
#We keep track of the ratio (Treatment group real earnings in 1978 mean) / (Control group real earnings in 1978 mean) after each improvement done in exercise 1
means_ratio_over_improvement = []
#A function that prints the mean of real earnings in 1978 in both group
def print_means(data):
data_means = data.groupby("treat").agg(np.mean)
print("Control group real earnings in 1978 mean: {:.0f}".format(data_means["re78"].loc[0]))
print("Treatment group real earnings in 1978 mean: {:.0f}".format(data_means["re78"].loc[1]))
ratio = data_means["re78"].loc[1]/data_means["re78"].loc[0]
means_ratio_over_improvement.append(ratio)
print("Ratio (treatment/control): {:.2f}".format(ratio))
print_means(lalonde_data)
Explanation: Motivations
The problem we try to solve in the question 1 is evaluating the average causal effect of the "treatment" represented by the job training program.
A naive analysis would only compare the difference in mean between the two groups (with and without treatment). By doing so, this only reflect both the average causal effect (ACE) and the selection bias (SB). The latter might drastically change the two averages we are comparing and could lead to a wrong conclusion.
In order to minimize the role of the selection bias, we use the propensity score matching (PSM) technique. The idea is to compare the difference in mean between subsets of the two groups that are similar.
1.1) A naive analysis
Using Kernel Density Estimations (KDE) plots
End of explanation
#Features of each group
main_variables = ['black', 'hispan', 'age', 'married', 'nodegree', 'educ']
#Function that displays a bar plot of each group for every features
def display_proportions(data, variables=main_variables, n_cols=3):
N = len(variables)
f, axes = plt.subplots(nrows=int(np.ceil(N / n_cols)), ncols=n_cols)
f.set_figheight(10)
for idx, axis, var in zip(range(N), axes.flatten(), variables):
sns.barplot(x='treat', y=var, data=data, ax=axis)
axis.set_xticklabels(["Control Group", "Treatment Group"])
axis.set_xlabel("")
axis.set_title(idx+1)
axis.set_ylabel("mean of {}".format(var))
display_proportions(lalonde_data)
Explanation: A naive analysis would claim that there are no clear differences between the two groups and thus would conclude that the "Job Training Program" (JTP) is useless. And if a difference exists, people in the treatment groups have a smaller revenue by 10%, hence the treatment would be worst than no treatment at all.
1.2) A closer look at the data
End of explanation
lalond_count = lalonde_data.groupby("treat").agg("count")
print("Number of people in the control group: {}".format(lalond_count["re78"].loc[0]))
print("Number of people in the treatment group: {}".format(lalond_count["re78"].loc[1]))
Explanation: Obervations
1: As we can see on the barplot above, the concentration of black people in the treatment group is 4 times as high as in the control group
2: The concentration of hispanic people in the control group is more than twice as high as in the treatment group
3: Treatment group is on average 2 years younger that control group
4: People in the control group are more than twice as likely to be married than the ones in the treatment group
5: The proportion of people without a degree in the treatment group is higher by 20% than in the control group
6: The mean and the variance of the of years of education is more or less the same in both groups
With these 6 observations, we can say that that two group are not uniformly separated and that for this reason, it is dangerous to draw a conclusion from a superficial analysis.
Let's see whether each group has a similar number of sample:
End of explanation
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
#Select features, that is drop id and treat columns
selectedFeatures = lalonde_data.drop(['id','treat'], axis=1)
#Fit the model
lr.fit(selectedFeatures, lalonde_data['treat']);
#Calculate the propensity scores
propensity_scores = lr.predict_proba(selectedFeatures)
#Only keep the probability of receiving the treatment and store it inside the dataframe
lalonde_data['propensity score'] = [x[1] for x in propensity_scores]
Explanation: As we can see there is 2.3 times as many sample for the control group. And because of this, we can be picky and select only a part of the samples in the control group that correspond to the samples in the treatment group. To do so, we will match two samples together from each groups corresponding to their propensity score and then only keep and compare the samples matched.
1.3) A propensity score model
Let's calculate the propensity score
End of explanation
#One dataframe per group
control_group = lalonde_data[lalonde_data['treat'] == 0]
treatment_group = lalonde_data[lalonde_data['treat'] == 1]
#Compute the distance matrix using the absolute difference of the propensity scores
cost_matrix = spatial.distance.cdist(
treatment_group["propensity score"].values.reshape((treatment_group.shape[0], 1)),
control_group["propensity score"].values.reshape((control_group.shape[0], 1)),
metric=lambda a,b: np.abs(a - b)
)
#Solve the distance matrix to minimze the total cost function. Where the total cost function is the sum of the distances
#And get the indices of the pairs that minimze this total cost function
treatment_ind, control_ind = optimize.linear_sum_assignment(cost_matrix)
#We construct a dataframe whith the rows corresponding to the indices obtaiend above. Note we have the same number of sample in each group by construction
lalonde_ps_matched = pd.concat((treatment_group.iloc[treatment_ind], control_group.iloc[control_ind]))
Explanation: 1.4) Balancing the dataset via matching
End of explanation
display_proportions(lalonde_ps_matched)
Explanation: Now, lets compare the difference in the distribution for each feature in the two groups as done earlier in part 1.2
End of explanation
compare_groups(lalonde_ps_matched)
print_means(lalonde_ps_matched)
Explanation: Observations
1: The difference in the concentration of black people shrinked, however the treatment group's rate is almost still twice the rate of the control group (better than before)
2: The concentration of hispanic people in the control group is now twice as high as in the treatment group (better than before)
3: The control group is on average 2 years younger than the treatment group (same as before, but reversed)
4: People in the control group have now almost the same probability to be married as the ones in the treatment group (better than before)
5: The proportion of people without a degree in the treatment group is higher by 5% than in the control group (less than before (20%) )
6: The mean and the variance of the of years of education is again more or less the same in both groups
Compared to before the matching, the different features are more balanced. The only features that are not roughtly the same are the features that have a racial information in them.
End of explanation
additionnal_feature_matched = 'hispan'
#Compute the distance matrix where a value is 0 if both the row and the colum is hispan, 1 otherwise
add_cost_matrix = spatial.distance.cdist(
treatment_group[additionnal_feature_matched].values.reshape((treatment_group.shape[0], 1)),
control_group[additionnal_feature_matched].values.reshape((control_group.shape[0], 1)),
metric=lambda a,b: int(a != b)
)
#Solve the distance matrix (obtained by adding the propensity score distance matrix to the hispan distance matrix) to minimze the total cost function.
#Where the total cost function is the sum of the distances
#And get the indices of the pairs that minimze this total cost function
treatment_ind_2, control_ind_2 = optimize.linear_sum_assignment(cost_matrix + add_cost_matrix)
Explanation: We can now see that the mean in the treatment group is slightly higher than in the control group, where it was slightly below before. Also the maximum, median and quartiles are all bigger than their counterpart in the control group. This is a complete different information from what we had before, but let's improve it even more.
1.5) Balancing the groups further
The main difference in the two groups resides in the proportion of hispanic and black people:
For this reason, we will add the condition when matching two subjects that they have the same value for the hispanic feature. Doing so for the black feature is not possible because 156 people out of the 185 people are black in the treatment group where for the control group there are 87 black people out of the 429 people.
End of explanation
#We construct a dataframe whith the rows corresponding to the indices obtaiend above. Note we have the same number of sample in each group by construction
lalonde_ps_matched_2 = pd.concat((treatment_group.iloc[treatment_ind_2], control_group.iloc[control_ind_2]))
display_proportions(lalonde_ps_matched_2)
Explanation: 1.6) A less naive analysis
End of explanation
compare_groups(lalonde_ps_matched_2)
print_means(lalonde_ps_matched_2)
Explanation: Observations
The proportion of hispanic people in the two groups is now the same and the only feature that is now unbalanced is the proportion of black people.
End of explanation
#Plot the means we recorded after each improvement
sns.barplot(y=means_ratio_over_improvement, x = ["Naive", "Propensity score", "Propensity score + hispan matching"])
Explanation: The difference in the salaries we perceived in part 1.4 increased, but not significantly.
Based on this difference, we could say that the "Job Training Program" (JTP) is slightly useful and has a positive effect on average on the salary of the people who took the program. We still have a selection bias by having way more black people in the treatment group and hence any conclusion drawn from these data will be biased. Shrinking the number of samples taken in each group so that we only match hispan with hispan and black with black in each group would result in such a small set that it would not be possible to draw any conclusion.
However it is good to point how far we are from the naive analysis realised in point 1. We had that the mean of the treatment group real earnings in 1978 was 10% lower than the one of the control group. However after we refined the way to analyse the data using propensity score and then late one with matching hispan people with hispan people only, we see that the mean of the treatment group real earnings in 1978 is 10% higher than the one of the control group. This example perfectly shows how a naive analyse could show wrong result. Indeed we go from "the treatment is worst" to "The treatment is worth"
Below you can find a barplot summary of the ratio of the means
End of explanation
from sklearn import metrics
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from time import time
Explanation: Question 2
End of explanation
#Loading data
all_news = fetch_20newsgroups(subset='all')
vectorizer = TfidfVectorizer(stop_words='english', max_df=0.5, sublinear_tf=True)
#Vectorizing
news_data = vectorizer.fit_transform(all_news.data)
news_target = all_news.target
news_target_names = all_news.target_names
feature_names = vectorizer.get_feature_names()
Explanation: 2.1) Loading, TF-IDF and Spliting
Data fetching
End of explanation
# this could have been done in a simpler way for this homework,
# but it might be useful to have such a powerful function for other uses,
# hence we decide to keep it here so that other could use it too :)
def split(X, y, ratios):
Split X and y given some ratios
Parameters
----------
X : ndarray
train matrix
y : ndarray
test matrix
ratios : list(int)
ratios on how to split X and y
Returns
-------
out : tuple(ndarray)
Output one tuple of first, the splits of X and then, the splits of y
assert np.sum(ratios) < 1, "sum of ratios cannot be greater than 1"
assert len(ratios) >= 1, "at least one ratio required to split"
def inner_split(X, y, ratios, acc_X, acc_y):
ratio, *ratios_remaining = ratios
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=ratio)
if len(ratios_remaining) == 0:
acc_X.extend([X_train, X_test])
acc_y.extend([y_train, y_test])
acc_X.extend(acc_y)
return tuple(acc_X)
else:
acc_X.append(X_train)
acc_y.append(y_train)
return inner_split(X_test, y_test, [r/(1.0 - ratio) for r in ratios_remaining], acc_X, acc_y)
return inner_split(X, y, ratios, [], [])
def predict(clf, X_train, y_train, X_test):
Using a classifier, train with training data it using to fit testing labels
and then predict some labels of testing data.
It also times the different steps.
Parameters
----------
clf: sklearn classifier
classifier
X_train: ndarray
training data
y_train: ndarray
training labels
X_test: ndarray
testing data
Returns
-------
out : ndarray
Output the prediction of labels
start_time = time()
print("Prediction computations started...")
clf.fit(X_train, y_train)
train_time = time() - start_time
pred = clf.predict(X_test)
prediction_time = time() - train_time - start_time
print("...Finished")
print("Training time = {}s".format(round(train_time)))
print("Prediction time = {}s".format(round(prediction_time // 1)))
return pred
def report(results, n_top=3, compared_to=10):
Print the parameters of the best grid search cross-validation results
and plot their accuracy compared to another accuracy score.
Parameters
----------
results: sklearn grid search cv_results_
grid search cross-validation results
n_top: int
the number of best results to plot
compared_to: int
the nth best results to compare the best results with
Returns
-------
out : None
Output some prints and a plot
means = []
stds = []
for i in range(1, n_top + 1):
candidates = np.flatnonzero(results['rank_test_score'] == i)
for candidate in candidates:
mean = results['mean_test_score'][candidate]
std = results['std_test_score'][candidate]
means.append(mean)
stds.append(std)
print("Model with rank: {}".format(i))
print("Mean validation score: {0:.4f} (std: {1:.4f})".format(mean, std))
print("Parameters: {}".format(results['params'][candidate]))
min_ = np.min(results['mean_test_score'][results['rank_test_score'] == (compared_to)])
print('\n{0:}\'th score = {1:.4f}'.format(compared_to, min_))
means = np.array(means) - min_
plt.title("Top {0} best scores (compared to the {1}'th score = {2:.3f})".format(n_top, compared_to, min_))
plt.bar(range(n_top), means, yerr=stds, align="center")
plt.xticks(range(n_top), range(1, n_top + 1))
plt.xlabel("n'th best scores")
plt.ylabel("score - {}'th score".format(compared_to))
plt.show()
Explanation: Utility functions
For the following part of the exercise, we created some utility functions that we use here and could be reused for other tasks.
End of explanation
ratios = [0.8, 0.1] #Ratio is 0.8 for train and twice 0.1 for test and validation
X_train, X_test, X_validation, \
y_train, y_test, y_validation = split(news_data, news_target, ratios)
Explanation: Data splitting
End of explanation
# use a full grid over max_depth and n_estimators parameters
param_grid = {
"max_depth": [3, 10, 20, None],
"n_estimators": np.linspace(3, 200, num=5, dtype=int)
#"max_features": [1, 3, 10],
#"min_samples_split": [2, 3, 10],
#"min_samples_leaf": [1, 3, 10],
#"bootstrap": [True, False],
#"criterion": ["gini", "entropy"]
}
# run grid search
grid_search = GridSearchCV(RandomForestClassifier(), param_grid=param_grid)
grid_search.fit(X_validation, y_validation)
None #No output cell
Explanation: 2.2) Random Forest
Grid search for parameters tuning
End of explanation
report(grid_search.cv_results_, n_top=5, compared_to=10)
Explanation: After having computed an estimation of our model with many different parameters we choose the best parameters (comparing their mean score and std)
End of explanation
rank_chosen = 1 #Position of the parameters we choose
best_params = grid_search.cv_results_['params'][np.flatnonzero(grid_search.cv_results_['rank_test_score'] == rank_chosen)[0]]
Explanation: Let's save the parameters which give the best result inside a variable
End of explanation
random_forest_clf = RandomForestClassifier(**best_params)
pred = predict(random_forest_clf, X_train, y_train, X_test)
#Choose the average type
average_type = "weighted"
#Get the different scores of the predicion computed above
accuracy = metrics.accuracy_score(y_test, pred)
precision = metrics.precision_score(y_test, pred, average=average_type)
recall = metrics.recall_score(y_test, pred, average=average_type)
f1_score = metrics.f1_score(y_test, pred, average=average_type)
print("accuracy = {:.4f}".format(accuracy))
print("precision = {:.4f}".format(precision))
print("recall = {:.4f}".format(recall))
print("f1_score = {:.4f}".format(f1_score))
Explanation: Random forest classification
We reuse the optimal parameters computed above to produce prediction with a random forest classifier
End of explanation
classes = range(len(news_target_names))
def sum_by_class(arr):
return np.array([np.sum(arr == i) for i in classes])
test_sum_by_class = sum_by_class(y_test)
val_sum_by_class = sum_by_class(y_validation)
train_sum_by_class = sum_by_class(y_train)
p1 = plt.bar(classes, test_sum_by_class)
p2 = plt.bar(classes, val_sum_by_class, bottom=test_sum_by_class)
p3 = plt.bar(classes, train_sum_by_class, bottom=test_sum_by_class + val_sum_by_class)
plt.xticks(classes, news_target_names, rotation='vertical')
plt.tick_params(axis='x', labelsize=15)
plt.legend((p1[0], p2[0], p3[0]), ('test', 'validation', 'train'))
plt.show()
Explanation: As one can see, neither precision, recall or f1_score are adding information. This is because there are quite many classes (20) which are uniformly distributed :
End of explanation
import itertools
# A function to plot the confusion matrix, taken from http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cnf_matrix = metrics.confusion_matrix(y_test, pred)
# Plot non-normalized confusion matrix
plt.figure(figsize=(25, 15))
plot_confusion_matrix(cnf_matrix, classes=news_target_names, title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure(figsize=(25, 15))
plot_confusion_matrix(cnf_matrix, classes=news_target_names, normalize=True, title='Normalized confusion matrix')
Explanation: The plot above shows that every class is well represented in the test, training and validation sets.
Confusion matrix
Let's show the confusion matrix
End of explanation
importances = random_forest_clf.feature_importances_
std = np.std([tree.feature_importances_ for tree in random_forest_clf.estimators_], axis=0)
#Sort the feature by importance
indices = np.argsort(importances)[::-1]
print("Total number of features = {}".format(len(indices)))
# Only most important ones (out of thousands)
num_best = 20
best_indices = indices[:num_best]
best_importances = importances[best_indices]
best_std = std[best_indices]
# Plot the feature importances
plt.figure()
plt.title("20 best feature importances")
plt.bar(range(num_best), best_importances, yerr=best_std, align="center")
plt.xticks(range(num_best), np.array(feature_names)[best_indices], rotation='vertical')
plt.tick_params(axis='x', labelsize=15)
plt.xlim([-1, num_best])
plt.xlabel("Feature indices")
plt.ylabel("Feature names")
plt.show()
Explanation: What the confusion matrices show is that we did a pretty good joob at assignating the categories except we categorized quite a lot of things in religion.christian instead of religion.misc which is understandable because both categories are closely related. Also atheism is closlely related to religion hence the above average value for ths category but it is still a small value. The last part where we could have done better is with every topics about technology (pc.hardware, mac.hardware, etc.) which is again topics that are very closely related. But overall our classifier can categorize correctly a news and if not it classifies it in a category closely related to the correct one.
feature_importances_ attribute
Let's see what information the feature_importances_ attribute can provide us
End of explanation |
7,183 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is intended to show how to use pandas, and sql alchemy to upload data into DB2-switch and create geospatial coordinate and indexes.
Install using pip or any other package manager pandas, sqlalchemy and pg8000. The later one is the driver to connect to the db.
Step1: After importing the required packages, first create the engine to connect to the DB. The approach I generally use is to create a string based on the username and password. The code is a function, you just need to fill in with the username, password and the dbname.
It allows you to create different engines to connect to serveral dbs.
Step2: Afterwards, use pandas to import the data from Excel files or any other text file format. Make sure that the data in good shape before trying to push it into the server. In this example I use previous knowledge of the structure of the tabs in the excel file to recursively upload each tab and match the name of the table with the tab name.
If you are using csv files just change the commands to pd.read_csv() in this link you can find the documentation.
Before doing this I already checked that the data is properly organized, crate new cells to explore the data beforehand if needed
excel_file = 'substations_table.xlsx'
tab_name = 'sheet1'
schema_for_upload = 'geographic_data'
pd_data.to_sql(name, engine_db, schema=schema_for_upload, if_exists='replace',chunksize=100)
Step3: Once the data is updated, it is possible to run the SQL commands to properly create geom columns in the tables, this can be done as follows. The ojective is to run an SQL querie like this
Step4: The function created the geom column, the next step is to define a function to create the Primary-Key in the db. Remember that the index from the data frame is included as an index in the db, sometimes an index is not really neded and might need to be dropped.
Step5: The reason why we use postgis is to improve geospatial queries and provide a better data structure for geospatial operations. Many of the ST_ functions have improved performance when a geospatial index is created. The process implemented here comes from this workshop. This re-creates the process using python functions so that it can be easily replicated for many tables.
The query to create a geospatial index is as follows | Python Code:
import pandas as pd
from sqlalchemy import create_engine
Explanation: This notebook is intended to show how to use pandas, and sql alchemy to upload data into DB2-switch and create geospatial coordinate and indexes.
Install using pip or any other package manager pandas, sqlalchemy and pg8000. The later one is the driver to connect to the db.
End of explanation
def connection(user,passwd,dbname, echo_i=False):
str1 = ('postgresql+pg8000://' + user +':' + passw + '@switch-db2.erg.berkeley.edu:5432/'
+ dbname + '?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory')
engine = create_engine(str1,echo=echo_i,isolation_level='AUTOCOMMIT')
return engine
user = 'jdlara'
passw = 'Amadeus-2010'
dbname = 'apl_cec'
engine_db= connection(user,passw,dbname)
Explanation: After importing the required packages, first create the engine to connect to the DB. The approach I generally use is to create a string based on the username and password. The code is a function, you just need to fill in with the username, password and the dbname.
It allows you to create different engines to connect to serveral dbs.
End of explanation
#excel_file = 'substations_table.xlsx'
#tab_name = 'sheet1'
csv_name = ['LEMMA_ADS_AllSpp_2016_Turbo_01252016.csv']
schema_for_upload = 'lemma2016'
for name in csv_name:
pd_data = pd.read_csv(name, encoding='UTF-8')
pd_data.to_sql(name, engine_db, schema=schema_for_upload, if_exists='replace',chunksize=1000)
Explanation: Afterwards, use pandas to import the data from Excel files or any other text file format. Make sure that the data in good shape before trying to push it into the server. In this example I use previous knowledge of the structure of the tabs in the excel file to recursively upload each tab and match the name of the table with the tab name.
If you are using csv files just change the commands to pd.read_csv() in this link you can find the documentation.
Before doing this I already checked that the data is properly organized, crate new cells to explore the data beforehand if needed
excel_file = 'substations_table.xlsx'
tab_name = 'sheet1'
schema_for_upload = 'geographic_data'
pd_data.to_sql(name, engine_db, schema=schema_for_upload, if_exists='replace',chunksize=100)
End of explanation
def create_geom(table,schema,engine, projection=5070):
k = engine.connect()
query = ('set search_path = "'+ schema +'"'+ ', public;')
print query
k.execute(query)
query = ('alter table ' + table + ' drop column if exists geom;')
print query
k.execute(query)
query = 'SELECT AddGeometryColumn (\''+ schema + '\',\''+ table + '\',\'geom\''+',5070,\'POINT\',2);'
print query
k.execute(query)
query = ('UPDATE ' + table + ' set geom = ST_SetSRID(st_makepoint(' + table + '.x, ' +
table + '.y),' + str(projection) + ')::geometry;')
k.execute(query)
print query
k = engine.dispose()
return 'geom column added with SRID ' + str(projection)
table = 'results_approach1'
schema = 'lemma2016'
create_geom(table,schema,engine_db)
Explanation: Once the data is updated, it is possible to run the SQL commands to properly create geom columns in the tables, this can be done as follows. The ojective is to run an SQL querie like this:
PGSQL
set search_path = SCHEMA, public;
alter table vTABLE drop column if exists geom;
SELECT AddGeometryColumn ('SCHEMA','vTABLE','geom',4326,'POINT',2);
UPDATE TABLE set geom = ST_SetSRID(st_makepoint(vTABLE.lon, vTABLE.lat), 4326)::geometry;
where SCHEMA and vTABLE are the variable portions. Also note, that this query assumes that your columns with latitude and longitude are named lat and lon respectively; moreover, it also assumes that the coordinates are in the 4326 projection.
The following function runs the query for you, considering again that the data is clean and nice.
End of explanation
def create_pk(table,schema,column,engine):
k = engine.connect()
query = ('set search_path = "'+ schema +'"'+ ', public;')
print query
k.execute(query)
query = ('alter table ' + table + ' ADD CONSTRAINT '+ table +'_pk PRIMARY KEY (' + column + ')')
print query
k.execute(query)
k = engine.dispose()
return 'Primary key created with column' + column
col = ''
create_pk(table,schema,col,engine_db)
Explanation: The function created the geom column, the next step is to define a function to create the Primary-Key in the db. Remember that the index from the data frame is included as an index in the db, sometimes an index is not really neded and might need to be dropped.
End of explanation
def create_gidx(table,schema,engine,column='geom'):
k = engine.connect()
query = ('set search_path = "'+ schema +'"'+ ', public;')
k.execute(query)
print query
query = ('CREATE INDEX ' + table + '_gix ON ' + table + ' USING GIST (' + column + ');')
k.execute(query)
print query
query = ('VACUUM ' + table + ';')
k.execute(query)
print query
query = ('CLUSTER ' + table + ' USING ' + table + '_gix;')
k.execute(query)
print query
query = ('ANALYZE ' + table + ';')
k.execute(query)
print query
k = engine.dispose()
return k
create_gidx(table,schema,engine_db)
Explanation: The reason why we use postgis is to improve geospatial queries and provide a better data structure for geospatial operations. Many of the ST_ functions have improved performance when a geospatial index is created. The process implemented here comes from this workshop. This re-creates the process using python functions so that it can be easily replicated for many tables.
The query to create a geospatial index is as follows:
SQL
set search_path = SCHEMA, public;
CREATE INDEX vTABLE_gix ON vTABLE USING GIST (geom);
This assumes that the column name with the geometry is named geom. If the process follows from the previous code, it will work ok.
The following step is to run a VACUUM, creating an index is not enough to allow PostgreSQL to use it effectively. VACUUMing must be performed when ever a new index is created or after a large number of UPDATEs, INSERTs or DELETEs are issued against a table.
SQL
VACUUM ANALYZE vTABLE;
The final step corresponds to CLUSTERING, this process re-orders the table according to the geospatial index we created. This ensures that records with similar attributes have a high likelihood of being found in the same page, reducing the number of pages that must be read into memory for some types of queries. When a query to find nearest neighbors or within a certain are is needed, geometries that are near each other in space are near each other on disk. The query to perform this clustering is as follows:
CLUSTER vTABLE USING vTABLE_gix;
ANALYZE vTABLE;
End of explanation |
7,184 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Funciones y expresiones booleanas
El paquete sympy tiene un módulo de lógica. Con él podemos hacer algunas simplificaciones
Step1: Definimos los símbolos que vamos a utilizar. Supremo e ínfimo se introducen usando el o e y lógicos. Aunque también podemos utilizar And y Or como funciones. Para la negación usamos Not o ~. Tenemos además Xor, Nand, Implies (que se puede usar de forma prefija con >>) y Equivalent.
Step2: La formas normales conjuntiva y disyuntivas las podemos calcular como sigue
Step3: También podemos simplificar expresiones
Step4: O dar valores de verdad a las variables
Step5: Esto nos permite crear nuestras propias tablas de verdad
Step6: Una forma de comprobar que dos expresiones son equivalentes es la siguiente
Step7: Veamos ahora cómo podemos encontrar la versión simplificada de una función booleana que venga dada por minterms. Aparentemente SOPform hace algunas simplificaciones usando el algoritmo de Quine-McCluskey
Step8: Al utilizar sympy podemos escribir una forma más amigable de una expresión booleana
Step9: Los comandos simplify or simplify_logic pueden simplificar aún más
Step10: De hecho, p se puede escribir de forma más compacta. Para ello vamos a utilizar el algoritmo espresso, que viene implementado en el paquete pyeda
Step11: Este paquete no admite las variables definidas con symbols, así que las vamos a declarar con expvar para definir variables booleanas
Step12: Otro problema es que la salida de SOPform no es una expresión de pyeda. Lo podemos arreglar pasándola a cadena de caracteres y releyéndola en pyeda
Step13: Ahora sí que podemos utilizar el simplificador espresso implementado en pyeda
Step14: Y podemos comprobar que es más corta que la salida que daba sympy. Para escribirla de forma más "legible" volvemos a utilizar pprint de sympy, pero para ello necesitamos pasar nuestra expresión en pyeda a sympy
Step15: Podríamos haber definido directamente p utilizando tablas de verdad
Step16: La tabla de verdad de una expresión se obtiene como sigue
Step17: Veamos un ejemplo análogo pero con más variables, y de paso mostramos como definir vectores de variables
Step18: Un ejemplo de simplificación
Veamos que el o exclusivo con la definición $x\oplus y=(x\wedge \neg y)\vee (\neg x\wedge y)$ es asociativo
Step19: Veamos que efectivamente $x\oplus(y\oplus z)=(x\oplus y)\oplus z$
Step20: Podemos hacer una función que pase de minterm a expresions en pyeda
Step21: Y ahora la podemos simplificar | Python Code:
from sympy import *
Explanation: Funciones y expresiones booleanas
El paquete sympy tiene un módulo de lógica. Con él podemos hacer algunas simplificaciones
End of explanation
x, y, z = symbols("x,y,z")
p = (x | y) & ~ z
pprint(p)
Explanation: Definimos los símbolos que vamos a utilizar. Supremo e ínfimo se introducen usando el o e y lógicos. Aunque también podemos utilizar And y Or como funciones. Para la negación usamos Not o ~. Tenemos además Xor, Nand, Implies (que se puede usar de forma prefija con >>) y Equivalent.
End of explanation
to_cnf(p)
to_dnf(p)
Explanation: La formas normales conjuntiva y disyuntivas las podemos calcular como sigue
End of explanation
simplify(x | ~x)
Explanation: También podemos simplificar expresiones
End of explanation
p.xreplace({x:True})
Explanation: O dar valores de verdad a las variables
End of explanation
p.free_symbols
p = Or(x,And(x,y))
from IPython.display import HTML,display
colores=['LightCoral','Aquamarine']
tabla="<table style='width:25%'><tr><td bgcolor='lightblue'>$"+latex(x)
tabla=tabla+"$ </td><td bgcolor='lightblue'>$"+latex( y)+"$</td><td bgcolor='lightblue'>$"+latex(p)+"$</td></tr>"
for t in cartes({True,False}, repeat=2):
v =dict(zip((x,y),t))
tabla=tabla+"<tr> <td bgcolor="+colores[v[x]]+">"+str(v[x])
tabla=tabla+"</td><td bgcolor="+colores[v[y]]+">"+str(v[y])
tabla=tabla+"</td><td bgcolor="+colores[v[x]]+">"+str(p.xreplace(v))+"</td></tr>"
tabla=tabla+"</table>"
display(HTML(tabla))
Explanation: Esto nos permite crear nuestras propias tablas de verdad
End of explanation
Equivalent(simplify(p), simplify(x))
Explanation: Una forma de comprobar que dos expresiones son equivalentes es la siguiente
End of explanation
p=SOPform([x,y,z],[[0,0,1],[0,1,0],[0,1,1],[1,1,0],[1,0,0],[1,0,1]])
p
Explanation: Veamos ahora cómo podemos encontrar la versión simplificada de una función booleana que venga dada por minterms. Aparentemente SOPform hace algunas simplificaciones usando el algoritmo de Quine-McCluskey
End of explanation
pprint(p)
Explanation: Al utilizar sympy podemos escribir una forma más amigable de una expresión booleana
End of explanation
pprint(simplify(p))
pprint(simplify_logic(p))
Explanation: Los comandos simplify or simplify_logic pueden simplificar aún más
End of explanation
from pyeda.inter import *
Explanation: De hecho, p se puede escribir de forma más compacta. Para ello vamos a utilizar el algoritmo espresso, que viene implementado en el paquete pyeda
End of explanation
x,y,z = map(exprvar,"xyz")
p=SOPform([x,y,z],[[0,0,1],[0,1,0],[0,1,1],[1,1,0],[1,0,0],[1,0,1]])
Explanation: Este paquete no admite las variables definidas con symbols, así que las vamos a declarar con expvar para definir variables booleanas
End of explanation
p=expr(str(p))
Explanation: Otro problema es que la salida de SOPform no es una expresión de pyeda. Lo podemos arreglar pasándola a cadena de caracteres y releyéndola en pyeda
End of explanation
pm, =espresso_exprs(p)
pm
Explanation: Ahora sí que podemos utilizar el simplificador espresso implementado en pyeda
End of explanation
pprint(sympify(pm))
Explanation: Y podemos comprobar que es más corta que la salida que daba sympy. Para escribirla de forma más "legible" volvemos a utilizar pprint de sympy, pero para ello necesitamos pasar nuestra expresión en pyeda a sympy
End of explanation
p=truthtable([x,y,z], "01111110")
pm, = espresso_tts(p)
pprint(sympify(pm))
Explanation: Podríamos haber definido directamente p utilizando tablas de verdad
End of explanation
expr2truthtable(pm)
Explanation: La tabla de verdad de una expresión se obtiene como sigue
End of explanation
X = ttvars('x', 4)
f = truthtable(X, "0111111111111110")
fm, = espresso_tts(f)
fm
expr2truthtable(fm)
Explanation: Veamos un ejemplo análogo pero con más variables, y de paso mostramos como definir vectores de variables
End of explanation
x, y, z = map(exprvar,"xyz")
f = lambda x,y : Or(And(x,~ y),And(~x,y))
f(x,y)
expr2truthtable(f(x,y))
f(x,y).equivalent(Xor(x,y))
Explanation: Un ejemplo de simplificación
Veamos que el o exclusivo con la definición $x\oplus y=(x\wedge \neg y)\vee (\neg x\wedge y)$ es asociativo
End of explanation
pprint(simplify_logic(f(x,f(y,z))))
pprint(simplify_logic(f(f(x,y),z)))
a= f(f(x,y),z)
b= f(x,f(y,z))
a.equivalent(b)
Explanation: Veamos que efectivamente $x\oplus(y\oplus z)=(x\oplus y)\oplus z$
End of explanation
def minterm2expr(l,v):
n = len(l)
vv=v.copy()
for i in range(n):
if not(l[i]):
vv[i]=Not(vv[i])
return And(*vv)
x,y,z,t = map(exprvar,"xyzt")
minterm2expr([0,1,0,1],[x,y,z,t])
def minterms2expr(l,v):
return Or(*[minterm2expr(a,v) for a in l])
hh2=minterms2expr([[0,0,0,0],[0,0,1,0],[0,1,0,0],[0,1,1,0],[0,1,1,1],[1,0,0,0],[1,0,1,0],[1,1,0,0]],[x,y,z,t])
hh2
pprint(sympify(hh2))
Explanation: Podemos hacer una función que pase de minterm a expresions en pyeda
End of explanation
sh2, = espresso_exprs(hh2)
sh2
Explanation: Y ahora la podemos simplificar
End of explanation |
7,185 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Lea
Step1: Summary
Random variables are abstract objects. Transparent method for drawing random samples variable.random(times). Standard statistical metrics of the probability distribution is also part of the object.
Coolness - Part 1
Step2: Summary
Random variables are abstract objects. Methods are available for operating on them algebraically. The probability
distributions, methods for drawing random samples, statistical metrics, are transparently propagated.
Coolness - Part 2
"You just threw two dice. Can you guess the result ?"
"Here's a tip
Step3: Summary
Conditioning, which is the first step towards inference, is done automatically. A wide variety of conditions can be used. P(A | B) translates to a.given(b).
Inference
An entomologist spots what might be a rare subspecies of beetle, due to the pattern on its back. In the rare subspecies, 98% have the pattern, or P(pattern|species = rare) = 0.98. In the common subspecies, 5% have the pattern, or P(pattern | species = common) = 0.05. The rare subspecies accounts for only 0.1% of the population. How likely is the beetle having the pattern to be rare, or what is P(species=rare|pattern) ? | Python Code:
from lea import *
# mandatory die example - initilize a die object
die = Lea.fromVals(1, 2, 3, 4, 5, 6)
# throw the die a few times
die.random(20)
# mandatory coin toss example - states can be strings!
coin = Lea.fromVals('Head', 'Tail')
# toss the coin a few times
coin.random(10)
# how about a Boolean variable - only True or False ?
rain = Lea.boolProb(5, 100)
# how often does it rain in Chennai ?
rain.random(10)
# How about standard statistics ?
die.mean, die.mode, die.var, die.entropy
Explanation: Introduction to Lea
End of explanation
# Lets create two dies
die1 = die.clone()
die2 = die.clone()
# Two throws of the die
dice = die1 + die2
dice
dice.random(10)
dice.mean
dice.mode
print dice.histo()
Explanation: Summary
Random variables are abstract objects. Transparent method for drawing random samples variable.random(times). Standard statistical metrics of the probability distribution is also part of the object.
Coolness - Part 1
End of explanation
## We can create a new distribution, conditioned on our state of knowledge : P(sum | sum <= 6)
conditionalDice = dice.given(dice<=6)
## What is our best guess for the result of the throw ?
conditionalDice.mode
## Conditioning can be done in many ways : suppose we know that the first die came up 3.
dice.given(die1 == 3)
## Conditioning can be done in still more ways : suppose we know that **either** of the two dies came up 3
dice.given((die1 == 3) | (die2 == 3))
Explanation: Summary
Random variables are abstract objects. Methods are available for operating on them algebraically. The probability
distributions, methods for drawing random samples, statistical metrics, are transparently propagated.
Coolness - Part 2
"You just threw two dice. Can you guess the result ?"
"Here's a tip : the sum is less than 6"
End of explanation
# Species is a random variable with states "common" and "rare", with probabilities determined by the population. Since
# are only two states, species states are, equivalently, "rare" and "not rare". Species can be a Boolean!
rare = Lea.boolProb(1,1000)
# Similarly, pattern is either "present" or "not present". It too is a Boolean, but, its probability distribution
# is conditioned on "rare" or "not rare"
patternIfrare = Lea.boolProb(98, 100)
patternIfNotrare = Lea.boolProb(5, 100)
# Now, lets build the conditional probability table for P(pattern | species)
pattern = Lea.buildCPT((rare , patternIfrare), ( ~rare , patternIfNotrare))
# Sanity check : do we get what we put in ?
pattern.given(rare)
# Finally, our moment of truth : Bayesian inference - what is P(rare | pattern )?
rare.given(pattern)
# And, now some show off : what is the probability of being rare and having a pattern ?
rare & pattern
# All possible outcomes
Lea.cprod(rare,pattern)
Explanation: Summary
Conditioning, which is the first step towards inference, is done automatically. A wide variety of conditions can be used. P(A | B) translates to a.given(b).
Inference
An entomologist spots what might be a rare subspecies of beetle, due to the pattern on its back. In the rare subspecies, 98% have the pattern, or P(pattern|species = rare) = 0.98. In the common subspecies, 5% have the pattern, or P(pattern | species = common) = 0.05. The rare subspecies accounts for only 0.1% of the population. How likely is the beetle having the pattern to be rare, or what is P(species=rare|pattern) ?
End of explanation |
7,186 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-lr', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: MPI-M
Source ID: MPI-ESM-1-2-LR
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
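Every property cell in this notebook ships with the DOC.set_value call left as a TODO. Purely for orientation, a completed cell just uncomments that call with an appropriate value; the values below are illustrative placeholders, not a statement about what MPI-ESM-1-2-LR actually uses:
# Illustration only -- placeholder answers, NOT the documented MPI-ESM-1-2-LR configuration.
# Each line belongs after the matching DOC.set_id of its own cell.
DOC.set_value("NPZD")             # ENUM property: one of the listed "Valid Choices", verbatim
DOC.set_value("ExampleBGC 1.0")   # STRING property (e.g. 1.2 Model Name): free text
DOC.set_value(True)               # BOOLEAN property (e.g. 6.1 CO2 Exchange Present): unquoted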
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Decribe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
7,187 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am trying to find duplicates rows in a pandas dataframe. | Problem:
import pandas as pd
df=pd.DataFrame(data=[[1,2],[3,4],[1,2],[1,4],[1,2]],columns=['col1','col2'])
def g(df):
df['index_original'] = df.groupby(['col1', 'col2']).col1.transform('idxmin')
return df[df.duplicated(subset=['col1', 'col2'], keep='first')]
result = g(df.copy()) |
7,188 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wilson-Devinney Style Meshing
NOTE
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Changing Meshing Options
Next we need to change the mesh_method for all stars from 'marching' to 'wd' (and pick a compatible eclipse_method)
Step3: Adding Datasets
Next let's add a mesh dataset so that we can plot our Wilson-Devinney style meshes
Step4: Running Compute
Step5: Plotting
Step6: Now let's zoom in so we can see the layout of the triangles. Note that Wilson-Devinney uses trapezoids, but since PHOEBE uses triangles, we take each of the trapezoids and split it into two triangles.
Step7: And now looking down from above. Here you can see the gaps between the surface elements (and you can also see some of the subdivision that's taking place along the limb).
Step8: And see which elements are visible at the current time. This defaults to use the 'RdYlGn' colormap which will make visible elements green, partially hidden elements yellow, and hidden elements red. Note that the observer is in the positive w-direction. | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
Explanation: Wilson-Devinney Style Meshing
NOTE: Wilson-Devinney Style meshing requires developer mode in PHOEBE and is meant to be used for testing, not used for science.
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
phoebe.devel_on() # CURRENTLY REQUIRED FOR WD-STYLE MESHING (WHICH IS EXPERIMENTAL)
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.set_value_all('mesh_method', 'wd')
b.set_value_all('eclipse_method', 'graham')
Explanation: Changing Meshing Options
Next we need to change the mesh_method for all stars from 'marching' to 'wd' (and pick a compatible eclipse_method)
End of explanation
b.add_dataset('mesh', times=np.linspace(0,10,6), dataset='mesh01', columns=['visibilities'])
Explanation: Adding Datasets
Next let's add a mesh dataset so that we can plot our Wilson-Devinney style meshes
End of explanation
b.run_compute(irrad_method='none')
Explanation: Running Compute
End of explanation
afig, mplfig = b['mesh01@model'].plot(time=0.5, x='us', y='vs',
show=True)
Explanation: Plotting
End of explanation
afig, mplfig = b['primary@mesh01@model'].plot(time=0.0, x='us', y='vs',
ec='blue', fc='gray',
xlim=(-0.2,0.2), ylim=(-0.2,0.2),
show=True)
Explanation: Now let's zoom in so we can see the layout of the triangles. Note that Wilson-Devinney uses trapezoids, but since PHOEBE uses triangles, we take each of the trapezoids and split it into two triangles.
End of explanation
afig, mplfig = b['primary@mesh01@model'].plot(time=0.0, x='us', y='ws',
ec='blue', fc='gray',
xlim=(-0.1,0.1), ylim=(-2.75,-2.55),
show=True)
Explanation: And now looking down from above. Here you can see the gaps between the surface elements (and you can also see some of the subdivision that's taking place along the limb).
End of explanation
afig, mplfig = b['secondary@mesh01@model'].plot(time=0.0, x='us', y='ws',
ec='None', fc='visibilities',
show=True)
Explanation: And see which elements are visible at the current time. This defaults to use the 'RdYlGn' colormap which will make visible elements green, partially hidden elements yellow, and hidden elements red. Note that the observer is in the positive w-direction.
End of explanation |
7,189 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We're going to simulate data discretely sampled in time. We need to set a sample rate in Hz. The highest resolvable frequency, the Nyquist frequency, is half the sample rate. We also need to decide how long the simulated data record should last.
Note that we'll assume everything is real in the time domain, so we use rFFT and rIFFT. That just means that we're only keeping positive frequencies in the frequency domain.
Step1: Generate fake Gaussian noise
We'll start by making some fake noise to practice on. The procedure for making colored Gaussian noise is
Step3: We'll make an analytical PSD that has about the same shape as H1's noise. We put in a fake highpass filter around 8 Hz so we don't have infinite noise at low frequency, and to match what the calibration code does (it only reconstructs accurately above 10 Hz).
Step5: Make a whitening filter
We need to make a whitening filter for the data. This is an FIR filter that, when applied to our colored noise, produces a decorrelated or 'white' time series. There are many ways to calculate this, and actually many different filters that would achieve this. We'll choose to make ours zero-phase -- the impulse response is acausal and symmetric around t=0. We'll use FFT-based methods for the design because they are simple and efficient.
First, we make a good estimate of the ASD, with enough frequency resolution to capture the important features. Then we do 1/asd to get a rough whitening filter. We zero out any frequencies we want to throw away (in this case, below 10 Hz because strain is not accurate down there). We transform to the time domain to get an impulse response, and roll it forwards so the response is centered at half the length of the filter.
Then we truncate it with a window function to make sure that it goes to zero smoothly. This is our time-domain FIR filter. The t=0 of this acausal filter is still in the middle, so make sure to compensate for the filter delay if using it this way.
The inverting and Fourier transforming it part may not be quite the best thing to do. Even better might be to measure the ASD at the desired resolution, then interpolate to higher resolution. That gives more space when computing the impulse response to avoid wraparound issues. The details of the interpolation shouldn't matter since you're truncating it anyway.
Step6: To apply the filter efficiently, we can pad it with zeros to the length of the data then transform it to the frequency domain. We could either compensate for the time delay explicitly, or just take the absolute value of the filter which will make it zero-phase. We multiply this by the FFT of the data, and IFFT them back.
This is completely equivalent to just applying the FIR filter by convolution in the time domain.
Step7: Here, I've left the filter corruption on. We need to remove half the total length of the filter on both sides because of the filter wrapround.
Step8: Now we can check the spectrum of the whitened data. It should be flat, at least where the assumptions in deriving the filter are valid. | Python Code:
sample_rate = 4096
nyquist = sample_rate/2
time_length_seconds = 512
Explanation: We're going to simulate data discretely sampled in time. We need to set a sample rate in Hz. The highest resolvable frequency, the Nyquist frequency, is half the sample rate. We also need to decide how long the simulated data record should last.
Note that we'll assume everything is real in the time domain, so we use rFFT and rIFFT. That just means that we're only keeping positive frequencies in the frequency domain.
End of explanation
# Make the data twice as long so we can cut off the wrap-around
num_noise_samples=2*time_length_seconds*sample_rate
white_noise_fd=rfft(np.random.normal(size=num_noise_samples))
sim_freqs=np.arange(len(white_noise_fd))/(2.*time_length_seconds)
Explanation: Generate fake Gaussian noise
We'll start by making some fake noise to practice on. The procedure for making colored Gaussian noise is:
1. Generate white Gaussian noise by many independent draws from a normal distribution.
1. Transform this to the frequency domain.
1. Choose the spectrum you want to simulate, sampled at the same frequencies as the frequency-domain white noise.
1. Multiply the FD white noise by the ASD of the desired spectrum.
1. Transform this back to the time domain.
1. Cut off the start and end of this data, to eliminate the filter wraparound. We cut off a quarter of the noise stream at either end, which should be more than enough with such a smooth PSD.
End of explanation
psd=(sim_freqs/40.)**-10+(sim_freqs/70.)**-4+0.5+1.397e-6*(sim_freqs)**2
# Put in a fake highpass around 8 Hz, so we don't have too much low frequency
to_bin=2*time_length_seconds
f_pass, f_min = 8., 10.
idx1=int(to_bin*f_pass)
idx2=int(to_bin*f_min)
psd[:idx1]=psd[idx2]*(sim_freqs[:idx1]/f_pass)**2
psd[idx1:idx2]=psd[idx2]
# Generate the noise
colored_noise_td = np.sqrt(float(nyquist))*irfft(np.sqrt(psd)*white_noise_fd)
colored_noise_td = colored_noise_td[len(colored_noise_td)/4:-len(colored_noise_td)/4]
def welch_asd(data, fft_len_sec, overlap=0.5, window='hanning'):
    """Measure the ASD using the Welch method of averaging
    estimates on shorter overlapping segments of the data."""
assert 0. <= overlap < 1.
ff, tmp = sig.welch(data, fs=sample_rate, window=window,
nperseg=fft_len_sec*sample_rate,
noverlap=overlap*fft_len_sec*sample_rate)
return ff, np.sqrt(tmp)
ff, measured_asd = welch_asd(colored_noise_td, window='hanning', fft_len_sec=4)
plt.loglog(ff, measured_asd)
plt.loglog(sim_freqs, np.sqrt(psd), c='k', ls=':')
plt.xlim(4,nyquist)
plt.ylim(0.5,2e3)
Explanation: We'll make an analytical PSD that has about the same shape as H1's noise. We put in a fake highpass filter around 8 Hz so we don't have infinite noise at low frequency, and to match what the calibration code does (it only reconstructs accurately above 10 Hz).
End of explanation
def make_invasd(data, invasd_len=8, f_min=10.):
    """invasd_len is the length in seconds of the desired impulse response (the full width)."""
ff, asd = welch_asd(data, window='hanning', fft_len_sec=invasd_len)
invasd = 1./asd
invasd[:int(f_min*invasd_len)]=0.
invasd_td = irfft(invasd)
invasd_td = sig.hann(len(invasd_td))*np.roll(invasd_td, len(invasd_td)/2)
return invasd_td
invasd_len=8
invasd_trunc = make_invasd(colored_noise_td, invasd_len=invasd_len, f_min=10.)
plt.plot(invasd_trunc)
Explanation: Make a whitening filter
We need to make a whitening filter for the data. This is an FIR filter that, when applied to our colored noise, produces a decorrelated or 'white' time series. There are many ways to calculate this, and actually many different filters that would achieve this. We'll choose to make ours zero-phase -- the impulse response is acausal and symmetric around t=0. We'll use FFT-based methods for the design because they are simple and efficient.
First, we make a good estimate of the ASD, with enough frequency resolution to capture the important features. Then we do 1/asd to get a rough whitening filter. We zero out any frequencies we want to throw away (in this case, below 10 Hz because strain is not accurate down there). We transform to the time domain to get an impulse response, and roll it forwards so the response is centered at half the length of the filter.
Then we truncate it with a window function to make sure that it goes to zero smoothly. This is our time-domain FIR filter. The t=0 of this acausal filter is still in the middle, so make sure to compensate for the filter delay if using it this way.
The inverting and Fourier transforming it part may not be quite the best thing to do. Even better might be to measure the ASD at the desired resolution, then interpolate to higher resolution. That gives more space when computing the impulse response to avoid wraparound issues. The details of the interpolation shouldn't matter since you're truncating it anyway.
End of explanation
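The "measure the ASD coarsely, then interpolate" variant mentioned above could look roughly like this. A sketch only, assuming simple linear interpolation of the ASD is adequate (the interpolation details shouldn't matter since the impulse response gets truncated anyway); make_invasd_interp and pad_factor are names introduced here, not part of the original code:
# Sketch of the alternative suggested above: estimate the ASD at the desired (coarse)
# resolution, interpolate onto a finer frequency grid, then invert, roll and truncate
# exactly as in make_invasd. pad_factor controls how much extra room the impulse
# response gets before wraparound becomes an issue.
def make_invasd_interp(data, invasd_len=8, pad_factor=4, f_min=10.):
    ff, asd = welch_asd(data, window='hanning', fft_len_sec=invasd_len)
    ff_fine = np.arange(pad_factor*(len(ff)-1)+1)/float(pad_factor*invasd_len)
    invasd = 1./np.interp(ff_fine, ff, asd)
    invasd[ff_fine < f_min] = 0.
    invasd_td = np.roll(irfft(invasd), pad_factor*invasd_len*sample_rate/2)
    # keep only the central invasd_len seconds and window it, as before
    half = invasd_len*sample_rate/2
    mid = len(invasd_td)/2
    return sig.hann(2*half)*invasd_td[mid-half:mid+half]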
tmp = invasd_trunc.copy()
tmp.resize(len(colored_noise_td))
fd_filter=np.abs(rfft(tmp))
whitened_strain_corrupt = irfft(fd_filter*rfft(colored_noise_td))
plt.plot(whitened_strain_corrupt)
Explanation: To apply the filter efficiently, we can pad it with zeros to the length of the data then transform it to the frequency domain. We could either compensate for the time delay explicitly, or just take the absolute value of the filter which will make it zero-phase. We multiply this by the FFT of the data, and IFFT them back.
This is completely equivalent to just applying the FIR filter by convolution in the time domain.
End of explanation
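The "equivalent to convolution in the time domain" remark above can be tried directly. A sketch, assuming scipy.signal's fftconvolve; with this centred, acausal filter, mode='same' keeps the output aligned with the input up to a sample, so the cleanest way to compare the two routes is via their whitened spectra rather than sample-by-sample:
# Sketch: the same whitening done as an explicit time-domain convolution with the FIR taps.
whitened_td_conv = sig.fftconvolve(colored_noise_td, invasd_trunc, mode='same')
# Trim the corrupted edges and look at the resulting spectrum -- it should match the
# flat whitened spectrum obtained from the frequency-domain route below.
trim = invasd_len*sample_rate
ff_c, asd_c = welch_asd(whitened_td_conv[trim:-trim], window='hanning', fft_len_sec=4)
plt.loglog(ff_c, asd_c)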
whitened_strain = whitened_strain_corrupt[invasd_len*sample_rate/2:-invasd_len*sample_rate/2]
Explanation: Here, I've left the filter corruption on. We need to remove half the total length of the filter on both sides because of the filter wraparound.
End of explanation
ff, asdw = welch_asd(whitened_strain, window='hanning', fft_len_sec=4)
plt.loglog(ff, asdw)
plt.xlim(1,1000)
plt.ylim(0.1,10)
Explanation: Now we can check the spectrum of the whitened data. It should be flat, at least where the assumptions in deriving the filter are valid.
End of explanation |
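Beyond eyeballing the plot, a quick numeric check (not in the original) is to ask how far the whitened ASD strays from 1 inside the band where the filter is trusted; the 20-1000 Hz band is an arbitrary choice well away from the 10 Hz cutoff and Nyquist:
# Quick flatness check: in the trusted band the whitened ASD should sit near 1,
# up to the statistical scatter of the Welch estimate and the filter truncation.
band = (ff > 20.) & (ff < 1000.)
print "mean whitened ASD in band: {0:0.3f}".format(np.mean(asdw[band]))
print "max |ASD - 1| in band:     {0:0.3f}".format(np.max(np.abs(asdw[band] - 1.)))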
7,190 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Statistics of Stable
Step2: Property Occurence
Step3: Success Ratio of Properties (Normalized)
Step4: Success Ratio by unit
Step5: Failure Ratio does not depend on #props
Step6: Coverage per Unit
Show that HAL/BSP are just spec and app is full SPARK
TODO
Step7: Flows by unit
Step9: by task? not yet possible
specific check by package
Step13: Analysis Time per Module and VC Type (only SPARK 17+)
Because the entry "check_tree" is only available in GNATprove 2017 and later.
Step15: How often do we fail because of timeout/stepout? | Python Code:
import json, re, pprint, os, datetime
import pandas as pd
import numpy as np
import matplotlib
pltsettings = {
"figure.figsize" : (5.0, 4.0),
"pgf.texsystem" : "pdflatex",
"font.family": "sans",
"font.serif": [], # use latex default serif font
#"font.sans-serif": ["DejaVu Sans"], # use a specific sans-serif font
}
matplotlib.rcParams.update(pltsettings)
import matplotlib.pyplot as plt
#import seaborn as sns # makes exports ugly
print plt.style.available
plt.style.use ('seaborn-whitegrid')
Pantone540=(0,.2,0.34901)
Pantone301=(0,0.32156,0.57647)
TUMred=(0.898,0.2039,0.0941)
TUMgreen=(0.5686,0.6745,0.4196)
TUMgray=(0.61176,0.61568,0.62352)
TUMlightgray=(0.85098,0.85490,0.85882)
# get path to files
print datetime.datetime.now()
unitstats_file = os.environ.get('unitstats', '/tmp/unitstats.log')
unitstats_file="/home/becker/async/StratoX.git.rcs/obj/unitstats.log"
if not os.path.exists(unitstats_file):
print("ERROR: File {} does not exist.".format(unitstats_file))
exit(-1)
FOLDER=os.path.split(unitstats_file)[0]
print "file=" + str(unitstats_file)
def read_json(fname):
    """Reads JSON and returns (totals, units), a tuple of dicts."""
units=None
totals=None
with open(fname,'r') as f:
endofunits = False
for line in f:
match = re.search(r'^TOTALS', line)
if match:
endofunits = True
if not endofunits:
try:
if not units:
units = json.loads(line)
else:
print "error: units appearing multiple times"
except:
pass
else:
try:
if not totals:
totals = json.loads(line)
else:
print "error: totals appearing multiple times"
except:
pass
#print units
# unpack units (list of dicts) => dict
tmp=units
units={}
for u in tmp:
name=u.keys()[0]
stats=u[name]
units[name]=stats
return (totals, units)
#######
(totals, units) = read_json(unitstats_file)
print "TOTALS: "
pprint.pprint(totals)
Explanation: Statistics of Stable
End of explanation
#### plot success per VC type; rows are VCs, columns are "success", "fail"
def pretty_vcname(VCname):
parts = VCname.split("_");
if parts[0] == "VC":
parts = parts[1:]
return " ".join(parts).title()
counters={pretty_vcname(k) : v['cnt'] for k,v in totals["rules"].iteritems()}
df_cnt = pd.DataFrame(counters,index=['count']).T
#print df2.head()
# sort by occurence
df_cnt.sort_values(by='count', ascending=False, inplace=True)
ax=df_cnt.plot.bar(color=[Pantone301],legend=False);
ax.set_ylabel('count')
#ax.grid()
plt.savefig(FOLDER + os.sep + 'props.pdf', bbox_inches='tight')
plt.show()
num_vc = sum([v for k,v in counters.iteritems() if k != 'Uninitialized'])
print "########################"
print "# Percentage of each VC"
print "########################"
for k,v in counters.iteritems():
if k != 'Uninitialized': print k + ": {0:0.1f}".format(100.0*v/num_vc)
Explanation: Property Occurrence
End of explanation
# plot success per VC type; rows are VCs, columns are "success", "fail"
VCs={pretty_vcname(k) : {'proven' : 100.0*v['proven'] / v['cnt'], 'unsuccessful': 100.0*(v['cnt']-v['proven'])/v['cnt'] } for k,v in totals["rules"].iteritems()}
df2 = pd.DataFrame(VCs).T
#print df2.head()
df2.sort_values(by='proven', inplace=True, ascending=False)
exclude_columns=['unsuccessful']
ax=df2.ix[:,df2.columns.difference(exclude_columns)].plot.bar(stacked=True,color=[Pantone301],legend=False,figsize=(5,2.5));
ax.set_ylim(50,100)
ax.set_ylabel('success [%]')
ax.set_title('Verification success per VC type')
plt.savefig(FOLDER + os.sep + 'vc_success.pdf', bbox_inches='tight')
plt.show()
Explanation: Success Ratio of Properties (Normalized)
End of explanation
unit_success = {k : {'success': v['success'], 'cnt' : v['props']} for k,v in units.iteritems()}
df_usucc = pd.DataFrame(unit_success).T
df_usucc.sort_values(by='success', inplace=True, ascending=False)
exclude_columns=['cnt']
ax=df_usucc.ix[df_usucc.cnt>0,df_usucc.columns.difference(exclude_columns)].plot.bar(color=[Pantone301],legend=False);
ax.set_ylabel('success')
ax.grid()
plt.show()
unit_cnt = {k : { 'proven' : v['proven'], 'unsuccessful' : v['props'] - v['proven'], 'cnt':v['props'], 'success' : 100.0 if v['props']==0 else (100.0*v['proven'])/v['props']} for k,v in units.iteritems() if v['props']>0}
df_ucnt = pd.DataFrame(unit_cnt).T
# filter those where cnt=0
print df_ucnt.head()
df_ucnt.sort_values(by=['cnt','proven'], inplace=True, ascending=False)
exclude_columns=['cnt','success']
ax=df_ucnt.ix[df_ucnt.cnt>0,df_ucnt.columns.difference(exclude_columns)].plot.bar(stacked=True,color=[TUMgreen,TUMred],figsize=(7,4));
ax.set_ylabel('cnt')
ax.set_xlabel('unit')
#ax.grid()
plt.savefig(FOLDER + os.sep + 'units_props.pdf', bbox_inches='tight')
plt.show()
Explanation: Success Ratio by unit
End of explanation
df_ucnt.plot.scatter(x='cnt', y='success');
plt.show()
Explanation: Failure Ratio does not depend on #props:
End of explanation
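The visual claim can also be put into a number. A quick sketch, assuming scipy is available, computing the rank correlation between property count and success rate per unit (a value near zero would back up "does not depend on"):
# Quantify the scatter plot above: rank correlation between #props and success per unit.
from scipy.stats import spearmanr
rho, pval = spearmanr(df_ucnt['cnt'], df_ucnt['success'])
print "Spearman rho = {0:0.2f} (p = {1:0.2f})".format(rho, pval)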
filter=['stm32','cortex','generic']#,'hil.','driver','hal','register']
unit_cov = {u : {'body': v["coverage"], 'spec': 100.0*v['spec'] / v['ents'] if v['ents'] > 0 else 0, 'ents' : v['ents'], 'skip': 100.0*v['skip']/v['ents']} for u,v in units.iteritems() if (not any(f in u for f in filter) and v['ents']>0)}
df_ucov = pd.DataFrame(unit_cov).T
df_ucov.sort_values(by=['body','spec','skip'], inplace=True, ascending=False)
exclude_columns=['ents']
ax=df_ucov.ix[:,df_ucov.columns.difference(exclude_columns)].plot.bar(stacked=True,figsize=(13,3),color=[TUMgreen, "black", TUMlightgray]);
ax.set_ylabel('coverage')
plt.savefig(FOLDER + os.sep + 'units_cov.pdf', bbox_inches='tight')
plt.show()
print units["fat_filesystem.directories"]
Explanation: Coverage per Unit
Show that HAL/BSP are just spec and app is full SPARK
TODO: Units which have SPARK-Mode off in spec -> they do not appear here, which is unfair
SPARK_Mode
Tri-state:
* On (Must be in SPARK)
* Off (forbid GNATprove to analyze this)
* Auto (implicit: take whatever is compliant to SPARK as SPARK; ignore rest)
What does "spec" mean?
Does it mean that there is a contract, or only that the spec is in a scope with SPARK mode on?
Mixed Unit
Does it count non-SPARK subs in the body?
Non-SPARK Unit
Does GNATprove count the specs even?
* yes, but only SPARK-compliant specs. E.g., it skips functions with side effects
* FAT_Filesystem.Directories: lot of functions. Ents=0
We need to count entities ourselves, somehow.
End of explanation
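One rough way to act on the TODO above ("count entities ourselves") would be to scan the Ada spec files directly, independent of what GNATprove reports. A crude sketch only -- the path and the regex are illustrative, not part of the original tooling, and a real count would need a proper parser:
# Crude sketch: count subprogram declarations straight from the .ads specs.
def count_spec_subprograms(src_dir):
    decl = re.compile(r'^\s*(procedure|function)\s+\w+', re.IGNORECASE)
    counts = {}
    for root, dirs, files in os.walk(src_dir):
        for fname in files:
            if not fname.endswith('.ads'): continue
            with open(os.path.join(root, fname)) as fh:
                counts[os.path.splitext(fname)[0]] = sum(1 for line in fh if decl.match(line))
    return counts
#print count_spec_subprograms('/path/to/StratoX/src')  # illustrative path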
filter=['stm32','cortex','generic']
unit_flow = {u : {'cnt': v['flows'], 'success': v['flows_success'] } for u,v in units.iteritems() if (not any(f in u for f in filter) and v['flows']>0)}
df_flows = pd.DataFrame(unit_flow).T
df_flows.sort_values(by=['cnt'], inplace=True, ascending=False)
exclude_columns=[]
ax=df_flows.ix[:,df_flows.columns.difference(exclude_columns)].plot.bar();
ax.set_ylabel('success')
plt.show()
Explanation: Flows by unit
End of explanation
def plot_vc (vcname):
    """Plot success for a specific VC in all units."""
res = {}
for u,v in units.iteritems():
for r, rv in v['rules'].iteritems():
if r == vcname and rv and 'cnt' in rv and rv['cnt'] > 0:
res[u] = { 'cnt' : rv['cnt'], 'proven' : rv['proven'], 'fail' : rv['cnt'] - rv['proven'] }
df_vc = pd.DataFrame(res).T
print df_vc.head()
df_vc.sort_values(by=['cnt','proven'], inplace=True, ascending=False)
exclude_columns=['cnt']
ax=df_vc.ix[:,df_vc.columns.difference(exclude_columns)].plot.bar(stacked=True,figsize=(13,7),color=[TUMred, TUMgreen]);
ax.set_ylabel('num')
ax.set_title('Results for ' + pretty_vcname(vcname))
#ax.grid()
plt.savefig(FOLDER + os.sep + 'units' + vcname + '.pdf', bbox_inches='tight')
plt.show()
print(totals["rules"].keys()[0])
for vc in totals["rules"].keys():
plot_vc(vc)
Explanation: by task? not yet possible
specific check by package
End of explanation
#print str(units['mission']['details_proofs'])
# 'check_tree' : [{'proof_attempts':{ <solver> : {'steps':..,'result':..,'time':..}}, 'transformations': {}}]
def parse_checktree(checktree):
    """Descend into check_tree and return:
    - time sum of all proof attempts
    - count how often each solver was used
    - find most steps needed
    """
ret = {'time': 0.0, 'solvers': {}, 'maxsteps': 0, 'result' : 'undefined'}
howproved_counted = ['interval','flow']
if not checktree:
return ret
def strip_result(r):
        """Remove additional information from result and return a result class in {Valid, ...}."""
# somehow step limits are reported not as "Step limit exceeded"; but often as "Failure (steps:<n>)"
# we remap them to the expected strings
match = re.search(r'\w+ \(([^\)]+)\)', r)
if match:
detail = match.group(1)
if 'steps' in detail:
return 'Step limit exceeded'
elif 'time' in detail: # never seen this
return 'Timeout'
elif 'resource limit' in detail: # this must be why3-cpulimit. Either timeout or memout.
return 'Resource limit exceeded'
else:
return detail
else:
return r
result = set()
for chk in checktree:
#print "chk=" + str(chk)
if 'transformations' in chk and len (chk['transformations']) > 0:
print "trafo; not handled: " + str(chk)
return None
if 'proof_attempts' in chk:
for s,sd in chk['proof_attempts'].iteritems():
#print "attempt=" + str(s) + ":" + str(sd)
if not s in ret['solvers']: ret['solvers'][s] = {}
ret['solvers'][s]['cnt'] = ret['solvers'][s].get('cnt', 0) + 1
ret['time'] += sd['time']
if sd['steps'] > ret['maxsteps']: ret['maxsteps'] = sd['steps']
result.add(strip_result(sd['result']))
else:
print "no proof attempts"
if 'how_proved' in chk and chk['how_proved'] in howproved_counted:
ret['solvers'][chk['how_proved']] = {'cnt': ret['solvers'].get(chk['how_proved'], {}).get('cnt', 0) + 1}
# decide overall result of all provers
#print "result=" + str(result)
if len(result) == 1:
ret["result"] = next(iter(result))
else:
# multiple different reasons: if there is a timeout/stepout, then give this as reason; otherwise mention what is there
if "Valid" in result:
ret["result"] = 'Valid'
elif any (substring in result for substring in ["Timeout", "Step limit exceeded", "Resource limit exceeded", "Out Of Memory"]):
reason = []
if "Step limit exceeded" in result: reason.append("steplimit")
if "Of Of Memory" in result: reason.append("memlimit")
if "Resource limit exceeded" in result: reason.append("resourcelimit")
if "Timeout" in result: reason.append("timeout")
ret["result"] = '|'.join(list(reason))
else:
ret["result"] = '|'.join(list(result))
#print "consolidated result:" + ret["result"]
return ret
VCs = totals["rules"].keys()
####################
# summarize per VC
####################
res = {}
have_checktree = False
for u,ud in units.iteritems():
res[u] = { 'VC': { v : {'cnt':0, 'solvers': {}, 'time_total': 0.0, 'time_total_undecided' : 0.0, 'time_samples' :[], 'time_samples_undecided' : [], 'result': {}} for v in VCs }}
t_total = 0.0
t_total_undecided = 0.0
maxsteps = 0
if "details_proofs" in ud:
#print "num proofs in " + u + "=" + str(len(ud['details_proofs']))
for p in ud['details_proofs']:
vc = p['rule']
res[u]['VC'][vc]['cnt'] += 1
ct = None
if 'check_tree' in p:
ct = parse_checktree(p['check_tree'])
if ct:
have_checktree = True
t_total += ct['time']
res[u]['VC'][vc]['time_total'] += ct['time']
res[u]['VC'][vc]['time_samples'].append (ct['time'])
if ct['maxsteps'] > maxsteps: maxsteps = ct['maxsteps']
# merge dicts counting solver invocations
for k,v in ct['solvers'].iteritems():
res[u]['VC'][vc]['solvers'][k] = res[u]['VC'][vc]['solvers'].get(k,0) + v['cnt']
# we get no result class when there is no check tree. this happens for interval
if ct['result'] == 'undefined':
if p['how_proved'] == 'interval':
ct['result'] = 'Valid' if p['severity'] == 'info' else 'interval failed'
else:
print "ERROR: unhandled value of how_proved. Not a solver, not interval check. what is it?"
# we get back exactly one result class per check. Count how often this happens in the unit
res[u]['VC'][vc]['result'][ct['result']] = res[u]['VC'][vc]['result'].get(ct['result'],0) + 1
# now also collect samples and sum time for proofs that did not finish
if not ct['result'] in ["Valid", "HighFailure", "Failure"]:
t_total_undecided += ct['time']
res[u]['VC'][vc]['time_samples_undecided'].append (ct['time'])
res[u]['VC'][vc]['time_total_undecided'] += ct['time']
res[u]['time'] = t_total
res[u]['time_undecided'] = t_total_undecided
res[u]['result'] = {} # TODO: summarize
res[u]['maxsteps'] = maxsteps
t_analysis_sec = sum([ud['time'] for u,ud in res.iteritems()])
print "################################"
print "# TOTAL CPU TIME = {0:0.1f} min".format(t_analysis_sec/60)
print "################################"
print "Note: divide by number of cores to get approximate wall-clock time"
if not have_checktree:
print "No Check tree (needs newer GNATprove). Stopping report here."
#exit(0) # does not kill the kernel
###########################
# plot total time per unit
###########################
df_unittime = pd.DataFrame({k: v['time'] for k,v in res.iteritems() if v['time'] > 0.0}, index=['time']).T
df_unittime.sort_values(by=['time'], inplace=True, ascending=False)
ax=df_unittime.plot.bar(figsize=(15,5),color=[Pantone301],legend=False)
ax.set_ylabel('analysis time [s]')
ax.set_title('Analysis time vs. Unit')
#ax.grid()
plt.savefig(FOLDER + os.sep + 'units_time.pdf', bbox_inches='tight')
plt.show()
###########################
# plot maxsteps per unit
###########################
df_unitsteps = pd.DataFrame({k:v['maxsteps'] for k,v in res.iteritems() if v['maxsteps'] > 0}, index=['steps']).T
df_unitsteps.sort_values(by=['steps'], inplace=True, ascending=False)
ax=df_unitsteps.plot.bar(figsize=(15,5),color=[Pantone301],legend=False)
ax.set_ylabel('steps')
ax.set_title('Analysis steps vs. Unit')
#ax.grid()
plt.savefig(FOLDER + os.sep + 'steps.pdf', bbox_inches='tight')
plt.show()
##########################
# plot total time per VC (summarizing all units)
##########################
def plot_vc_overview():
pdata = {} # {u'UNINITIALIZED': {'cnt': 0, 'time': 0.0, 'result': {}, 'solvers': {}}, ... }
def accumulate(vc,vcd):
#print vc + ": " + str(vcd['time_total']) + " [" + str(vcd['time_samples']) + "]"
if not vc in pdata: pdata[vc] = {}
pdata[vc]['time'] = pdata[vc].get('time', 0.0) + vcd['time_total']
pdata[vc]['time_undecided'] = pdata[vc].get('time_undecided', 0.0) + vcd['time_total_undecided']
if not 'samples' in pdata[vc]: pdata[vc]['samples'] = []
pdata[vc]['samples'].extend (vcd['time_samples'])
if not 'samples_undecided' in pdata[vc]: pdata[vc]['samples_undecided'] = []
pdata[vc]['samples_undecided'].extend (vcd['time_samples_undecided'])
for u,ud in res.iteritems():
#print "##" + u
for vc,vcd in ud['VC'].iteritems():
accumulate(pretty_vcname(vc),vcd)
#print pdata
# sanity check: sum of all must be approximately analysis time
#print "############################"
#print "# time total for all VCs: {0:0.1f} min".format(sum([v['time'] for k,v in pdata.iteritems()])/60)
#print "############################"
df_vc = pd.DataFrame(pdata).T
df_vc.sort_values(by=['time'], inplace=True, ascending=False)
print df_vc.head()
ax=df_vc.ix[:,df_vc.columns.difference(['time_samples', 'time_samples_undecided','time_undecided'])].plot.bar(logy=True,figsize=(5,2.5),color=[Pantone301,TUMred],legend=False);
ax.set_ylabel('Analysis time [s]')
ax.set_title('Total CPU time per VC type')
#ax.grid()
plt.savefig(FOLDER + os.sep + 'total_time_per_VC.pdf', bbox_inches='tight')
plt.show()
print "##############################"
print "# Percentage of time spent in:"
print "##############################"
for k,v in pdata.iteritems():
print k + ": {0:0.1f}".format(100.0*v['time']/t_analysis_sec)
tmp = { k: v['samples'] for k,v in pdata.iteritems()}
df_stats = pd.DataFrame.from_dict(tmp, orient='index').T # we need this orient and .T because we want NaNs to fill unequal lengths
#print df_stats.head()
#print df_stats.describe()
exclude_columns = ['Fp Overflow Check','Assert'] # because Float is extremely slow
ax=df_stats.ix[:,df_stats.columns.difference(exclude_columns)].plot.box(vert=False,figsize=(20,6),showfliers=False);
#plt.setp( ax.xaxis.get_majorticklabels(), rotation=90 )
ax.set_ylabel('Analysis time [s]')
ax.set_title('Statistical analysis time of a single VC')
#ax.grid()
plt.savefig(FOLDER + os.sep + 'statistical_time_per_VC.pdf', bbox_inches='tight')
plt.show()
plot_vc_overview()
def plot_vc_time_per_unit(vcname):
"""Make a plot showing time spent in a specific VC for all modules."""
pdata = {}
for u,ud in res.iteritems():
if vcname in ud['VC']:
t = ud['VC'][vcname]['time_total']
t2 = ud['VC'][vcname]['time_total_undecided']
if t > 0.0:
pdata[u] = { 'time total': t, 'time undecided':t2 }
if len(pdata.keys()) == 0: return
df_vc = pd.DataFrame(pdata).T
df_vc.sort_values(by=['time total'], inplace=True, ascending=False)
ax=df_vc.plot.bar(figsize=(13,7),color=[Pantone301, TUMred],legend=True);
ax.set_ylabel('Analysis time [s]')
ax.set_title('Analysis time for ' + pretty_vcname(vcname))
#ax.grid()
plt.savefig(FOLDER + os.sep + 'time_in_' + vcname + '.pdf', bbox_inches='tight')
plt.show()
#########################################
# plot VC time of a specific VC per unit
#########################################
for vc in totals["rules"].keys():
plot_vc_time_per_unit(vc)
Explanation: Analysis Time per Module and VC Type (only SPARK 17+)
Because the entry "check_tree" is only available in GNATprove 2017 and later.
End of explanation
#print " "
#print "Mission:"
#print res['mission']
# u'VC_RANGE_CHECK': {'cnt': 61, 'time': 68.78999999999995, 'result': {'steplimit': 3, u'Valid': 48, 'undefined': 10}, 'solvers': {u'CVC4': 51, u'altergo': 3, u'CVC4_CE': 3, u'Z3': 3}}
def plot_failure_reason(unit):
"""Make a plot showing why checks in a unit fail."""
pdata = {v : vd['result'] for v,vd in res[unit]['VC'].iteritems() if vd and 'cnt' in vd and vd['cnt'] > 0}
#print unit + ":" + str(pdata)
num_fails = sum([1 for v,vd in pdata.iteritems() if not 'Valid' in vd.keys() or len(vd.keys()) > 1])
if num_fails == 0:
print " "
print "no fails for " + unit
return
# add 'cnt' to each VC (total)
for v,vd in pdata.iteritems():
vd['cnt'] = res[unit]['VC'][v]['cnt']
if len(pdata.keys()) == 0:
print "no data for " + unit
return
df_vc = pd.DataFrame(pdata).T
df_vc.sort_values(by=['cnt'], inplace=True, ascending=False)
exclude_columns=['cnt']
ax=df_vc.ix[:,df_vc.columns.difference(exclude_columns)].plot.bar(figsize=(10,5));
ax.set_ylabel('Number of')
ax.set_title('Why checks fail in ' + unit)
#ax.grid()
plt.savefig(FOLDER + os.sep + 'unit_fails_' + unit + '.pdf', bbox_inches='tight')
plt.show()
for k in units.keys():
if units[k]["success"] < 100.0: plot_failure_reason (k)
Explanation: How often do we fail because of timeout/stepout?
End of explanation |
7,191 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Earth Engine and TensorFlow in Cloud Datalab
This notebook walks you through a simple example of using Earth Engine and TensorFlow together in Cloud Datalab.
Specifically, we will train a neural network to recognize cloudy pixels in a Landsat scene. For this simple example we will use the output of the Fmask cloud detection algorithm as training data.
Configure the Environment
We begin by importing a number of useful libraries.
Step1: Initialize the Earth Engine client. This assumes that you have already configured Earth Engine credentials in this Datalab instance. If not, see the "Earth Engine Datalab Initialization.ipynb" notebook.
Step2: Inspect the Input Data
Load a Landsat image with corresponding Fmask label data.
Step3: Let's define a helper function to make it easier to print thumbnails of Earth Engine images. (We'll be adding a library with utility functions like this one to the Earth Engine Python SDK, but for now we can do it by hand.)
Step4: Now we can use our helper function to quickly visualize the image and label data. The Fmask values are
Step5: Fetch the Input Data
First we define some helper functions to download raw data from Earth Engine as numpy arrays.
We use the getDownloadId() function, which only works for modestly sized datasets. For larger datasets, a better approach would be to initiate a batch Export from Earth Engine to Cloud Storage, which you could easily manage right here in Datalab too.
Step6: Now we can use that function to load the data from Earth Engine, including a valid data band, as a numpy array. This may take a few seconds. We also convert the Fmask band to a binary cloud label (i.e. fmask=4).
Step7: Display the local data. This time, for variety, we display it as an NRG false-color image. We can use pyplot to display local numpy arrays.
Step8: Preprocess the Input Data
Select the valid pixels and hold out a fraction for use as validation data. Compute per-band means and standard deviations of the training data for normalization.
Step9: Build the TensorFlow Model
We start with a helper function to build a simple TensorFlow neural network layer.
Step10: Here we define our TensorFlow model, a neural network with two hidden layers with tanh() nonlinearities. The main network has two outputs, continuous-valued "logits" representing non-cloud and cloud, respectively. The binary output is interpreted as the argmax of these outputs.
We define a training step, which uses Kingma and Ba's Adam algorithm to minimize the cross-entropy between the logits and the training data. Finally, we define a simple overall percentage accuracy measure.
Step11: Train the Neural Network
Now train the neural network, using batches of training data drawn randomly from the training data pool. We periodically compute the accuracy against the validation data. When we're done training, we apply the model to the complete input data set.
This simple notebook performs all TensorFlow operations locally. However, for larger analyses you could bring up a cluster of TensorFlow workers to parallelize the computation, all controlled from within Datalab.
Step12: Inspect the Results
Here we display the results. The red band corresponds to the TensorFlow output and the blue band corresponds to the labeled training data, so pixels that are red and blue correspond to disagreements between the model and the training data. (There aren't many: look carefully around the fringes of the clouds.)
Step13: We can zoom in on a particular region over on the right side of the image to see some of the disagreements. Red pixels represent commission errors and blue pixels represent omission errors relative to the labeled input data.
import ee
from IPython import display
import math
from matplotlib import pyplot
import numpy
from osgeo import gdal
import tempfile
import tensorflow as tf
import urllib
import zipfile
Explanation: Introduction to Earth Engine and TensorFlow in Cloud Datalab
This notebook walks you through a simple example of using Earth Engine and TensorFlow together in Cloud Datalab.
Specifically, we will train a neural network to recognize cloudy pixels in a Landsat scene. For this simple example we will use the output of the Fmask cloud detection algorithm as training data.
Configure the Environment
We begin by importing a number of useful libraries.
End of explanation
ee.Initialize()
Explanation: Initialize the Earth Engine client. This assumes that you have already configured Earth Engine credentials in this Datalab instance. If not, see the "Earth Engine Datalab Initialization.ipynb" notebook.
End of explanation
input_image = ee.Image('LANDSAT/LT5_L1T_TOA_FMASK/LT50100551998003CPE00')
Explanation: Inspect the Input Data
Load a Landsat image with corresponding Fmask label data.
End of explanation
def print_image(image):
display.display(display.Image(ee.data.getThumbnail({
'image': image.serialize(),
'dimensions': '360',
})))
Explanation: Let's define a helper function to make it easier to print thumbnails of Earth Engine images. (We'll be adding a library with utility functions like this one to the Earth Engine Python SDK, but for now we can do it by hand.)
End of explanation
print_image(input_image.visualize(
bands=['B3', 'B2', 'B1'],
min=0,
max=0.3,
))
print_image(input_image.visualize(
bands=['fmask'],
min=0,
max=4,
palette=['808080', '0000C0', '404040', '00FFFF', 'FFFFFF'],
))
Explanation: Now we can use our helper function to quickly visualize the image and label data. The Fmask values are:
0 | 1 | 2 | 3 | 4
:---:|:---:|:---:|:---:|:---:
Clear | Water | Shadow | Snow | Cloud
End of explanation
def download_tif(image, scale):
url = ee.data.makeDownloadUrl(ee.data.getDownloadId({
'image': image.serialize(),
'scale': '%d' % scale,
'filePerBand': 'false',
'name': 'data',
}))
local_zip, headers = urllib.urlretrieve(url)
with zipfile.ZipFile(local_zip) as local_zipfile:
return local_zipfile.extract('data.tif', tempfile.mkdtemp())
def load_image(image, scale):
local_tif_filename = download_tif(image, scale)
dataset = gdal.Open(local_tif_filename, gdal.GA_ReadOnly)
bands = [dataset.GetRasterBand(i + 1).ReadAsArray() for i in range(dataset.RasterCount)]
return numpy.stack(bands, 2)
Explanation: Fetch the Input Data
First we define some helper functions to download raw data from Earth Engine as numpy arrays.
We use the getDownloadId() function, which only works for modestly sized datasets. For larger datasets, a better approach would be to initiate a batch Export from Earth Engine to Cloud Storage, which you could easily manage right here in Datalab too.
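For reference, a minimal sketch of such a batch export (the bucket name and task description below are placeholders, not anything defined in this notebook) could look like:
task = ee.batch.Export.image.toCloudStorage(image=input_image, description='landsat_export', bucket='my-gcs-bucket', scale=240, maxPixels=1e9)  # placeholder bucket/description
task.start()  # the export runs server-side; poll task.status() until it completes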
End of explanation
mask = input_image.mask().reduce('min')
data = load_image(input_image.addBands(mask), scale=240)
data[:,:,7] = numpy.equal(data[:,:,7], 4)
Explanation: Now we can use that function to load the data from Earth Engine, including a valid data band, as a numpy array. This may take a few seconds. We also convert the Fmask band to a binary cloud label (i.e. fmask=4).
End of explanation
pyplot.imshow(numpy.clip(data[:,:,[3,2,1]] * 3, 0, 1))
pyplot.show()
Explanation: Display the local data. This time, for variety, we display it as an NRG false-color image. We can use pyplot to display local numpy arrays.
End of explanation
HOLDOUT_FRACTION = 0.1
# Reshape into a single vector of pixels.
data_vector = data.reshape([data.shape[0] * data.shape[1], data.shape[2]])
# Select only the valid data and shuffle it.
valid_data = data_vector[numpy.equal(data_vector[:,8], 1)]
numpy.random.shuffle(valid_data)
# Hold out a fraction of the labeled data for validation.
training_size = int(valid_data.shape[0] * (1 - HOLDOUT_FRACTION))
training_data = valid_data[0:training_size,:]
validation_data = valid_data[training_size:-1,:]
# Compute per-band means and standard deviations of the input bands.
data_mean = training_data[:,0:7].mean(0)
data_std = training_data[:,0:7].std(0)
Explanation: Preprocess the Input Data
Select the valid pixels and hold out a fraction for use as validation data. Compute per-band means and standard deviations of the training data for normalization.
End of explanation
def make_nn_layer(input, output_size):
input_size = input.get_shape().as_list()[1]
weights = tf.Variable(tf.truncated_normal(
[input_size, output_size],
stddev=1.0 / math.sqrt(float(input_size))))
biases = tf.Variable(tf.zeros([output_size]))
return tf.matmul(input, weights) + biases
Explanation: Build the TensorFlow Model
We start with a helper function to build a simple TensorFlow neural network layer.
End of explanation
NUM_INPUT_BANDS = 7
NUM_HIDDEN_1 = 20
NUM_HIDDEN_2 = 20
NUM_CLASSES = 2
input = tf.placeholder(tf.float32, shape=[None, NUM_INPUT_BANDS])
labels = tf.placeholder(tf.float32, shape=[None])
normalized = (input - data_mean) / data_std
hidden1 = tf.nn.tanh(make_nn_layer(normalized, NUM_HIDDEN_1))
hidden2 = tf.nn.tanh(make_nn_layer(hidden1, NUM_HIDDEN_2))
logits = make_nn_layer(hidden2, NUM_CLASSES)
outputs = tf.argmax(logits, 1)
int_labels = tf.to_int64(labels)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits, int_labels, name='xentropy')
train_step = tf.train.AdamOptimizer().minimize(cross_entropy)
correct_prediction = tf.equal(outputs, int_labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: Here we define our TensorFlow model, a neural network with two hidden layers with tanh() nonlinearities. The main network has two outputs, continuous-valued “logits” representing non-cloud and cloud, respectively. The binary output is interpreted as the argmax of these outputs.
We define a training step, which uses Kingma and Ba's Adam algorithm to minimize the cross-entropy between the logits and the training data. Finally, we define a simple overall percentage accuracy measure.
End of explanation
BATCH_SIZE = 1000
NUM_BATCHES = 1000
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
validation_dict = {
input: validation_data[:,0:7],
labels: validation_data[:,7],
}
for i in range(NUM_BATCHES):
batch = training_data[numpy.random.choice(training_size, BATCH_SIZE, False),:]
train_step.run({input: batch[:,0:7], labels: batch[:,7]})
if i % 100 == 0 or i == NUM_BATCHES - 1:
print('Accuracy %.2f%% at step %d' % (accuracy.eval(validation_dict) * 100, i))
output_data = outputs.eval({input: data_vector[:,0:7]})
Explanation: Train the Neural Network
Now train the neural network, using batches of training data drawn randomly from the training data pool. We periodically compute the accuracy against the validation data. When we're done training, we apply the model to the complete input data set.
This simple notebook performs all TensorFlow operations locally. However, for larger analyses you could bring up a cluster of TensorFlow workers to parallelize the computation, all controlled from within Datalab.
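As a rough sketch only (the host:port pairs are placeholders, using the classic tf.train distributed API of this TensorFlow generation), a worker cluster could be wired up like this:
cluster = tf.train.ClusterSpec({'worker': ['worker0:2222', 'worker1:2222']})  # placeholder hosts
server = tf.train.Server(cluster, job_name='worker', task_index=0)
# Build the same graph as above under tf.device(tf.train.replica_device_setter(cluster=cluster)),
# then run training in tf.Session(server.target) instead of a purely local session.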
End of explanation
output_image = output_data.reshape([data.shape[0], data.shape[1]])
red = numpy.where(data[:,:,8], output_image, 0.5)
blue = numpy.where(data[:,:,8], data[:,:,7], 0.5)
green = numpy.minimum(red, blue)
comparison_image = numpy.dstack((red, green, blue))
pyplot.figure(figsize = (12,12))
pyplot.imshow(comparison_image)
pyplot.show()
Explanation: Inspect the Results
Here we display the results. The red band corresponds to the TensorFlow output and the blue band corresponds to the labeled training data, so pixels that are red and blue correspond to disagreements between the model and the training data. (There aren't many: look carefully around the fringes of the clouds.)
End of explanation
pyplot.figure(figsize = (12,12))
pyplot.imshow(comparison_image[300:500,600:,:], interpolation='nearest')
pyplot.show()
Explanation: We can zoom in on a particular region over on the right side of the image to see some of the disagreements. Red pixels represent commission errors and blue pixels represent omission errors relative to the labeled input data.
End of explanation |
7,192 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Benchmarking Python Clustering Algorithms on 2D Data
Other notebooks perform a more general analysis of clustering algorithms; this notebook is looking at the special case of two dimensional data. Why two dimensional? The algorithms used in DBSCAN and HDBSCAN (in particular algorithms using kd-trees and ball trees) have significantly better performance and scaling in lower dimensions. I wanted to look at how scaling compares in what is effectively the best case scenario for such algorithms. As in the other notebooks we'll be looking at a number of algorithms and implementations.
The implementations being tested are
Step1: We can borrow the benchmarking code from other notebooks to benchmark multiple different sizes of dataset. Because some clustering algorithms have performance that can vary quite a lot depending on the exact nature of the dataset we'll also need to run several times on randomly generated datasets of each size so as to get a better idea of the average case performance.
We also need to generalise over algorithms which don't necessarily all have the same API. We can resolve that by taking a clustering function, argument tuple and keywords dictionary to let us do semi-arbitrary calls (fortunately all the algorithms do at least take the dataset to cluster as the first parameter).
Finally some algorithms scale poorly, and I don't want to spend forever doing clustering of random datasets so we'll cap the maximum time an algorithm can use; once it has taken longer than max time we'll just abort there and leave the remaining entries in our datasize by samples matrix unfilled.
In the end this all amounts to a fairly straightforward set of nested loops (over datasizes and number of samples) with calls to sklearn to generate mock data and the clustering function inside a timer. Add in some early abort and we're done.
This time around we'll also set the default dataset dimension to be two.
Step2: Comparison of all ten implementations
Now we need a range of dataset sizes to test out our algorithm. Since the scaling performance is wildly different over the ten implementations we're going to look at it will be beneficial to have a number of very small dataset sizes, and increasing spacing as we get larger, spanning out to 32000 datapoints to cluster (to begin with). Numpy provides convenient ways to get this done via arange and vector multiplication. We'll start with step sizes of 500, then shift to steps of 1000 past 3000 datapoints, and finally steps of 2000 past 6000 datapoints.
Step3: Now it is just a matter of running all the clustering algorithms via our benchmark function to collect up all the requsite data. This could be prettier, rolled up into functions appropriately, but sometimes brute force is good enough. More importantly (for me) since this can take a significant amount of compute time, I wanted to be able to comment out algorithms that were slow or I was uninterested in easily. Which brings me to a warning for you the reader and potential user of the notebook
Step4: Now we need to plot the results so we can see what is going on. The catch is that we have several datapoints for each dataset size and ultimately we would like to try and fit a curve through all of it to get the general scaling trend. Fortunately seaborn comes to the rescue here by providing regplot which plots a regression through a dataset, supports higher order regression (we should probably use order two as most algorithms are effectively quadratic) and handles multiple datapoints for each x-value cleanly (using the x_estimator keyword to put a point at the mean and draw an error bar to cover the range of data).
Step5: A few features stand out. First of all there appear to be essentially two classes of implementation. The fast implementations tend to be implementations of single linkage agglomerative clustering, K-means, and DBSCAN. The slow cases are largely from sklearn and include agglomerative clustering (in this case using Ward instead of single linkage).
We really wanted to see what happens in the two dimensional case for the better scaling algorithms, so let's drop out those slow algorithms so we can scale out a little further and get a closer look at the various algorithms that managed 32000 points in under twenty seconds. There is almost undoubtedly more to learn as we get ever larger dataset sizes.
Comparison of fast implementations
Let's compare the six fastest implementations now. We can scale out a little further as well; based on the curves above it looks like we should be able to comfortably get to 60000 data points without taking much more than a minute per run. We can also note that most of these implementations weren't that noisy so we can get away with a single run per dataset size.
Step6: Again we can use seaborn to do curve fitting and plotting, exactly as before.
Step7: Clearly something has gone woefully wrong with the curve fitting for the two single linkage algorithms, but what exactly? If we look at the raw data we can see.
Step8: It seems that at around 44000 points we hit a wall and the runtimes spiked. A hint is that I'm running this on a laptop with 8GB of RAM. Both single linkage algorithms use scipy.spatial.pdist to compute pairwise distances between points, which returns a condensed array of shape (n(n-1)/2,) of doubles. A quick computation shows that the array of distances is quite large once we have 44000 points
Step9: If we assume that my laptop is keeping much other than that distance array in RAM then clearly we are going to spend time paging out the distance array to disk and back and hence we will see the runtimes increase dramatically as we become disk IO bound.
For fastcluster there is an alternative linkage_vector function that can be used if you are doing single linkage and have the original data (as opposed to a matrix of dissimilarities). We'll rerun fastcluster with this instead.
For scipy single linkage, if we just leave off the last element we can get a better idea of the curves, but keep in mind that this single linkage implementation does not scale past a limit set by your available RAM.
Step10: Now we just need to replot with the new fastcluster data, and the truncated scipy single linkage data.
Step11: This rules out scipy single linkage as being in the same class as the others even if it could scale. Even fastcluster is starting to show the effects of having an $O(n^2)$ algorithm, despite having worked very hard to improve constants.
In practice this is going to mean that for larger datasets you are going to be very constrained in what algorithms you can apply
Step12: While in other notebooks the two K-means implementations separated out, with the Sklearn implementation being clearly better, here we see they very much keep pace. Also, interestingly we see the full benefits of the dual tree Boruvka algorithm for minimum spanning trees, with the HDBSCAN implementation actually outperforming ordinary DBSCAN for large dataset sizes. In higher dimensions this won't be the case, but it makes it clear that if you have very low dimensional data then HDBSCAN (from which you can extract ordinary DBSCAN results if you so desire) is clearly the better choice.
But should I get a coffee?
So we know which implementations scale and which don't; a more useful thing to know in practice is, given a dataset, what can I run interactively? What can I run while I go and grab some coffee? How about a run over lunch? What if I'm willing to wait until I get in tomorrow morning? Each of these represent significant breaks in productivity -- once you aren't working interactively anymore your productivity drops measurably, and so on.
We can build a table for this. To start we'll need to be able to approximate how long a given clustering implementation will take to run. Fortunately we already gathered a lot of that data; if we load up the statsmodels package we can fit the data (with a quadratic or $n\log n$ fit depending on the implementation) and use the resulting model to make our predictions. Obviously this has some caveats
Step13: Now we run that for each of our pre-existing datasets to extrapolate out predicted performance on the relevant dataset sizes. A little pandas wrangling later and we've produced a table of roughly how large a dataset you can tackle in each time frame with each implementation. I had to leave out the scipy KMeans timings because the noise in timing results caused the model to be unrealistic at larger data sizes. Note how the $O(n\log n)$ algorithms utterly dominate here, and HDBSCAN achieves that complexity with the dual tree Boruvka on two dimensional data. | Python Code:
import hdbscan
import debacl
import fastcluster
import sklearn.cluster
import scipy.cluster
import sklearn.datasets
import numpy as np
import pandas as pd
import time
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_context('poster')
sns.set_palette('Paired', 10)
sns.set_color_codes()
Explanation: Benchmarking Python Clustering Algorithms on 2D Data
Other notebooks perform a more general analysis of clustering algorithms; this notebook is looking at the special case of two dimensional data. Why two dimensional? The algorithms used in DBSCAN and HDBSCAN (in particular algorithms using kd-trees and ball trees) have significantly better performance and scaling in lower dimensions. I wanted to look at how scaling compares in what is effectively the best case scenario for such algorithms. As in the other notebooks we'll be looking at a number of algorithms and implementations.
The implementations being tested are:
Sklearn (which implements several algorithms):
K-Means clustering
DBSCAN clustering
Agglomerative clustering
Spectral clustering
Affinity Propagation
Scipy (which provides basic algorithms):
K-Means clustering
Agglomerative clustering
Fastcluster (which provides very fast agglomerative clustering in C++)
DeBaCl (Density Based Clustering; similar to a mix of DBSCAN and Agglomerative)
HDBSCAN (A robust hierarchical version of DBSCAN)
End of explanation
def benchmark_algorithm(dataset_sizes, cluster_function, function_args, function_kwds,
dataset_dimension=2, max_time=45, sample_size=2):
# Initialize the result with NaNs so that any unfilled entries
# will be considered NULL when we convert to a pandas dataframe at the end
result = np.nan * np.ones((len(dataset_sizes), sample_size))
for index, size in enumerate(dataset_sizes):
dataset_n_clusters = int(np.log2(size))
for s in range(sample_size):
# Use sklearns make_blobs to generate a random dataset with specified size
# dimension and number of clusters
data, _ = sklearn.datasets.make_blobs(n_samples=size,
n_features=dataset_dimension,
centers=dataset_n_clusters)
# Start the clustering with a timer
start_time = time.time()
cluster_function(data, *function_args, **function_kwds)
time_taken = time.time() - start_time
# If we are taking more than max_time then abort -- we don't
# want to spend excessive time on slow algorithms
if time_taken > max_time:
result[index, s] = time_taken
return pd.DataFrame(np.vstack([dataset_sizes.repeat(sample_size),
result.flatten()]).T, columns=['x','y'])
else:
result[index, s] = time_taken
# Return the result as a dataframe for easier handling with seaborn afterwards
return pd.DataFrame(np.vstack([dataset_sizes.repeat(sample_size),
result.flatten()]).T, columns=['x','y'])
Explanation: We can borrow the benchmarking code from other notebooks to benchmark multiple different sizes of dataset. Because some clustering algorithms have performance that can vary quite a lot depending on the exact nature of the dataset we'll also need to run several times on randomly generated datasets of each size so as to get a better idea of the average case performance.
We also need to generalise over algorithms which don't necessarily all have the same API. We can resolve that by taking a clustering function, argument tuple and keywords dictionary to let us do semi-arbitrary calls (fortunately all the algorithms do at least take the dataset to cluster as the first parameter).
Finally some algorithms scale poorly, and I don't want to spend forever doing clustering of random datasets so we'll cap the maximum time an algorithm can use; once it has taken longer than max time we'll just abort there and leave the remaining entries in our datasize by samples matrix unfilled.
In the end this all amounts to a fairly straightforward set of nested loops (over datasizes and number of samples) with calls to sklearn to generate mock data and the clustering function inside a timer. Add in some early abort and we're done.
This time around we'll also set the default dataset dimension to be two.
End of explanation
dataset_sizes = np.hstack([np.arange(1, 6) * 500, np.arange(3,7) * 1000, np.arange(4,17) * 2000])
Explanation: Comparison of all ten implementations
Now we need a range of dataset sizes to test out our algorithm. Since the scaling performance is wildly different over the ten implementations we're going to look at it will be beneficial to have a number of very small dataset sizes, and increasing spacing as we get larger, spanning out to 32000 datapoints to cluster (to begin with). Numpy provides convenient ways to get this done via arange and vector multiplication. We'll start with step sizes of 500, then shift to steps of 1000 past 3000 datapoints, and finally steps of 2000 past 6000 datapoints.
End of explanation
k_means = sklearn.cluster.KMeans(10)
k_means_data = benchmark_algorithm(dataset_sizes, k_means.fit, (), {})
dbscan = sklearn.cluster.DBSCAN(eps=1.25)
dbscan_data = benchmark_algorithm(dataset_sizes, dbscan.fit, (), {})
scipy_k_means_data = benchmark_algorithm(dataset_sizes, scipy.cluster.vq.kmeans, (10,), {})
scipy_single_data = benchmark_algorithm(dataset_sizes, scipy.cluster.hierarchy.single, (), {})
fastclust_data = benchmark_algorithm(dataset_sizes, fastcluster.single, (), {})
hdbscan_ = hdbscan.HDBSCAN()
hdbscan_data = benchmark_algorithm(dataset_sizes, hdbscan_.fit, (), {})
debacl_data = benchmark_algorithm(dataset_sizes, debacl.geom_tree.geomTree, (5, 5), {'verbose':False})
agglomerative = sklearn.cluster.AgglomerativeClustering(10)
agg_data = benchmark_algorithm(dataset_sizes, agglomerative.fit, (), {}, sample_size=4)
spectral = sklearn.cluster.SpectralClustering(10)
spectral_data = benchmark_algorithm(dataset_sizes, spectral.fit, (), {}, sample_size=6)
affinity_prop = sklearn.cluster.AffinityPropagation()
ap_data = benchmark_algorithm(dataset_sizes, affinity_prop.fit, (), {}, sample_size=3)
Explanation: Now it is just a matter of running all the clustering algorithms via our benchmark function to collect up all the requsite data. This could be prettier, rolled up into functions appropriately, but sometimes brute force is good enough. More importantly (for me) since this can take a significant amount of compute time, I wanted to be able to comment out algorithms that were slow or I was uninterested in easily. Which brings me to a warning for you the reader and potential user of the notebook: this next step is very expensive. We are running ten different clustering algorithms multiple times each on twenty two different dataset sizes -- and some of the clustering algorithms are slow (we are capping out at forty five seconds per run). That means that the next cell can take an hour or more to run. That doesn't mean "Don't try this at home" (I actually encourage you to try this out yourself and play with dataset parameters and clustering parameters) but it does mean you should be patient if you're going to!
End of explanation
sns.regplot(x='x', y='y', data=k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=hdbscan_data, order=2, label='HDBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=fastclust_data, order=2, label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=scipy_single_data, order=2, label='Scipy Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=debacl_data, order=2, label='DeBaCl Geom Tree', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=spectral_data, order=2, label='Sklearn Spectral', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=agg_data, order=2, label='Sklearn Agglomerative', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=ap_data, order=2, label='Sklearn Affinity Propagation', x_estimator=np.mean)
plt.gca().axis([0, 34000, 0, 120])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Clustering Implementations')
plt.legend()
Explanation: Now we need to plot the results so we can see what is going on. The catch is that we have several datapoints for each dataset size and ultimately we would like to try and fit a curve through all of it to get the general scaling trend. Fortunately seaborn comes to the rescue here by providing regplot which plots a regression through a dataset, supports higher order regression (we should probably use order two as most algorithms are effectively quadratic) and handles multiple datapoints for each x-value cleanly (using the x_estimator keyword to put a point at the mean and draw an error bar to cover the range of data).
End of explanation
large_dataset_sizes = np.arange(1,16) * 4000
hdbscan_prims = hdbscan.HDBSCAN(algorithm='prims_kdtree')
large_hdbscan_prims_data = benchmark_algorithm(large_dataset_sizes,
hdbscan_prims.fit, (), {}, max_time=90, sample_size=1)
hdbscan_boruvka = hdbscan.HDBSCAN(algorithm='boruvka_kdtree')
large_hdbscan_boruvka_data = benchmark_algorithm(large_dataset_sizes,
hdbscan_boruvka.fit, (), {}, max_time=90, sample_size=1)
k_means = sklearn.cluster.KMeans(10)
large_k_means_data = benchmark_algorithm(large_dataset_sizes,
k_means.fit, (), {}, max_time=90, sample_size=1)
dbscan = sklearn.cluster.DBSCAN(eps=1.25, min_samples=5)
large_dbscan_data = benchmark_algorithm(large_dataset_sizes,
dbscan.fit, (), {}, max_time=90, sample_size=1)
large_fastclust_data = benchmark_algorithm(large_dataset_sizes,
fastcluster.single, (), {}, max_time=90, sample_size=1)
large_scipy_k_means_data = benchmark_algorithm(large_dataset_sizes,
scipy.cluster.vq.kmeans, (10,), {}, max_time=90, sample_size=1)
large_scipy_single_data = benchmark_algorithm(large_dataset_sizes,
scipy.cluster.hierarchy.single, (), {}, max_time=90, sample_size=1)
Explanation: A few features stand out. First of all there appear to be essentially two classes of implementation. The fast implementations tend to be implementations of single linkage agglomerative clustering, K-means, and DBSCAN. The slow cases are largely from sklearn and include agglomerative clustering (in this case using Ward instead of single linkage).
We really wanted to see what happens in the two dimensional case for the better scaling algorithms, so let's drop out those slow algorithms so we can scale out a little further and get a closer look at the various algorithms that managed 32000 points in under twenty seconds. There is almost undoubtedly more to learn as we get ever larger dataset sizes.
Comparison of fast implementations
Let's compare the six fastest implementations now. We can scale out a little further as well; based on the curves above it looks like we should be able to comfortably get to 60000 data points without taking much more than a minute per run. We can also note that most of these implementations weren't that noisy so we can get away with a single run per dataset size.
End of explanation
sns.regplot(x='x', y='y', data=large_k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_hdbscan_boruvka_data, order=2, label='HDBSCAN Boruvka', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_fastclust_data, order=2, label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_single_data, order=2, label='Scipy Single Linkage', x_estimator=np.mean)
#sns.regplot(x='x', y='y', data=large_hdbscan_prims_data, order=2, label='HDBSCAN Prims', x_estimator=np.mean)
plt.gca().axis([0, 64000, 0, 150])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Fastest Clustering Implementations')
plt.legend()
Explanation: Again we can use seaborn to do curve fitting and plotting, exactly as before.
End of explanation
large_fastclust_data.tail(10)
large_scipy_single_data.tail(10)
Explanation: Clearly something has gone woefully wrong with the curve fitting for the two single linkage algorithms, but what exactly? If we look at the raw data we can see.
End of explanation
size_of_array = 44000 * (44000 - 1) / 2 # from pdist documentation
bytes_in_array = size_of_array * 8 # Since doubles use 8 bytes
gigabytes_used = bytes_in_array / (1024.0 ** 3) # divide out to get the number of GB
gigabytes_used
Explanation: It seems that at around 44000 points we hit a wall and the runtimes spiked. A hint is that I'm running this on a laptop with 8GB of RAM. Both single linkage algorithms use scipy.spatial.pdist to compute pairwise distances between points, which returns a condensed array of shape (n(n-1)/2,) of doubles. A quick computation shows that the array of distances is quite large once we have 44000 points:
End of explanation
large_fastclust_data = benchmark_algorithm(large_dataset_sizes,
fastcluster.linkage_vector, (), {}, max_time=90, sample_size=1)
Explanation: If we assume that my laptop is keeping much other than that distance array in RAM then clearly we are going to spend time paging out the distance array to disk and back and hence we will see the runtimes increase dramatically as we become disk IO bound.
For fastcluster there is an alternative linkage_vector function that can be used if you are doing single linkage and have the original data (as opposed to a matrix of dissimilarities). We'll rerun fastcluster with this instead.
For scipy single linkage, if we just leave off the last element we can get a better idea of the curves, but keep in mind that this single linkage implementation does not scale past a limit set by your available RAM.
End of explanation
sns.regplot(x='x', y='y', data=large_k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_hdbscan_boruvka_data, order=2, label='HDBSCAN Boruvka', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_fastclust_data, order=2, label='Fastcluster Single Linkage', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=large_scipy_single_data[:8], order=2, label='Scipy Single Linkage', x_estimator=np.mean)
#sns.regplot(x='x', y='y', data=large_hdbscan_prims_data, order=2, label='HDBSCAN Prims', x_estimator=np.mean)
plt.gca().axis([0, 64000, 0, 150])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of Fastest Clustering Implementations')
plt.legend()
Explanation: Now we just need to replot with the new fastcluster data, and the truncated scipy single linkage data.
End of explanation
huge_dataset_sizes = np.arange(1,16) * 20000
k_means = sklearn.cluster.KMeans(10)
huge_k_means_data = benchmark_algorithm(huge_dataset_sizes,
k_means.fit, (), {}, max_time=120, sample_size=2)
dbscan = sklearn.cluster.DBSCAN()
huge_dbscan_data = benchmark_algorithm(huge_dataset_sizes,
dbscan.fit, (), {}, max_time=240, sample_size=2)
huge_scipy_k_means_data = benchmark_algorithm(huge_dataset_sizes,
scipy.cluster.vq.kmeans, (10,), {}, max_time=240, sample_size=2)
hdbscan_boruvka = hdbscan.HDBSCAN(algorithm='boruvka_kdtree')
huge_hdbscan_data = benchmark_algorithm(huge_dataset_sizes,
hdbscan_boruvka.fit, (), {}, max_time=240, sample_size=4)
huge_fastcluster_data = benchmark_algorithm(huge_dataset_sizes,
fastcluster.linkage_vector, (), {}, max_time=240, sample_size=2)
sns.regplot(x='x', y='y', data=huge_k_means_data, order=2, label='Sklearn K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_dbscan_data, order=2, label='Sklearn DBSCAN', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_scipy_k_means_data, order=2, label='Scipy K-Means', x_estimator=np.mean)
sns.regplot(x='x', y='y', data=huge_hdbscan_data, order=2, label='HDBSCAN', x_estimator=np.mean)
#sns.regplot(x='x', y='y', data=huge_fastcluster_data, order=2, label='Fastcluster', x_estimator=np.mean)
plt.gca().axis([0, 310000, 0, 60])
plt.gca().set_xlabel('Number of data points')
plt.gca().set_ylabel('Time taken to cluster (s)')
plt.title('Performance Comparison of K-Means and DBSCAN')
plt.legend()
Explanation: This rules out scipy single linkage as being in the same class as the others even if it could scale. Even fastcluster is starting to show the effects of having an $O(n^2)$ algorithm, despite having worked very hard to improve constants.
In practice this is going to mean that for larger datasets you are going to be very constrained in what algorithms you can apply: if you get enough datapoints only K-Means, DBSCAN and HDBSCAN will be left. Let's get a closer look at the performance of these.
Comparison of K-Means and DBSCAN and HDBSCAN implementations
At this point we can scale out to 300000 datapoints easily enough: These algorithms can use various data structures to avoid having to compute the full pairwise distance matrix and thus potentially have much more favourable asymptotic complexity.
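For instance, a k-d tree answers nearest-neighbour queries without ever materialising the full pairwise distance matrix; a small illustrative sketch (not part of the benchmark itself):
from sklearn.neighbors import KDTree
demo_data, _ = sklearn.datasets.make_blobs(n_samples=10000, n_features=2, centers=10)
tree = KDTree(demo_data)  # build the tree once
dist, ind = tree.query(demo_data[:5], k=3)  # distances and indices of the 3 nearest neighbours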
End of explanation
import statsmodels.formula.api as sm
time_samples = [1000, 2000, 5000, 10000, 25000, 50000, 75000, 100000, 250000, 500000, 750000,
1000000, 2500000, 5000000, 10000000, 50000000, 100000000, 500000000, 1000000000]
def get_timing_series(data, quadratic=True):
if quadratic:
data['x_squared'] = data.x**2
model = sm.ols('y ~ x + x_squared', data=data).fit()
predictions = [model.params.dot([1.0, i, i**2]) for i in time_samples]
return pd.Series(predictions, index=pd.Index(time_samples))
else: # assume n log(n)
data['xlogx'] = data.x * np.log(data.x)
model = sm.ols('y ~ x + xlogx', data=data).fit()
predictions = [model.params.dot([1.0, i, i*np.log(i)]) for i in time_samples]
return pd.Series(predictions, index=pd.Index(time_samples))
Explanation: While in other notebooks the two K-means implementations separated out, with the Sklearn implementation being clearly better, here we see they very much keep pace. Also, interestingly we see the full benefits of the dual tree Boruvka algorithm for minimum spanning trees, with the HDBSCAN implementation actually outperforming ordinary DBSCAN for large dataset sizes. In higher dimensions this won't be the case, but it makes it clear that if you have very low dimensional data then HDBSCAN (from which you can extract ordinary DBSCAN results if you so desire) is clearly the better choice.
But should I get a coffee?
So we know which implementations scale and which don't; a more useful thing to know in practice is, given a dataset, what can I run interactively? What can I run while I go and grab some coffee? How about a run over lunch? What if I'm willing to wait until I get in tomorrow morning? Each of these represent significant breaks in productivity -- once you aren't working interactively anymore your productivity drops measurably, and so on.
We can build a table for this. To start we'll need to be able to approximate how long a given clustering implementation will take to run. Fortunately we already gathered a lot of that data; if we load up the statsmodels package we can fit the data (with a quadratic or $n\log n$ fit depending on the implementation) and use the resulting model to make our predictions. Obviously this has some caveats: if you fill your RAM with a distance matrix your runtime isn't going to fit the curve.
I've hand built a time_samples list to give a reasonable set of potential data sizes that are nice and human readable. After that we just need a function to fit and build the curves.
End of explanation
ap_timings = get_timing_series(ap_data)
spectral_timings = get_timing_series(spectral_data)
agg_timings = get_timing_series(agg_data)
debacl_timings = get_timing_series(debacl_data)
fastclust_timings = get_timing_series(large_fastclust_data.ix[:10,:].copy())
scipy_single_timings = get_timing_series(large_scipy_single_data.ix[:10,:].copy())
hdbscan_boruvka = get_timing_series(huge_hdbscan_data, quadratic=False)
#scipy_k_means_timings = get_timing_series(huge_scipy_k_means_data, quadratic=False)
dbscan_timings = get_timing_series(huge_dbscan_data, quadratic=False)
k_means_timings = get_timing_series(huge_k_means_data, quadratic=True)
timing_data = pd.concat([ap_timings, spectral_timings, agg_timings, debacl_timings,
scipy_single_timings, fastclust_timings, hdbscan_boruvka,
dbscan_timings, k_means_timings
], axis=1)
timing_data.columns=['AffinityPropagation', 'Spectral', 'Agglomerative',
'DeBaCl', 'ScipySingleLinkage', 'Fastcluster',
'HDBSCAN', 'DBSCAN', 'SKLearn KMeans'
]
def get_size(series, max_time):
return series.index[series < max_time].max()
datasize_table = pd.concat([
timing_data.apply(get_size, max_time=30),
timing_data.apply(get_size, max_time=300),
timing_data.apply(get_size, max_time=3600),
timing_data.apply(get_size, max_time=8*3600)
], axis=1)
datasize_table.columns=('Interactive', 'Get Coffee', 'Over Lunch', 'Overnight')
datasize_table
Explanation: Now we run that for each of our pre-existing datasets to extrapolate out predicted performance on the relevant dataset sizes. A little pandas wrangling later and we've produced a table of roughly how large a dataset you can tackle in each time frame with each implementation. I had to leave out the scipy KMeans timings because the noise in timing results caused the model to be unrealistic at larger data sizes. Note how the $O(n\log n)$ algorithms utterly dominate here, and HDBSCAN achieves that complexity with the dual tree Boruvka on two dimensional data.
End of explanation |
7,193 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyBroom Example - Multiple Datasets - Minimize
This notebook is part of pybroom.
This notebook demonstrates using pybroom when performing Maximum-Likelihood fitting
(scalar minimization as opposed to curve fitting) of a set of datasets with lmfit.minimize.
We will show that pybroom greatly simplifies comparing, filtering and plotting fit results
from multiple datasets.
For an example using curve fitting see
pybroom-example-multi-datasets.
Step1: Create Noisy Data
Simulate N datasets which are identical except for the additive noise.
Step2: Model Fitting
Two-peaks model
Here, we use a Gaussian mixture distribution for fitting the data.
We fit the data using the Maximum-Likelihood method, i.e. we minimize the
(negative) log-likelihood function
Step3: We define the parameters and "fit" the $N$ datasets by minimizing the (scalar) function log_likelihood_lmfit
Step4: Fit results can be inspected with
lmfit.fit_report() or params.pretty_print()
Step5: This is good for peeking at the results. However,
extracting these data from lmfit objects is quite a chore
and requires good knowledge of lmfit objects structure.
pybroom helps in this task
Step6: Note that while glance returns one row per fit result, the tidy function
returns one row per fitted parameter.
We can query the value of one parameter (peak position) across the multiple datasets
Step7: By computing the standard deviation of the peak positions
Step8: we see that the estimation of mu1 has less error than the estimation
of mu2.
This difference can be also observed in the histogram of
the fitted values
Step9: We can also use pybroom's tidy_to_dict
and dict_to_tidy
functions to convert
a set of fitted parameters to a dict (and vice-versa)
Step10: This conversion is useful for calling a Python function,
passing argument values from a tidy DataFrame.
For example, here we use tidy_to_dict
to easily plot the model distribution
Step11: Single-peak model
For the sake of the example we also fit the $N$ datasets with a single Gaussian distribution
Step12: Augment?
Pybroom's augment function
extracts information that is the same size as the input dataset,
for example the array of residuals. In this case, however, we performed a scalar minimization
(the log-likelihood function returns a scalar) and therefore the MinimizerResult object
does not contain any residual array or other data of the same size as the dataset.
Comparing fit results
We will instead compare the single-peak and two-peak distributions using the results
from the tidy function obtained in the previous section.
We start with the following plot
Step13: The problem is that FacetGrid only takes one DataFrame as input. In the previous
example we provide the DataFrame of "experimental" data (ds) and use the .map method to plot
histograms of the different datasets. The fitted distributions, instead, are
plotted manually in the for loop.
We can invert the approach, and pass to FacetGrid the DataFrame of fitted parameters (dt_tot),
while leaving the simple histogram for manual plotting. In this case we need to write a
helper function (_plot) that knows how to plot a distribution given a set of parameters
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.pylab import normpdf
import seaborn as sns
from lmfit import Model
import lmfit
print('lmfit: %s' % lmfit.__version__)
sns.set_style('whitegrid')
import pybroom as br
Explanation: PyBroom Example - Multiple Datasets - Minimize
This notebook is part of pybroom.
This notebook demonstrates using pybroom when performing Maximum-Likelihood fitting
(scalar minimization as opposed to curve fitting) of a set of datasets with lmfit.minimize.
We will show that pybroom greatly simplifies comparing, filtering and plotting fit results
from multiple datasets.
For an example using curve fitting see
pybroom-example-multi-datasets.
End of explanation
N = 20 # number of datasets
n = 1000 # number of sample in each dataset
np.random.seed(1)
d1 = np.random.randn(20, int(0.6*n))*0.5 - 2
d2 = np.random.randn(20, int(0.4*n))*1.5 + 2
d = np.hstack((d1, d2))
ds = pd.DataFrame(data=d, columns=range(d.shape[1])).stack().reset_index()
ds.columns = ['dataset', 'sample', 'data']
ds.head()
kws = dict(bins = np.arange(-5, 5.1, 0.1), histtype='step',
lw=2, color='k', alpha=0.1)
for i in range(N):
ds.loc[ds.dataset == i, :].data.plot.hist(**kws)
Explanation: Create Noisy Data
Simulate N datasets which are identical except for the additive noise.
End of explanation
# Model PDF to be maximized
def model_pdf(x, a2, mu1, mu2, sig1, sig2):
a1 = 1 - a2
return (a1 * normpdf(x, mu1, sig1) +
a2 * normpdf(x, mu2, sig2))
# Function to be minimized by lmfit
def log_likelihood_lmfit(params, x):
pnames = ('a2', 'mu1', 'mu2', 'sig1', 'sig2')
kws = {n: params[n] for n in pnames}
return -np.log(model_pdf(x, **kws)).sum()
Explanation: Model Fitting
Two-peaks model
Here, we use a Gaussian mixture distribution for fitting the data.
We fit the data using the Maximum-Likelihood method, i.e. we minimize the
(negative) log-likelihood function:
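Concretely, with mixture weight $a_2$ (and $a_1 = 1 - a_2$) the quantity minimized below is
$$-\ln L = -\sum_{i=1}^{n} \ln\left[(1 - a_2)\,\mathcal{N}(x_i;\,\mu_1,\sigma_1) + a_2\,\mathcal{N}(x_i;\,\mu_2,\sigma_2)\right]$$
which matches the model_pdf and log_likelihood_lmfit functions defined in the next cell.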
End of explanation
params = lmfit.Parameters()
params.add('a2', 0.5, min=0, max=1)
params.add('mu1', -1, min=-5, max=5)
params.add('mu2', 1, min=-5, max=5)
params.add('sig1', 1, min=1e-6)
params.add('sig2', 1, min=1e-6)
params.add('ax', expr='a2') # just a test for a derived parameter
Results = [lmfit.minimize(log_likelihood_lmfit, params, args=(di,),
nan_policy='omit', method='least_squares')
for di in d]
Explanation: We define the parameters and "fit" the $N$ datasets by minimizing the (scalar) function log_likelihood_lmfit:
End of explanation
print(lmfit.fit_report(Results[0]))
print()
Results[0].params.pretty_print()
Explanation: Fit results can be inspected with
lmfit.fit_report() or params.pretty_print():
End of explanation
dg = br.glance(Results)
dg.drop('message', 1).head()
dt = br.tidy(Results, var_names='dataset')
dt.query('dataset == 0')
Explanation: This is good for peeking at the results. However,
extracting these data from lmfit objects is quite a chore
and requires good knowledge of lmfit objects structure.
pybroom helps in this task: it extracts data from fit results and
returns familiar pandas DataFrame (in tidy format).
Thanks to the tidy format these data can be
much more easily manipulated, filtered and plotted.
Let's use the glance and
tidy functions:
End of explanation
dt.query('name == "mu1"').head()
Explanation: Note that while glance returns one row per fit result, the tidy function
returns one row per fitted parameter.
We can query the value of one parameter (peak position) across the multiple datasets:
End of explanation
dt.query('name == "mu1"')['value'].std()
dt.query('name == "mu2"')['value'].std()
Explanation: By computing the standard deviation of the peak positions:
End of explanation
dt.query('name == "mu1"')['value'].hist()
dt.query('name == "mu2"')['value'].hist(ax=plt.gca());
Explanation: we see that the estimation of mu1 has less error than the estimation
of mu2.
This difference can be also observed in the histogram of
the fitted values:
End of explanation
kwd_params = br.tidy_to_dict(dt.loc[dt['dataset'] == 0])
kwd_params
br.dict_to_tidy(kwd_params)
Explanation: We can also use pybroom's tidy_to_dict
and dict_to_tidy
functions to convert
a set of fitted parameters to a dict (and vice-versa):
End of explanation
bins = np.arange(-5, 5.01, 0.25)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
grid = sns.FacetGrid(ds.query('dataset < 6'), col='dataset', hue='dataset', col_wrap=3)
grid.map(plt.hist, 'data', bins=bins, normed=True);
for i, ax in enumerate(grid.axes):
kw_pars = br.tidy_to_dict(dt.loc[dt.dataset == i], keys_exclude=['ax'])
y = model_pdf(x, **kw_pars)
ax.plot(x, y, lw=2, color='k')
Explanation: This conversion is useful for calling a Python function,
passing argument values from a tidy DataFrame.
For example, here we use tidy_to_dict
to easily plot the model distribution:
End of explanation
def model_pdf1(x, mu, sig):
return normpdf(x, mu, sig)
def log_likelihood_lmfit1(params, x):
return -np.log(model_pdf1(x, **params.valuesdict())).sum()
params = lmfit.Parameters()
params.add('mu', 0, min=-5, max=5)
params.add('sig', 1, min=1e-6)
Results1 = [lmfit.minimize(log_likelihood_lmfit1, params, args=(di,),
nan_policy='omit', method='least_squares')
for di in d]
dg1 = br.glance(Results1)
dg1.drop('message', 1).head()
dt1 = br.tidy(Results1, var_names='dataset')
dt1.query('dataset == 0')
Explanation: Single-peak model
For the sake of the example we also fit the $N$ datasets with a single Gaussian distribution:
End of explanation
dt['model'] = 'twopeaks'
dt1['model'] = 'onepeak'
dt_tot = pd.concat([dt, dt1], ignore_index=True)
bins = np.arange(-5, 5.01, 0.25)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
grid = sns.FacetGrid(ds.query('dataset < 6'), col='dataset', hue='dataset', col_wrap=3)
grid.map(plt.hist, 'data', bins=bins, normed=True);
for i, ax in enumerate(grid.axes):
kw_pars = br.tidy_to_dict(dt_tot.loc[(dt_tot.dataset == i) & (dt_tot.model == 'onepeak')])
y1 = model_pdf1(x, **kw_pars)
li1, = ax.plot(x, y1, lw=2, color='k', alpha=0.5)
kw_pars = br.tidy_to_dict(dt_tot.loc[(dt_tot.dataset == i) & (dt_tot.model == 'twopeaks')], keys_exclude=['ax'])
y = model_pdf(x, **kw_pars)
li, = ax.plot(x, y, lw=2, color='k')
grid.add_legend(legend_data=dict(onepeak=li1, twopeaks=li),
label_order=['onepeak', 'twopeaks'], title='model');
Explanation: Augment?
Pybroom augment function
extracts information that is the same size as the input dataset,
for example the array of residuals. In this case, however, we performed a scalar minimization
(the log-likelihood function returns a scalar) and therefore the MinimizerResult object
does not contain any residual array or other data of the same size as the dataset.
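For reference, a hypothetical sketch of the curve-fitting case (it does not apply to, and would not work for, these scalar-minimization results) would be:
da = br.augment(Results)  # one row per data point (e.g. residuals) -- curve fits only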
Comparing fit results
We will instead compare the single-peak and two-peaks distributions using the results
from the tidy function obtained in the previous section.
We start with the following plot:
End of explanation
def _plot(names, values, x, label=None, color=None):
df = pd.concat([names, values], axis=1)
kw_pars = br.tidy_to_dict(df, keys_exclude=['ax'])
func = model_pdf1 if label == 'onepeak' else model_pdf
y = func(x, **kw_pars)
plt.plot(x, y, lw=2, color=color, label=label)
bins = np.arange(-5, 5.01, 0.25)
x = bins[:-1] + 0.5*(bins[1] - bins[0])
grid = sns.FacetGrid(dt_tot.query('dataset < 6'), col='dataset', hue='model', col_wrap=3)
grid.map(_plot, 'name', 'value', x=x)
grid.add_legend()
for i, ax in enumerate(grid.axes):
ax.hist(ds.query('dataset == %d' % i).data, bins=bins, histtype='stepfilled', normed=True,
color='gray', alpha=0.5);
Explanation: The problem is that FacetGrid only takes one DataFrame as input. In the previous
example we provide the DataFrame of "experimental" data (ds) and use the .map method to plot
histograms of the different datasets. The fitted distributions, instead, are
plotted manually in the for loop.
We can invert the approach, and pass to FacetGrid the DataFrame of fitted parameters (dt_tot),
while leaving the simple histogram for manual plotting. In this case we need to write a
helper function (_plot) that knows how to plot a distribution given a set of parameters:
End of explanation |
7,194 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exogenous Variables with PyAF
PyAF allows using some external sources to improve its forecasts.
In addition to the training dataset, the user can provide an external table with exogenous variables data. This table is merged with the signal dataset when exogenous variables values are needed (either when training the model or when producing forecasts).
The signal and exogenous variables can come from the same table (self-join).
Exogenous variables Modeling
All PyAF models are of the form of a linear decomposition (Trend + Periodic + AR). The exogenous variables are introduced in the AR component through their past values.
Before working with exogenous variables, they first need to be transformed into a numerical form (encoded). PyAF uses standard encoding procedures. Non-numerical exogenous variables are dummified (a binary column is created for each distinct column value) and numerical columns are standardized (Y = (X-m)/s, where m is the mean and s is the standard deviation)
Example with Ozone dataset
For demonstration purposes, we transform the Los Angeles ozone dataset so that it contains some exogenous variables. In a real case, these variables can provide some information on Los Angeles (population, temperature, ...) over the same period.
Here, 4 variables have been created artificially.
Step1: This table contains, for each 'Date' value, two numerical exogenous variables ('Exog2' and 'Exog3') and two character (object) variables ('Exog4' and 'Exog5')
This is what the encoded dataset looks like internally
Step2: Training process
The only addition in this case is an extra parameter for the train function that gives the name of the exogenous table and the list of exogenous variables to be used
Step3: In this specific model, the ARX component shows that the most important predictors are (in this order) | Python Code:
import numpy as np
import pandas as pd
import datetime
csvfile_link = "https://raw.githubusercontent.com/antoinecarme/pyaf/master/data/ozone-la-exogenous-2.csv"
exog_dataframe = pd.read_csv(csvfile_link);
exog_dataframe['Date'] = exog_dataframe['Date'].astype(np.datetime64);
print(exog_dataframe.info())
exog_dataframe.head()
Explanation: Exogenous Variables with PyAF
PyAF allows using some external sources to improve its forecasts.
In addition to the training dataset, the user can provide an external table with exogenous variables data. This table is merged with the signal dataset when exogenous variables values are needed (either when training the model or when producing forecasts).
The signal and exogenous variables can come from the same table (self-join).
Exogenous variables Modeling
All PyAF models are of the form of a linear decomposition (Trend + Periodic + AR). The exogenous variables are introduced in the AR component through their past values.
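Schematically, the forecast is $\widehat{Signal}(t) = Trend(t) + Periodic(t) + AR(t)$, where the AR term regresses on lagged values of the signal and of the (encoded) exogenous variables.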
Before working with exogenous variables, they first need to be transformed into a numerical form (encoded). PyAF uses standard encoding procedures. Non-numerical exogenous variables are dummified (a binary column is created for each distinct column value) and numerical columns are standardized (Y = (X-m)/s, where m is the mean and s is the standard deviation)
Example with Ozone dataset
For demonstration purposes, we transform the Los Angeles ozone dataset so that it contains some exogenous variables. In a real case, these variables can provide some information on Los Angeles (population, temperature, ...) over the same period.
Here, 4 variables have been created artificially.
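As a rough illustration of the encoding described above (a minimal pandas sketch of the idea, not PyAF's internal code), using the exog_dataframe loaded earlier:
# Dummify the categorical exogenous variables: one binary column per distinct value
encoded = pd.get_dummies(exog_dataframe, columns=['Exog4', 'Exog5'])
# Standardize the numerical exogenous variables: Y = (X - m) / s
for col in ['Exog2', 'Exog3']:
    encoded[col] = (encoded[col] - encoded[col].mean()) / encoded[col].std()
encoded.head()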
End of explanation
encoded_csvfile_link = "https://raw.githubusercontent.com/antoinecarme/pyaf/master/data/ozone_exogenous_encoded.csv"
encoded_ozone_dataframe = pd.read_csv(encoded_csvfile_link);
# print(encoded_ozone_dataframe.columns)
interesting_Columns = ['Date',
'Ozone', 'Exog2',
'Exog2',
'Exog3',
'Exog4=E', 'Exog4=F', 'Exog4=C', 'Exog4=D', 'Exog4=B',
'Exog5=K', 'Exog5=L', 'Exog5=M', 'Exog5=N'];
encoded_ozone_dataframe = encoded_ozone_dataframe[interesting_Columns]
print(encoded_ozone_dataframe.info())
encoded_ozone_dataframe.head()
Explanation: This table contains, for each 'Date' value, two numerical exogenous variables ('Exog2' and 'Exog3') and two character (object) variables ('Exog4' and 'Exog5')
This is what the encoded dataset looks like internally:
End of explanation
import pyaf.ForecastEngine as autof
lEngine = autof.cForecastEngine()
csvfile_link = "https://raw.githubusercontent.com/antoinecarme/TimeSeriesData/master/ozone-la.csv"
ozone_dataframe = pd.read_csv(csvfile_link);
ozone_dataframe['Date'] = ozone_dataframe['Month'].apply(lambda x : datetime.datetime.strptime(x, "%Y-%m"))
ozone_dataframe.info()
lExogenousData = (exog_dataframe , ['Exog2' , 'Exog3' , 'Exog4', 'Exog5'])
lEngine.train(ozone_dataframe , 'Date' , 'Ozone', 12 , lExogenousData);
lEngine.getModelInfo()
Explanation: Training process
The only addition in this case is an extra parameter for the train function that gives the name of the exogenous table and the list of exogenous variables to be used
End of explanation
lEngine_Without_Exogenous = autof.cForecastEngine()
lEngine_Without_Exogenous.train(ozone_dataframe , 'Date' , 'Ozone', 12);
lEngine_Without_Exogenous.getModelInfo()
ozone_forecast_without_exog = lEngine_Without_Exogenous.forecast(ozone_dataframe, 12);
ozone_forecast_with_exog = lEngine.forecast(ozone_dataframe, 12);
%matplotlib inline
ozone_forecast_without_exog.plot.line('Date', ['Ozone' , 'Ozone_Forecast',
'Ozone_Forecast_Lower_Bound',
'Ozone_Forecast_Upper_Bound'], grid = True, figsize=(12, 8))
ozone_forecast_with_exog.plot.line('Date', ['Ozone' , 'Ozone_Forecast',
'Ozone_Forecast_Lower_Bound',
'Ozone_Forecast_Upper_Bound'], grid = True, figsize=(12, 8))
Explanation: In this specific model, the ARX component shows that the most important predictors are (in this order):
1. The previous value of Exog2 (Exog2_Lag1)
2. The value of Exog3 two months ago (Exog3_Lag2)
3. The fact that the previous value of Exog4 is 'F' or not (Exog4=F_Lag1).
4. The value of Exog2 two months ago (Exog2_Lag2)
5. etc ...
The effect of introducing the exogenous variables.
End of explanation |
7,195 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuing from the previous blog post, this notebook will explain how to deploy a Bokeh server on Heroku, allowing the world to access the brilliant data visualizations that you've developed using Bokeh. Note- this tutorial was written in May 2017. If you're reading at a much later date, you may need to update the Python and package versions below (and it's possible that Heroku may not support these versions anymore).
The example from this post can be found running here.
To begin, install the Heroku CLI using the instructions found here. The documentation is very helpful, as well.
Open a terminal on your computer and login to Heroku using the command
Step1: Next, the Python modules which are required to run the Bokeh server must be specified in a file called "requirements.txt", again case sensitive. These will mainly include any imports in the "linear_example.py" code, but I like to specify any dependencies of the imports as well to ensure that the specific version I request is loaded. The release number of the module to load can be specified by "==", or ">=" if "at least" that version is required.
Note that these are likely not the latest release number of these modules, but these are the versions that this particular example was tested on.
Step2: Finally, a file called "Procfile" tells Heroku how to run our files as a web application. It's essential that the "P" in "Procfile" be capitalized, and that the file does not have an extension. These are two common errors when defining a Procfile.
Our Procfile will contain the following code | Python Code:
python-3.6.1
Explanation: Continuing from the previous blog post, this notebook will explain how to deploy a Bokeh server on Heroku, allowing the world to access the brilliant data visualizations that you've developed using Bokeh. Note- this tutorial was written in May 2017. If you're reading at a much later date, you may need to update the Python and package versions below (and it's possible that Heroku may not support these versions anymore).
The example from this post can be found running here.
To begin, install the Heroku CLI using the instructions found here. The documentation is very helpful, as well.
Open a terminal on your computer and login to Heroku using the command:
heroku login
To create a new Heroku app, use the command:
heroku create appname
Where the appname is the name of your new app. Keep note of what appname you choose, because you'll be using it later in this example for the "--host" parameter.
Now, let's prepare the files and folder structure that will be pushed to Heroku. You'll need "linear_example.py" from the previous post, which can be downloaded $HERE. Make sure it's named exactly "linear_example.py", or the following steps will not work.
We need to define a "runtime.txt" file, which specifies to Heroku which version of Python we want to run. This file must be named exactly "runtime.txt", case sensitive. It should contain the following line:
End of explanation
bokeh==0.12.4
Jinja2==2.9.5
MarkupSafe==0.23
numpy==1.12.0
pandas==0.19.2
PyYAML==3.12
requests==2.13.0
scikit-learn==0.18.1
scipy==0.18.1
tornado==4.4.2
Explanation: Next, the Python modules which are required to run the Bokeh server must be specified in a file called "requirements.txt", again case sensitive. These will mainly include any imports in the "linear_example.py" code, but I like to specify any dependencies of the imports as well to ensure that the specific version I request is loaded. The release number of the module to load can be specified by "==", or ">=" if "at least" that version is required.
Note that these are likely not the latest release number of these modules, but these are the versions that this particular example was tested on.
End of explanation
web: bokeh serve --port=$PORT --num-procs=0 --host=linregex.herokuapp.com --address=0.0.0.0 --use-xheaders linear_example.py
Explanation: Finally, a file called "Procfile" tells Heroku how to run our files as a web application. It's essential that the "P" in "Procfile" be capitalized, and that the file does not have an extension. These are two common errors when defining a Procfile.
Our Procfile will contain the following code:
End of explanation |
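With runtime.txt, requirements.txt, Procfile and linear_example.py in place, the remaining step is the usual Heroku git deployment. A sketch of the standard commands (adjust to your own setup; the heroku remote is normally added by "heroku create"):
git init
git add linear_example.py runtime.txt requirements.txt Procfile
git commit -m "Deploy Bokeh app"
git push heroku master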
7,196 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examining racial discrimination in the US job market
Background
Racial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés black-sounding or white-sounding names and observing the impact on requests for interviews from employers.
Data
In the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.
Note that the 'b' and 'w' values in race are assigned randomly to the resumes.
Exercise
You will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.
Answer the following questions in this notebook below and submit to your Github account.
What test is appropriate for this problem? Does CLT apply?
What are the null and alternate hypotheses?
Compute margin of error, confidence interval, and p-value.
Discuss statistical significance.
You can include written notes in notebook cells using Markdown
Step1: Does the central limit theorem apply? What statistical test is appropriate?
CLT applies (in its most common form) for a sufficiently large set of identically distributed independent random variables. Each of these data points is independent and drawn from the same probability distribution (we assume). If the resumes were sent out to a representative collection of potential employers then this shouldn't be an issue.
As well, we see from our results table that for both categories $np >10$ and $n(1-p) >10$.
We note, however, that unlike the previous project, there are only two binary states for two types of data points, making for four categories in total. Thus, an appropriate statistical test would be Pearson's $\chi^{2}$ test. It is a statistical test which applies to sets of categorical data like we have here. The CLT tells us that for large sample sizes the distribution will tend toward a multivariate normal distribution.
The alternative would be to directly compare the callback rate between the two groups, which would be a two-tailed z-test of the difference of two proportions. Both of these methods were covered in the Khan Academy links provided by Springboard.
What are the null and alternate hypotheses?
The null hypothesis is that what race a name sounds like should not have any predictive effect on the rate of callbacks. The alternate hypothesis is that there will be a statistical difference between the two groups. Since the number of black and white names was the same, and the names were randomly assigned to identical resumes (thereby removing the potential real-world biases of different education, experience levels or other advantages/disadvantages), we expect to see the same number of total callbacks under the null hypothesis. Clearly this is not the case, as we see white names have a $9.65\%$ callback rate in contrast to black names having $6.45\%$ callbacks.
Compute margin of error, confidence interval, and p-value
Step2: We obtain $\chi^2 = 16.9$ with a p-value of $3.98\times10^{-5}$. This is highly significant, well below standard thresholds. We can conclude that it is very likely that perceived race of name plays a role in the rate of callbacks.
Step3: Thus we are confident that there is a 95% chance that the true difference in callback rate for black and white names is within this range.
statistical significance
For our p-value test, we found that we would expect this result from random chance less than 4 times in 100000. Our confidence is thus very high that the perceived effect is not due to random chance.
As for our confidence interval, we have calculated the range by setting our error rate at 5%. So in fewer than 5% of cases would random chance lead to a difference in proportions greater than the margin of error (the half-width of the interval above).
Step4: We do note that there is not an even split by sex, however. While female resumes have a higher callback rate than male resumes among both groups, it is the black-sounding resumes that had more feminine names, so this should work to shrink the difference.
It is possible that some of the difference in callback rates is attributed to the sex of a name as well as its perceived race. Using the pool of names, one should check if the white name database housed more gender-neutral names for the resume. Some of the statistical significance in the difference between the two groups may therefore be attributed to a different form of bias on the part of potential employers. When looking through the paper referenced in the link given at the start of the notebook, however, we see that the first names do not seem gender-neutral. The possible exception is 'Brett' as a white masculine name, which can also be used as a feminine name, but has fallen out of favour in modern times.
So, by doing a bit of separate legwork, we are confident that our hypothesis's statistical significance is not undermined by the sex split. | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
from IPython.core.display import HTML
css = open('style-table.css').read() + open('style-notebook.css').read()
HTML('<style>{}</style>'.format(css))
data = pd.io.stata.read_stata('data/us_job_market_discrimination.dta')
data.head()
# number of callbacks for black-sounding names
sum(data[data.race=='b'].call)
#number of callbacks for white-sounding names
sum(data[data.race=='w'].call)
black = data[data.race=='b']
white = data[data.race=='w']
len(black)
len(white)
black_called = len(black[black['call']==True])
black_notcalled = len(black[black['call']== False])
white_called = len(white[white['call']==True])
white_notcalled = len(white[white['call']== False])
#probability of a white sounding name getting call back
prob_white_called = white_called/len(white)
prob_white_called
prob_black_called = black_called/len(black)
prob_black_called
#probablity of a resume gettting a callback
prob_called = sum(data.call)/len(data)
prob_called
results = pd.DataFrame({'black':{'called':black_called,'not_called':black_notcalled},
'white':{'called':white_called,'not_called':white_notcalled}})
results
Explanation: Examining racial discrimination in the US job market
Background
Racial discrimination continues to be pervasive in cultures throughout the world. Researchers examined the level of racial discrimination in the United States labor market by randomly assigning identical résumés black-sounding or white-sounding names and observing the impact on requests for interviews from employers.
Data
In the dataset provided, each row represents a resume. The 'race' column has two values, 'b' and 'w', indicating black-sounding and white-sounding. The column 'call' has two values, 1 and 0, indicating whether the resume received a call from employers or not.
Note that the 'b' and 'w' values in race are assigned randomly to the resumes.
Exercise
You will perform a statistical analysis to establish whether race has a significant impact on the rate of callbacks for resumes.
Answer the following questions in this notebook below and submit to your Github account.
What test is appropriate for this problem? Does CLT apply?
What are the null and alternate hypotheses?
Compute margin of error, confidence interval, and p-value.
Discuss statistical significance.
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
Resources
Experiment information and data source: http://www.povertyactionlab.org/evaluation/discrimination-job-market-united-states
Scipy statistical methods: http://docs.scipy.org/doc/scipy/reference/stats.html
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
End of explanation
#get expected proportions of each group
total_called = sum(data.call)
total_notcalled = len(data) - sum(data.call)
total_called/2
#Use a chisquare test with 1 degree of freedom since (#col-1)*(#rows-1) = 1.
result_freq = [black_called, white_called, black_notcalled, white_notcalled]
expected_freq = [total_called/2,total_called/2,total_notcalled/2,total_notcalled/2]
stats.chisquare(f_obs=result_freq, f_exp = expected_freq, ddof=2)
Explanation: Does the central limit theorem apply? What statistical test is appropriate?
CLT applies (in its most common form) for a sufficiently large set of identically distributed independent random variables. Each of these data points is independent and drawn from the same probability distribution (we assume). If the resumes were sent out to a representative collection of potential employers then this shouldn't be an issue.
As well, we see from our results table that for both categories $np >10$ and $n(1-p) >10$.
We note, however, that unlike the previous project, there are only two binary states for two types of data points, making for four categories in total. Thus, an appropriate statistical test would be Pearson's $\chi^{2}$ test. It is a statistical test which applies to sets of categorical data like we have here. The CLT tells us that for large sample sizes the distribution will tend toward a multivariate normal distribution.
The alternative would be to directly compare the callback rate between the two groups, which would be a two-tailed z-test of the difference of two proportions. Both of these methods were covered in the Khan Academy links provided by Springboard.
What are the null and alternate hypotheses?
The null hypothesis is that what race a name sounds like should not have any predictive effect on the rate of callbacks. The alternate hypothesis is that there will be a statistical difference between the two groups. Since the number of black and white names was the same, and the names were randomly assigned to identical resumes (thereby removing the potential real-world biases of different education, experience levels or other advantages/disadvantages), we expect to see the same number of total callbacks under the null hypothesis. Clearly this is not the case, as we see white names have a $9.65\%$ callback rate in contrast to black names having $6.45\%$ callbacks.
Compute margin of error, confidence interval, and p-value
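For reference, the pooled two-proportion statistic computed in the next cells is
$$z = \frac{\hat{p}_w - \hat{p}_b}{\sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_w} + \frac{1}{n_b}\right)}}$$
where $\hat{p}$ is the pooled callback rate across both groups.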
End of explanation
#calculate standard error, using pooled data.
stderr = np.sqrt(prob_called*(1-prob_called)*(1/len(black)+1/len(white)))
print(stderr)
#get z score
z_score = (prob_white_called - prob_black_called) / stderr
z_score
#Can also compute a p-value for the difference in proportions directly,
#apart from that obtained from our chi-squared test. Note that this equivalent, gives same result.
#Use norm.sf since it's a positive z value. (more accurate than 1-.cdf)
p_value2 = stats.norm.sf(z_score)*2
p_value2
#Now a 95 percent confidence interval, two tailed corresponds to
z_critical = stats.norm.ppf(.975)
z_critical
std_err_unpooled = np.sqrt(prob_black_called *(1-prob_black_called)/len(black)+
prob_white_called*(1-prob_white_called)/len(white))
conf_interval = [prob_white_called-prob_black_called - z_critical*stderr,
prob_white_called-prob_black_called + z_critical*stderr ]
conf_interval
Explanation: We obtain $\chi^2 = 16.9$ with a p-value of $3.98\times10^{-5}$. This is highly significant, well below standard thresholds. We can conclude that it is very likely that perceived race of name plays a role in the rate of callbacks.
End of explanation
len(white[white['sex']=='f'])
len(white[white['sex']=='m'])
len(black[black['sex']=='f'])
len(black[black['sex']=='m'])
print(sum(white[white['sex']=='f'].call)/len(white[white['sex']=='f']))
print(sum(white[white['sex']=='m'].call)/len(white[white['sex']=='m']))
print(sum(black[black['sex']=='f'].call)/len(black[black['sex']=='f']))
print(sum(black[black['sex']=='m'].call)/len(black[black['sex']=='m']))
Explanation: Thus we are confident that there is a 95% chance that the true difference in callback rate for black and white names is within this range.
statistical significance
For our p-value test, we found that we would expect this result from random chance less than 4 times in 100000. Our confidence is thus very high that the perceived effect is not due to random chance.
As for our confidence interval, we have calculated the range by setting our error rate at 5%. So in fewer than 5% of cases would random chance lead to a difference in proportions greater than the margin of error (the half-width of the interval above).
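In symbols, that margin of error is $\mathrm{ME} = z_{0.975} \times \mathrm{SE} \approx 1.96 \times \mathrm{SE}$, with SE the pooled standard error computed earlier.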
End of explanation
white_called = sum(data[data.race=='w'].call)
black_called = sum(data[data.race=='b'].call)
black_nocall = sum(data[data.race=='b'].call == 0)
Explanation: We do note that there is not an even split by sex, however. While female resumes have a higher callback rate than male resumes among both groups, it is the black-sounding resumes that had more feminine names, so this should work to shrink the difference.
It is possible that some of the difference in callback rates is attributed to the sex of a name as well as its perceived race. Using the pool of names, one should check if the white name database housed more gender-neutral names for the resume. Some of the statistical significance in the difference between the two groups may therefore be attributed to a different form of bias on the part of potential employers. When looking through the paper referenced in the link given at the start of the notebook, however, we see that the first names do not seem gender-neutral. The possible exception is 'Brett' as a white masculine name, which can also be used as a feminine name, but has fallen out of favour in modern times.
So, by doing a bit of separate legwork, we are confident that our hypothesis's statistical significance is not undermined by the sex split.
End of explanation |
7,197 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load up some example data. This is a little 28 residue peptide
Step1: md.baker_hubbard identifies hydrogen bonds based on cutoffs
for the Donor-H...Acceptor distance and angle. The criterion employed
is $\theta > 120$ and $r_\text{H...Acceptor} < 2.5 A$ in
at least 10% of the trajectory. The return value is a list of the
indices of the atoms (donor, h, acceptor) that satisfy these criteria.
Step2: Let's compute the actual distances between the donors and acceptors
Step3: And plot a histogram for a few of them | Python Code:
# Imports required by the snippets below (not shown in the original excerpt)
import itertools
import matplotlib.pyplot as plt
import mdtraj as md

t = md.load_pdb('http://www.rcsb.org/pdb/files/2EQQ.pdb')
print(t)
Explanation: Load up some example data. This is a little 28 residue peptide
End of explanation
hbonds = md.baker_hubbard(t, periodic=False)
label = lambda hbond : '%s -- %s' % (t.topology.atom(hbond[0]), t.topology.atom(hbond[2]))
for hbond in hbonds:
print(label(hbond))
Explanation: md.baker_hubbard identifies hydrogen bonds based on cutoffs
for the Donor-H...Acceptor distance and angle. The criterion employed
is $\theta > 120$ and $r_\text{H...Acceptor} < 2.5 A$ in
at least 10% of the trajectory. The return value is a list of the
indices of the atoms (donor, h, acceptor) that satisfy these criteria.
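As a side note, that 10% threshold corresponds to baker_hubbard's freq argument (default 0.1), so the call above should be equivalent to:
hbonds = md.baker_hubbard(t, freq=0.1, periodic=False)  # freq = fraction of frames in which the bond must be present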
End of explanation
da_distances = md.compute_distances(t, hbonds[:, [0,2]], periodic=False)
Explanation: Let's compute the actual distances between the donors and acceptors
End of explanation
color = itertools.cycle(['r', 'b', 'gold'])
for i in [2, 3, 4]:
plt.hist(da_distances[:, i], color=next(color), label=label(hbonds[i]), alpha=0.5)
plt.legend()
plt.ylabel('Freq');
plt.xlabel('Donor-acceptor distance [nm]')
Explanation: And plot a histogram for a few of them
End of explanation |
7,198 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summarizing Images
Images are high dimensional objects
Step1: How Many Photons Came From the Cluster?
Let's estimate the total counts due to the cluster.
That means we need to somehow ignore
all the other objects in the field
the diffuse X-ray "background"
Let's start by masking various regions of the image to separate cluster from background.
Step2: Estimating the background
Now let's look at the outer parts of the image, far from the cluster, and estimate the background level there.
Step3: Let's look at the mean and median of the pixels in this image that have non-negative values.
Step4: Exercise
Step5: Exercise
Step6: Now we can make our estimates | Python Code:
import astropy.io.fits as pyfits
import numpy as np
import astropy.visualization as viz
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
targdir = 'a1835_xmm/'
imagefile = targdir+'P0098010101M2U009IMAGE_3000.FTZ'
expmapfile = targdir+'P0098010101M2U009EXPMAP3000.FTZ'
bkgmapfile = targdir+'P0098010101M2X000BKGMAP3000.FTZ'
!du -sch $targdir/*
Explanation: Summarizing Images
Images are high dimensional objects: our XMM image contains 648*648 = 419,904 datapoints (the pixel values).
Visualizing the data is an extremely important first step: the next is summarizing, which can be thought of as dimensionality reduction.
Let's dust off some standard statistics and put them to good use in summarizing this X-ray image.
End of explanation
imfits = pyfits.open(imagefile)
im = imfits[0].data
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
Explanation: How Many Photons Came From the Cluster?
Let's estimate the total counts due to the cluster.
That means we need to somehow ignore
all the other objects in the field
the diffuse X-ray "background"
Let's start by masking various regions of the image to separate cluster from background.
End of explanation
maskedimage = im.copy()
# First make some coordinate arrays, including polar r from the cluster center:
(ny,nx) = maskedimage.shape
centroid = np.where(maskedimage == np.max(maskedimage))
x = np.linspace(0, nx-1, nx)
y = np.linspace(0, ny-1, ny)
dx, dy = np.meshgrid(x,y)
dx = dx - centroid[1]
dy = dy - centroid[0]
r = np.sqrt(dx*dx + dy*dy)
# Now select an outer annulus, for the background and an inner circle, for the cluster:
background = maskedimage.copy()
background[r < 100] = -3
background[r > 150] = -3
signal = maskedimage.copy()
signal[r > 100] = 0.0
plt.imshow(viz.scale_image(background, scale='log', max_cut=40), cmap='gray', origin='lower')
Explanation: Estimating the background
Now let's look at the outer parts of the image, far from the cluster, and estimate the background level there.
End of explanation
meanbackground = np.mean(background[background > -1])
medianbackground = np.median(background[background > -1])
print "Mean background counts per pixel = ",meanbackground
print "Median background counts per pixel = ",medianbackground
Explanation: Let's look at the mean and median of the pixels in this image that have non-negative values.
End of explanation
plt.figure(figsize=(10,7))
n, bins, patches = plt.hist(background[background > -1], bins=np.linspace(-3.5,29.5,34))
# plt.yscale('log', nonposy='clip')
plt.xlabel('Background annulus pixel value (counts)')
plt.ylabel('Frequency')
plt.axis([-3.0, 30.0, 0, 40000])
plt.grid(True)
plt.show()
stdevbackground = np.std(background[background > -1])
print "Standard deviation: ",stdevbackground
Explanation: Exercise:
Why do you think there is a difference? Talk to your neighbor for a minute, and be ready to suggest an answer.
To understand the difference in these two estimates, lets look at a pixel histogram for this annulus.
End of explanation
plt.imshow(viz.scale_image(signal, scale='log', max_cut=40), cmap='gray', origin='lower')
plt.figure(figsize=(10,7))
n, bins, patches = plt.hist(signal[signal > -1], bins=np.linspace(-3.5,29.5,34), color='red')
plt.yscale('log', nonposy='clip')
plt.xlabel('Signal region pixel value (counts)')
plt.ylabel('Frequency')
plt.axis([-3.0, 30.0, 0, 500000])
plt.grid(True)
plt.show()
Explanation: Exercise:
"The background level in this image is approximately $0.09 \pm 0.66$ counts"
What's wrong with this statement?
Talk to your neighbor for a few minutes, and see if you can come up with a better version.
Estimating the Cluster Counts
Now let's summarize the circular region centered on the cluster.
End of explanation
# Total counts in signal region:
Ntotal = np.sum(signal[signal > -1])
# Background counts: mean in annulus, multiplied by number of pixels in signal region:
N = signal.copy()*0.0
N[signal > -1] = 1.0
Nbackground = np.sum(N)*meanbackground # Is this a good choice?
# Difference is the cluster counts:
Ncluster = Ntotal - Nbackground
print "Counts in signal region: ",Ntotal
print "Approximate counts due to background: ",Nbackground
print "Approximate counts due to cluster: ",Ncluster
Explanation: Now we can make our estimates:
End of explanation |
7,199 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detecting Twitter Bots
Step1: Exploratory Data Analysis
Identifying Missingness in the data
Step2: Identifying Imbalance in the data
Step3: Feature Independence using Spearman correlation
Step4: Result
Step5: Performing Feature Extraction
Step6: Implementing Different Models
Decision Tree Classifier
Step7: Result
Step8: Result
Step9: Our Classifier
Step10: ROC Comparison after tuning the baseline model | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['patch.force_edgecolor'] = True
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
filepath = 'https://raw.githubusercontent.com/jubins/ML-TwitterBotDetection/master/FinalProjectAndCode/kaggle_data/'
file= filepath+'training_data_2_csv_UTF.csv'
training_data = pd.read_csv(file)
bots = training_data[training_data.bot==1]
nonbots = training_data[training_data.bot==0]
Explanation: Detecting Twitter Bots
End of explanation
def get_heatmap(df):
#This function gives heatmap of all NaN values
plt.figure(figsize=(10,6))
sns.heatmap(df.isnull(), yticklabels=False, cbar=False, cmap='viridis')
plt.tight_layout()
return plt.show()
get_heatmap(training_data)
bots.friends_count/bots.followers_count
plt.figure(figsize=(10,5))
plt.subplot(2,1,1)
plt.title('Bots Friends vs Followers')
sns.regplot(bots.friends_count, bots.followers_count, color='red', label='Bots')
plt.xlim(0, 100)
plt.ylim(0, 100)
plt.tight_layout()
plt.subplot(2,1,2)
plt.title('NonBots Friends vs Followers')
sns.regplot(nonbots.friends_count, nonbots.followers_count, color='blue', label='NonBots')
plt.xlim(0, 100)
plt.ylim(0, 100)
plt.tight_layout()
plt.show()
Explanation: Exploratory Data Analysis
Identifying Missingness in the data
End of explanation
bots['friends_by_followers'] = bots.friends_count/bots.followers_count
bots[bots.friends_by_followers<1].shape
nonbots['friends_by_followers'] = nonbots.friends_count/nonbots.followers_count
nonbots[nonbots.friends_by_followers<1].shape
plt.figure(figsize=(10,5))
plt.plot(bots.listed_count, color='red', label='Bots')
plt.plot(nonbots.listed_count, color='blue', label='NonBots')
plt.legend(loc='upper left')
plt.ylim(10000,20000)
print(bots[(bots.listed_count<5)].shape)
bots_listed_count_df = bots[bots.listed_count<16000]
nonbots_listed_count_df = nonbots[nonbots.listed_count<16000]
bots_verified_df = bots_listed_count_df[bots_listed_count_df.verified==False]
bots_screenname_has_bot_df_ = bots_verified_df[(bots_verified_df.screen_name.str.contains("bot", case=False)==True)].shape
plt.figure(figsize=(12,7))
plt.subplot(2,1,1)
plt.plot(bots_listed_count_df.friends_count, color='red', label='Bots Friends')
plt.plot(nonbots_listed_count_df.friends_count, color='blue', label='NonBots Friends')
plt.legend(loc='upper left')
plt.subplot(2,1,2)
plt.plot(bots_listed_count_df.followers_count, color='red', label='Bots Followers')
plt.plot(nonbots_listed_count_df.followers_count, color='blue', label='NonBots Followers')
plt.legend(loc='upper left')
#bots[bots.listedcount>10000]
condition = (bots.screen_name.str.contains("bot", case=False)==True)|(bots.description.str.contains("bot", case=False)==True)|(bots.location.isnull())|(bots.verified==False)
bots['screen_name_binary'] = (bots.screen_name.str.contains("bot", case=False)==True)
bots['location_binary'] = (bots.location.isnull())
bots['verified_binary'] = (bots.verified==False)
bots.shape
condition = (nonbots.screen_name.str.contains("bot", case=False)==False)| (nonbots.description.str.contains("bot", case=False)==False) |(nonbots.location.isnull()==False)|(nonbots.verified==True)
nonbots['screen_name_binary'] = (nonbots.screen_name.str.contains("bot", case=False)==False)
nonbots['location_binary'] = (nonbots.location.isnull()==False)
nonbots['verified_binary'] = (nonbots.verified==True)
nonbots.shape
df = pd.concat([bots, nonbots])
df.shape
Explanation: Identifying Imbalance in the data
End of explanation
df.corr(method='spearman')
plt.figure(figsize=(8,4))
sns.heatmap(df.corr(method='spearman'), cmap='coolwarm', annot=True)
plt.tight_layout()
plt.show()
Explanation: Feature Independence using Spearman correlation
End of explanation
#filepath = 'https://raw.githubusercontent.com/jubins/ML-TwitterBotDetection/master/FinalCode/kaggle_data/'
filepath = 'C:/Users/jubin/Documents/GitHub/ML-TwitterBotDetection/FinalProjectAndCode/kaggle_data/'
file= open(filepath+'training_data_2_csv_UTF.csv', mode='r', encoding='utf-8', errors='ignore')
training_data = pd.read_csv(file)
bag_of_words_bot = r'bot|b0t|cannabis|tweet me|mishear|follow me|updates every|gorilla|yes_ofc|forget|' \
                   r'expos|kill|clit|bbb|butt|fuck|XXX|sex|truthe|fake|anony|free|virus|funky|RNA|kuck|jargon|' \
                   r'nerd|swag|jack|bang|bonsai|chick|prison|paper|pokem|xx|freak|ffd|dunia|clone|genie|bbb|' \
                   r'ffd|onlyman|emoji|joke|troll|droop|free|every|wow|cheese|yeah|bio|magic|wizard|face'
training_data['screen_name_binary'] = training_data.screen_name.str.contains(bag_of_words_bot, case=False, na=False)
training_data['name_binary'] = training_data.name.str.contains(bag_of_words_bot, case=False, na=False)
training_data['description_binary'] = training_data.description.str.contains(bag_of_words_bot, case=False, na=False)
training_data['status_binary'] = training_data.status.str.contains(bag_of_words_bot, case=False, na=False)
Explanation: Result:
- There is no correlation between id, statuses_count, default_profile, default_profile_image and target variable.
- There is strong correlation between verified, listed_count, friends_count, followers_count and target variable.
- We cannot perform correlation for categorical attributes. So we will take screen_name, name, description, status into feature engineering, while we use verified, listed_count for feature extraction.
Performing Feature Engineering
End of explanation
training_data['listed_count_binary'] = (training_data.listed_count>20000)==False
features = ['screen_name_binary', 'name_binary', 'description_binary', 'status_binary', 'verified', 'followers_count', 'friends_count', 'statuses_count', 'listed_count_binary', 'bot']
Explanation: Performing Feature Extraction
End of explanation
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_curve, auc
from sklearn.model_selection import train_test_split
X = training_data[features].iloc[:,:-1]
y = training_data[features].iloc[:,-1]
dt = DecisionTreeClassifier(criterion='entropy', min_samples_leaf=50, min_samples_split=10)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
dt = dt.fit(X_train, y_train)
y_pred_train = dt.predict(X_train)
y_pred_test = dt.predict(X_test)
print("Trainig Accuracy: %.5f" %accuracy_score(y_train, y_pred_train))
print("Test Accuracy: %.5f" %accuracy_score(y_test, y_pred_test))
sns.set(font_scale=1.5)
sns.set_style("whitegrid", {'axes.grid' : False})
scores_train = dt.predict_proba(X_train)
scores_test = dt.predict_proba(X_test)
y_scores_train = []
y_scores_test = []
for i in range(len(scores_train)):
y_scores_train.append(scores_train[i][1])
for i in range(len(scores_test)):
y_scores_test.append(scores_test[i][1])
fpr_dt_train, tpr_dt_train, _ = roc_curve(y_train, y_scores_train, pos_label=1)
fpr_dt_test, tpr_dt_test, _ = roc_curve(y_test, y_scores_test, pos_label=1)
plt.plot(fpr_dt_train, tpr_dt_train, color='darkblue', label='Train AUC: %5f' %auc(fpr_dt_train, tpr_dt_train))
plt.plot(fpr_dt_test, tpr_dt_test, color='red', ls='--', label='Test AUC: %5f' %auc(fpr_dt_test, tpr_dt_test))
plt.title("Decision Tree ROC Curve")
plt.xlabel("False Positive Rate (FPR)")
plt.ylabel("True Positive Rate (TPR)")
plt.legend(loc='lower right')
Explanation: Implementing Different Models
Decision Tree Classifier
End of explanation
from sklearn.naive_bayes import MultinomialNB
X = training_data[features].iloc[:,:-1]
y = training_data[features].iloc[:,-1]
mnb = MultinomialNB(alpha=0.0009)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
mnb = mnb.fit(X_train, y_train)
y_pred_train = mnb.predict(X_train)
y_pred_test = mnb.predict(X_test)
print("Trainig Accuracy: %.5f" %accuracy_score(y_train, y_pred_train))
print("Test Accuracy: %.5f" %accuracy_score(y_test, y_pred_test))
sns.set_style("whitegrid", {'axes.grid' : False})
scores_train = mnb.predict_proba(X_train)
scores_test = mnb.predict_proba(X_test)
y_scores_train = []
y_scores_test = []
for i in range(len(scores_train)):
y_scores_train.append(scores_train[i][1])
for i in range(len(scores_test)):
y_scores_test.append(scores_test[i][1])
fpr_mnb_train, tpr_mnb_train, _ = roc_curve(y_train, y_scores_train, pos_label=1)
fpr_mnb_test, tpr_mnb_test, _ = roc_curve(y_test, y_scores_test, pos_label=1)
plt.plot(fpr_mnb_train, tpr_mnb_train, color='darkblue', label='Train AUC: %5f' %auc(fpr_mnb_train, tpr_mnb_train))
plt.plot(fpr_mnb_test, tpr_mnb_test, color='red', ls='--', label='Test AUC: %5f' %auc(fpr_mnb_test, tpr_mnb_test))
plt.title("Multinomial NB ROC Curve")
plt.xlabel("False Positive Rate (FPR)")
plt.ylabel("True Positive Rate (TPR)")
plt.legend(loc='lower right')
Explanation: Result: Decision Tree gives very good performance and generalizes well. But it may be overfitting as AUC is 0.937, so we will try other models.
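As a quick, optional check on that overfitting concern (not part of the original analysis), k-fold cross-validation on the same features could be run as:
from sklearn.model_selection import cross_val_score
cv_auc = cross_val_score(dt, X, y, cv=5, scoring='roc_auc')  # 5-fold AUC for the tuned decision tree
print("5-fold CV AUC: %.3f +/- %.3f" % (cv_auc.mean(), cv_auc.std()))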
Multinomial Naive Bayes Classifier
End of explanation
from sklearn.ensemble import RandomForestClassifier
X = training_data[features].iloc[:,:-1]
y = training_data[features].iloc[:,-1]
rf = RandomForestClassifier(criterion='entropy', min_samples_leaf=100, min_samples_split=20)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)
rf = rf.fit(X_train, y_train)
y_pred_train = rf.predict(X_train)
y_pred_test = rf.predict(X_test)
print("Trainig Accuracy: %.5f" %accuracy_score(y_train, y_pred_train))
print("Test Accuracy: %.5f" %accuracy_score(y_test, y_pred_test))
sns.set_style("whitegrid", {'axes.grid' : False})
scores_train = rf.predict_proba(X_train)
scores_test = rf.predict_proba(X_test)
y_scores_train = []
y_scores_test = []
for i in range(len(scores_train)):
y_scores_train.append(scores_train[i][1])
for i in range(len(scores_test)):
y_scores_test.append(scores_test[i][1])
fpr_rf_train, tpr_rf_train, _ = roc_curve(y_train, y_scores_train, pos_label=1)
fpr_rf_test, tpr_rf_test, _ = roc_curve(y_test, y_scores_test, pos_label=1)
plt.plot(fpr_rf_train, tpr_rf_train, color='darkblue', label='Train AUC: %5f' %auc(fpr_rf_train, tpr_rf_train))
plt.plot(fpr_rf_test, tpr_rf_test, color='red', ls='--', label='Test AUC: %5f' %auc(fpr_rf_test, tpr_rf_test))
plt.title("Random ForestROC Curve")
plt.xlabel("False Positive Rate (FPR)")
plt.ylabel("True Positive Rate (TPR)")
plt.legend(loc='lower right')
Explanation: Result: Clearly, Multinomial Naive Bayes performs poorly and is not a good choice as the Train AUC is just 0.556 and Test is 0.555.
Random Forest Classifier
End of explanation
# Imports used inside the class below (metrics.* and time.time())
from sklearn import metrics
import time

class twitter_bot(object):
def __init__(self):
pass
def perform_train_test_split(df):
msk = np.random.rand(len(df)) < 0.75
train, test = df[msk], df[~msk]
X_train, y_train = train, train.ix[:,-1]
X_test, y_test = test, test.ix[:, -1]
return (X_train, y_train, X_test, y_test)
def bot_prediction_algorithm(df):
# creating copy of dataframe
train_df = df.copy()
# performing feature engineering on id and verfied columns
# converting id to int
train_df['id'] = train_df.id.apply(lambda x: int(x))
#train_df['friends_count'] = train_df.friends_count.apply(lambda x: int(x))
train_df['followers_count'] = train_df.followers_count.apply(lambda x: 0 if x=='None' else int(x))
train_df['friends_count'] = train_df.friends_count.apply(lambda x: 0 if x=='None' else int(x))
#We created two bag of words because more bow is stringent on test data, so on all small dataset we check less
if train_df.shape[0]>600:
#bag_of_words_for_bot
            bag_of_words_bot = r'bot|b0t|cannabis|tweet me|mishear|follow me|updates every|gorilla|yes_ofc|forget|' \
                               r'expos|kill|clit|bbb|butt|fuck|XXX|sex|truthe|fake|anony|free|virus|funky|RNA|kuck|jargon|' \
                               r'nerd|swag|jack|bang|bonsai|chick|prison|paper|pokem|xx|freak|ffd|dunia|clone|genie|bbb|' \
                               r'ffd|onlyman|emoji|joke|troll|droop|free|every|wow|cheese|yeah|bio|magic|wizard|face'
else:
# bag_of_words_for_bot
bag_of_words_bot = r'bot|b0t|cannabis|mishear|updates every'
# converting verified into vectors
train_df['verified'] = train_df.verified.apply(lambda x: 1 if ((x == True) or x == 'TRUE') else 0)
# check if the name contains bot or screenname contains b0t
condition = ((train_df.name.str.contains(bag_of_words_bot, case=False, na=False)) |
(train_df.description.str.contains(bag_of_words_bot, case=False, na=False)) |
(train_df.screen_name.str.contains(bag_of_words_bot, case=False, na=False)) |
(train_df.status.str.contains(bag_of_words_bot, case=False, na=False))
) # these all are bots
predicted_df = train_df[condition] # these all are bots
predicted_df.bot = 1
predicted_df = predicted_df[['id', 'bot']]
# check if the user is verified
verified_df = train_df[~condition]
condition = (verified_df.verified == 1) # these all are nonbots
predicted_df1 = verified_df[condition][['id', 'bot']]
predicted_df1.bot = 0
predicted_df = pd.concat([predicted_df, predicted_df1])
# check if description contains buzzfeed
buzzfeed_df = verified_df[~condition]
condition = (buzzfeed_df.description.str.contains("buzzfeed", case=False, na=False)) # these all are nonbots
predicted_df1 = buzzfeed_df[buzzfeed_df.description.str.contains("buzzfeed", case=False, na=False)][['id', 'bot']]
predicted_df1.bot = 0
predicted_df = pd.concat([predicted_df, predicted_df1])
# check if listed_count>16000
listed_count_df = buzzfeed_df[~condition]
listed_count_df.listed_count = listed_count_df.listed_count.apply(lambda x: 0 if x == 'None' else x)
listed_count_df.listed_count = listed_count_df.listed_count.apply(lambda x: int(x))
condition = (listed_count_df.listed_count > 16000) # these all are nonbots
predicted_df1 = listed_count_df[condition][['id', 'bot']]
predicted_df1.bot = 0
predicted_df = pd.concat([predicted_df, predicted_df1])
#remaining
predicted_df1 = listed_count_df[~condition][['id', 'bot']]
predicted_df1.bot = 0 # these all are nonbots
predicted_df = pd.concat([predicted_df, predicted_df1])
return predicted_df
def get_predicted_and_true_values(features, target):
y_pred, y_true = twitter_bot.bot_prediction_algorithm(features).bot.tolist(), target.tolist()
return (y_pred, y_true)
def get_accuracy_score(df):
(X_train, y_train, X_test, y_test) = twitter_bot.perform_train_test_split(df)
# predictions on training data
y_pred_train, y_true_train = twitter_bot.get_predicted_and_true_values(X_train, y_train)
train_acc = metrics.accuracy_score(y_pred_train, y_true_train)
#predictions on test data
y_pred_test, y_true_test = twitter_bot.get_predicted_and_true_values(X_test, y_test)
test_acc = metrics.accuracy_score(y_pred_test, y_true_test)
return (train_acc, test_acc)
def plot_roc_curve(df):
(X_train, y_train, X_test, y_test) = twitter_bot.perform_train_test_split(df)
# Train ROC
y_pred_train, y_true = twitter_bot.get_predicted_and_true_values(X_train, y_train)
scores = np.linspace(start=0.01, stop=0.9, num=len(y_true))
fpr_train, tpr_train, threshold = metrics.roc_curve(y_pred_train, scores, pos_label=0)
plt.plot(fpr_train, tpr_train, label='Train AUC: %5f' % metrics.auc(fpr_train, tpr_train), color='darkblue')
#Test ROC
y_pred_test, y_true = twitter_bot.get_predicted_and_true_values(X_test, y_test)
scores = np.linspace(start=0.01, stop=0.9, num=len(y_true))
fpr_test, tpr_test, threshold = metrics.roc_curve(y_pred_test, scores, pos_label=0)
plt.plot(fpr_test,tpr_test, label='Test AUC: %5f' %metrics.auc(fpr_test,tpr_test), ls='--', color='red')
#Misc
plt.xlim([-0.1,1])
plt.title("Reciever Operating Characteristic (ROC)")
plt.xlabel("False Positive Rate (FPR)")
plt.ylabel("True Positive Rate (TPR)")
plt.legend(loc='lower right')
plt.show()
if __name__ == '__main__':
start = time.time()
filepath = 'https://raw.githubusercontent.com/jubins/ML-TwitterBotDetection/master/FinalProjectAndCode/kaggle_data/'
train_df = pd.read_csv(filepath + 'training_data_2_csv_UTF.csv')
test_df = pd.read_csv(filepath + 'test_data_4_students.csv', sep='\t')
print("Train Accuracy: ", twitter_bot.get_accuracy_score(train_df)[0])
print("Test Accuracy: ", twitter_bot.get_accuracy_score(train_df)[1])
#predicting test data results
predicted_df = twitter_bot.bot_prediction_algorithm(test_df)
#plotting the ROC curve
twitter_bot.plot_roc_curve(train_df)
Explanation: Our Classifier
End of explanation
plt.figure(figsize=(14,10))
(X_train, y_train, X_test, y_test) = twitter_bot.perform_train_test_split(df)
#Train ROC
y_pred_train, y_true = twitter_bot.get_predicted_and_true_values(X_train, y_train)
scores = np.linspace(start=0, stop=1, num=len(y_true))
fpr_botc_train, tpr_botc_train, threshold = metrics.roc_curve(y_pred_train, scores, pos_label=0)
#Test ROC
y_pred_test, y_true = twitter_bot.get_predicted_and_true_values(X_test, y_test)
scores = np.linspace(start=0, stop=1, num=len(y_true))
fpr_botc_test, tpr_botc_test, threshold = metrics.roc_curve(y_pred_test, scores, pos_label=0)
#Train ROC
plt.subplot(2,2,1)
plt.plot(fpr_botc_train, tpr_botc_train, label='Our Classifier AUC: %5f' % metrics.auc(fpr_botc_train,tpr_botc_train), color='darkblue')
plt.plot(fpr_rf_train, tpr_rf_train, label='Random Forest AUC: %5f' %auc(fpr_rf_train, tpr_rf_train))
plt.plot(fpr_dt_train, tpr_dt_train, label='Decision Tree AUC: %5f' %auc(fpr_dt_train, tpr_dt_train))
plt.plot(fpr_mnb_train, tpr_mnb_train, label='MultinomialNB AUC: %5f' %auc(fpr_mnb_train, tpr_mnb_train))
plt.title("Training Set ROC Curve")
plt.xlabel("False Positive Rate (FPR)")
plt.ylabel("True Positive Rate (TPR)")
plt.legend(loc='lower right')
#Test ROC
plt.subplot(2,2,2)
plt.plot(fpr_botc_test,tpr_botc_test, label='Our Classifier AUC: %5f' %metrics.auc(fpr_botc_test,tpr_botc_test), color='darkblue')
plt.plot(fpr_rf_test, tpr_rf_test, label='Random Forest AUC: %5f' %auc(fpr_rf_test, tpr_rf_test))
plt.plot(fpr_dt_test, tpr_dt_test, label='Decision Tree AUC: %5f' %auc(fpr_dt_test, tpr_dt_test))
plt.plot(fpr_mnb_test, tpr_mnb_test, label='MultinomialNB AUC: %5f' %auc(fpr_mnb_test, tpr_mnb_test))
plt.title("Test Set ROC Curve")
plt.xlabel("False Positive Rate (FPR)")
plt.ylabel("True Positive Rate (TPR)")
plt.legend(loc='lower right')
plt.tight_layout()
Explanation: ROC Comparison after tuning the baseline model
End of explanation |