Unnamed: 0 | text_prompt | code_prompt |
---|---|---|
3,300 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convection in one dimension
Step1: Parameters
Step2: The Courant number!
Step3: Initial conditions
Step4: One time step
Recall that we want to implement $u_i^{n+1} = u_i^n - \mathrm{Co}/2 (u_{i+1}^n-u_{i-1}^n)$
Step5: Full time loop
Step6: Alternative formulations
What happens if we try the "worse" algorithm? $u_i^{n+1} = u_i^n - \mathrm{Co} (u_{i}^n-u_{i-1}^n)$ | Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
Explanation: Convection in one dimension
End of explanation
L = 1.0 # length of the 1D system
nx = 42 # spatial nodes
dx = L / (nx-2) # yes, we drop two nodes ...
x = np.linspace( 0 , L , num=nx )
T = 0.1 # total time
nt = 100 # time steps
dt = T / nt
c = 1 # wave speed
Explanation: Parameters
End of explanation
Co = c * dt / dx
Co
Explanation: The Courant number!
End of explanation
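A small check added here for clarity (it is not in the original notebook): for explicit schemes like the ones below, the CFL condition Co ≤ 1 is a necessary requirement for stability, so it is worth verifying as soon as Co is computed.
# Added sanity check (illustration only): the CFL condition is necessary for stability.
assert Co <= 1.0, "Courant number too large: reduce dt or use a coarser grid"
print("Co = %.3f" % Co)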
u0 = 1 * np.ones(nx) # all ones
x1 = L/4 ; n1 = int(x1 / dx)
x2 = L/2 ; n2 = int(x2 / dx)
u0[ n1 : n2 ] = 2
plt.plot( x , u0 )
Explanation: Initial conditions
End of explanation
u = u0.copy()
un = u.copy() # current distribution
i = 1
u[i] = un[i] - (Co / 2.0) * (un[i+1] - un[i-1])
for i in range( 2 , nx - 2 ): # Now it is clear why we removed the two end nodes !!
u[i] = un[i] - (Co / 2.0) * (un[i+1] - un[i-1])
plt.plot(x,u)
Explanation: One time step
Recall that we want to implement $u_i^{n+1} = u_i^n - \mathrm{Co}/2 (u_{i+1}^n-u_{i-1}^n)$
End of explanation
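For illustration only (this line is not in the original notebook; u_vec is a hypothetical name chosen so the arrays above are left untouched), the same interior update can be written without the explicit loop:
# Vectorized form of the central-difference update over all interior nodes.
u_vec = un.copy()
u_vec[1:-1] = un[1:-1] - (Co / 2.0) * (un[2:] - un[:-2])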
u = u0.copy()
for n in range(nt):
un = u.copy()
for i in range( 1 , nx - 1 ):
u[i] = un[i] - (Co / 2.0) * (un[i+1] - un[i-1])
plt.plot(x , u , x , u0 , 'r')
Explanation: Full time loop
End of explanation
u = u0.copy()
for n in range(nt):
un = u.copy()
for i in range( 1 , nx ):
u[i] = un[i] - Co * (un[i] - un[i-1])
plt.plot(x , u , x , u0 , 'r')
Explanation: Alternative formulations
What happens if we try the "worse" algorithm? $u_i^{n+1} = u_i^n - \mathrm{Co} (u_{i}^n-u_{i-1}^n)$
End of explanation |
3,301 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initialization
Welcome to the first assignment of "Improving Deep Neural Networks".
Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning.
If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results.
A well chosen initialization can
Step2: You would like a classifier to separate the blue dots from the red dots.
1 - Neural Network model
You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with
Step4: 2 - Zero initialization
There are two types of parameters to initialize in a neural network
Step5: Expected Output
Step6: The performance is really bad, the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary
Step8: The model is predicting 0 for every example.
In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression.
<font color='blue'>
What you should remember
Step9: Expected Output
Step10: If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes.
Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
Step12: Observations
Step13: Expected Output | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import sklearn
import sklearn.datasets
from init_utils import sigmoid, relu, compute_loss, forward_propagation, backward_propagation
from init_utils import update_parameters, predict, load_dataset, plot_decision_boundary, predict_dec
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# load image dataset: blue/red dots in circles
train_X, train_Y, test_X, test_Y = load_dataset()
Explanation: Initialization
Welcome to the first assignment of "Improving Deep Neural Networks".
Training your neural network requires specifying an initial value of the weights. A well chosen initialization method will help learning.
If you completed the previous course of this specialization, you probably followed our instructions for weight initialization, and it has worked out so far. But how do you choose the initialization for a new neural network? In this notebook, you will see how different initializations lead to different results.
A well chosen initialization can:
- Speed up the convergence of gradient descent
- Increase the odds of gradient descent converging to a lower training (and generalization) error
To get started, run the following cell to load the packages and the planar dataset you will try to classify.
End of explanation
def model(X, Y, learning_rate = 0.01, num_iterations = 15000, print_cost = True, initialization = "he"):
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (containing 0 for red dots; 1 for blue dots), of shape (1, number of examples)
learning_rate -- learning rate for gradient descent
num_iterations -- number of iterations to run gradient descent
print_cost -- if True, print the cost every 1000 iterations
initialization -- flag to choose which initialization to use ("zeros","random" or "he")
Returns:
parameters -- parameters learnt by the model
grads = {}
costs = [] # to keep track of the loss
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 10, 5, 1]
# Initialize parameters dictionary.
if initialization == "zeros":
parameters = initialize_parameters_zeros(layers_dims)
elif initialization == "random":
parameters = initialize_parameters_random(layers_dims)
elif initialization == "he":
parameters = initialize_parameters_he(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
a3, cache = forward_propagation(X, parameters)
# Loss
cost = compute_loss(a3, Y)
# Backward propagation.
grads = backward_propagation(X, Y, cache)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 1000 iterations
if print_cost and i % 1000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
costs.append(cost)
# plot the loss
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
Explanation: You would like a classifier to separate the blue dots from the red dots.
1 - Neural Network model
You will use a 3-layer neural network (already implemented for you). Here are the initialization methods you will experiment with:
- Zeros initialization -- setting initialization = "zeros" in the input argument.
- Random initialization -- setting initialization = "random" in the input argument. This initializes the weights to large random values.
- He initialization -- setting initialization = "he" in the input argument. This initializes the weights to random values scaled according to a paper by He et al., 2015.
Instructions: Please quickly read over the code below, and run it. In the next part you will implement the three initialization methods that this model() calls.
End of explanation
# GRADED FUNCTION: initialize_parameters_zeros
def initialize_parameters_zeros(layers_dims):
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
parameters = {}
L = len(layers_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.zeros(shape = (layers_dims[l], layers_dims[l-1]))
parameters['b' + str(l)] = np.zeros(shape = (layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_zeros([3,2,1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: 2 - Zero initialization
There are two types of parameters to initialize in a neural network:
- the weight matrices $(W^{[1]}, W^{[2]}, W^{[3]}, ..., W^{[L-1]}, W^{[L]})$
- the bias vectors $(b^{[1]}, b^{[2]}, b^{[3]}, ..., b^{[L-1]}, b^{[L]})$
Exercise: Implement the following function to initialize all parameters to zeros. You'll see later that this does not work well since it fails to "break symmetry", but let's try it anyway and see what happens. Use np.zeros((..,..)) with the correct shapes.
End of explanation
parameters = model(train_X, train_Y, initialization = "zeros")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td>
**W1**
</td>
<td>
[[ 0. 0. 0.]
[ 0. 0. 0.]]
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
[[ 0.]
[ 0.]]
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
[[ 0. 0.]]
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
[[ 0.]]
</td>
</tr>
</table>
Run the following code to train your model on 15,000 iterations using zeros initialization.
End of explanation
print ("predictions_train = " + str(predictions_train))
print ("predictions_test = " + str(predictions_test))
plt.title("Model with Zeros initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: The performance is really bad, the cost does not really decrease, and the algorithm performs no better than random guessing. Why? Let's look at the details of the predictions and the decision boundary:
End of explanation
# GRADED FUNCTION: initialize_parameters_random
def initialize_parameters_random(layers_dims):
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
np.random.seed(3) # This seed makes sure your "random" numbers will be the same as ours
parameters = {}
L = len(layers_dims) # integer representing the number of layers
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn( layers_dims[l], layers_dims[l-1]) * 10
parameters['b' + str(l)] = np.zeros(shape = (layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_random([3, 2, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: The model is predicting 0 for every example.
In general, initializing all the weights to zero results in the network failing to break symmetry. This means that every neuron in each layer will learn the same thing, and you might as well be training a neural network with $n^{[l]}=1$ for every layer, and the network is no more powerful than a linear classifier such as logistic regression.
<font color='blue'>
What you should remember:
- The weights $W^{[l]}$ should be initialized randomly to break symmetry.
- It is however okay to initialize the biases $b^{[l]}$ to zeros. Symmetry is still broken so long as $W^{[l]}$ is initialized randomly.
3 - Random initialization
To break symmetry, let's initialize the weights randomly. Following random initialization, each neuron can then proceed to learn a different function of its inputs. In this exercise, you will see what happens if the weights are initialized randomly, but to very large values.
Exercise: Implement the following function to initialize your weights to large random values (scaled by *10) and your biases to zeros. Use np.random.randn(..,..) * 10 for weights and np.zeros((.., ..)) for biases. We are using a fixed np.random.seed(..) to make sure your "random" weights match ours, so don't worry if running your code several times always gives you the same initial values for the parameters.
End of explanation
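A tiny numeric illustration of the symmetry problem (added here; it is not part of the assignment, and the _demo names are just placeholders): with zero weights every hidden unit produces the same output, so gradient descent updates them identically and they can never specialize.
# Illustration only: three hidden units with zero weights all compute identical activations.
x_demo = np.array([[1.0], [2.0]])        # one example with two features
W_demo = np.zeros((3, 2))
b_demo = np.zeros((3, 1))
print(np.dot(W_demo, x_demo) + b_demo)   # every unit's output is identical, and stays identical after updates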
parameters = model(train_X, train_Y, initialization = "random")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
Explanation: Expected Output:
<table>
<tr>
<td>
**W1**
</td>
<td>
[[ 17.88628473 4.36509851 0.96497468]
[-18.63492703 -2.77388203 -3.54758979]]
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
[[ 0.]
[ 0.]]
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
[[-0.82741481 -6.27000677]]
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
[[ 0.]]
</td>
</tr>
</table>
Run the following code to train your model on 15,000 iterations using random initialization.
End of explanation
print (predictions_train)
print (predictions_test)
plt.title("Model with large random initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: If you see "inf" as the cost after the iteration 0, this is because of numerical roundoff; a more numerically sophisticated implementation would fix this. But this isn't worth worrying about for our purposes.
Anyway, it looks like you have broken symmetry, and this gives better results than before. The model is no longer outputting all 0s.
End of explanation
# GRADED FUNCTION: initialize_parameters_he
def initialize_parameters_he(layers_dims):
Arguments:
layer_dims -- python array (list) containing the size of each layer.
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
b1 -- bias vector of shape (layers_dims[1], 1)
...
WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
bL -- bias vector of shape (layers_dims[L], 1)
np.random.seed(3)
parameters = {}
L = len(layers_dims) - 1 # integer representing the number of layers
for l in range(1, L + 1):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn( layers_dims[l], layers_dims[l-1]) * np.sqrt(2/layers_dims[l-1])
parameters['b' + str(l)] = np.zeros(shape = (layers_dims[l], 1))
### END CODE HERE ###
return parameters
parameters = initialize_parameters_he([2, 4, 1])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Observations:
- The cost starts very high. This is because with large random-valued weights, the last activation (sigmoid) outputs results that are very close to 0 or 1 for some examples, and when it gets that example wrong it incurs a very high loss for that example. Indeed, when $\log(a^{[3]}) = \log(0)$, the loss goes to infinity.
- Poor initialization can lead to vanishing/exploding gradients, which also slows down the optimization algorithm.
- If you train this network longer you will see better results, but initializing with overly large random numbers slows down the optimization.
<font color='blue'>
In summary:
- Initializing weights to very large random values does not work well.
- Hopefully initializing with small random values does better. The important question is: how small should these random values be? Let's find out in the next part!
4 - He initialization
Finally, try "He Initialization"; this is named for the first author of He et al., 2015. (If you have heard of "Xavier initialization", this is similar except Xavier initialization uses a scaling factor for the weights $W^{[l]}$ of sqrt(1./layers_dims[l-1]) where He initialization would use sqrt(2./layers_dims[l-1]).)
Exercise: Implement the following function to initialize your parameters with He initialization.
Hint: This function is similar to the previous initialize_parameters_random(...). The only difference is that instead of multiplying np.random.randn(..,..) by 10, you will multiply it by $\sqrt{\frac{2}{\text{dimension of the previous layer}}}$, which is what He initialization recommends for layers with a ReLU activation.
End of explanation
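A quick numeric sanity check (added for illustration only; the _check names are placeholders and this is not part of the assignment): with the He factor $\sqrt{2/n^{[l-1]}}$, the mean squared activation of a ReLU layer stays close to the variance of its inputs, which is what keeps signals from shrinking or blowing up as depth grows.
# Illustration only: He-scaled weights preserve the signal's scale through a ReLU layer.
n_prev = 500
x_check = np.random.randn(n_prev, 2000)                        # standardized inputs
W_check = np.random.randn(100, n_prev) * np.sqrt(2. / n_prev)  # He initialization
a_check = np.maximum(0, np.dot(W_check, x_check))              # ReLU activations
print(x_check.var(), (a_check ** 2).mean())                    # both close to 1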
parameters = model(train_X, train_Y, initialization = "he")
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
plt.title("Model with He initialization")
axes = plt.gca()
axes.set_xlim([-1.5,1.5])
axes.set_ylim([-1.5,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: Expected Output:
<table>
<tr>
<td>
**W1**
</td>
<td>
[[ 1.78862847 0.43650985]
[ 0.09649747 -1.8634927 ]
[-0.2773882 -0.35475898]
[-0.08274148 -0.62700068]]
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
[[ 0.]
[ 0.]
[ 0.]
[ 0.]]
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
[[-0.03098412 -0.33744411 -0.92904268 0.62552248]]
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
[[ 0.]]
</td>
</tr>
</table>
Run the following code to train your model on 15,000 iterations using He initialization.
End of explanation |
3,302 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step5: Basic Idea of Count Min sketch
We map the input value to multiple points in a relatively small output space. Therefore, the count associated with a given input will be applied to multiple counts in the output space. Even though collisions will occur, the minimum count associated with a given input will have some desirable properties, including the ability to be used to estimate the largest N counts.
<img src="files/count_min_2.png">
http://debasishg.blogspot.com/2014/01/count-min-sketch-data-structure-for.html
Step6: Is it possible to make the sketch so coarse that its estimates are wrong even for this data set?
Step7: Yes! (if you try enough) Why?
The 'w' parameter goes like ceiling(exp(1)/epsilon), which is always >=~ 3.
The 'd' parameter goes like ceiling(log(1/delta)), which is always >= 1.
So, you're dealing with a space with minimum size 3 x 1. With 10 records, it's possible that all 4 users map their counts to the same point. So it's possible to see an estimate as high as 10, in this case.
Now for a larger data set.
Step8: For this precision and dataset size, the CM algo takes much longer than the exact solution. In fact, the crossover point at which the CM sketch can achieve reasonable accuracy in the same time as the exact solution is a very large number of entries. | Python Code:
import sys
import random
import numpy as np
import heapq
import json
import time
BIG_PRIME = 9223372036854775783
def random_parameter():
return random.randrange(0, BIG_PRIME - 1)
class Sketch:
def __init__(self, delta, epsilon, k):
Setup a new count-min sketch with parameters delta, epsilon and k
The parameters delta and epsilon control the accuracy of the
estimates of the sketch
Cormode and Muthukrishnan prove that for an item i with count a_i, the
estimate from the sketch a_hat_i will satisfy the relation
a_hat_i <= a_i + epsilon * ||a||_1
with probability at least 1 - delta, where a is the vector of
all counts and ||x||_1 is the L1 norm of a vector x
Parameters
----------
delta : float
A value in the unit interval that sets the precision of the sketch
epsilon : float
A value in the unit interval that sets the precision of the sketch
k : int
A positive integer that sets the number of top items counted
Examples
--------
>>> s = Sketch(10**-7, 0.005, 40)
Raises
------
ValueError
If delta or epsilon are not in the unit interval, or if k is
not a positive integer
if delta <= 0 or delta >= 1:
raise ValueError("delta must be between 0 and 1, exclusive")
if epsilon <= 0 or epsilon >= 1:
raise ValueError("epsilon must be between 0 and 1, exclusive")
if k < 1:
raise ValueError("k must be a positive integer")
self.w = int(np.ceil(np.exp(1) / epsilon))
self.d = int(np.ceil(np.log(1 / delta)))
self.k = k
self.hash_functions = [self.__generate_hash_function() for i in range(self.d)]
self.count = np.zeros((self.d, self.w), dtype='int32')
self.heap, self.top_k = [], {} # top_k => [estimate, key] pairs
def update(self, key, increment):
Updates the sketch for the item with name of key by the amount
specified in increment
Parameters
----------
key : string
The item to update the value of in the sketch
increment : integer
The amount to update the sketch by for the given key
Examples
--------
>>> s = Sketch(10**-7, 0.005, 40)
>>> s.update('http://www.cnn.com/', 1)
for row, hash_function in enumerate(self.hash_functions):
column = hash_function(abs(hash(key)))
self.count[row, column] += increment
self.update_heap(key)
def update_heap(self, key):
Updates the class's heap that keeps track of the top k items for a
given key
For the given key, it checks whether the key is present in the heap,
updating accordingly if so, and adding it to the heap if it is
absent
Parameters
----------
key : string
The item to check against the heap
estimate = self.get(key)
if not self.heap or estimate >= self.heap[0][0]:
if key in self.top_k:
old_pair = self.top_k.get(key)
old_pair[0] = estimate
heapq.heapify(self.heap)
else:
if len(self.top_k) < self.k:
heapq.heappush(self.heap, [estimate, key])
self.top_k[key] = [estimate, key]
else:
new_pair = [estimate, key]
old_pair = heapq.heappushpop(self.heap, new_pair)
if new_pair[1] != old_pair[1]:
del self.top_k[old_pair[1]]
self.top_k[key] = new_pair
self.top_k[key] = new_pair
def get(self, key):
Fetches the sketch estimate for the given key
Parameters
----------
key : string
The item to produce an estimate for
Returns
-------
estimate : int
The best estimate of the count for the given key based on the
sketch
Examples
--------
>>> s = Sketch(10**-7, 0.005, 40)
>>> s.update('http://www.cnn.com/', 1)
>>> s.get('http://www.cnn.com/')
1
value = sys.maxint
for row, hash_function in enumerate(self.hash_functions):
column = hash_function(abs(hash(key)))
value = min(self.count[row, column], value)
return value
def __generate_hash_function(self):
Returns a hash function from a family of pairwise-independent hash
functions
a, b = random_parameter(), random_parameter()
return lambda x: (a * x + b) % BIG_PRIME % self.w
# define a function to return a list of the exact top users, sorted by count
def exact_top_users(f, top_n = 10):
import operator
counts = {}
for user in f:
user = user.rstrip('\n')
try:
if user not in counts:
counts[user] = 1
else:
counts[user] += 1
except ValueError:
pass
except KeyError:
pass
counter = 0
results = []
for user,count in reversed(sorted(counts.iteritems(), key=operator.itemgetter(1))):
if counter >= top_n:
break
results.append('{},{}'.format(user,str(count)))
counter += 1
return results
# note that the output format is '[user] [count]'
f = open('CM_small.txt')
results_exact = sorted(exact_top_users(f))
print(results_exact)
# define a function to return a list of the estimated top users, sorted by count
def CM_top_users(f, s, top_n = 10):
for user_name in f:
s.update(user_name.rstrip('\n'),1)
results = []
counter = 0
for value in reversed(sorted(s.top_k.values())):
if counter >= top_n:
break
results.append('{1},{0}'.format(str(value[0]),str(value[1])))
counter += 1
return results
# note that the output format is '[user] [count]'
# instantiate a Sketch object
s = Sketch(10**-3, 0.1, 10)
f = open('CM_small.txt')
results_CM = sorted(CM_top_users(f,s))
print(results_CM)
for item in zip(results_exact,results_CM):
print(item)
Explanation: Basic Idea of Count Min sketch
We map the input value to multiple points in a relatively small output space. Therefore, the count associated with a given input will be applied to multiple counts in the output space. Even though collisions will occur, the minimum count associated with a given input will have some desirable properties, including the ability to be used to estimate the largest N counts.
<img src="files/count_min_2.png">
http://debasishg.blogspot.com/2014/01/count-min-sketch-data-structure-for.html
Parameters of the sketch:
epsilon
delta
These parameters are inversely and logarithmically (respectively) related to the sketch size parameters, w and d.
Implementation of the CM sketch
End of explanation
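A minimal usage demo (added for illustration; demo_sketch is just a throwaway name): because collisions only ever add to a counter, the sketch never under-counts, so each estimate is at least the true count.
# Illustration only: estimates from the sketch are never below the true counts.
demo_sketch = Sketch(10**-3, 0.01, 5)
for _ in range(42):
    demo_sketch.update('heavy_hitter', 1)
demo_sketch.update('rare_item', 1)
print('heavy_hitter estimate (>= 42): %d, rare_item estimate (>= 1): %d'
      % (demo_sketch.get('heavy_hitter'), demo_sketch.get('rare_item')))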
s = Sketch(0.9, 0.9, 10)
f = open('CM_small.txt')
results_coarse_CM = CM_top_users(f,s)
print(results_coarse_CM)
Explanation: Is it possible to make the sketch so coarse that its estimates are wrong even for this data set?
End of explanation
f = open('CM_large.txt')
%time results_exact = exact_top_users(f)
print(results_exact)
# this could take a few minutes
f = open('CM_large.txt')
s = Sketch(10**-4, 0.001, 10)
%time results_CM = CM_top_users(f,s)
print(results_CM)
Explanation: Yes! (if you try enough) Why?
The 'w' parameter goes like ceiling(exp(1)/epsilon), which is always >=~ 3.
The 'd' parameter goes like ceiling(log(1/delta)), which is always >= 1.
So, you're dealing with a space with minimum size 3 x 1. With 10 records, it's possible that all 4 users map their counts to the same point. So it's possible to see an estimate as high as 10, in this case.
Now for a larger data set.
End of explanation
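To make that size argument concrete, here is the sketch shape implied by the formulas above for the deliberately coarse parameters delta = epsilon = 0.9 used earlier (added for illustration only):
# Illustration only: sketch dimensions implied by epsilon and delta.
w_coarse = int(np.ceil(np.exp(1) / 0.9))   # -> 4 columns
d_coarse = int(np.ceil(np.log(1 / 0.9)))   # -> 1 row
print('w = %d, d = %d' % (w_coarse, d_coarse))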
for item in zip(results_exact,results_CM):
print(item)
# the CM sketch gets the top entry (an outlier) correct but doesn't do well estimating the order of the more degenerate counts
# let's decrease the precision via both the epsilon and delta parameters, and see whether it still gets the "heavy-hitter" correct
f = open('CM_large.txt')
s = Sketch(10**-3, 0.01, 10)
%time results_CM = CM_top_users(f,s)
print(results_CM)
# nope...sketch is too coarse, too many collisions, and the prominence of user 'Euph0r1a__ 129' is obscured
for item in zip(results_exact,results_CM):
print(item)
Explanation: For this precision and dataset size, the CM algo takes much longer than the exact solution. In fact, the crossover point at which the CM sketch can achieve reasonable accuracy in the same time as the exact solution is a very large number of entries.
End of explanation |
3,303 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data
Take the data from https://www.kaggle.com/c/shelter-animal-outcomes
Step1: Compare by age
Step2: Compare by gender
Step3: Compare by fertility
Step4: <b>Conclusion by age
Step5: <b>Add new features to train</b>
Step6: <b>Add new features to test by analogy</b>
Step7: <div class="panel panel-info" style="margin
Step8: Statistical tests
Step9: Wrapper methods
Step10: Feature selection with the Lasso model
Step11: Feature selection with the RandomForest model
Step12: <b>Conclusion on the features
Step13: Check that no rows were lost while handling NaN | Python Code:
visual = pd.read_csv('data/CatsAndDogs/TRAIN2.csv')
#Create a numeric column Outcome showing whether the animal was taken from the shelter or not
#First fill it with ones, as if every case ended well
visual['Outcome'] = 'true'
#Zero out the unsuccessful cases
visual.loc[visual.OutcomeType == 'Euthanasia', 'Outcome'] = 'false'
visual.loc[visual.OutcomeType == 'Died', 'Outcome'] = 'false'
#Replace rows where SexuponOutcome is NaN with something meaningful
visual.loc[visual.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'
#Create two separate columns for gender and fertility
visual['Gender'] = visual.SexuponOutcome.apply(lambda s: s.split(' ')[-1])
visual['Fertility'] = visual.SexuponOutcome.apply(lambda s: s.split(' ')[0])
Explanation: Data
Take the data from https://www.kaggle.com/c/shelter-animal-outcomes .
Note that this time we have many classes; read the Evaluation section to see how the final score is computed.
Visualization
<div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Task 1.</h3>
</div>
</div>
Find out, by building the necessary plots, whether an animal's age, gender, or fertility affects its chances of being taken from the shelter.
Let's prepare the data
End of explanation
mergedByAges = visual.groupby('AgeuponOutcome')['Outcome'].value_counts().to_dict()
results = pd.DataFrame(data = mergedByAges, index=[0]).stack().fillna(0).transpose()
results.columns = pd.Index(['true', 'false'])
results['total'] = results.true + results.false
results.sort_values(by='true', ascending=False, inplace=True)
results[['true', 'false']].plot(kind='bar', stacked=False, rot=45);
Explanation: Compare by age
End of explanation
mergedByGender = visual.groupby('Gender')['Outcome'].value_counts().to_dict()
results = pd.DataFrame(data = mergedByGender, index=[0]).stack().fillna(0).transpose()
results.columns = pd.Index(['true', 'false'])
results['total'] = results.true + results.false
results.sort_values(by='true', ascending=False, inplace=True)
results[['true', 'false']].plot(kind='bar', stacked=True, rot=45);
Explanation: Compare by gender
End of explanation
mergedByFert = visual.groupby('Fertility')['Outcome'].value_counts().to_dict()
results = pd.DataFrame(data = mergedByFert, index=[0]).stack().fillna(0).transpose()
results.columns = pd.Index(['true', 'false'])
results['total'] = results.true + results.false
results.sort_values(by='true', ascending=False, inplace=True)
results[['true', 'false']].plot(kind='bar', stacked=True, rot=45);
Explanation: Compare by fertility
End of explanation
train, test = pd.read_csv(
'data/CatsAndDogs/TRAIN2.csv' #наши данные
#'data/CatsAndDogs/train.csv' #исходные данные
), pd.read_csv(
'data/CatsAndDogs/TEST2.csv' #наши данные
#'data/CatsAndDogs/test.csv' #исходные данные
)
train.head()
test.shape
Explanation: <b>Conclusion by age:</b> animals that are neither the oldest nor the youngest are adopted more readily
<br>
<b>Conclusion by gender:</b> by and large, it does not matter
<br>
<b>Conclusion by fertility:</b> animals with intact reproductive ability are adopted more readily. However, the next two groups are essentially quite similar, and if you add them together the difference is not that large.
Building models
<div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Task 2.</h3>
</div>
</div>
Look at the notebook on generating new features. Create as many relevant features as possible from everything available.
Don't forget to process the held-out sample (test) in parallel, so that it ends up with the same features as the training set.
<b>Take the original data</b>
End of explanation
#First, by analogy with the visualization part
#Replace rows where SexuponOutcome, Breed, or Color is NaN
train.loc[train.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'
train.loc[train.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = '0 0'
train.loc[train.Breed.isnull(), 'Breed'] = 'Unknown'
train.loc[train.Color.isnull(), 'Color'] = 'Unknown'
#Сделаем два отдельных столбца для пола и фертильности
train['Gender'] = train.SexuponOutcome.apply(lambda s: s.split(' ')[-1])
train['Fertility'] = train.SexuponOutcome.apply(lambda s: s.split(' ')[0])
#Теперь что-то новое
#Столбец, в котором отмечено, есть имя у животного или нет
train['hasName'] = 1
train.loc[train.Name.isnull(), 'hasName'] = 0
#Столбец, в котором объединены порода и цвет
train['breedColor'] = train.apply(lambda row: row['Breed'] + ' ' + str(row['Color']), axis=1)
#Декомпозируем DateTime
#Во-первых, конвертируем столбец в тип DateTime из строкового
train['DateTime'] = pd.to_datetime(train['DateTime'])
#А теперь декомпозируем
train['dayOfWeek'] = train.DateTime.apply(lambda dt: dt.dayofweek)
train['month'] = train.DateTime.apply(lambda dt: dt.month)
train['day'] = train.DateTime.apply(lambda dt: dt.day)
train['quarter'] = train.DateTime.apply(lambda dt: dt.quarter)
train['hour'] = train.DateTime.apply(lambda dt: dt.hour)
train['minute'] = train.DateTime.apply(lambda dt: dt.hour)
train['year'] = train.DateTime.apply(lambda dt: dt.year)
#Разбиение возраста
#Сделаем два отдельных столбца для обозначения года/месяца и их количества
train['AgeuponFirstPart'] = train.AgeuponOutcome.apply(lambda s: s.split(' ')[0])
train['AgeuponSecondPart'] = train.AgeuponOutcome.apply(lambda s: s.split(' ')[-1])
#Переведем примерно в среднем месяцы, годы и недели в дни с учетом окончаний s
train['AgeuponSecondPartInDays'] = 0
train.loc[train.AgeuponSecondPart == 'year', 'AgeuponSecondPartInDays'] = 365
train.loc[train.AgeuponSecondPart == 'years', 'AgeuponSecondPartInDays'] = 365
train.loc[train.AgeuponSecondPart == 'month', 'AgeuponSecondPartInDays'] = 30
train.loc[train.AgeuponSecondPart == 'months', 'AgeuponSecondPartInDays'] = 30
train.loc[train.AgeuponSecondPart == 'week', 'AgeuponSecondPartInDays'] = 7
train.loc[train.AgeuponSecondPart == 'weeks', 'AgeuponSecondPartInDays'] = 7
#Во-первых, конвертируем столбец в числовой тип из строкового
train['AgeuponFirstPart'] = pd.to_numeric(train['AgeuponFirstPart'])
train['AgeuponSecondPartInDays'] = pd.to_numeric(train['AgeuponSecondPartInDays'])
#А теперь получим нормальное время жизни в днях
train['LifetimeInDays'] = train['AgeuponFirstPart'] * train['AgeuponSecondPartInDays']
#Удалим уж совсем бессмысленные промежуточные столбцы
train = train.drop(['AgeuponSecondPartInDays', 'AgeuponSecondPart', 'AgeuponFirstPart'], axis=1)
train.head()
Explanation: <b>Add new features to train</b>
End of explanation
#Сначала по-аналогии с визуализацией
#Заменим строки, где в SexuponOutcome, Breed, Color NaN
test.loc[test.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'
test.loc[test.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = '0 0'
test.loc[test.Breed.isnull(), 'Breed'] = 'Unknown'
test.loc[test.Color.isnull(), 'Color'] = 'Unknown'
#Сделаем два отдельных столбца для пола и фертильности
test['Gender'] = test.SexuponOutcome.apply(lambda s: s.split(' ')[-1])
test['Fertility'] = test.SexuponOutcome.apply(lambda s: s.split(' ')[0])
#Теперь что-то новое
#Столбец, в котором отмечено, есть имя у животного или нет
test['hasName'] = 1
test.loc[test.Name.isnull(), 'hasName'] = 0
#Столбец, в котором объединены порода и цвет
test['breedColor'] = test.apply(lambda row: row['Breed'] + ' ' + str(row['Color']), axis=1)
#Декомпозируем DateTime
#Во-первых, конвертируем столбец в тип DateTime из строкового
test['DateTime'] = pd.to_datetime(test['DateTime'])
#А теперь декомпозируем
test['dayOfWeek'] = test.DateTime.apply(lambda dt: dt.dayofweek)
test['month'] = test.DateTime.apply(lambda dt: dt.month)
test['day'] = test.DateTime.apply(lambda dt: dt.day)
test['quarter'] = test.DateTime.apply(lambda dt: dt.quarter)
test['hour'] = test.DateTime.apply(lambda dt: dt.hour)
test['minute'] = test.DateTime.apply(lambda dt: dt.hour)
test['year'] = test.DateTime.apply(lambda dt: dt.year)
#Разбиение возраста
#Сделаем два отдельных столбца для обозначения года/месяца и их количества
test['AgeuponFirstPart'] = test.AgeuponOutcome.apply(lambda s: s.split(' ')[0])
test['AgeuponSecondPart'] = test.AgeuponOutcome.apply(lambda s: s.split(' ')[-1])
#Переведем примерно в среднем месяцы, годы и недели в дни с учетом окончаний s
test['AgeuponSecondPartInDays'] = 0
test.loc[test.AgeuponSecondPart == 'year', 'AgeuponSecondPartInDays'] = 365
test.loc[test.AgeuponSecondPart == 'years', 'AgeuponSecondPartInDays'] = 365
test.loc[test.AgeuponSecondPart == 'month', 'AgeuponSecondPartInDays'] = 30
test.loc[test.AgeuponSecondPart == 'months', 'AgeuponSecondPartInDays'] = 30
test.loc[test.AgeuponSecondPart == 'week', 'AgeuponSecondPartInDays'] = 7
test.loc[test.AgeuponSecondPart == 'weeks', 'AgeuponSecondPartInDays'] = 7
#Во-первых, конвертируем столбец в числовой тип из строкового
test['AgeuponFirstPart'] = pd.to_numeric(test['AgeuponFirstPart'])
test['AgeuponSecondPartInDays'] = pd.to_numeric(test['AgeuponSecondPartInDays'])
#А теперь получим нормальное время жизни в днях
test['LifetimeInDays'] = test['AgeuponFirstPart'] * test['AgeuponSecondPartInDays']
#Удалим уж совсем бессмысленные промежуточные столбцы
test = test.drop(['AgeuponSecondPartInDays', 'AgeuponSecondPart', 'AgeuponFirstPart'], axis=1)
test.head()
Explanation: <b>Add new features to test by analogy</b>
End of explanation
np.random.seed(1234)
from sklearn.preprocessing import LabelEncoder
from sklearn import preprocessing
#####################Replace NaN values with the word Unknown##################
#Remove NaN values from train
train.loc[train.AnimalID.isnull(), 'AnimalID'] = 'Unknown'
train.loc[train.Name.isnull(), 'Name'] = 'Unknown'
train.loc[train.OutcomeType.isnull(), 'OutcomeType'] = 'Unknown'
train.loc[train.AnimalType.isnull(), 'AnimalType'] = 'Unknown'
train.loc[train.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = 'Unknown'
train.loc[train.LifetimeInDays.isnull(), 'LifetimeInDays'] = 'Unknown'
#Remove NaN values from test
test.loc[test.AnimalID.isnull(), 'AnimalID'] = 'Unknown'
test.loc[test.Name.isnull(), 'Name'] = 'Unknown'
test.loc[test.AnimalType.isnull(), 'AnimalType'] = 'Unknown'
test.loc[test.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = 'Unknown'
test.loc[test.LifetimeInDays.isnull(), 'LifetimeInDays'] = 'Unknown'
#####################Encode the words as numbers################################
#Закодировали AnimalID цифрами вместо названий в test & train
#encAnimalID = preprocessing.LabelEncoder()
#encAnimalID.fit(pd.concat((test['AnimalID'], train['AnimalID'])))
#test['AnimalID'] = encAnimalID.transform(test['AnimalID'])
#train['AnimalID'] = encAnimalID.transform(train['AnimalID'])
#Закодировали имя цифрами вместо названий в test & train
encName = preprocessing.LabelEncoder()
encName.fit(pd.concat((test['Name'], train['Name'])))
test['Name'] = encName.transform(test['Name'])
train['Name'] = encName.transform(train['Name'])
#Закодировали DateTime цифрами вместо названий в test & train
encDateTime = preprocessing.LabelEncoder()
encDateTime.fit(pd.concat((test['DateTime'], train['DateTime'])))
test['DateTime'] = encDateTime.transform(test['DateTime'])
train['DateTime'] = encDateTime.transform(train['DateTime'])
#Закодировали OutcomeType цифрами вместо названий в train, т.к. в test их нет
encOutcomeType = preprocessing.LabelEncoder()
encOutcomeType.fit(train['OutcomeType'])
train['OutcomeType'] = encOutcomeType.transform(train['OutcomeType'])
#Закодировали AnimalType цифрами вместо названий в test & train
encAnimalType = preprocessing.LabelEncoder()
encAnimalType.fit(pd.concat((test['AnimalType'], train['AnimalType'])))
test['AnimalType'] = encAnimalType.transform(test['AnimalType'])
train['AnimalType'] = encAnimalType.transform(train['AnimalType'])
#Закодировали SexuponOutcome цифрами вместо названий в test & train
encSexuponOutcome = preprocessing.LabelEncoder()
encSexuponOutcome.fit(pd.concat((test['SexuponOutcome'], train['SexuponOutcome'])))
test['SexuponOutcome'] = encSexuponOutcome.transform(test['SexuponOutcome'])
train['SexuponOutcome'] = encSexuponOutcome.transform(train['SexuponOutcome'])
#Закодировали AgeuponOutcome цифрами вместо названий в test & train
encAgeuponOutcome = preprocessing.LabelEncoder()
encAgeuponOutcome.fit(pd.concat((test['AgeuponOutcome'], train['AgeuponOutcome'])))
test['AgeuponOutcome'] = encAgeuponOutcome.transform(test['AgeuponOutcome'])
train['AgeuponOutcome'] = encAgeuponOutcome.transform(train['AgeuponOutcome'])
#Закодировали Breed цифрами вместо названий в test & train
encBreed = preprocessing.LabelEncoder()
encBreed.fit(pd.concat((test['Breed'], train['Breed'])))
test['Breed'] = encBreed.transform(test['Breed'])
train['Breed'] = encBreed.transform(train['Breed'])
#Закодировали Color цифрами вместо названий в test & train
encColor = preprocessing.LabelEncoder()
encColor.fit(pd.concat((test['Color'], train['Color'])))
test['Color'] = encColor.transform(test['Color'])
train['Color'] = encColor.transform(train['Color'])
#Закодировали Gender цифрами вместо названий в test & train
encGender = preprocessing.LabelEncoder()
encGender.fit(pd.concat((test['Gender'], train['Gender'])))
test['Gender'] = encGender.transform(test['Gender'])
train['Gender'] = encGender.transform(train['Gender'])
#Закодировали Fertility цифрами вместо названий в test & train
encFertility = preprocessing.LabelEncoder()
encFertility.fit(pd.concat((test['Fertility'], train['Fertility'])))
test['Fertility'] = encFertility.transform(test['Fertility'])
train['Fertility'] = encFertility.transform(train['Fertility'])
#Закодировали breedColor цифрами вместо названий в test & train
encbreedColor = preprocessing.LabelEncoder()
encbreedColor.fit(pd.concat((test['breedColor'], train['breedColor'])))
test['breedColor'] = encbreedColor.transform(test['breedColor'])
train['breedColor'] = encbreedColor.transform(train['breedColor'])
####################################Preprocessing#################################
from sklearn.model_selection import cross_val_score
#poly_features = preprocessing.PolynomialFeatures(3)
#Prepared the data so that X_tr is the table without AnimalID and OutcomeType, and y_tr keeps OutcomeType
X_tr, y_tr = train.drop(['AnimalID', 'OutcomeType'], axis=1), train['OutcomeType']
#Roughly speaking, convert the dataFrame to an array and run preprocessing on it
#X_tr = poly_features.fit_transform(X_tr)
X_tr.head()
Explanation: <div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Task 3.</h3>
</div>
</div>
Perform feature selection and try different methods. Check the quality with cross-validation.
Print the top most important and least significant features.
Data preprocessing
End of explanation
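A minimal sketch (added for illustration; the candidate list below is a hypothetical feature subset, not a prescribed one) of checking the quality of a chosen feature subset with cross-validation before committing to it:
# Illustration only: score a hypothetical feature subset with 3-fold cross-validation.
from sklearn.ensemble import RandomForestClassifier
candidate = ['AnimalType', 'SexuponOutcome', 'LifetimeInDays', 'hasName', 'year']
print(cross_val_score(RandomForestClassifier(n_estimators=50), X_tr[candidate], y_tr, cv=3).mean())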
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2, f_classif, mutual_info_classif
skb = SelectKBest(mutual_info_classif, k=15)
x_new = skb.fit_transform(X_tr, y_tr)
x_new
Explanation: Statistical tests
End of explanation
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
names = X_tr.columns.values
lr = LinearRegression()
rfe = RFE(lr, n_features_to_select=1)
rfe.fit(X_tr,y_tr);
print("Features sorted by their rank:")
print(sorted(zip(map(lambda x: round(x, 4), rfe.ranking_), names)))
Explanation: Wrapper methods
End of explanation
from sklearn.linear_model import Lasso
clf = Lasso()
clf.fit(X_tr, y_tr);
clf.coef_
features = X_tr.columns.values
print('Всего Lasso выкинуло %s переменных' % (clf.coef_ == 0).sum())
print('Это признаки:')
for s in features[np.where(clf.coef_ == 0)[0]]:
print(' * ', s)
Explanation: Feature selection with the Lasso model
End of explanation
from sklearn.ensemble import RandomForestRegressor
clf = RandomForestRegressor()
clf.fit(X_tr, y_tr);
clf.feature_importances_
imp_feature_idx = clf.feature_importances_.argsort()
imp_feature_idx
features = X_tr.columns.values
k = 0
while k < len(features):
print(features[k], imp_feature_idx[k])
k += 1
Explanation: Feature selection with the RandomForest model
End of explanation
#First, drop the unneeded features identified in the previous step
X_tr = X_tr.drop(['Name', 'DateTime', 'month', 'day', 'Breed', 'breedColor'], axis=1)
test = test.drop(['Name', 'DateTime', 'month', 'day', 'Breed', 'breedColor'], axis=1)
X_tr.head()
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
clf1 = LogisticRegression(random_state=1234)
clf3 = GaussianNB()
clf5 = KNeighborsClassifier()
eclf = VotingClassifier(estimators=[
('lr', clf1), ('gnb', clf3), ('knn', clf5)],
voting='soft', weights=[1,1,10])
scores = cross_val_score(eclf, X_tr, y_tr)
eclf = eclf.fit(X_tr, y_tr)
print('Best score:', scores.min())
#delete AnimalID from test
X_te = test.drop(['AnimalID'], axis=1)
X_te.head()
y_te = eclf.predict(X_te)
y_te
ans_nn = pd.DataFrame({'AnimalID': test['AnimalID'], 'type': encOutcomeType.inverse_transform(y_te)})
ans_nn.head()
#Define a function for the transformation
def onehot_encode(df_train, column):
from sklearn.preprocessing import LabelBinarizer
cs = df_train.select_dtypes(include=['O']).columns.values
if column not in cs:
return (df_train, None)
rest = [x for x in df_train.columns.values if x != column]
lb = LabelBinarizer()
train_data = lb.fit_transform(df_train[column])
new_col_names = ['%s' % x for x in lb.classes_]
if len(new_col_names) != train_data.shape[1]:
new_col_names = new_col_names[::-1][:train_data.shape[1]]
new_train = pd.concat((df_train.drop([column], axis=1), pd.DataFrame(data=train_data, columns=new_col_names)), axis=1)
return (new_train, lb)
ans_nn, lb = onehot_encode(ans_nn, 'type')
ans_nn
ans_nn.head()
Explanation: <b>Conclusion on the features:</b>
<br>
<b>Not needed:</b> Name, DateTime, month, day, Breed, breedColor. Everything else is less clear-cut and can be kept.
<div class="panel panel-info" style="margin: 50px 0 0 0">
<div class="panel-heading">
<h3 class="panel-title">Task 4.</h3>
</div>
</div>
Try blending different models with <b>sklearn.ensemble.VotingClassifier</b>. Did the accuracy increase? Did the variance change?
End of explanation
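For reference (added as an illustration; the probabilities below are made-up numbers): with voting='soft' and weights [1, 1, 10], the ensemble takes a weighted average of the classifiers' predicted class probabilities, so the KNN model dominates the vote.
# Illustration only: how a weighted soft vote combines per-class probabilities.
p_lr, p_gnb, p_knn = 0.2, 0.3, 0.9           # hypothetical P(class) from each base model
print((1*p_lr + 1*p_gnb + 10*p_knn) / 12.0)  # the weighted average used by soft voting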
test.shape[0] == ans_nn.shape[0]
#Start the index numbering from 1 instead of 0
ans_nn.index += 1
#Insert the index values as a column at a specific position
ans_nn.insert(0, 'ID', ans_nn.index)
#delete AnimalID from test
ans_nn = ans_nn.drop(['AnimalID'], axis=1)
ans_nn.head()
#Save the result
ans_nn.to_csv('ans_catdog.csv', index=False)
Explanation: Check that no rows were lost while handling the NaN values
End of explanation |
3,304 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Pairs Trading
By Delaney Mackenzie and Maxwell Margenot
Part of the Quantopian Lecture Series
Step1: Generating Two Fake Securities
We model X's daily returns by drawing from a normal distribution. Then we perform a cumulative sum to get the value of X on each day.
Step2: Now we generate Y. Remember that Y is supposed to have a deep economic link to X, so the price of Y should vary pretty similarly. We model this by taking X, shifting it up and adding some random noise drawn from a normal distribution.
Step3: Cointegration
We've constructed an example of two cointegrated series. Cointegration is a more subtle relationship than correlation. If two time series are cointegrated, there is some linear combination between them that will vary around a mean. At all points in time, the combination between them is related to the same probability distribution.
For more details on how we formally define cointegration and how to understand it, please see the Integration, Cointegration, and Stationarity lecture from the Quantopian Lecture Series.
We'll plot the difference between the two now so we can see how this looks.
Step4: Testing for Cointegration
That's an intuitive definition, but how do we test for this statistically? There is a convenient cointegration test that lives in statsmodels.tsa.stattools. Let's say that our confidence level is $0.05$. We should see a p-value below our cutoff, as we've artificially created two series that are the textbook definition of cointegration.
Step5: Correlation vs. Cointegration
Correlation and cointegration, while theoretically similar, are not the same. To demonstrate this, we'll show examples of series that are correlated, but not cointegrated, and vice versa. To start let's check the correlation of the series we just generated.
Step6: That's very high, as we would expect. But how would two series that are correlated but not cointegrated look?
Correlation Without Cointegration
A simple example is two series that just diverge.
Step7: Cointegration Without Correlation
A simple example of this case is a normally distributed series and a square wave.
Step8: Sure enough, the correlation is incredibly low, but the p-value shows that these are cointegrated.
Hedging
Because you'd like to protect yourself from bad markets, often times short sales will be used to hedge long investments. Because a short sale makes money if the security sold loses value, and a long purchase will make money if a security gains value, one can long parts of the market and short others. That way if the entire market falls off a cliff, we'll still make money on the shorted securities and hopefully break even. In the case of two securities we'll call it a hedged position when we are long on one security and short on the other.
The Trick
Step9: Looking for Cointegrated Pairs of Alternative Energy Securities
We are looking through a set of solar company stocks to see if any of them are cointegrated. We'll start by defining the list of securities we want to look through. Then we'll get the pricing data for each security for the year of 2014.
Our approach here is somewhere in the middle of the spectrum that we mentioned before. We have formulated an economic hypothesis that there is some sort of link between a subset of securities within the energy sector and we want to test whether there are any cointegrated pairs. This incurs significantly less multiple comparisons bias than searching through hundreds of securities and slightly more than forming a hypothesis for an individual test.
NOTE
Step10: Example of how to get all the prices of all the stocks loaded using get_pricing() above in one pandas dataframe object
Step11: Example of how to get just the prices of a single stock that was loaded using get_pricing() above
Step12: Now we'll run our method on the list and see if any pairs are cointegrated.
Step13: Looks like 'ABGB' and 'FSLR' are cointegrated. Let's take a look at the prices to make sure there's nothing weird going on.
Step14: Calculating the Spread
Now we will plot the spread of the two series. In order to actually calculate the spread, we use a linear regression to get the coefficient for the linear combination to construct between our two securities, as shown in the stationarity lecture. Using a linear regression to estimate the coefficient is known as the Engle-Granger method.
Step15: Alternatively, we could examine the ratio betwen the two series.
Step16: Examining the price ratio of a trading pair is a traditional way to handle pairs trading. Part of why this works as a signal is based in our assumptions of how stock prices move, specifically because stock prices are typically assumed to be log-normally distributed. What this implies is that by taking a ratio of the prices, we are taking a linear combination of the returns associated with them (since prices are just the exponentiated returns).
This can be a little irritating to deal with for our purposes as purchasing the precisely correct ratio of a trading pair may not be practical. We choose instead to move forward with simply calculating the spread between the cointegrated stocks using linear regression. This is a very simple way to handle the relationship, however, and is likely not feasible for non-toy examples. There are other potential methods for estimating the spread listed at the bottom of this lecture. If you want to get more into the theory of why having cointegrated stocks matters for pairs trading, again, please see the Integration, Cointegration, and Stationarity Lecture from the Quantopian Lecture Series.
So, back to our example. The absolute spread isn't very useful in statistical terms. It is more helpful to normalize our signal by treating it as a z-score.
WARNING
In practice this is usually done to try to give some scale to the data, but this assumes some underlying distribution, usually a normal distribution. Under a normal distribution, we would know that approximately 84% of all spread values will be smaller. However, much financial data is not normally distributed, and one must be very careful not to assume normality, nor any specific distribution when generating statistics. It could be the case that the true distribution of spreads was very fat-tailed and prone to extreme values. This could mess up our model and result in large losses.
Step17: Simple Strategy
Step18: We can use the moving averages to compute the z-score of the spread at each given time. This will tell us how extreme the spread is and whether it's a good idea to enter a position at this time. Let's take a look at the z-score now.
Step19: The z-score doesn't mean much out of context, let's plot it next to the prices to get an idea of what it looks like. We'll take the negative of the z-score because the spreads were all negative and that is a little counterintuitive to trade on.
Step20: Out of Sample Test
Now that we have constructed our spread appropriately and have an idea of how we will go about making trades, it is time to conduct some out of sample testing. Our whole model is based on the premise that these securities are cointegrated, but we built it on information from a certain time period. If we actually want to implement this model, we need to conduct an out of sample test to confirm that the principles of our model are still valid going forward.
Since we initially built the model on the 2014 - 2015 year, let's see if this cointegrated relationship holds for 2015 - 2016. Historical results do not guarantee future results so this is a sanity check to see if the work we have done holds strong. | Python Code:
import numpy as np
import pandas as pd
import statsmodels
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint
# just set the seed for the random number generator
np.random.seed(107)
import matplotlib.pyplot as plt
Explanation: Introduction to Pairs Trading
By Delaney Mackenzie and Maxwell Margenot
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Pairs trading is a classic example of a strategy based on mathematical analysis. The principle is as follows. Let's say you have a pair of securities X and Y that have some underlying economic link. An example might be two companies that manufacture the same product, or two companies in one supply chain. If we can model this economic link with a mathematical model, we can make trades on it. We'll start by constructing a toy example.
Before we proceed, note that the content in this lecture depends heavily on the Stationarity, Integration, and Cointegration lecture in order to properly understand the mathematical basis for the methodology that we employ here. It is recommended that you go through that lecture before this continuing.
End of explanation
X_returns = np.random.normal(0, 1, 100) # Generate the daily returns
# sum them and shift all the prices up into a reasonable range
X = pd.Series(np.cumsum(X_returns), name='X') + 50
X.plot();
Explanation: Generating Two Fake Securities
We model X's daily returns by drawing from a normal distribution. Then we perform a cumulative sum to get the value of X on each day.
End of explanation
some_noise = np.random.normal(0, 1, 100)
Y = X + 5 + some_noise
Y.name = 'Y'
pd.concat([X, Y], axis=1).plot();
Explanation: Now we generate Y. Remember that Y is supposed to have a deep economic link to X, so the price of Y should vary pretty similarly. We model this by taking X, shifting it up and adding some random noise drawn from a normal distribution.
End of explanation
(Y - X).plot() # Plot the spread
plt.axhline((Y - X).mean(), color='red', linestyle='--') # Add the mean
plt.xlabel('Time')
plt.legend(['Price Spread', 'Mean']);
Explanation: Cointegration
We've constructed an example of two cointegrated series. Cointegration is a more subtle relationship than correlation. If two time series are cointegrated, there is some linear combination between them that will vary around a mean. At all points in time, the combination between them is related to the same probability distribution.
For more details on how we formally define cointegration and how to understand it, please see the Integration, Cointegration, and Stationarity lecture from the Quantopian Lecture Series.
We'll plot the difference between the two now so we can see how this looks.
End of explanation
# compute the p-value of the cointegration test
# will inform us as to whether the spread between the 2 timeseries is stationary
# around its mean
score, pvalue, _ = coint(X,Y)
print pvalue
Explanation: Testing for Cointegration
That's an intuitive definition, but how do we test for this statistically? There is a convenient cointegration test that lives in statsmodels.tsa.stattools. Let's say that our confidence level is $0.05$. We should see a p-value below our cutoff, as we've artificially created two series that are the textbook definition of cointegration.
End of explanation
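coint also returns the test statistic and its critical values, which is useful if you want a stricter cutoff than $0.05$. A small sketch, reusing the X and Y from above:
score, pvalue, crit_values = coint(X, Y)
print 'Cointegration t-statistic: ' + str(score)
print 'Critical values (1%, 5%, 10%): ' + str(crit_values)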
X.corr(Y)
Explanation: Correlation vs. Cointegration
Correlation and cointegration, while theoretically similar, are not the same. To demonstrate this, we'll show examples of series that are correlated, but not cointegrated, and vice versa. To start let's check the correlation of the series we just generated.
End of explanation
X_returns = np.random.normal(1, 1, 100)
Y_returns = np.random.normal(2, 1, 100)
X_diverging = pd.Series(np.cumsum(X_returns), name='X')
Y_diverging = pd.Series(np.cumsum(Y_returns), name='Y')
pd.concat([X_diverging, Y_diverging], axis=1).plot();
print 'Correlation: ' + str(X_diverging.corr(Y_diverging))
score, pvalue, _ = coint(X_diverging,Y_diverging)
print 'Cointegration test p-value: ' + str(pvalue)
Explanation: That's very high, as we would expect. But how would two series that are correlated but not cointegrated look?
Correlation Without Cointegration
A simple example is two series that just diverge.
End of explanation
Y2 = pd.Series(np.random.normal(0, 1, 1000), name='Y2') + 20
Y3 = Y2.copy()
# Y2 = Y2 + 10
Y3[0:100] = 30
Y3[100:200] = 10
Y3[200:300] = 30
Y3[300:400] = 10
Y3[400:500] = 30
Y3[500:600] = 10
Y3[600:700] = 30
Y3[700:800] = 10
Y3[800:900] = 30
Y3[900:1000] = 10
Y2.plot()
Y3.plot()
plt.ylim([0, 40]);
# correlation is nearly zero
print 'Correlation: ' + str(Y2.corr(Y3))
score, pvalue, _ = coint(Y2,Y3)
print 'Cointegration test p-value: ' + str(pvalue)
Explanation: Cointegration Without Correlation
A simple example of this case is a normally distributed series and a square wave.
End of explanation
def find_cointegrated_pairs(data):
n = data.shape[1]
score_matrix = np.zeros((n, n))
pvalue_matrix = np.ones((n, n))
keys = data.keys()
pairs = []
for i in range(n):
for j in range(i+1, n):
S1 = data[keys[i]]
S2 = data[keys[j]]
result = coint(S1, S2)
score = result[0]
pvalue = result[1]
score_matrix[i, j] = score
pvalue_matrix[i, j] = pvalue
if pvalue < 0.05:
pairs.append((keys[i], keys[j]))
return score_matrix, pvalue_matrix, pairs
Explanation: Sure enough, the correlation is incredibly low, but the p-value shows that these are cointegrated.
Hedging
Because you'd like to protect yourself from bad markets, often times short sales will be used to hedge long investments. Because a short sale makes money if the security sold loses value, and a long purchase will make money if a security gains value, one can long parts of the market and short others. That way if the entire market falls off a cliff, we'll still make money on the shorted securities and hopefully break even. In the case of two securities we'll call it a hedged position when we are long on one security and short on the other.
The Trick: Where it all comes together
Because the securities drift towards and apart from each other, there will be times when the distance is high and times when the distance is low. The trick of pairs trading comes from maintaining a hedged position across X and Y. If both securities go down, we neither make nor lose money, and likewise if both go up. We make money on the spread of the two reverting to the mean. In order to do this we'll watch for when X and Y are far apart, then short Y and long X. Similarly we'll watch for when they're close together, and long Y and short X.
Going Long the Spread
This is when the spread is small and we expect it to become larger. We place a bet on this by longing Y and shorting X.
Going Short the Spread
This is when the spread is large and we expect it to become smaller. We place a bet on this by shorting Y and longing X.
Specific Bets
One important concept here is that we are placing a bet on one specific thing, and trying to reduce our bet's dependency on other factors such as the market.
Finding real securities that behave like this
The best way to do this is to start with securities you suspect may be cointegrated and perform a statistical test. If you just run statistical tests over all pairs, you'll fall prey to multiple comparison bias.
Here's a method to look through a list of securities and test for cointegration between all pairs. It returns a cointegration test score matrix, a p-value matrix, and any pairs for which the p-value was less than $0.05$.
WARNING: This will incur a large amount of multiple comparisons bias.
The methods for finding viable pairs all live on a spectrum. At one end there is the formation of an economic hypothesis for an individual pair. You have some extra knowledge about an economic link that leads you to believe that the pair is cointegrated, so you go out and test for the presence of cointegration. In this case you will incur no multiple comparisons bias. At the other end of the spectrum, you perform a search through hundreds of different securities for any viable pairs according to your test. In this case you will incur a very large amount of multiple comparisons bias.
Multiple comparisons bias is the increased chance to incorrectly generate a significant p-value when many tests are run. If 100 tests are run on random data, we should expect to see 5 p-values below $0.05$ on expectation. Because we will perform $n(n-1)/2$ comparisons, we should expect to see many incorrectly significant p-values. For the sake of this example we will ignore this and continue. In practice a second verification step would be needed if looking for pairs this way. Another approach is to pick a small number of pairs you have reason to suspect might be cointegrated and test each individually. This will result in less exposure to multiple comparisons bias. You can read more about multiple comparisons bias here.
End of explanation
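As a rough guard against this, you could shrink the significance cutoff by the number of tests performed (a Bonferroni-style correction). A minimal sketch reusing the function above:
def find_cointegrated_pairs_corrected(data, cutoff=0.05):
    # divide the significance cutoff by the number of pairwise tests performed
    n = data.shape[1]
    n_tests = n * (n - 1) / 2
    _, pvalue_matrix, _ = find_cointegrated_pairs(data)
    keys = data.keys()
    pairs = []
    for i in range(n):
        for j in range(i+1, n):
            if pvalue_matrix[i, j] < cutoff / n_tests:
                pairs.append((keys[i], keys[j]))
    return pairs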
symbol_list = ['ABGB', 'ASTI', 'CSUN', 'DQ', 'FSLR','SPY']
prices_df = get_pricing(symbol_list, fields=['price']
, start_date='2014-01-01', end_date='2015-01-01')['price']
prices_df.columns = map(lambda x: x.symbol, prices_df.columns)
Explanation: Looking for Cointegrated Pairs of Alternative Energy Securities
We are looking through a set of solar company stocks to see if any of them are cointegrated. We'll start by defining the list of securities we want to look through. Then we'll get the pricing data for each security for the year of 2014.
Our approach here is somewhere in the middle of the spectrum that we mentioned before. We have formulated an economic hypothesis that there is some sort of link between a subset of securities within the energy sector and we want to test whether there are any cointegrated pairs. This incurs significantly less multiple comparisons bias than searching through hundreds of securities and slightly more than forming a hypothesis for an individual test.
NOTE: We include the market in our data. This is because the market drives the movement of so many securities that you oftentimes might find two seemingly cointegrated securities, but in reality they are not cointegrated with each other and are just both cointegrated with the market. This is known as a confounding variable and it is important to check for market involvement in any relationship you find.
get_pricing() is a Quantopian method that pulls in stock data, and loads it into a Python Pandas DataPanel object. Available fields are 'price', 'open_price', 'high', 'low', 'volume'. But for this example we will just use 'price' which is the daily closing price of the stock.
End of explanation
prices_df.head()
Explanation: Example of how to get all the prices of all the stocks loaded using get_pricing() above in one pandas dataframe object
End of explanation
prices_df['SPY'].head()
Explanation: Example of how to get just the prices of a single stock that was loaded using get_pricing() above
End of explanation
# Heatmap to show the p-values of the cointegration test between each pair of
# stocks. Only show the value in the upper-diagonal of the heatmap
scores, pvalues, pairs = find_cointegrated_pairs(prices_df)
import seaborn
seaborn.heatmap(pvalues, xticklabels=symbol_list, yticklabels=symbol_list, cmap='RdYlGn_r'
, mask = (pvalues >= 0.05)
)
print pairs
Explanation: Now we'll run our method on the list and see if any pairs are cointegrated.
End of explanation
S1 = prices_df['ABGB']
S2 = prices_df['FSLR']
score, pvalue, _ = coint(S1, S2)
pvalue
Explanation: Looks like 'ABGB' and 'FSLR' are cointegrated. Let's take a look at the prices to make sure there's nothing weird going on.
End of explanation
S1 = sm.add_constant(S1)
results = sm.OLS(S2, S1).fit()
S1 = S1['ABGB']
b = results.params['ABGB']
spread = S2 - b * S1
spread.plot()
plt.axhline(spread.mean(), color='black')
plt.legend(['Spread']);
Explanation: Calculating the Spread
Now we will plot the spread of the two series. In order to actually calculate the spread, we use a linear regression to get the coefficient for the linear combination to construct between our two securities, as shown in the stationarity lecture. Using a linear regression to estimate the coefficient is known as the Engle-Granger method.
End of explanation
ratio = S1/S2
ratio.plot()
plt.axhline(ratio.mean(), color='black')
plt.legend(['Price Ratio']);
Explanation: Alternatively, we could examine the ratio between the two series.
End of explanation
def zscore(series):
return (series - series.mean()) / np.std(series)
zscore(spread).plot()
plt.axhline(zscore(spread).mean(), color='black')
plt.axhline(1.0, color='red', linestyle='--')
plt.axhline(-1.0, color='green', linestyle='--')
plt.legend(['Spread z-score', 'Mean', '+1', '-1']);
Explanation: Examining the price ratio of a trading pair is a traditional way to handle pairs trading. Part of why this works as a signal is based in our assumptions of how stock prices move, specifically because stock prices are typically assumed to be log-normally distributed. What this implies is that by taking a ratio of the prices, we are taking a linear combination of the returns associated with them (since prices are just the exponentiated returns).
This can be a little irritating to deal with for our purposes as purchasing the precisely correct ratio of a trading pair may not be practical. We choose instead to move forward with simply calculating the spread between the cointegrated stocks using linear regression. This is a very simple way to handle the relationship, however, and is likely not feasible for non-toy examples. There are other potential methods for estimating the spread listed at the bottom of this lecture. If you want to get more into the theory of why having cointegrated stocks matters for pairs trading, again, please see the Integration, Cointegration, and Stationarity Lecture from the Quantopian Lecture Series.
So, back to our example. The absolute spread isn't very useful in statistical terms. It is more helpful to normalize our signal by treating it as a z-score.
WARNING
In practice this is usually done to try to give some scale to the data, but this assumes some underlying distribution, usually a normal distribution. Under a normal distribution, we would know that approximately 84% of all spread values will be smaller. However, much financial data is not normally distributed, and one must be very careful not to assume normality, nor any specific distribution when generating statistics. It could be the case that the true distribution of spreads was very fat-tailed and prone to extreme values. This could mess up our model and result in large losses.
End of explanation
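As a quick sanity check on that assumption, it is worth looking at how heavy-tailed the spread actually is before trusting fixed z-score thresholds. A minimal sketch using scipy (Jarque-Bera tests the null hypothesis that the sample is normally distributed):
from scipy import stats
jb_stat, jb_pvalue = stats.jarque_bera(spread)
print 'Jarque-Bera p-value: ' + str(jb_pvalue)
print 'Excess kurtosis of the spread: ' + str(stats.kurtosis(spread))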
# Get the spread between the 2 stocks
# Calculate rolling beta coefficient
rolling_beta = pd.ols(y=S1, x=S2, window_type='rolling', window=30)
spread = S2 - rolling_beta.beta['x'] * S1
spread.name = 'spread'
# Get the 1 day moving average of the price spread
spread_mavg1 = pd.rolling_mean(spread, window=1)
spread_mavg1.name = 'spread 1d mavg'
# Get the 30 day moving average
spread_mavg30 = pd.rolling_mean(spread, window=30)
spread_mavg30.name = 'spread 30d mavg'
plt.plot(spread_mavg1.index, spread_mavg1.values)
plt.plot(spread_mavg30.index, spread_mavg30.values)
plt.legend(['1 Day Spread MAVG', '30 Day Spread MAVG'])
plt.ylabel('Spread');
Explanation: Simple Strategy:
Go "Long" the spread whenever the z-score is below -1.0
Go "Short" the spread when the z-score is above 1.0
Exit positions when the z-score approaches zero
This is just the tip of the iceberg, and only a very simplistic example to illustrate the concepts. In practice you would want to compute a more optimal weighting for how many shares to hold for S1 and S2. Some additional resources on pairs trading are listed at the end of this notebook.
Trading using constantly updating statistics
In general taking a statistic over your whole sample size can be bad. For example, if the market is moving up, and both securities with it, then your average price over the last 3 years may not be representative of today. For this reason traders often use statistics that rely on rolling windows of the most recent data.
Moving Averages
A moving average is just an average over the last $n$ datapoints for each given time. It will be undefined for the first $n$ datapoints in our series. Shorter moving averages will be more jumpy and less reliable, but respond to new information quickly. Longer moving averages will be smoother, but take more time to incorporate new information.
We also need to use a rolling beta, a rolling estimate of how our spread should be calculated, in order to keep all of our parameters up to date.
End of explanation
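Note that pd.ols and pd.rolling_mean come from an older pandas API. On a recent pandas the same rolling quantities can be written roughly as below; this is a sketch, and the rolling beta is only approximated from rolling covariances rather than a full rolling regression:
spread_mavg30_alt = spread.rolling(window=30).mean()
std_30_alt = spread.rolling(window=30).std()
rolling_beta_alt = S1.rolling(window=30).cov(S2) / S2.rolling(window=30).var()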
# Take a rolling 30 day standard deviation
std_30 = pd.rolling_std(spread, window=30)
std_30.name = 'std 30d'
# Compute the z score for each day
zscore_30_1 = (spread_mavg1 - spread_mavg30)/std_30
zscore_30_1.name = 'z-score'
zscore_30_1.plot()
plt.axhline(0, color='black')
plt.axhline(1.0, color='red', linestyle='--');
Explanation: We can use the moving averages to compute the z-score of the spread at each given time. This will tell us how extreme the spread is and whether it's a good idea to enter a position at this time. Let's take a look at the z-score now.
End of explanation
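To make the simple strategy above concrete, here is a rough sketch that maps the rolling z-score onto a position in the spread (+1 long, -1 short, 0 flat). It is illustrative only and ignores position sizing, transaction costs and execution:
position = pd.Series(np.nan, index=zscore_30_1.index)
position[zscore_30_1 > 1.0] = -1.0       # short the spread when it is stretched wide
position[zscore_30_1 < -1.0] = 1.0       # long the spread when it is compressed
position[zscore_30_1.abs() < 0.5] = 0.0  # exit as the z-score comes back towards zero
position = position.ffill().fillna(0.0)  # otherwise hold the previous position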
# Plot the prices scaled down along with the negative z-score
# just divide the stock prices by 10 to make viewing it on the plot easier
plt.plot(S1.index, S1.values/10)
plt.plot(S2.index, S2.values/10)
plt.plot(zscore_30_1.index, zscore_30_1.values)
plt.legend(['S1 Price / 10', 'S2 Price / 10', 'Price Spread Rolling z-Score']);
Explanation: The z-score doesn't mean much out of context, let's plot it next to the prices to get an idea of what it looks like. We'll take the negative of the z-score because the spreads were all negative and that is a little counterintuitive to trade on.
End of explanation
symbol_list = ['ABGB', 'FSLR']
prices_df = get_pricing(symbol_list, fields=['price']
, start_date='2015-01-01', end_date='2016-01-01')['price']
prices_df.columns = map(lambda x: x.symbol, prices_df.columns)
S1 = prices_df['ABGB']
S2 = prices_df['FSLR']
score, pvalue, _ = coint(S1, S2)
print 'p-value: ', pvalue
Explanation: Out of Sample Test
Now that we have constructed our spread appropriately and have an idea of how we will go about making trades, it is time to conduct some out of sample testing. Our whole model is based on the premise that these securities are cointegrated, but we built it on information from a certain time period. If we actually want to implement this model, we need to conduct an out of sample test to confirm that the principles of our model are still valid going forward.
Since we initially built the model on the 2014 - 2015 year, let's see if this cointegrated relationship holds for 2015 - 2016. Historical results do not guarantee future results so this is a sanity check to see if the work we have done holds strong.
End of explanation |
3,305 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's train this model on TPU. It's worth it.
Imports
Step1: TPU detection
Step2: Configuration
Step3: Read images and labels from TFRecords
Step4: training and validation datasets
Step5: Model [WORK REQUIRED]
train the model as it is, with a single convolutional layer
Accuracy 40%... Not great.
add additional convolutional layers interleaved with max-pooling layers. Try also adding a second dense layer. For example
Step6: Training
Step7: Predictions | Python Code:
import os, sys, math
import numpy as np
from matplotlib import pyplot as plt
import tensorflow as tf
print("Tensorflow version " + tf.__version__)
AUTOTUNE = tf.data.AUTOTUNE
Explanation: Let's train this model on TPU. It's worth it.
Imports
End of explanation
try: # detect TPUs
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() # TPU detection
strategy = tf.distribute.TPUStrategy(tpu)
except ValueError: # detect GPUs
strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines
#strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU
#strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # for clusters of multi-GPU machines
print("Number of accelerators: ", strategy.num_replicas_in_sync)
Explanation: TPU detection
End of explanation
GCS_PATTERN = 'gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec'
IMAGE_SIZE = [192, 192]
if tpu:
BATCH_SIZE = 16*strategy.num_replicas_in_sync # A TPU has 8 cores so this will be 128
else:
BATCH_SIZE = 32 # On Colab/GPU, a higher batch size does not help and sometimes does not fit on the GPU (OOM)
VALIDATION_SPLIT = 0.19
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'] # do not change, maps to the labels in the data (folder names)
# splitting data files between training and validation
filenames = tf.io.gfile.glob(GCS_PATTERN)
split = int(len(filenames) * VALIDATION_SPLIT)
training_filenames = filenames[split:]
validation_filenames = filenames[:split]
print("Pattern matches {} data files. Splitting dataset into {} training files and {} validation files".format(len(filenames), len(training_filenames), len(validation_filenames)))
validation_steps = int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE
steps_per_epoch = int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE
print("With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format(BATCH_SIZE, steps_per_epoch, validation_steps))
#@title display utilities [RUN ME]
def dataset_to_numpy_util(dataset, N):
dataset = dataset.batch(N)
    # In eager mode, iterate in the Dataset directly.
for images, labels in dataset:
numpy_images = images.numpy()
numpy_labels = labels.numpy()
break;
return numpy_images, numpy_labels
def title_from_label_and_target(label, correct_label):
label = np.argmax(label, axis=-1) # one-hot to class number
correct_label = np.argmax(correct_label, axis=-1) # one-hot to class number
correct = (label == correct_label)
return "{} [{}{}{}]".format(CLASSES[label], str(correct), ', shoud be ' if not correct else '',
CLASSES[correct_label] if not correct else ''), correct
def display_one_flower(image, title, subplot, red=False):
plt.subplot(subplot)
plt.axis('off')
plt.imshow(image)
plt.title(title, fontsize=16, color='red' if red else 'black')
return subplot+1
def display_9_images_from_dataset(dataset):
subplot=331
plt.figure(figsize=(13,13))
images, labels = dataset_to_numpy_util(dataset, 9)
for i, image in enumerate(images):
title = CLASSES[np.argmax(labels[i], axis=-1)]
subplot = display_one_flower(image, title, subplot)
if i >= 8:
break;
#plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_9_images_with_predictions(images, predictions, labels):
subplot=331
plt.figure(figsize=(13,13))
for i, image in enumerate(images):
title, correct = title_from_label_and_target(predictions[i], labels[i])
subplot = display_one_flower(image, title, subplot, not correct)
if i >= 8:
break;
#plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
#plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor('#F8F8F8')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
Explanation: Configuration
End of explanation
def read_tfrecord(example):
features = {
"image": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar
"one_hot_class": tf.io.VarLenFeature(tf.float32),
}
example = tf.io.parse_single_example(example, features)
image = tf.io.decode_jpeg(example['image'], channels=3)
image = tf.cast(image, tf.float32) / 255.0 # convert image to floats in [0, 1] range
image = tf.reshape(image, [*IMAGE_SIZE, 3]) # explicit size will be needed for TPU
one_hot_class = tf.sparse.to_dense(example['one_hot_class'])
one_hot_class = tf.reshape(one_hot_class, [5])
return image, one_hot_class
def load_dataset(filenames):
# read from TFRecords. For optimal performance, read from multiple
# TFRecord files at once and set the option experimental_deterministic = False
# to allow order-altering optimizations.
option_no_order = tf.data.Options()
option_no_order.experimental_deterministic = False
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTOTUNE)
dataset = dataset.with_options(option_no_order)
dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTOTUNE)
return dataset
display_9_images_from_dataset(load_dataset(training_filenames))
Explanation: Read images and labels from TFRecords
End of explanation
def get_batched_dataset(filenames, train=False):
dataset = load_dataset(filenames)
dataset = dataset.cache() # This dataset fits in RAM
if train:
# Best practices for Keras:
# Training dataset: repeat then batch
# Evaluation dataset: do not repeat
dataset = dataset.repeat()
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTOTUNE) # prefetch next batch while training (autotune prefetch buffer size)
# should shuffle too but this dataset was well shuffled on disk already
return dataset
# source: Dataset performance guide: https://www.tensorflow.org/guide/performance/datasets
# instantiate the datasets
training_dataset = get_batched_dataset(training_filenames, train=True)
validation_dataset = get_batched_dataset(validation_filenames, train=False)
some_flowers, some_labels = dataset_to_numpy_util(load_dataset(validation_filenames), 160)
Explanation: training and validation datasets
End of explanation
with strategy.scope(): # this line is all that is needed to run on TPU (or multi-GPU, ...)
model = tf.keras.Sequential([
###
tf.keras.layers.InputLayer(input_shape=[*IMAGE_SIZE, 3]),
tf.keras.layers.Conv2D(kernel_size=3, filters=20, padding='same', activation='relu'),
#
# YOUR LAYERS HERE
#
# LAYERS TO TRY:
# Conv2D(kernel_size=3, filters=30, padding='same', activation='relu')
# MaxPooling2D(pool_size=2)
# GlobalAveragePooling2D() / Flatten()
# Dense(90, activation='relu')
#
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(5, activation='softmax')
###
])
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
Explanation: Model [WORK REQUIRED]
train the model as it is, with a single convolutional layer
Accuracy 40%... Not great.
add additional convolutional layers interleaved with max-pooling layers. Try also adding a second dense layer. For example:<br/>
conv 3x3, 16 filters, relu<br/>
conv 3x3, 30 filters, relu<br/>
max pool 2x2<br/>
conv 3x3, 50 filters, relu<br/>
max pool 2x2<br/>
conv 3x3, 70 filters, relu<br/>
flatten<br/>
dense 5 softmax<br/>
Accuracy 60%... slightly better. But this model is more than 800K parameters and it overfits dramatically (overfitting = eval loss goes up instead of down).
Try replacing the Flatten layer by Global average pooling.
Accuracy still 60% but the model is back to a modest 50K parameters, and does not overfit anymore. If you train longer, it can go even higher.
Try experimenting with 1x1 convolutions too. They typically follow a 3x3 convolution and decrease the filter count. You can also add dropout between the dense layers. For example:
conv 3x3, 20 filters, relu<br/>
conv 3x3, 50 filters, relu<br/>
max pool 2x2<br/>
conv 3x3, 70 filters, relu<br/>
conv 1x1, 50 filters, relu<br/>
max pool 2x2<br/>
conv 3x3, 100 filters, relu<br/>
conv 1x1, 70 filters, relu<br/>
max pool 2x2<br/>
conv 3x3, 120 filters, relu<br/>
conv 1x1, 80 filters, relu<br/>
max pool 2x2<br/>
global average pooling<br/>
dense 5 softmax<br/>
accuracy 70%
The goal is 80% accuracy! Good luck. (You might want to train for more than 20 epochs to get there. See your training curves to decide whether it is worth training longer.)
End of explanation
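As one concrete starting point for the exercise (not the only solution), the layer stack suggested above could be filled in like this:
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=[*IMAGE_SIZE, 3]),
        tf.keras.layers.Conv2D(kernel_size=3, filters=16, padding='same', activation='relu'),
        tf.keras.layers.Conv2D(kernel_size=3, filters=30, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=50, padding='same', activation='relu'),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Conv2D(kernel_size=3, filters=70, padding='same', activation='relu'),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation='softmax')
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()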
EPOCHS = 20
history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset)
print(history.history.keys())
display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
Explanation: Training
End of explanation
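If you decide to train for longer, standard Keras callbacks can help avoid wasting epochs; an optional sketch (not part of the original notebook):
callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3),
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=6, restore_best_weights=True),
]
history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
                    validation_data=validation_dataset, callbacks=callbacks)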
# randomize the input so that you can execute multiple times to change results
permutation = np.random.permutation(160)
some_flowers, some_labels = (some_flowers[permutation], some_labels[permutation])
predictions = model.predict(some_flowers, batch_size=16)
evaluations = model.evaluate(some_flowers, some_labels, batch_size=16)
print(np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist())
print('[val_loss, val_acc]', evaluations)
display_9_images_with_predictions(some_flowers, predictions, some_labels)
Explanation: Predictions
End of explanation |
3,306 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python evaluation 2016-2017 - solution
The data directory contains two randomly simulated csv files which you will need in order to answer the 10 questions below. Each question is worth two points. The work is due on Monday 20 February, as a notebook sent as an email attachment.
Step1: 1
Two files were extracted from a doctor's database. One file contains information about people, the other about the appointments made by these people. Which is which?
Step2: The second file is larger and contains a price variable that cannot be linked to the people. The first file is therefore the people file, the second the appointments file. The idc variable is present in both tables; it is what identifies the people across the two tables.
2
We want to study the relationship between the average price paid by a person, their age and their gender. Compute the average price paid by each person.
The appointments table contains all the information needed. The question was a little ambiguous: one can compute the average price paid per person, or the average of those per-person averages... We go with the first option, since the second is of little interest and is very close to the average price per appointment, which the question would no doubt have asked for explicitly had that been the intent. We group by idc and take the mean.
Step3: 3
Join the two tables.
Step4: This join is fairly simple since the shared column has the same name in both tables. We can nevertheless ask whether there are people with no associated appointment, and vice versa.
Step5: Apparently not, since a join that keeps the unmatched elements of both tables does not add any more elements to the join.
4
Draw two scatter plots, (age, average price) and (gender, average price).
Step6: We can also use a module such as seaborn, which offers plots that are interesting for a statistician.
Step7: Not much can be seen on this second plot. One option is to add random noise to the gender to spread the cloud out.
Step8: Nothing stands out. We can draw a box plot.
Step9: Better. One more: the violin plot, more complete than the previous one.
Step10: 5
Compute the coefficients of the regression $prix_moyen \sim age + genre$.
A regression. The first reflex is scikit-learn.
Step11: We now use statsmodels, which is more complete for everything related to linear models.
Step12: Or again (after adding a constant).
Step13: The regression can also be written as a formula with the patsy module.
Step14: 6
We want to study the price of a consultation as a function of the day of the week. Add a column, to the table of your choice, with the day of the week.
We first convert the date column (a string in date format) with the to_datetime function.
Step15: And we get the day of the week with weekday.
Step16: 7
Create a box plot that makes it possible to check this hypothesis.
We reuse the code of a previous question.
Step17: It is clearly more expensive on Sundays.
8
Add a column, to the table of your choice, containing 365 if it is the first appointment, and otherwise the number of days elapsed since the previous appointment. We call this column $delay$. We also add the column $1/delay$.
To begin, we convert the date into the number of days since the first date.
Step18: We convert to integers.
Step19: We sort by patient and day, then take the difference.
Step20: The first day, or rather the first appointment, still has to be handled. We retrieve it for each patient.
Step21: Then we do a join.
Step22: All that remains is to replace the NaN values with jouri.
Step23: Finally, a $1/delay$ column has to be added. Since some patients have two appointments on the same day, we add the column $1/(1+delay)$ to avoid zero values. To avoid zeros we could also have measured the time between two appointments in seconds rather than in days.
Step24: 8 - shorter answer
We keep the idea of the diff function and add the shift function.
Step25: 9
Compute the coefficients of the regression $prix \sim age + genre + delay + 1/delay + jour_semaine$.
Age is not part of the tout table. A join is needed to retrieve it.
Step26: Then back to scikit-learn and, preferably, statsmodels to run tests on the model coefficients. We first look at the correlations.
Step27: If the dataset is not too large.
Step28: One last one for the road.
Step29: Regression
Step30: 10
How would you compare this model with the previous one? Implement the computation that allows you to answer this question.
We could compare the $R^2$ coefficients (0.57, 0.61) of the regressions to decide which is better, except that they are not computed on the same data. The comparison is meaningless and it would be dangerous to draw conclusions from it. The values are also very close. We have to compare the predictions. In the first case we predict the average price; in the second, the price of a single consultation. It is then possible to compute an average prediction per patient and to compare the errors made when predicting the average price: on one side the prediction of the average price, on the other the per-consultation prediction aggregated by patient.
Step31: We compute the error.
Step32: We aggregate the second set of predictions.
Step33: The second model is clearly better.
Step34: The second regression uses information that is not available at the aggregated level
Step35: This is somewhat more visible on the previous plot (the scatter plot), but also a little misleading because of the overlapping points. One last remark: in machine learning we are used to training a model on a training set and testing the predictions on another. Here we trained and predicted on the same data. That kind of out-of-sample test is obviously more reliable. But what we compared here are two mean training errors, which is exactly what one does when comparing two $R^2$ coefficients.
One last plot for the road, obtained by sorting the errors in increasing order.
Step36: The second model makes smaller errors, especially on the negative side. It underestimates the true value less.
Specific treatment of the categorical variable weekday
The second model does not take Sunday into account as a day of the week. weekday is a categorical variable. Unlike gender, it has more than two categories. It would be interesting to treat it as a set of binary variables rather than a single integer column.
Step37: We drop one category to avoid a matrix that is collinear with the constant, and we add the new variables to the regression model.
Step38: We check that the coefficient for Sunday is clearly significant, unlike the others, whose probability of being zero is high. The doctor appears to apply a surcharge of 20 euros on Sundays. The $R^2$ coefficient is also much higher. We build the predictions of a second model that only takes the Sunday variable into account.
Step39: Much, much better.
Step40: A categorical variable in a single column?
A dataset can grow quickly if it is expanded for every categorical variable. One can use the category_encoders module or statsmodels.
Step41: Or associate with each day of the week the value of the target to be predicted.
Step42: We train with the weekday variable
Step43: And we compare with the price_label variable
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Evaluation Python année 2016-2017 - solution
Le répertoire data contient deux fichiers csv simulés aléatoirement dont il faudra se servir pour répondre aux 10 questions qui suivent. Chaque question vaut deux points. Le travail est à rendre pour le lundi 20 février sous la forme d'un notebook envoyé en pièce jointe d'un mail.
End of explanation
import pandas
persons = pandas.read_csv("https://raw.githubusercontent.com/sdpython/actuariat_python/master/_doc/notebooks/examen/data/persons.txt",
sep="\t")
persons.head()
rend = pandas.read_csv("https://raw.githubusercontent.com/sdpython/actuariat_python/master/_doc/notebooks/examen/data/rendezvous.txt",
sep="\t")
rend.head()
persons.shape, rend.shape
Explanation: 1
Deux fichiers sont extraits de la base de données d'un médecin. Un fichier contient des informations sur des personnes, un autre sur les rendez-vous pris par ces personnes. Quels sont-ils ?
End of explanation
gr = rend.groupby("idc").mean()
gr.head()
Explanation: Le second fichier est plus volumineux et contient une variable price qui ne peut pas être reliée aux personnes. Le premier fichier est celui des personnes, le second celui des rendez-vous. La variable idc est présente dans les deux tables. C'est elle qui identifie les personnes dans les deux tables.
2
On souhaite étudier la relation entre le prix moyen payé par une personne, son âge et son genre. Calculer le prix moyen payé par une personne ?
La table des rendez-vous contient toutes l'information nécessaire. La question était un peu ambiguë. On peut déterminer le prix moyen payé par personne, ou le prix moyen des prix moyens... On va répondre à la première option car la seconde n'a pas beaucoup d'intérêt et c'est très proche du prix moyen par rendez-vous, ce que la question aurait sans doute formulé dans ce sens si telle avait été son intention. On groupe par idc et on fait la moyenne.
End of explanation
join = persons.merge(gr.reset_index(), on="idc")
join.head()
Explanation: 3
Faire la jointure entre les deux tables.
End of explanation
join.shape
join = persons.merge(gr.reset_index(), on="idc", how="outer")
join.shape
Explanation: Cette jointure est assez simple puisque la colonne partagée porte le même nom dans les deux tables. On peut néanmoins se poser la question de savoir s'il y a des personnes qui n'ont pas de rendez-vous associé et réciproquement.
End of explanation
join.plot(x="age", y="price", kind="scatter")
Explanation: Visiblement, ce n'est pas le cas puisqu'une jointure incluant les éléments sans correspondances dans les deux tables n'ajoute pas plus d'éléments à la jointure.
4
Tracer deux nuages de points (age, prix moyen) et (genre, prix moyen) ?
End of explanation
import seaborn
g = seaborn.jointplot("age", "price", data=join, kind="reg", size=7, scatter_kws={"s": 10})
join.plot(x="gender", y="price", kind="scatter")
Explanation: On peut aussi utiliser un module comme seaborn qui propose des dessins intéressants pour un statisticatien.
End of explanation
import numpy
bruit = join.copy()
bruit["gx"] = bruit.gender + numpy.random.random(bruit.shape[0])/3
bruit.plot(x="gx", y="price", kind="scatter")
Explanation: On ne voit pas grand chose sur ce second graphe. Une option est d'ajouter un bruit aléatoire sur le genre pour éclater le nuage.
End of explanation
join[["price", "gender"]].boxplot(by="gender")
Explanation: Il n'y a rien de flagrant. On peut faire un graphe moustache.
End of explanation
seaborn.violinplot(x="gender", y="price", data=join, inner="quart")
Explanation: C'est mieux. Un dernier. Le diagramme violon, plus complet que le précédent.
End of explanation
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(join[["age", "gender"]], join["price"])
lr.coef_
lr.intercept_
Explanation: 5
Calculer les coefficients de la régression $prix_moyen \sim age + genre$.
Une régression. Le premier réflexe est scikit-learn.
End of explanation
from statsmodels.formula.api import ols
lr = ols("price ~ age + gender", data=join)
res = lr.fit()
res.summary()
Explanation: On utilise maintenant statsmodels qui est plus complet pour toute ce qui est un modèle linéaire.
End of explanation
from statsmodels.api import OLS
join2 = join.copy()
join2["cst"] = 1
lr = OLS(join2["price"], join2[["age", "gender", "cst"]])
res = lr.fit()
res.summary()
Explanation: Ou encore (après avoir ajouté une constante).
End of explanation
from patsy import dmatrices
y, X = dmatrices("price ~ age + gender" , join, return_type="matrix")
y = numpy.ravel(y)
lr = LinearRegression(fit_intercept=False)
lr.fit(X, y)
lr.coef_, lr.intercept_
X[:2]
Explanation: On peut aussi définir la régression sous la forme de formule avec le module patsy.
End of explanation
rend["date2"] = pandas.to_datetime(rend.date)
rend.dtypes
Explanation: 6
On souhaite étudier le prix d'une consultation en fonction du jour de la semaine. Ajouter une colonne dans la table de votre choix avec le jour de la semaine.
On convertit d'abord la colonne date (chaîne de caractères au format date) avec la fonction to_datetime.
End of explanation
rend["weekday"] = rend.date2.dt.weekday
rend.head()
Explanation: Et on récupère le jour de la semaine avec weekday.
End of explanation
rend[["price", "weekday"]].boxplot(by="weekday")
Explanation: 7
Créer un graphe moustache qui permet de vérifier cette hypothèse.
On réutilise le code d'une question précédente.
End of explanation
rend["constant"] = rend["date2"].min()
rend["jour"] = rend["date2"] - rend["constant"]
rend.head(n=2)
Explanation: C'est clairement plus cher le dimanche.
8
Ajouter une colonne dans la table de votre choix qui contient 365 si c'est le premier rendez-vous, le nombre de jour écoulés depuis le précédent rendez-vous. On appelle cette colonne $delay$. On ajoute également la colonne $1/delay$.
Pour commencer, on convertit la date en nombre de jours depuis la première date.
End of explanation
rend["jouri"] = rend.jour.apply(lambda d: d.days)
rend.head(n=2)
Explanation: On convertit en entier.
End of explanation
diff = rend.sort_values(["idc", "jouri"])["jouri"].diff()
rend["diff"] = diff
rend.head(n=2)
Explanation: On trie par patient et jour puis on effectue la différence.
End of explanation
first = rend[["idc", "date"]].groupby("idc", as_index=False).min()
first["j365"] = 365
first.head(n=2)
Explanation: Il reste à traiter le premier jour ou plutôt le premier rendez-vous. On le récupère pour chaque patient.
End of explanation
tout = rend.merge(first, on=["idc", "date"], how="outer")
tout[["idc", "jouri", "date", "j365"]].head(n=5)
Explanation: Puis on fait une jointure.
End of explanation
tout["delay"] = tout.j365.fillna(tout.jouri)
tout[["idc", "jouri", "date", "j365", "delay"]].head(n=8)
Explanation: Il ne reste plus qu'à remplacer les NaN par jouri.
End of explanation
tout["delay1"] = 1/ (tout["delay"] + 1)
tout[["delay", "delay1"]].head()
Explanation: Finalement, il faut ajouter une colonne $1/delay$. Comme des patients ont parfois deux rendez-vous le même jour, pour éviter les valeurs nulles, on ajoute la colonne $1/(1+delay)$. On aurait pu également pour éviter les valeurs nulles considérer le temps en secondes et non en jour entre deux rendez-vous.
End of explanation
rend2 = rend.sort_values(["idc", "jouri"]).reset_index(drop=True).copy()
rend2["diff"] = rend2["jouri"].diff()
rend2.loc[rend2.idc != rend2.idc.shift(1), "diff"] = 365
rend2.head()
Explanation: 8 - réponse plus courte
On garde l'idée de la fonction diff et on ajoute la fonction shift.
End of explanation
mat = tout.merge(persons, on="idc")
Explanation: 9
Calculer les coefficients de la régression $prix \sim age + genre + delay + 1/delay + jour_semaine$.
L'âge ne fait pas partie de la table tout. Il faut faire une jointure pour le récupérer.
End of explanation
mat[["age", "gender", "delay", "delay1", "weekday", "price"]].corr()
seaborn.clustermap(mat[["age", "gender", "delay", "delay1", "weekday", "price"]].corr(), figsize=(5,5))
Explanation: Ensuite retour à scikit-learn et plutôt le second statsmodels pour effectuer des tests sur les coefficients du modèle. On regarde d'abord les corrélations.
End of explanation
seaborn.pairplot(mat[["age", "gender", "delay", "delay1", "weekday", "price"]],
plot_kws={"s": 10}, size=1)
Explanation: Si le jeu de données n'est pas trop volumineux.
End of explanation
feat = mat[["age", "gender", "delay", "delay1", "weekday", "price"]]
g = seaborn.PairGrid(feat.sort_values("price", ascending=False), x_vars=feat.columns[:-1],
y_vars=["price"], size=5, aspect=.25)
g.map(seaborn.stripplot, size=3, orient="h", palette="Reds_r", edgecolor="gray")
Explanation: Un dernier pour la route.
End of explanation
lr = LinearRegression()
lr.fit(mat[["age", "gender", "delay", "delay1", "weekday"]], mat["price"])
lr.coef_
from statsmodels.formula.api import ols
lr = ols("price ~ age + gender + delay + delay1 + weekday", data=mat)
res = lr.fit()
res.summary()
Explanation: Régression
End of explanation
lr_moy = LinearRegression()
lr_moy.fit(join[["age", "gender"]], join["price"])
lr_moy.coef_, lr_moy.intercept_
pred_moy = lr_moy.predict(join[["age", "gender"]])
join["pred_moy"] = pred_moy
join.head()
Explanation: 10
Comment comparer ce modèle avec le précédent ? Implémentez le calcul qui vous permet de répondre à cette question.
Nous pourrions comparer les coefficients $R^2$ (0.57, 0.61) des régressions pour savoir quelle est la meilleur excepté que celle-ci ne sont pas calculées sur les mêmes données. La comparaison n'a pas de sens et il serait dangeraux d'en tirer des conclusions. Les valeurs sont de plus très proches. Il faut comparer les prédictions. Dans le premier cas, on prédit le prix moyen. Dans le second, on prédit le prix d'une consultation. Il est alors possible de calculer une prédiction moyenne par patient et de comparer les erreurs de prédiction du prix moyen. D'un côté, la prédiction du prix moyen, de l'autre la prédiction du prix d'une consultation agrégé par patient.
End of explanation
err1 = ((join.pred_moy - join.price)**2).sum() / join.shape[0]
err1
join.plot(x="price", y="pred_moy", kind="scatter")
lrc = LinearRegression()
feat = mat[["age", "gender", "delay", "delay1", "weekday", "price", "idc"]].copy()
lrc.fit(feat[["age", "gender", "delay", "delay1", "weekday"]], feat["price"])
lrc.coef_, lrc.intercept_
predc = lrc.predict(feat[["age", "gender", "delay", "delay1", "weekday"]])
feat["predc"] = predc
feat.head()
Explanation: On calcule l'erreur.
End of explanation
agg = feat[["idc","predc", "price"]].groupby("idc").mean()
agg.head()
err2 = ((agg.predc - agg.price)**2).sum() / agg.shape[0]
err2
Explanation: On agrège les secondes prédictions.
End of explanation
agg.plot(x="price", y="predc", kind="scatter")
Explanation: Le second modèle est clairement meilleur.
End of explanation
temp = join.sort_values("price").reset_index(drop=True).reset_index(drop=False)
temp.head(n=1)
temp.plot(x="index", y=["price", "pred_moy"])
temp2 = agg.sort_values("price").reset_index(drop=True).reset_index(drop=False)
temp2.head(n=1)
ax = temp.plot(x="index", y="price", figsize=(14,4), ylim=[60,120])
temp.plot(x="index", y="pred_moy", linewidth=1, ax=ax, ylim=[60,120])
temp2.plot(x="index", y="predc", ax=ax, linewidth=0.6, ylim=[60,120])
Explanation: La seconde régression utilise une information dont on ne dispose pas au niveau agrégé : le jour de la semaine et un précédent graphe a clairement montré que c'était une variable importante. Un dernier graphe pour comparer les deux prédictions en montrant les prédictions triées par prix à prédire.
End of explanation
temp["erreur"] = temp.pred_moy - temp.price
temp2["erreur2"] = temp2.predc - temp2.price
ax = temp[["erreur"]].sort_values("erreur").reset_index(drop=True).plot()
temp2[["erreur2"]].sort_values("erreur2").reset_index(drop=True).plot(ax=ax)
ax.plot([0,1000], [0,0], "r--")
Explanation: C'est finalement un peu plus visible sur le graphe précédent (nuage de points) mais aussi un peu trompeur du fait de la superposition des points. Une dernière remarque. En machine learning, nous avons l'habitude d'apprendre un modèle sur une base d'apprentissage et de tester les prédictions sur une autre. Dans notre cas, nous avons appris et prédit sur la même base. Ce type de tester est évidemment plus fiable. Mais nous avons comparé ici deux erreurs d'apprentissage moyennes et c'est exactement ce que l'on fait lorsqu'on compare deux coefficients $R^2$.
Un dernier graphe pour la route obtenu en triant les erreurs par ordre croissant.
End of explanation
dummies = pandas.get_dummies(mat.weekday)
dummies.columns=["lu", "ma", "me", "je", "ve", "sa", "di"]
dummies.head()
Explanation: Le second modèle fait des erreurs moins importantes surtout côté négatif. Il sous-estime moins la bonne valeur.
Traitement spécifique de la variable catégorielle weekday
Le second modèle ne prend pas en compte le dimanche comme jour de la semaine. weekday est une variable catégorielle. Contrairment au genre, elle possède plus de deux modalités. Il serait intéressant de la traiter comme un ensemble de variable binaire et non une colonne de type entier.
End of explanation
mat2 = pandas.concat([mat, dummies.drop("lu", axis=1)], axis=1)
mat2.head(n=1)
lr = ols("price ~ age + gender + delay + delay1 + ma + me + je + ve + sa + di", data=mat2)
res = lr.fit()
res.summary()
Explanation: On supprime une modalité pour éviter d'avoir une matrice corrélée avec la constante et on ajoute des variables au modèle de régression.
End of explanation
lrc = LinearRegression()
feat2 = mat2[["age", "gender", "delay", "delay1", "price", "idc", "di"]].copy()
lrc.fit(feat2[["age", "gender", "delay", "delay1", "di"]], feat2["price"])
lrc.coef_, lrc.intercept_
predc = lrc.predict(feat2[["age", "gender", "delay", "delay1", "di"]])
feat2["predc"] = predc
feat2.head()
agg2 = feat2[["idc","predc", "price"]].groupby("idc").mean()
agg2.head()
err2d = ((agg2.predc - agg2.price)**2).sum() / agg2.shape[0]
err2d
Explanation: On vérifie que le coefficient pour dimanche n'est clairement significatif contrairement aux autres dont la probabilité d'être nul est élevée. Le médecin appliquerait une majoration de 20 euros le dimanche. Le coefficient $R^2$ est aussi nettement plus élevé. On construit les prédictions d'un second modèle en ne tenant compte que de la variable dimanche.
End of explanation
agg2.plot(x="price", y="predc", kind="scatter")
temp2 = agg2.sort_values("price").reset_index(drop=True).reset_index(drop=False)
ax = temp2.plot(x="index", y="price", figsize=(14,4), ylim=[60,120])
temp2.plot(x="index", y="predc", linewidth=1, ax=ax, ylim=[60,120])
Explanation: Nettement, nettement mieux.
End of explanation
from category_encoders import PolynomialEncoder
encoder = PolynomialEncoder(cols=["weekday"])
encoder.fit(rend[["weekday"]], rend["price"])
pred = encoder.transform(rend[["weekday"]])
conc = pandas.concat([rend[["weekday"]], pred], axis=1)
conc.head()
Explanation: Une variable catégorielle en une seule colonne ?
Un jeu de données peut rapidement croître s'il est étendu pour chaque variable catégorielle. On peut utiliser le module category_encoders ou statsmodels.
End of explanation
copy = rend[["weekday", "price"]].copy()
gr = copy.groupby("weekday", as_index=False).mean()
gr
feat3 = mat[["age", "gender", "delay", "delay1", "price", "idc", "weekday"]]
feat3 = feat3.merge(gr, on="weekday", suffixes=("", "_label"))
feat3.head()
Explanation: Ou associer la valeur de la cible à prédire pour chaque jour de la semaine.
End of explanation
lr = ols("price ~ age + gender + delay + delay1 + weekday", data=feat3)
res = lr.fit()
res.summary()
Explanation: On apprend avec la variable weekday :
End of explanation
lr = ols("price ~ age + gender + delay + delay1 + price_label", data=feat3)
res = lr.fit()
res.summary()
Explanation: Et on compare avec la variable price_label :
End of explanation |
3,307 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's use some real data...
Step1: Copy the first 2 million rows into a bcolz carray to use for benchmarking.
Step2: Out of interest, what chunk size did bcolz choose?
Step3: How long does it take to decompress the data?
Step4: How long does it take to compute the maximum value?
Step5: Check that scikit-allel's chunked implementation of max() behaves as expected - implementation is not threaded so total time should equal time to decompress data plus time to compute max.
Step6: Now see how dask behaves.
Step7: See especially the case with 4 blosc threads and 4 dask threads. Total wall time here is less than the sum of the time required to decompress the data with the same number of blosc threads and the time required to compute the maximum. So dask is able to do some work in parallel, even though bcolz does not release the GIL.
Step8: Try a slightly more compute-intensive task. First the non-parallel version.
Step9: Now dask.
Step10: with nogil
Hack bcolz to reinstate nogil sections around blosc_decompress...
Step11: Check compression time is unaffected.
Step12: Check scikit-allel's chunked implementation is unaffected.
Step13: Now see if dask does any better...
Step14: Try the more compute-intensive operation again.
Step15: with nogil, with c-blosc 1.7.0
Step16: Try to reproduce segfaults... | Python Code:
callset = h5py.File('/data/coluzzi/ag1000g/data/phase1/release/AR3/variation/main/hdf5/ag1000g.phase1.ar3.pass.h5', mode='r')
callset
genotype = allel.model.chunked.GenotypeChunkedArray(callset['3L/calldata/genotype'])
genotype
Explanation: Let's use some real data...
End of explanation
g = genotype.copy(stop=2000000)
g
Explanation: Copy the first 2 million rows into a bcolz carray to use for benchmarking.
End of explanation
g.data.chunklen * g.shape[1] * g.shape[2]
Explanation: Out of interest, what chunk size did bcolz choose?
End of explanation
def toarray(x):
np.asarray(x)
for n in 1, 2, 4, 8:
bcolz.blosc_set_nthreads(n)
print('--- blosc threads:', n, '---')
%time toarray(g)
print()
Explanation: How long does it take to decompress the data?
End of explanation
def time_max(x):
x = np.asarray(x)
%time x.max()
time_max(g)
Explanation: How long does it take to compute the maximum value?
End of explanation
for n in 1, 2, 4, 8:
bcolz.blosc_set_nthreads(n)
print('--- blosc threads:', n, '---')
%time g.max()
print()
Explanation: Check that scikit-allel's chunked implementation of max() behaves as expected - implementation is not threaded so total time should equal time to decompress data plus time to compute max.
End of explanation
gd = allel.GenotypeDaskArray.from_array(g)
gd
for n in 1, 2, 4, 8:
bcolz.blosc_set_nthreads(n)
for m in 1, 2, 4, 8:
print('--- blosc threads:', n, '; dask threads:', m, '---')
%time gd.max().compute(num_workers=m)
print()
Explanation: Now see how dask behaves.
End of explanation
bcolz.blosc_set_nthreads(4)
%timeit -n1 -r5 gd.max().compute(num_workers=4)
Explanation: See especially the case with 4 blosc threads and 4 dask threads. Total wall time here is less than the sum of the time required to decompress the data with the same number of blosc threads and the time required to compute the maximum. So dask is able to do some work in parallel, even though bcolz does not release the GIL.
End of explanation
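For what it's worth, on a recent dask the worker count can also be set globally instead of per call; a version-dependent sketch:
import dask
dask.config.set(scheduler='threads', num_workers=4)
%time gd.max().compute()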
bcolz.blosc_set_nthreads(4)
%time g.count_alleles()
Explanation: Try a slightly more compute-intensive task. First the non-parallel version.
End of explanation
bcolz.blosc_set_nthreads(4)
%time gd.count_alleles().compute(num_workers=4)
bcolz.blosc_set_nthreads(1)
%time gd.count_alleles().compute(num_workers=4)
Explanation: Now dask.
End of explanation
import bcolz
bcolz.__version__
bcolz.blosc_version()
Explanation: with nogil
Hack bcolz to reinstate nogil sections around blosc_decompress...
End of explanation
for n in 1, 2, 4, 8:
bcolz.blosc_set_nthreads(n)
print('--- blosc threads:', n, '---')
%time toarray(g)
print()
Explanation: Check compression time is unaffected.
End of explanation
for n in 1, 2, 4, 8:
bcolz.blosc_set_nthreads(n)
print('--- blosc threads:', n, '---')
%time g.max()
print()
Explanation: Check scikit-allel's chunked implementation is unaffected.
End of explanation
gd = allel.GenotypeDaskArray.from_array(g)
gd
for n in 1, 2, 4, 8:
bcolz.blosc_set_nthreads(n)
for m in 1, 2, 4, 8:
print('--- blosc threads:', n, '; dask threads:', m, '---')
%time gd.max().compute(num_workers=m)
print()
bcolz.blosc_set_nthreads(1)
%timeit -r5 gd.max().compute(num_workers=4)
bcolz.blosc_set_nthreads(1)
%timeit -r5 gd.max().compute(num_workers=8)
bcolz.blosc_set_nthreads(2)
%timeit -r5 gd.max().compute(num_workers=2)
bcolz.blosc_set_nthreads(4)
%timeit -r5 gd.max().compute(num_workers=4)
Explanation: Now see if dask does any better...
End of explanation
bcolz.blosc_set_nthreads(1)
%time gd.count_alleles().compute(num_workers=4)
bcolz.blosc_set_nthreads(4)
%time gd.count_alleles().compute(num_workers=4)
Explanation: Try the more compute-intensive operation again.
End of explanation
import bcolz
bcolz.__version__
bcolz.blosc_version()
g = genotype.copy(stop=2000000)
g
for n in 1, 2, 4, 8:
bcolz.blosc_set_nthreads(n)
print('--- blosc threads:', n, '---')
%time toarray(g)
print()
gd = allel.GenotypeDaskArray.from_array(g)
gd
for n in 1, 2, 4, 8:
bcolz.blosc_set_nthreads(n)
for m in 1, 2, 4, 8:
print('--- blosc threads:', n, '; dask threads:', m, '---')
%time gd.max().compute(num_workers=m)
print()
Explanation: with nogil, with c-blosc 1.7.0
End of explanation
bcolz.blosc_set_nthreads(1)
gd.astype('f4').max().compute()
gd.mean(axis=1).sum().compute(num_workers=4)
((gd + gd) * gd).std(axis=0).compute()
Explanation: Try to reproduce segfaults...
End of explanation |
3,308 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transport measurement data analysis
This is an example notebook for the analysis class IV_curve of qkit.analysis.IV_curve.py. It handles transport measurement data (focused on measurements of Josephson junctions in the current bias) taken with qkit.measure.transport.transport.py and provides methods to
* load data files,
* merge data files,
* calculate numerical derivatives, such as differential resistances,
* analyse voltage and current offsets,
* correct ohmic resistance offsets arising from the lead resistance in 2-wire measurements,
* analyse the normal state resistance,
* analyse critical currents and voltage jumps,
* analyse switching current distributions.
For error propagation the uncertainties package is used.
Step1: Load qkit transport measurement file
Transport measurement data with a given uuid can be loaded using ivc.load(uuid). Several elements are available, especially
* data file ivc.df,
* settings ivc.settings,
* measurement object ivc.mo,
* current values ivc.I,
* voltage values ivc.V,
* differential resistance values ivc.dVdI,
* sweep list ivc.sweeps,
* scan dimension (1D, 2D or 3D) ivc.scan_dim,
* in case of 2D and 3D scans, x-parameter dataset ivc.x_ds, values ivc.x_vec, name ivc.x_coordname, unit ivc.x_unit,
* in case of 3D scans, y-parameter dataset ivc.y_ds, values ivc.y_vec, name ivc.y_coordname, unit ivc.y_unit.
Step2: Merge qkit transport measurement files
Qkit transport measurement files can be merged depending on the scan dimension to one single new file by ivc.merge().
* 1D
Step3: Differential resistance
The differential resistance $ \frac{\text{d}V}{\text{d}I} $ is calculated as a numerical derivative using ivc.get_dVdI(). By default the Savitzky-Golay filter is applied, but different methods can be used, e.g. a simple numerical gradient via ivc.get_dVdI(mode=np.gradient)
Step4: Current and voltage offsets
The current and voltage offsets can be calculated using ivc.get_offsets() or ivc.get_offset().
The branch where the y-values are nearly constant is evaluated. The average of all corresponding x-values is taken as the x-offset, and the average of the extreme y-values is taken as the y-offset. By default these extremes are the critical y-values $ y_c $, but they can also be set to the retrapping y-values $ y_r $ with yr=True
* ivc.get_offsets() calculates x- and y-offset of every trace,
* ivc.get_offset() calculates x- and y-offset of the whole set (this differs only for 2D or 3D scans).
Note that reasonable initial values for offset and tol_offset are sufficient to find the range where the y-values are nearly constant.
Step5: Ohmic resistance offset
The voltage values can be corrected for an ohmic resistance offset, such as occurs in 2-wire measurements, using ivc.get_2wire_slope_correction(). The two maxima in the differential resistance $ \frac{\text{d}V}{\text{d}I} $ are identified as the critical and retrapping currents. This is done using scipy.signal.find_peaks() by default, but can be set to a custom peak finding algorithm via peak_finder. The slope of the superconducting regime in between (which should ideally show zero resistance) is fitted using the numpy.linalg.qr() algorithm and subtracted from the raw data.
Note that the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
Step6: Critical and retrapping currents
Critical currents can be determined in three different ways. The voltage jump at the critical and retrapping current can be found by
* a voltage threshold value that is exceeded,
* peaks in the differential resistance $ \frac{\text{d}V}{\text{d}I} $,
* peaks in the Gaussian smoothed derivative $ \text{i}f\exp\left(-sf^2\right) $ in the frequency domain
Voltage threshold method for critical and retrapping currents
The critical and retrapping currents $ I_\text{c} $, $ I_\text{r} $ can be calculated by finding the currents that correspond to voltages exceeding a certain threshold using ivc.get_Ic_threshold(). The branch where the voltage values are nearly constant is evaluated. The maximal values of the up- and down-sweep are considered as the critical current $ I_c $ and the retrapping current $ I_r $ (if Ir=True), respectively.
Note that it works best if the offset is already determined via get_offsets() and that a reasonable initial value for tol_offset is sufficient.
Step7: Peak detection in the differential resistance method for critical and retrapping currents
The critical and retrapping currents $ I_\text{c} $, $ I_\text{r} $ can be calculated by detecting peaks in the differential resistance $ \frac{\text{d}V}{\text{d}I} $ using ivc.get_Ic_deriv(). This is done using scipy.signal.find_peaks() by default, but can be set to a custom peak finding algorithm via peak_finder.
Note that the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
Step8: Peak detection in the Gaussian smoothed derivative method for critical and retrapping currents
The critical and retrapping currents $ I_\text{c} $, $ I_\text{r} $ can be calculated by detecting peaks in the Gaussian smoothed derivative $ \left(\text{i}f\cdot\text{e}^{-sf^2}\right) $ in the frequency domain using ivc.get_Ic_dft(). This is done using scipy.signal.find_peaks() by default, but can be set to a custom peak finding algorithm via peak_finder.
Note that the smoothing factor s and the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
Step9: Normal state resistance
The normal state resistance $ R_\text{n} $ of the ohmic (overcritical) branch can be calculated using ivc.get_Rn().
The normal state resistance corresponds to the inverse linear slope of the normal conducting branch $ I=R_\text{n}^{-1}U $ (mode=0) or the average value of $ \frac{\mathrm{d}V}{\mathrm{d}I} $ of the normal conducting branch (mode=1). The ohmic range, in turn, is considered to range from the outermost tail of the peaks in the curvature $ \frac{\mathrm{d}^2V}{\mathrm{d}I^2} $ to the start/end of the sweep, and the resistance is calculated as the mean of the differential resistance values dVdI within this range. This is done using scipy.signal.savgol_filter(deriv=2) and scipy.signal.find_peaks() by default, but can be set to any second-order derivative function deriv_func and peak finding algorithm peak_finder. For scipy.signal.find_peaks() the prominence parameter is set to $ 1\,\% $ of the absolute curvature value by default.
Note that the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
Step10: Switching current measurements
Switching current distributions can be analyzed and plotted using the ivc.scm.fit() and ivc.scm.plot() methods of the switching_current subclass.
The switching currents need to be determined beforehand, for example by ivc.get_Ic_deriv(). Their switching current distribution $ P_k \mathrm{d}I = \frac{n_k}{N\Delta I}\mathrm{d}I $ is normalized, $ \int\limits_0^\infty P(I^\prime)\mathrm{d}I^\prime = 1 $, and obtained via numpy.histogram().
The escape rate reads $ \Gamma(I_k) = \frac{\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|}{\Delta I}\ln\left(\frac{\sum\limits_{j\geq k} P_j}{\sum\limits_{j\geq k+1} P_j}\right) $ and the normalized escape rate $ \left[\ln\left(\frac{\omega_0}{2\pi\Gamma}\right)\right]^{2/3} $ is fitted versus $ \gamma $ to $ f(\bar{\gamma}) = a\cdot\bar{\gamma}+b $, where the root $ \left[\ln\left(\frac{\omega_0}{2\pi\Gamma}\right)\right]^{2/3} = 0 $ yields the critical current $ I_\text{c} = -\frac{b}{a} $. Here, the sweep rate results as $ \frac{\mathrm{d}I}{\mathrm{d}t} = \delta I\cdot\frac{\text{nplc}}{\text{plc}} $, the centers of the bins as the moving average of the returned bin-edges using np.convolve(edges, np.ones((2,))/2, mode='valid') and the bin width as $ \Delta I = \frac{\max(I_\text{b})-\min(I_\text{b})}{N_\text{bins}} $.
theoretical background
the probability distribution of switching currents is related to the escape rate $ \Gamma(I) $ and the current ramp rate $ \frac{\mathrm{d}I}{\mathrm{d}t} $ as
\begin{equation}
P(I)\mathrm{d}I = \Gamma(I)\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|^{-1} \left(1-\int\limits_0^I P(I^\prime)\mathrm{d}I^\prime\right)\mathrm{d}I
\end{equation}
This integral equation can be solved explicitly for the switching-current distribution
\begin{align}
P(I) &= \Gamma(I)\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|^{-1}\exp\left(-\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|^{-1}\int\limits_0^I\Gamma(I^\prime)\mathrm{d}I^\prime\right)\
P(I_k) &= \Gamma(I_k)\left|\frac{\mathrm{d}I_k}{\mathrm{d}t}\right|^{-1}\exp\left(-\left|\frac{\mathrm{d}I_k}{\mathrm{d}t}\right|^{-1}\Delta I\sum\limits_{j=0}^k\Gamma(I_j)\right)
\end{align}
Solving for the escape rate results in
\begin{align}
\Gamma(I) &= \frac{\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|}{\Delta I}\ln\left(\frac{\int\limits_I^\infty P(I^\prime)\mathrm{d}I^\prime}{\int\limits_{I+\Delta I}^\infty P(I^\prime)\mathrm{d}I^\prime}\right) \
\Gamma(I_k) &= \frac{\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|}{\Delta I}\ln\left(\frac{\sum\limits_{j\geq k} P_j}{\sum\limits_{j\geq k+1} P_j}\right)
\end{align}
The escape rate, in turn, is related to the attempt frequency $ \frac{\omega_0}{2\pi} $ and the barrier height $ U_0 = 2E_\mathrm{J}\left(\sqrt{1-\gamma^2}-\gamma\arccos(\gamma)\right) \approx E_\mathrm{J}\frac{4\sqrt{2}}{3}\left(1-\gamma\right)^\frac{3}{2} $ and results in
\begin{align}
\Gamma_\text{th} &= \frac{\omega_0}{2\pi}\exp\left(-\frac{U_0}{k_\text{B}T}\right) \
&= \frac{\omega_0}{2\pi}\exp\left(-\frac{E_\text{J}\frac{4\sqrt{2}}{3}(1-\bar{\gamma})^{3/2}}{k_\text{B}T}\right)\
\left[\ln\left(\frac{\omega_0}{2\pi\Gamma}\right)\right]^{2/3} &= \left(\frac{E_\text{J}}{k_\text{B}T}\frac{4\sqrt{2}}{3}\right)^{2/3}\cdot(1-\bar{\gamma})
\end{align}
References
Fulton, T. A., and L. N. Dunkleberger. "Lifetime of the zero-voltage state in Josephson tunnel junctions." Physical Review B 9.11 (1974) | Python Code:
import numpy as np
from uncertainties import ufloat, umath, unumpy as unp
from scipy import signal as sig
import matplotlib.pyplot as plt
import qkit
qkit.start()
from qkit.analysis.IV_curve import IV_curve as IVC
ivc = IVC()
Explanation: Transport measurement data analysis
This is an example notebook for the analysis class IV_curve of qkit.analysis.IV_curve.py. It handles transport measurement data (focused on measurements of Josephson junctions under current bias) taken with qkit.measure.transport.transport.py and provides methods to
* load data files,
* merge data files,
* calculate numerical derivatives, such as differential resistances,
* analyse voltage and current offsets,
* correct ohmic resistance offsets arising from lead resistance in 2-wire measurements,
* analyse the normal state resistance,
* analyse critical currents and voltage jumps,
* analyse switching current distributions.
For error propagation the uncertainties package is used.
End of explanation
ivc.load(uuid='XXXXXX')
Explanation: Load qkit transport measurement file
Transport measurement data with a given uuid can be loaded using ivc.load(uuid). Several elements are available, especially
* data file ivc.df,
* settings ivc.settings,
* measurement object ivc.mo,
* current values ivc.I,
* voltage values ivc.V,
* differential resistance values ivc.dVdI,
* sweep list ivc.sweeps,
* scan dimension (1D, 2D or 3D) ivc.scan_dim,
* in case of 2D and 3D scans, x-parameter dataset ivc.x_ds, values ivc.x_vec, name ivc.x_coordname, unit ivc.x_unit,
* in case of 3D scans, y-parameter dataset ivc.y_ds, values ivc.y_vec, name ivc.y_coordname, unit ivc.y_unit.
End of explanation
ivc.merge(uuids=('XXXXXX', 'YYYYYY'), order=(-1, 1))
Explanation: Merge qkit transport measurement files
Qkit transport measurement files can be merged depending on the scan dimension to one single new file by ivc.merge().
* 1D: all sweep data are stacked and views are merged.
* 2D: values of x-parameter and its corresponding sweep data are merged in the order order.
* 3D: values of x- and y-parameters and its corresponding sweep data are merged in the order order.
End of explanation
ivc.get_dVdI()
Explanation: Differential resistance
The differential resistance $ \frac{\text{d}V}{\text{d}I} $ is calculated as a numerical derivative using ivc.get_dVdI(). By default the Savitzky-Golay filter is applied, but different methods can be used, e.g. a simple numerical gradient via ivc.get_dVdI(mode=np.gradient)
End of explanation
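For intuition, the following toy sketch contrasts the two derivative modes mentioned above on a synthetic I-V curve; it is not the qkit implementation, and the window length, polynomial order and toy curve are assumptions.
import numpy as np
from scipy.signal import savgol_filter

I = np.linspace(-1e-6, 1e-6, 501)                      # bias current sweep (A)
V = 1e-3 * np.tanh(I / 2e-7)                           # toy voltage response (V)

dVdI_gradient = np.gradient(V, I)                      # plain numerical gradient
dVdI_savgol = savgol_filter(V, window_length=15, polyorder=3,
                            deriv=1, delta=I[1] - I[0])  # smoothed derivative
print(dVdI_gradient.max(), dVdI_savgol.max())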
ivc.get_offsets(offset=0, tol_offset=20e-6)
ivc.get_offset(offset=0, tol_offset=20e-6)
Explanation: Current and voltage offsets
The current and voltage offsets can be calculated using ivc.get_offsets() or ivc.get_offset().
The branch where the y-values are nearly constant is evaluated. The average of all corresponding x-values is taken as the x-offset, and the average of the extreme y-values is taken as the y-offset. By default these extremes are the critical y-values $ y_c $, but they can also be set to the retrapping y-values $ y_r $ with yr=True
* ivc.get_offsets() calculates x- and y-offset of every trace,
* ivc.get_offset() calculates x- and y-offset of the whole set (this differs only for 2D or 3D scans).
Note that reasonable initial values for offset and tol_offset are sufficient to find the range where the y-values are nearly constant.
End of explanation
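The following minimal sketch illustrates the offset logic described above on synthetic data; it is not the qkit routine, and the tolerance and toy curve are assumptions. The x-offset is the mean current over the branch where the voltage is nearly constant, and the y-offset is taken from the extreme voltages of that branch.
import numpy as np

I = np.linspace(-1e-6, 1e-6, 1001) + 5e-8                              # current with a small offset
V = np.where(np.abs(I - 5e-8) < 4e-7, 2e-6, np.sign(I - 5e-8) * 1e-3)  # toy V(I)

tol_offset = 2e-5
flat = np.abs(V - np.median(V)) < tol_offset                           # nearly constant branch
I_offset = I[flat].mean()
V_offset = 0.5 * (V[flat].max() + V[flat].min())
print(I_offset, V_offset)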
ivc.get_2wire_slope_correction(prominence=1)
Explanation: Ohmic resistance offset
The voltage values can be corrected for an ohmic resistance offset, such as occurs in 2-wire measurements, using ivc.get_2wire_slope_correction(). The two maxima in the differential resistance $ \frac{\text{d}V}{\text{d}I} $ are identified as the critical and retrapping currents. This is done using scipy.signal.find_peaks() by default, but can be set to a custom peak finding algorithm via peak_finder. The slope of the superconducting regime in between (which should ideally show zero resistance) is fitted using the numpy.linalg.qr() algorithm and subtracted from the raw data.
Note that the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
End of explanation
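As an illustration of the slope subtraction (not the qkit code; the lead resistance and current range below are assumptions), one can fit V = a + b*I on the superconducting branch with a QR least-squares solve and remove the fitted slope.
import numpy as np

I = np.linspace(-1e-6, 1e-6, 1001)
R_lead = 2.0                                             # assumed series lead resistance (Ohm)
V = R_lead * I + np.where(np.abs(I) > 6e-7, np.sign(I) * 1e-4, 0.0)

sc = np.abs(I) < 5e-7                                    # branch between the two dV/dI peaks
A = np.column_stack([np.ones(sc.sum()), I[sc]])
Q, R = np.linalg.qr(A)
offset, slope = np.linalg.solve(R, Q.T @ V[sc])          # least-squares fit via QR
V_corrected = V - slope * I                              # remove the ohmic contribution
print(slope)                                             # ~ R_lead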
ivc.get_Ic_threshold(Ir=True)
Explanation: Critical and retrapping currents
Critical currents can be determined in three different ways. The voltage jump at the critical and retrapping current can be found by
* a voltage threshold value that is exceeded,
* peaks in the differential resistance $ \frac{\text{d}V}{\text{d}I} $,
* peaks in the Gaussian smoothed derivative $ \text{i}f\exp\left(-sf^2\right) $ in the frequency domain
Voltage threshold method for critical and retrapping currents
The critical and retrapping currents $ I_\text{c} $, $ I_\text{r} $ can be calculated by finding the currents that correspond to voltages exceeding a certain threshold using ivc.get_Ic_threshold(). The branch where the voltage values are nearly constant is evaluated. The maximal values of the up- and down-sweep are considered as the critical current $ I_c $ and the retrapping current $ I_r $ (if Ir=True), respectively.
Note that it works best if the offset is already determined via get_offsets() and that a reasonable initial value for tol_offset is sufficient.
End of explanation
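A toy version of the threshold criterion could look as follows (illustrative only; the threshold value and synthetic sweep are assumptions): the critical current is the largest current whose voltage still stays below the threshold.
import numpy as np

I = np.linspace(-1e-6, 1e-6, 2001)                         # up-sweep
V = np.where(np.abs(I) < 6e-7, 0.0, np.sign(I) * 1e-4)     # toy I-V with jumps at +-0.6 uA
threshold = 2e-5
below = np.abs(V) < threshold
I_c = I[below].max()                                       # positive-branch critical current
print(I_c)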
I_cs, I_rs, props = ivc.get_Ic_deriv(prominence=1, Ir=True)
I_cs, I_rs
print('all currents where the voltage jumps')
if ivc._scan_dim == 1:
    print(np.array(list(map(lambda p1D: p1D['I'], props))))
elif ivc._scan_dim == 2:
    print(np.array(list(map(lambda p2D: list(map(lambda p1D: p1D['I'], p2D)), props))))
elif ivc._scan_dim == 3:
    print(np.array(list(map(lambda p3D: list(map(lambda p2D: list(map(lambda p1D: p1D['I'], p2D)), p3D)), props))))
Explanation: Peak detection in the differential resistance method for critical and retrapping currents
The critical and retrapping currents $ I_\text{c} $, $ I_\text{r} $ can be calculated by detecting peaks in the differential resistance $ \frac{\text{d}V}{\text{d}I} $ using ivc.get_Ic_deriv(). This is done using scipy.signal.find_peaks() by default, but can be set to a custom peak finding algorithm via peak_finder.
Note that the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
End of explanation
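The peak-detection idea itself can be sketched in a few lines with scipy.signal.find_peaks on a toy dV/dI trace; the prominence value below is an assumption and must be tuned to real data.
import numpy as np
from scipy.signal import find_peaks

I = np.linspace(-1e-6, 1e-6, 2001)
dVdI = np.exp(-((np.abs(I) - 6e-7) / 2e-8) ** 2)     # toy trace with peaks at +-0.6 uA
peaks, properties = find_peaks(dVdI, prominence=0.5)
print(I[peaks])                                      # candidate critical/retrapping currents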
I_cs, I_rs, props = ivc.get_Ic_dft(Ir=True, prominence=1e-6)
I_cs, I_rs
Explanation: Peak detection in the Gaussian smoothed derivative method for critical and retrapping currents
The critical and retrapping currents $ I_\text{c} $, $ I_\text{r} $ can be calculated by detecting peaks in the Gaussian smoothed derivative $ \left(\text{i}f\cdot\text{e}^{-sf^2}\right) $ in the frequency domain using ivc.get_Ic_dft(). This is done using scipy.signal.find_peaks() by default, but can be set to a custom peak finding algorithm via peak_finder.
Note that the smoothing factor s and the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
End of explanation
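A minimal sketch of the frequency-domain variant (illustrative, not the ivc.get_Ic_dft implementation; the smoothing factor s is an assumption) multiplies the FFT of the voltage by the smoothed derivative kernel and looks for peaks in the result.
import numpy as np
from scipy.signal import find_peaks

I = np.linspace(-1e-6, 1e-6, 2048)
V = 1e-3 * (np.abs(I) > 6e-7).astype(float)          # toy voltage jumps at +-0.6 uA
f = np.fft.fftfreq(I.size, d=I[1] - I[0])
s = 1e-16                                            # smoothing factor, needs tuning
dV_smooth = np.fft.ifft(2j * np.pi * f * np.exp(-s * f**2) * np.fft.fft(V)).real
peaks, _ = find_peaks(np.abs(dV_smooth), prominence=0.5 * np.abs(dV_smooth).max())
print(I[peaks])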
ivc.get_Rn()
Explanation: Normal state resistance
The normal state resistance $ R_\text{n} $ of the ohmic (overcritical) branch can be calculated using ivc.get_Rn().
The normal state resistance corresponds to the inverse linear slope of the normal conducting branch $ I=R_\text{n}^{-1}U $ (mode=0) or the average value of $ \frac{\mathrm{d}V}{\mathrm{d}I} $ of the normal conducting branch (mode=1). The ohmic range, in turn, is considered to range from the outermost tail of the peaks in the curvature $ \frac{\mathrm{d}^2V}{\mathrm{d}I^2} $ to the start/end of the sweep, and the resistance is calculated as the mean of the differential resistance values dVdI within this range. This is done using scipy.signal.savgol_filter(deriv=2) and scipy.signal.find_peaks() by default, but can be set to any second-order derivative function deriv_func and peak finding algorithm peak_finder. For scipy.signal.find_peaks() the prominence parameter is set to $ 1\,\% $ of the absolute curvature value by default.
Note that the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
End of explanation
props = ivc.get_Ic_deriv(prominence=1)
I_0 = np.array(list(map(lambda p2D: list(map(lambda p1D: p1D['I'][0], p2D)), props)))
ivc.scm.fit(I_0=I_0*1e6,
omega_0=1e9,
bins=30)
ivc.scm.plot()
Explanation: Switching current measurements
Switching current distributions can be analyzed and plotted using the ivc.scm.fit() and ivc.scm.plot() methods of the switching_current subclass.
The switching currents need to be determined beforehand, for example by ivc.get_Ic_deriv(). Their switching current distribution $ P_k \mathrm{d}I = \frac{n_k}{N\Delta I}\mathrm{d}I $ is normalized, $ \int\limits_0^\infty P(I^\prime)\mathrm{d}I^\prime = 1 $, and obtained via numpy.histogram().
The escape rate reads $ \Gamma(I_k) = \frac{\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|}{\Delta I}\ln\left(\frac{\sum\limits_{j\geq k} P_j}{\sum\limits_{j\geq k+1} P_j}\right) $ and the normalized escape rate $ \left[\ln\left(\frac{\omega_0}{2\pi\Gamma}\right)\right]^{2/3} $ is fitted versus $ \gamma $ to $ f(\bar{\gamma}) = a\cdot\bar{\gamma}+b $, where the root $ \left[\ln\left(\frac{\omega_0}{2\pi\Gamma}\right)\right]^{2/3} = 0 $ yields the critical current $ I_\text{c} = -\frac{b}{a} $. Here, the sweep rate results as $ \frac{\mathrm{d}I}{\mathrm{d}t} = \delta I\cdot\frac{\text{nplc}}{\text{plc}} $, the centers of the bins as the moving average of the returned bin-edges using np.convolve(edges, np.ones((2,))/2, mode='valid') and the bin width as $ \Delta I = \frac{\max(I_\text{b})-\min(I_\text{b})}{N_\text{bins}} $.
theoretical background
the probability distribution of switching currents is related to the escape rate $ \Gamma(I) $ and the current ramp rate $ \frac{\mathrm{d}I}{\mathrm{d}t} $ as
\begin{equation}
P(I)\mathrm{d}I = \Gamma(I)\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|^{-1} \left(1-\int\limits_0^I P(I^\prime)\mathrm{d}I^\prime\right)\mathrm{d}I
\end{equation}
This integral equation can be solved explicitly for the switching-current distribution
\begin{align}
P(I) &= \Gamma(I)\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|^{-1}\exp\left(-\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|^{-1}\int\limits_0^I\Gamma(I^\prime)\mathrm{d}I^\prime\right)\
P(I_k) &= \Gamma(I_k)\left|\frac{\mathrm{d}I_k}{\mathrm{d}t}\right|^{-1}\exp\left(-\left|\frac{\mathrm{d}I_k}{\mathrm{d}t}\right|^{-1}\Delta I\sum\limits_{j=0}^k\Gamma(I_j)\right)
\end{align}
Solving for the escape rate results in
\begin{align}
\Gamma(I) &= \frac{\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|}{\Delta I}\ln\left(\frac{\int\limits_I^\infty P(I^\prime)\mathrm{d}I^\prime}{\int\limits_{I+\Delta I}^\infty P(I^\prime)\mathrm{d}I^\prime}\right) \
\Gamma(I_k) &= \frac{\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|}{\Delta I}\ln\left(\frac{\sum\limits_{j\geq k} P_j}{\sum\limits_{j\geq k+1} P_j}\right)
\end{align}
The escape rate, in turn, is related to the attempt frequency $ \frac{\omega_0}{2\pi} $ and the barrier height $ U_0 = 2E_\mathrm{J}\left(\sqrt{1-\gamma^2}-\gamma\arccos(\gamma)\right) \approx E_\mathrm{J}\frac{4\sqrt{2}}{3}\left(1-\gamma\right)^\frac{3}{2} $ and results in
\begin{align}
\Gamma_\text{th} &= \frac{\omega_0}{2\pi}\exp\left(-\frac{U_0}{k_\text{B}T}\right) \
&= \frac{\omega_0}{2\pi}\exp\left(-\frac{E_\text{J}\frac{4\sqrt{2}}{3}(1-\bar{\gamma})^{3/2}}{k_\text{B}T}\right)\
\left[\ln\left(\frac{\omega_0}{2\pi\Gamma}\right)\right]^{2/3} &= \left(\frac{E_\text{J}}{k_\text{B}T}\frac{4\sqrt{2}}{3}\right)^{2/3}\cdot(1-\bar{\gamma})
\end{align}
References
Fulton, T. A., and L. N. Dunkleberger. "Lifetime of the zero-voltage state in Josephson tunnel junctions." Physical Review B 9.11 (1974): 4760.
Wallraff, Andreas. Fluxon dynamics in annular Josephson junctions: From Relativistic Strings to quantum particles. Lehrstuhl für Mikrocharakterisierung, Friedrich-Alexander-Universität, 2001.
End of explanation |
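To make the estimator above concrete, here is a hedged numerical sketch on simulated switching currents; the sweep rate, distribution parameters and bin count are assumptions, and this is not the ivc.scm code.
import numpy as np

rng = np.random.default_rng(0)
I_sw = rng.normal(1.95e-6, 2e-8, size=5000)          # simulated switching currents (A)
dIdt = 1e-6                                          # assumed sweep rate |dI/dt| (A/s)

counts, edges = np.histogram(I_sw, bins=30)
P = counts / (counts.sum() * np.diff(edges))         # normalized P(I)
centers = np.convolve(edges, np.ones((2,)) / 2, mode='valid')
dI = edges[1] - edges[0]

tail = np.cumsum(P[::-1])[::-1] * dI                 # sum over j >= k of P_j * dI
with np.errstate(divide='ignore', invalid='ignore'):
    gamma = dIdt / dI * np.log(tail[:-1] / tail[1:]) # escape rate Gamma(I_k)
ok = np.isfinite(gamma)
print(centers[:-1][ok][:5], gamma[ok][:5])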
3,309 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
human judgement data
examining some of the human judgement data
Step1: data is in the form of a left image location, a right image location, and a binary variable indicating whether the subject chose the left (1) or right (0) image as a better representation of the original. Note that the order of compression switches randomly.
Step2: now that we have an idea of the human judgement data, we can run ssim and see how well it predicts human behavior. | Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import matplotlib.image as mpimg
from PIL import Image
import progressbar
human_df = pd.read_csv('human_data.csv')
human_df.head()
Explanation: human judgement data
examining some of the human judgement data
End of explanation
# data sample
f, axarr = plt.subplots(nrows=3, ncols=3, figsize=(9,9))
for ii in range(3):
index = np.random.randint(4040)
temp_left = mpimg.imread('iclr_images/' + human_df['left_file'][index][1:-1])
temp_right = mpimg.imread('iclr_images/' + human_df['right_file'][index][1:-1])
temp_orig = mpimg.imread('iclr_images/' + human_df['orig_file'][index][1:-1])
axarr[ii][0].imshow(temp_orig, cmap='gray')
if human_df['response_left'][index] == 1:
axarr[ii][1].imshow(temp_left, cmap='gray')
axarr[ii][2].imshow(temp_right, cmap='gray')
else:
axarr[ii][2].imshow(temp_left, cmap='gray')
axarr[ii][1].imshow(temp_right, cmap='gray')
for ax_row in axarr:
for ax in ax_row:
ax.set_xticklabels([])
ax.set_yticklabels([])
axarr[0,0].set_title('original', size=15)
axarr[0,1].set_title('preferred', size=15)
axarr[0,2].set_title('rejected', size=15)
# plt.savefig('human_pref.png')
plt.show()
import matplotlib.gridspec as gridspec
plt.figure(figsize = (12,12))
gs1 = gridspec.GridSpec(3, 3)
gs1.update(wspace=0.03, hspace=0.03)
ax_dict = {}
for ii in range(9):
ax_dict[ii] = plt.subplot(gs1[ii])
ax_dict[ii].set_xticklabels([])
ax_dict[ii].set_yticklabels([])
ax_dict[ii].get_xaxis().set_visible(False)
ax_dict[ii].get_yaxis().set_visible(False)
for ii in range(3):
index = np.random.randint(4040)
temp_left = mpimg.imread('iclr_images/' + human_df['left_file'][index][1:-1])
temp_right = mpimg.imread('iclr_images/' + human_df['right_file'][index][1:-1])
temp_orig = mpimg.imread('iclr_images/' + human_df['orig_file'][index][1:-1])
ax_dict[3*ii].imshow(temp_orig, cmap='gray')
if human_df['response_left'][index] == 1:
ax_dict[3*ii+1].imshow(temp_left, cmap='gray')
ax_dict[3*ii+2].imshow(temp_right, cmap='gray')
else:
ax_dict[3*ii+2].imshow(temp_left, cmap='gray')
ax_dict[3*ii+1].imshow(temp_right, cmap='gray')
ax_dict[0].set_title('sample', size=20)
ax_dict[1].set_title('preferred', size=20)
ax_dict[2].set_title('rejected', size=20)
plt.savefig('human.png')
plt.show()
im = Image.open('iclr_images/' + human_df['left_file'][ii][1:-1])
test = np.asarray(im)
test
Explanation: data is in the form of a left image location, a right image location, and a binary variable indicating whether the subject chose the left (1) or right (0) image as a better representation of the original. Note that the order of compression switches randomly.
End of explanation
def calculate_ssim_patch(window_orig, window_recon):
k_1, k_2, L = 0.01, 0.03, 255
if window_orig.shape != (11,11) or window_recon.shape != (11,11):
raise ValueError('please check window size for SSIM calculation!')
orig_data, recon_data = window_orig.flatten(), window_recon.flatten()
mean_x, mean_y = np.mean(orig_data), np.mean(recon_data)
var_x, var_y = np.var(recon_data), np.var(orig_data)
covar = np.cov(orig_data, recon_data)[0][1]
c_1, c_2 = (L*k_1)**2, (L*k_2)**2  # standard SSIM stabilization constants: C1 from k_1, C2 from k_2
num = (2*mean_x*mean_y+c_1)*(2*covar+c_2)
den = (mean_x**2+mean_y**2+c_1)*(var_x+var_y+c_2)
return num/den
def calculate_ssim_image(image_orig, image_recon):
ssim_res = []
filter_dim = 11; image_dim = image_orig.shape[0];
number_windows = image_dim - filter_dim + 1
for i in range(number_windows):
for j in range(number_windows):
orig_window = image_orig[i:i+11, j:j+11]
recon_window = image_recon[i:i+11, j:j+11]
temp = calculate_ssim_patch(orig_window, recon_window)
ssim_res.append(temp)
return np.asarray(ssim_res)
bar = progressbar.ProgressBar()
correct = 0
for ii in bar(range(len(human_df['left_file']))):
left_im = Image.open('iclr_images/' + human_df['left_file'][ii][1:-1])
left_im_pix = np.asarray(left_im)
right_im = Image.open('iclr_images/' + human_df['right_file'][ii][1:-1])
right_im_pix = np.asarray(right_im)
orig_im = Image.open('iclr_images/' + human_df['orig_file'][ii][1:-1])
orig_im_pix = np.asarray(orig_im)
left_ssim = calculate_ssim_image(orig_im_pix, left_im_pix)
right_ssim = calculate_ssim_image(orig_im_pix, right_im_pix)
ssims = [np.mean(right_ssim), np.mean(left_ssim)]
if np.argmax(ssims) == human_df['response_left'][ii]:
correct += 1
correct, correct/4040
f, axarr = plt.subplots(nrows=3, ncols=5, figsize=(15,9))
for ii in range(3):
index = np.random.randint(4040)
left_im = Image.open('iclr_images/' + human_df['left_file'][index][1:-1])
left_im_pix = np.asarray(left_im)
right_im = Image.open('iclr_images/' + human_df['right_file'][index][1:-1])
right_im_pix = np.asarray(right_im)
orig_im = Image.open('iclr_images/' + human_df['orig_file'][index][1:-1])
orig_im_pix = np.asarray(orig_im)
left_ssim = calculate_ssim_image(orig_im_pix, left_im_pix)
right_ssim = calculate_ssim_image(orig_im_pix, right_im_pix)
axarr[ii][0].imshow(orig_im_pix, cmap='gray')
axarr[ii][1].imshow(left_im_pix, cmap='gray')
axarr[ii][2].imshow(right_im_pix, cmap='gray')
axarr[ii][3].imshow(np.reshape(left_ssim,(54,54)), cmap='plasma')
axarr[ii][4].imshow(np.reshape(right_ssim,(54,54)), cmap='plasma')
axarr[0,0].set_title('original image', size=15)
axarr[0,1].set_title('left recon', size=15)
axarr[0,2].set_title('right recon', size=15)
axarr[0,3].set_title('left ssim', size=15)
axarr[0,4].set_title('right ssim', size=15)
for ax_row in axarr:
for ax in ax_row:
ax.set_xticklabels([])
ax.set_yticklabels([])
# plt.savefig('human_judge2.png')
plt.show()
Explanation: now that we have an idea of the human judgement data, we can run ssim and see how well it predicts human behavior.
End of explanation |
3,310 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load and check data
Step1: ## Analysis
Experiment Details
Step2: Did Hebbian perform better than SET?
Step3: No evidence of a significant difference. In networks with high sparsity, the impact of kWinners is worse, which is expected since kWinners (at 30%) will make the activations more sparse than ReLU (which is 50% sparse on average)
What is the optimal level of weight sparsity?
Step4: Sparsity at the 80 and 90% levels seems more or less equivalent; the difference is 1 point in accuracy. The jump from 90 to 95% shows a drastic accuracy difference of 6 points.
Hebbian grow helps learning?
Step5: No strong evidence that it helps in the low sparsity case. In the high sparsity case (95%), it seems very harmful
Hebbian pruning helps learning?
Step6: There is good evidence it helps. The trend is very clear in the low sparsity (80% sparse) cases.
Step7: Results seem similar even when no magnitude pruning is involved, only hebbian pruning
Magnitude pruning helps learning?
Step8: In low sparsity cases, results are the same for any amount of pruning. In average and high sparsity, there is a Gaussian-like curve, with the peak at around 0.2 (maybe extending to 0.3).
Results are consistent with what has been seen in previous experiments and in related papers.
Worth noting that although results are better at 0.2, it also takes slightly longer to achieve better results compared to m
Step9: A somewhat inconsistent result when looking at cases where there is no Hebbian learning, only pruning by magnitude. There is an anomaly at the last entry, where 50% of the weights are pruned - results are similar to 20%.
The number of samples averaged over is a lot lower in this pivot
What is the optimal combination of weight and magnitude pruning?
Step10: There is a clearer trend in the low sparsity case. Results from high sparsity are inconclusive, with several runs failing to "converge"
Weight pruning alone improves the model by up to 0.7% from 10% pruning to 50% magnitude pruning
Hebbian pruning alone improves the model by 1.5%
Both combined can increase the improvement from the 1.5% seen with Hebbian-only pruning to 1.8%.
Comparisons above are from 0.1 to 0.5 pruning. There is a question left of why no pruning at both sides - the (0,0) point - it is an anomaly to the trend shown in the pivot. | Python Code:
exps = ['neurips_1_eval1', ]
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)
df.columns
df.shape
df.iloc[1]
df.groupby('model')['model'].count()
Explanation: Load and check data
End of explanation
# Did any trials fail?
df[df["epochs"]<30]["epochs"].count()
# Removing failed trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape
# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<100
df_origin[df_origin['failed']]['epochs']
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
Explanation: ## Analysis
Experiment Details
End of explanation
agg(['model'])
high_sparsity = (df['on_perc']==0.05)
avg_sparsity = (df['on_perc']==0.1)
low_sparsity = (df['on_perc']==0.2)
agg(['kwinners'], low_sparsity)
agg(['kwinners'], high_sparsity)
Explanation: Did Hebbian perform better than SET?
End of explanation
agg(['on_perc'])
Explanation: No evidence of a significant difference. In networks with high sparsity, the impact of kWinners is worse, which is expected since kWinners (at 30%) will make the activations more sparse than ReLU (which is 50% sparse on average)
What is the optimal level of weight sparsity?
End of explanation
agg(['hebbian_grow'])
agg(['hebbian_grow'], low_sparsity)
agg(['hebbian_grow'], high_sparsity)
Explanation: Sparsity at the 80 and 90% levels seems more or less equivalent; the difference is 1 point in accuracy. The jump from 90 to 95% shows a drastic accuracy difference of 6 points.
Hebbian grow helps learning?
End of explanation
agg(['hebbian_prune_perc'])
agg(['hebbian_prune_perc'], low_sparsity)
agg(['hebbian_prune_perc'], high_sparsity)
Explanation: No strong evidence that it helps in the low sparsity case. In the high sparsity case (95%), it seems very harmful
Hebbian pruning helps learning?
End of explanation
no_magnitude = (df['weight_prune_perc'] == 0)
agg(['hebbian_prune_perc'], no_magnitude)
no_magnitude = (df['weight_prune_perc'] == 0)
agg(['hebbian_prune_perc'], (no_magnitude & low_sparsity))
Explanation: There is good evidence it helps. The trend is very clear in the low sparsity (80% sparse) cases.
End of explanation
agg(['weight_prune_perc'])
agg(['weight_prune_perc'], low_sparsity)
agg(['weight_prune_perc'], high_sparsity)
agg(['weight_prune_perc'], avg_sparsity)
Explanation: Results seem similar even when no magnitude pruning is involved, only hebbian pruning
Magnitude pruning helps learning?
End of explanation
no_hebbian = (df['hebbian_prune_perc'] == 0)
agg(['weight_prune_perc'], no_hebbian)
Explanation: In low sparsity cases, results are the same for any amount of pruning. In average and high sparsity, there is a Gaussian-like curve, with the peak at around 0.2 (maybe extending to 0.3).
Results are consistent with what has been seen in previous experiments and in related papers.
Worth noting that although results are better at 0.2, it also takes slightly longer to achieve better results compared to m
End of explanation
pd.pivot_table(df,
index='hebbian_prune_perc',
columns='weight_prune_perc',
values='val_acc_max',
aggfunc=mean_and_std)
pd.pivot_table(df[low_sparsity],
index=['kwinners','hebbian_prune_perc'],
columns='weight_prune_perc',
values='val_acc_max',
aggfunc=mean_and_std)
pd.pivot_table(df[avg_sparsity],
index=['kwinners','hebbian_prune_perc'],
columns='weight_prune_perc',
values='val_acc_max',
aggfunc=mean_and_std)
Explanation: A somewhat inconsistent result when looking at cases where there is no Hebbian learning, only pruning by magnitude. There is an anomaly at the last entry, where 50% of the weights are pruned - results are similar to 20%.
The number of samples averaged over is a lot lower in this pivot
What is the optimal combination of weight and magnitude pruning?
End of explanation
pd.pivot_table(df[avg_sparsity],
index='hebbian_prune_perc',
columns='weight_prune_perc',
values='val_acc_max',
aggfunc=mean_and_std)
pd.pivot_table(df[high_sparsity],
index='hebbian_prune_perc',
columns='weight_prune_perc',
values='val_acc_max',
aggfunc=mean_and_std)
Explanation: There is a clearer trend in the low sparsity case. Results from high sparsity are inconclusive, with several runs failing to "converge"
Weight pruning alone improves the model by up to 0.7% from 10% pruning to 50% magnitude pruning
Hebbian pruning alone improves the model by 1.5%
Both combined can increase the improvement from the 1.5% seen with Hebbian-only pruning to 1.8%.
Comparisons above are from 0.1 to 0.5 pruning. There is a question left of why no pruning at both sides - the (0,0) point - it is an anomaly to the trend shown in the pivot.
End of explanation |
3,311 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modelling the model
Contents
1 - Introduction
2 - Polymer
3 - Concentration calculation
4 - Table of values
5 - Calculation of c2
6 - Plots
Introduction
This program lets us model the concentration (c2) for different food simulants. It also lets us draw various plots.
Step1: Polymer
Step2: Calculating the final concentration
We need several concentration values, which are the following
Step3: Table of values
Here we can see the values obtained for each experiment. We have the thickness of the film used, the food simulant used, the initial antioxidant concentration in the plastic, and the value of K, which is the partition coefficient of the migrant between the polymer and the food simulant. Dp is the diffusion coefficient of the antioxidant in the polymer, RMSE predicts the error made on the value, and finally k is the mass transfer coefficient.
Thanks to these values we can determine the final concentration in the plastic.
Step4: Calculation of c2
This calculation gives us the values of the final concentration in the plastic and therefore lets us determine the efficiency of the process.
Step5: Plot
Step6: Plot
Step7: Plot
Step8: Plot
Step9: We note that putting all the plots together is not particularly appropriate because the results are completely unreadable. | Python Code:
import numpy as np
import pandas as pd
import math
import cmath
from scipy.optimize import root
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Modelling the model
Contents
1 - Introduction
2 - Polymer
3 - Concentration calculation
4 - Table of values
5 - Calculation of c2
6 - Plots
Introduction
This program lets us model the concentration (c2) for different food simulants. It also lets us draw various plots.
End of explanation
a = ("Table1.txt")
a
Explanation: Polymer
End of explanation
class InterfazPolimero:
def __init__ (self,a):
self.a=a
def Lire(self):
self.tab = pd.read_csv(self.a,sep=" ")
coef =self.tab.values
self.Experiment = coef[:,0]
self.Thickness = coef[:,1]
self.FoodSimulant = coef[:,2]
self.Cpo = coef[:,3]
self.K = coef [:,4]
self.Dp = coef[:,5]
self.RMSE = coef[:,6]
self.k = coef[:,7]
self.c4 = coef[:,8]
# self.c1 =coef[:,9]
self.c2 = np.zeros(10)
return self.tab
def inicializarC2(self):
self.c2 = np.zeros(10)
self.dimension = np.shape(self.c2)
print(self.dimension)
return self.c2
def calcul(self):
self.tab["j1"] = (self.tab["Dp"] / (self.tab["Thickness"] / 2)) * (self.tab["Cpo"] - self.c2)
print(self.tab["j1"])
self.c3 = self.c2 / self.K
self.j2 = self.k * (self.c3 - self.tab["c4"])
return (self.tab["j1"] - self.j2) / self.tab["j1"]
    def calcul2(self):
        # Solve the flux balance J1(c2) = J2(c2) row by row with a scalar root find.
        def residu(c2, Dp, k, K, c4, Cpo, L):
            j1 = Dp / (L / 2.0) * (Cpo - c2)
            j2 = k * (c2 / K - c4)
            return j1 - j2
        for i in range(len(self.tab)):
            ligne = self.tab.iloc[i]
            self.sol = root(residu, 15, args=(float(ligne["Dp"]), float(ligne["k"]), float(ligne["K"]),
                                              float(ligne["c4"]), float(ligne["Cpo"]), float(ligne["Thickness"])))
            self.c2[i] = self.sol.x[0]
        print(self.c2)
        return self.c2
def Garder(self):
raw_data ={"résultat" : [1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793]}
df = pd.DataFrame(raw_data,index=["1","2","3","4","5","6","7","8","9","10"])
df.to_csv("c2rep")
return df
def Graphique(self):
plt.plot(self.tab["Dp"],self.Cpo,"^")
plt.title("f(Dp)=Cpo")
plt.xlabel("Dp")
plt.ylabel("Cpo")
def Graphique2(self):
plt.plot(self.tab["Dp"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
plt.title("f(Dp)=c2")
plt.xlabel("Dp")
plt.ylabel("c2")
def Graphique3(self):
plt.plot(self.tab["Cpo"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
plt.title("f(Cpo)=c2")
plt.xlabel("Cpo")
plt.ylabel("c2")
def Graphique4(self):
plt.plot(self.tab["Thickness"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
plt.title("f(Epaisseur)=c2")
plt.xlabel("Epaisseur")
plt.ylabel("c2")
def Graphique5(self):
fig,axes=plt.subplots(2,2)
axes[0,0].plot(self.tab["Dp"],self.Cpo,"^")
axes[1,1].plot(self.tab["Dp"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
axes[0,1].plot(self.tab["Cpo"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
axes[1,0].plot(self.tab["Thickness"],[1.115510936772821, 1.0542169426645587, 1.041340418781726, 1.0219,1.4353658536585368, 1.0542169426645587, 1.058921125781793,1.0217682926829268, 1.05340368852459, 1.058921125781793],"^")
p = InterfazPolimero("Table1.txt")
p
Explanation: Calculating the final concentration
We need several concentration values, which are the following:
To calculate the final concentration, we need the following equations:
To calculate this, the method above must be followed. That is, we need to know the main properties, the structure used, the initial conditions and the partition coefficient K.
Next, an assumption is made about the migrant concentration in the polymer and in the food simulant. Then the mass transfer of migrant inside the polymer (Jp) and inside the food simulant (Jfs) is calculated. These are mass transfer phenomena. Mass transfer is an irreversible phenomenon in which a physical quantity is transported by molecules, which leads us to Fick's law; in our case, however, Fick's law is simplified and has only one dimension here. This transfer can be defined as J = (Dp/(L/2))x(C1-C2).
Thanks to the partition coefficient (K), we can determine C2; however, C3 must be known. To know C3, we must determine the transfer in the boundary layer of the food simulant, which is given by the following relation: J2 = k*(C3-C4). There are initial conditions: first, Cpx = Cp0 and second, at time t = 0, C1=C2=Cp0, so at the start of the migration Cfs = 0. The last condition is that $ \frac{\partial c_{Cpx}}{\partial x}$ equals 0. The "Regula Falsi" method is used to narrow down the interfacial concentrations. The iteration stops when J1=J2.
End of explanation
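A minimal sketch of the flux-balance root find described above could look like this (illustrative parameter values, not the experimental ones): we search for the C2 at which J1 = J2.
from scipy.optimize import brentq

Dp, L, K, k = 1e-13, 5e-5, 100.0, 1e-7     # assumed diffusivity, thickness, partition and mass-transfer coefficients
Cp0, C4 = 2000.0, 0.0                      # initial polymer concentration, bulk food-simulant concentration

def flux_balance(C2):
    J1 = Dp / (L / 2) * (Cp0 - C2)         # flux inside the polymer
    C3 = C2 / K                            # interfacial equilibrium via the partition coefficient
    J2 = k * (C3 - C4)                     # flux through the food-simulant boundary layer
    return J1 - J2

C2 = brentq(flux_balance, 0.0, Cp0)        # bracketing root find in the regula-falsi spirit
print(C2)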
p.Lire()
Explanation: Table of values
Here we can see the values obtained for each experiment. We have the thickness of the film used, the food simulant used, the initial antioxidant concentration in the plastic, and the value of K, which is the partition coefficient of the migrant between the polymer and the food simulant. Dp is the diffusion coefficient of the antioxidant in the polymer, RMSE predicts the error made on the value, and finally k is the mass transfer coefficient.
Thanks to these values we can determine the final concentration in the plastic.
End of explanation
p.calcul()
Explanation: Calculation of c2
This calculation gives us the values of the final concentration in the plastic and therefore lets us determine the efficiency of the process.
End of explanation
p.Graphique()
Explanation: Plot: f(Dp) = Cpo
Here we notice that the diffusion coefficient is not particularly linked to the initial concentration. Moreover, the results are very strange.
End of explanation
p.Graphique2()
Explanation: Plot: f(Dp) = c2
Here we notice that the diffusion coefficient does not directly influence the final concentration. The results are also very strange.
End of explanation
p.Graphique3()
Explanation: Plot: f(Cpo) = c2
Here we notice a strange point located around 2000/1.45, since it is not close to the others. So we can assume there are calculation errors. However, we can see that the final concentration fluctuates as a function of the initial concentration.
End of explanation
p.Graphique4()
Explanation: Plot: f(Thickness) = c2
Here we notice that the plot is very strange because the values sometimes decrease and sometimes stagnate.
End of explanation
p.Graphique5()
Explanation: We note that putting all the plots together is not particularly appropriate because the results are completely unreadable.
End of explanation |
3,312 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial on how to use timestamps in Field construction
Step1: Some NetCDF files, such as for example those from the World Ocean Atlas, have time calendars that can't be parsed by xarray. These result in a ValueError
Step2: However, we can create our own numpy array of timestamps associated with each of the 12 snapshots in the netcdf file
Step3: And then we can add the timestamps as an extra argument | Python Code:
from parcels import Field
from glob import glob
import numpy as np
Explanation: Tutorial on how to use timestamps in Field construction
End of explanation
# tempfield = Field.from_netcdf(glob('WOA_data/woa18_decav_*_04.nc'), 't_an',
# {'lon': 'lon', 'lat': 'lat', 'time': 'time'})
Explanation: Some NetCDF files, such as for example those from the World Ocean Atlas, have time calendars that can't be parsed by xarray. These result in a ValueError: unable to decode time units, for example when the calendar is in 'months since' a particular date.
In these cases, a workaround in Parcels is to use the timestamps argument in Field (or FieldSet) creation. Here, we show how this works for example temperature data from the World Ocean Atlas in the Pacific Ocean
The following cell would give an error, since the calendar of the World Ocean Atlas data is in "months since 1955-01-01 00:00:00"
End of explanation
timestamps = np.expand_dims(np.array([np.datetime64('2001-%.2d-15' %m) for m in range(1,13)]), axis=1)
Explanation: However, we can create our own numpy array of timestamps associated with each of the 12 snapshots in the netcdf file
End of explanation
tempfield = Field.from_netcdf(glob('WOA_data/woa18_decav_*_04.nc'), 't_an',
{'lon': 'lon', 'lat': 'lat', 'time': 'time'},
timestamps=timestamps)
Explanation: And then we can add the timestamps as an extra argument
End of explanation |
3,313 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'emac-2-53-aerchem', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: EMAC-2-53-AERCHEM
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
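For illustration only, a filled-in version of the two cells above might look as follows; the names and e-mail addresses are hypothetical placeholders and must be replaced with the actual document authors and contributors.
# Hypothetical example - replace with the real author and contributor details
DOC.set_author("Jane Doe", "[email protected]")
DOC.set_contributor("John Roe", "[email protected]")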
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
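A minimal sketch of how a multi-valued (Cardinality: 1.N) enumeration such as this one might be filled in, assuming that DOC.set_value is called once per selected choice; the particular approximations below are illustrative placeholders, not a statement about this model.
# Illustrative only - the selected approximations must come from the model documentation
DOC.set_value("primitive equations")
DOC.set_value("hydrostatic")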
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
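Free-text string properties are set in the same way; a hypothetical example (the grid name below is a placeholder, not the resolution of this configuration):
# Hypothetical placeholder - use the grid name of the actual model configuration
DOC.set_value("T63")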
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
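Integer-typed properties take an unquoted value; a hypothetical example (the number of levels below is a placeholder, not the actual value for this configuration):
# Hypothetical placeholder - consult the model configuration for the real level count
DOC.set_value(47)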
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
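Boolean properties take the Python literals listed in the cell above; purely as an illustration (whether this configuration is high-top must be taken from the model description):
# Illustrative only
DOC.set_value(True)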
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the boundary layer turbulence scheme use a counter-gradient term?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
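# Illustrative sketch only: a space-borne cloud radar comparable to CloudSat operates
# near 94 GHz, so a completed frequency cell could look like the call below (value in Hz).
# The number is an assumption for illustration, not this model's documented configuration.
# DOC.set_value(94.0e9)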
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
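# Illustrative sketch only: a fixed solar constant close to the commonly used modern
# total solar irradiance of about 1361 W m-2 would be entered as a plain float.
# The value is illustrative, not necessarily what this model prescribes.
# DOC.set_value(1361.0)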
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
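# Illustrative sketch only: experiments with fixed pre-industrial orbital parameters
# often reference the year 1850, which would be entered as an integer as below.
# The year shown is an assumption for illustration.
# DOC.set_value(1850)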
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
3,314 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-mm', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-MM
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
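# Illustrative sketch of the fill-in pattern used throughout this notebook (kept as
# comments so nothing is recorded by accident). Each property cell first fixes the
# property identifier with DOC.set_id (pre-filled, not to be edited), then the author
# supplies the value(s) with DOC.set_value; publication is controlled separately via
# DOC.set_publication_status. The identifier and value below are assumed for
# illustration only.
# DOC.set_id('cmip6.toplevel.key_properties.model_name')
# DOC.set_value("HadGEM3-GC31-MM")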
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
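# Illustrative sketch only: a completed authors cell might look like the call below.
# The name and e-mail address are invented placeholders.
# DOC.set_author("Jane Doe", "jane.doe@example.org")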
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
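# Illustrative sketch only: for a single-valued enumeration one choice from the list
# above is passed as a string. The selection below is purely illustrative and is not a
# statement about this model's actual coupling framework.
# DOC.set_value("OASIS3-MCT")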
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List the set of metrics/diagnostics of the global mean state used in tuning the model
End of explanation
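# Illustrative sketch only: a free-text list (Cardinality 0.N) is presumably filled with
# one DOC.set_value call per metric. The metrics below are invented examples, not the
# tuning targets actually used for this model.
# DOC.set_value("Top-of-atmosphere net radiation balance")
# DOC.set_value("Global mean surface air temperature")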
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of the mean state (e.g. THC, AABW, regional means, etc.) used in tuning the model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
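# Illustrative sketch only: the provision cell is completed with one or more of the
# single-letter codes listed above. The code below is a placeholder and should not be
# read as this model's documented CO2 treatment.
# DOC.set_value("Y")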
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
3,315 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Notebook-Extensions" data-toc-modified-id="Notebook-Extensions-1">Notebook Extensions</a></span><ul class="toc-item"><li><span><a href="#Check-out-nbdime" data-toc-modified-id="Check-out-nbdime-1.1">Check out nbdime</a></span></li><li><span><a href="#ToC(2)" data-toc-modified-id="ToC(2)-1.2">ToC(2)</a></span></li><li><span><a href="#ExecuteTime" data-toc-modified-id="ExecuteTime-1.3">ExecuteTime</a></span></li><li><span><a href="#Snippets-Menu" data-toc-modified-id="Snippets-Menu-1.4">Snippets Menu</a></span></li><li><span><a href="#Python-Markdown----Maybe-doesn't-work-right-now-for-some-reason?" data-toc-modified-id="Python-Markdown----Maybe-doesn't-work-right-now-for-some-reason?-1.5">Python Markdown -- Maybe doesn't work right now for some reason?</a></span></li><li><span><a href="#Code-prettify" data-toc-modified-id="Code-prettify-1.6">Code prettify</a></span></li><li><span><a href="#Collapsible-Headings" data-toc-modified-id="Collapsible-Headings-1.7">Collapsible Headings</a></span></li><li><span><a href="#Notify----disappointing-
Step1: Notebook Extensions
Check out nbdime
https
Step2: Snippets Menu
Step3: Python Markdown -- Maybe doesn't work right now for some reason?
The value of a is {{a}}. Useful for anything you want to report, this
Step4: Collapsible Headings
Notify -- disappointing | Python Code:
from __future__ import print_function, division
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import textwrap
import os
import sys
import warnings
warnings.filterwarnings('ignore')
# special things
from pivottablejs import pivot_ui
from ipywidgets import FloatSlider, interactive, IntSlider
from scipy import interpolate
# sql
%load_ext sql_magic
import sqlalchemy
import sqlite3
from sqlalchemy import create_engine
sqlite_engine = create_engine('sqlite://')
# autoreload
%load_ext autoreload
%autoreload 1
# %aimport module_to_reload
# ehh...
# import bqplot.pyplot as plt
import ipyvolume as ipv
import altair as alt
from vega_datasets import data
import seaborn as sns
sns.set_context('poster', font_scale=1.3)
a = "hi"
b = np.array([1, 2, 4, 6])
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Notebook-Extensions" data-toc-modified-id="Notebook-Extensions-1">Notebook Extensions</a></span><ul class="toc-item"><li><span><a href="#Check-out-nbdime" data-toc-modified-id="Check-out-nbdime-1.1">Check out nbdime</a></span></li><li><span><a href="#ToC(2)" data-toc-modified-id="ToC(2)-1.2">ToC(2)</a></span></li><li><span><a href="#ExecuteTime" data-toc-modified-id="ExecuteTime-1.3">ExecuteTime</a></span></li><li><span><a href="#Snippets-Menu" data-toc-modified-id="Snippets-Menu-1.4">Snippets Menu</a></span></li><li><span><a href="#Python-Markdown----Maybe-doesn't-work-right-now-for-some-reason?" data-toc-modified-id="Python-Markdown----Maybe-doesn't-work-right-now-for-some-reason?-1.5">Python Markdown -- Maybe doesn't work right now for some reason?</a></span></li><li><span><a href="#Code-prettify" data-toc-modified-id="Code-prettify-1.6">Code prettify</a></span></li><li><span><a href="#Collapsible-Headings" data-toc-modified-id="Collapsible-Headings-1.7">Collapsible Headings</a></span></li><li><span><a href="#Notify----disappointing-:(" data-toc-modified-id="Notify----disappointing-:(-1.8">Notify -- disappointing :(</a></span></li></ul></li><li><span><a href="#Outside-Notebooks" data-toc-modified-id="Outside-Notebooks-2">Outside Notebooks</a></span><ul class="toc-item"><li><span><a href="#Tree-Filter" data-toc-modified-id="Tree-Filter-2.1">Tree Filter</a></span></li></ul></li></ul></div>
End of explanation
print("hello world")
Explanation: Notebook Extensions
Check out nbdime
https://github.com/jupyter/nbdime
and the documentation about adding the nbdime https://nbdime.readthedocs.io/en/latest/extensions.html
ToC(2)
ExecuteTime
End of explanation
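A minimal setup sketch for nbdime (the exact flags may differ between versions, so check the documentation linked above):
!pip install nbdime
!nbdime extensions --enable
Once enabled, notebook-aware diffs are available both from the command line (nbdiff) and in the Jupyter UI.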
a = 4
report = "FAIL"
Explanation: Snippets Menu
End of explanation
weight_categories = [ "vlow_weight", "low_weight",
"mid_weight", "high_weight",
"vhigh_weight",]
players['weightclass'] = pd.qcut(players['weight'],
len(weight_categories), weight_categories)
weight_categories = [
"vlow_weight",
"low_weight",
"mid_weight",
"high_weight",
"vhigh_weight",
]
players["weightclass"] = pd.qcut(
players["weight"], len(weight_categories), weight_categories
)
Explanation: Python Markdown -- Maybe doesn't work right now for some reason?
The value of a is {{a}}. Useful for anything you want to report, this: {{report}}.
Code prettify
Copy/paste the following into the "json defining library calls required to load the kernel-specific prettifying modules, and the prefix & postfix for the json-format string required to make the prettifying call." box.
{
"python": {
"library": "import json\ndef black_reformat(cell_text):\n import black\n import re\n cell_text = re.sub('^%', '#%#', cell_text, flags=re.M)\n try:\n reformated_text = black.format_str(cell_text, 88)\n except TypeError:\n reformated_text = black.format_str(cell_text, mode=black.FileMode(line_length=88))\n return re.sub('^#%#', '%', reformated_text, flags=re.M)",
"prefix": "print(json.dumps(black_reformat(u",
"postfix": ")))"
},
"r": {
"library": "library(formatR)\nlibrary(jsonlite)",
"prefix": "cat(toJSON(paste(tidy_source(text=",
"postfix": ", output=FALSE)[['text.tidy']], collapse='\n')))"
},
"javascript": {
"library": "jsbeautify = require('js-beautify')",
"prefix": "console.log(JSON.stringify(jsbeautify.js_beautify(",
"postfix": ")));"
}
}
From: https://neuralcoder.science/Black-Jupyter/
End of explanation
import time
time.sleep(10)
Explanation: Collapsible Headings
Notify -- disappointing :(
In theory, this will give you a browser notification if your kernel has been busy for at least N seconds (after you give permission).
End of explanation |
3,316 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Search Project for CST 495
CMU Movie Summary Corpus
http
Step1: now is the time to join the docs together
Step2: term freq
Step3: start by computing the frequency of the entire corpus
Step4: now that we have normalised the data we can compute the term frequency
Step5: doc freq
Step6: compute dtf for item descriptions
Step7: term freq matrix
with the lexicon we are able to compute the term freq matrix
Step8: sparsity of term frequency matrix
We took the approach of using Bokeh to display the sparsity of the term frequency matrix
Step9: boolean search
After building the term frequency matrix, we moved on to our first ranking function. We use a boolean search to find documents that contain the words included in a user-specified query. This is how our boolean search algorithm works
Step10: To compute the document ranking score we used the function get_results_tf() with results from the term frequency matrix
Step11: Inverted Index
the inverted index maps terms to the documents in which they can be found
Step12: inverted index for document titles
Step13: improve the ranking function
Step14: enter different queries
Step15: TF-IDF
To implement TF-IDF we used the function
Step16: TF-IDF Intuition
Step17: TF-IDF Ranking
We then created an inverted index for the TF-IDF ranking
Step18: Ideally we do not want scores to be the same for lots of documents. High TF-IDF scores in shorter documents should be more relevant - so we could try by boosting the score for documents that are shorter than average.
Step19: Implementing BM25
To implement BM25, we used the function get_results_bm25, which takes the query, the corpus, and the BM25 parameters k1 and b. We then printed the top results and plotted score against document length with a Bokeh chart.
Step20: Implementing Random Forest Machine Learning
Using the example from class to implement random forest ranking algorithm. | Python Code:
import csv
import re
with open("data/MovieSummaries/plot_summaries.tsv") as f:
r = csv.reader(f, delimiter='\t', quotechar='"')
tag = re.compile(r'\b[0-9]+\b')
rgx = re.compile(r'\b[a-zA-Z]+\b')
#docs = [ (' '.join(re.findall(tag, x[0])).lower(), ' '.join(re.findall(rgx, x[1])).lower()) for i,x in enumerate(r) if r>1 ]
docs= {}
for i,x in enumerate(r):
if i >1:
docs[' '.join(re.findall(tag, x[0])).lower()] = ' '.join(re.findall(rgx, x[1])).lower()
import csv
import re
with open("data/MovieSummaries/movie.metadata.tsv") as f:
r = csv.reader(f, delimiter='\t', quotechar='"')
tag = re.compile(r'\b[0-9]+\b')
rgx = re.compile(r'\b[a-zA-Z]+\b')
docs2= {}
for i,x in enumerate(r):
if i >1:
docs2[' '.join(re.findall(tag, x[0])).lower()] = ' '.join(re.findall(rgx, x[2])).lower(), ' '.join(re.findall(rgx, x[8])).lower()
#print(docs2)
Explanation: Search Project for CST 495
CMU Movie Summary Corpus
http://www.cs.cmu.edu/~ark/personas/
Dustin D'Avignon
Chris Ngo
Let's go
We begin by normalising the text, removing unwanted characters and converting to lowercase
End of explanation
doc = [(docs2.get(x), y) for x, y in docs.items() if docs2.get(x)]
# for testing
# import random
#print doc[random.randint(0, len(doc)-1)]
print doc[0][0], doc[0][1]
items_t = [ d[0] for d in doc ] # item titles
items_d = [ d[1] for d in doc ] # item description
items_i = range(0 , len(items_t)) # item id
Explanation: now is the time to join the docs together
End of explanation
corpus = items_d[0:25]
print corpus
Explanation: term freq
End of explanation
tf = {}
for doc in corpus:
for word in doc.split():
if word in tf:
tf[word] += 1
else:
tf[word] = 1
print(tf)
Explanation: start by computing the frequency of the entire corpus
End of explanation
from collections import Counter
def get_tf(corpus):
tf = Counter()
for doc in corpus:
for word in doc.split():
tf[word] += 1
return tf
tf = get_tf(corpus)
print(tf)
Explanation: now that we have normalised the data we can compute the term frequency
End of explanation
import collections
def get_tf(document):
tf = Counter()
for word in document.split():
tf[word] += 1
return tf
def get_dtf(corpus):
dtf = {}
for i,doc in enumerate(corpus):
dtf[i]= get_tf(doc)
return dtf
dtf = get_dtf(items_d)
dtf[342]
Explanation: doc freq
End of explanation
dtf = get_dtf(items_d)
dtf[12]
Explanation: compute dtf for item descriptions
End of explanation
def get_tfm(corpus):
def get_lexicon(corpus):
lexicon = set()
for doc in corpus:
lexicon.update([word for word in doc.split()])
return list(lexicon)
lexicon = get_lexicon(corpus)
tfm =[]
for doc in corpus:
tfv = [0]*len(lexicon)
for term in doc.split():
tfv[lexicon.index(term)] += 1
tfm.append(tfv)
return tfm, lexicon
#test_corpus = ['mountain bike', 'road bike carbon', 'bike helmet']
#tfm, lexicon = get_tfm(test_corpus)
#print lexicon
#print tfm
Explanation: term freq matrix
with the lexicon we are able to compute the term freq matrix
End of explanation
#!pip install bokeh
import pandas as pd
from bokeh.plotting import figure, output_notebook, show, vplot
# sparsity as a function of document count
n = []
s = []
for i in range(100,1000,100):
corpus = items_d[0:i]
tfm, lexicon = get_tfm(corpus)
c = [ [x.count(0), x.count(1)] for x in tfm]
n_zero = sum([ y[0] for y in c])
n_one = sum( [y[1] for y in c])
s.append(1.0 - (float(n_one) / (n_one + n_zero)))
n.append(i)
output_notebook(hide_banner=True)
p = figure(x_axis_label='Documents', y_axis_label='Sparsity', plot_width=400, plot_height=400)
p.line(n, s, line_width=2)
p.circle(n, s, fill_color="white", size=8)
show(p)
Explanation: sparsity of term frequency matrix
We took the approach of using Bokeh to display the sparsity of the term frequency matrix
End of explanation
# compute term frequency matrix and lexicon
tfm, lexicon = get_tfm(corpus)
# define our query
qry = 'red bike'
# convert query to query vector using lexicon
qrv = [0]*len(lexicon)
for term in qry.split():
if term in lexicon:
qrv[lexicon.index(term)] = 1
#print qrv
# compare query vector to each term frequency vector
# this is dot product between qrv and each row of tfm
for i,tfv in enumerate(tfm):
print i, sum([ xy[0] * xy[1] for xy in zip(qrv, tfv) ])
Explanation: boolean search
After building the term frequency matrix, we moved on to our first ranking function. We use a boolean search to find documents that contain the words included in a user-specified query. This is how our boolean search algorithm works:
Compute the lexicon for the corpus
Compute the term frequency matrix for the corpus
Convert query to query vector using the same lexicon
Compare each document's term frequency vector to the query vector - specifically, for each document in the corpus:
Compute a ranking score for each document by taking the dot product of the document's term frequency vector and the query vector
Sort the documents by ranking score
End of explanation
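A hand-worked illustration of the scoring, using the earlier toy corpus ['mountain bike red', 'road bike carbon', 'bike helmet'] rather than the real item data: for the query 'red bike' the query vector has ones in the 'red' and 'bike' positions, so the dot products against the three documents are 2, 1 and 1, and document 0 (which matches both query terms) ranks first.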
def get_results_tf(qry, tfm, lexicon):
qrv =[0]*len(lexicon)
for term in qry.split():
if term in lexicon:
qrv[lexicon.index(term)] = 1
results = []
for i, tfv in enumerate(tfm):
score = 0
score = sum([ xy[0] * xy[1] for xy in zip(qrv,tfv)])
results.append([score, i])
sorted_results = sorted(results, key=lambda t: t[0] * -1)
return sorted_results
def print_results(results,n, head=True):
''' Helper function to print results
'''
if head:
print('\nTop %d from recall set of %d items:' % (n,len(results)))
for r in results[:n]:
print('\t%0.2f - %s'%(r[0],items_t[r[1]]))
else:
print('\nBottom %d from recall set of %d items:' % (n,len(results)))
for r in results[-n:]:
print('\t%0.2f - %s'%(r[0],items_t[r[1]]))
tfm, lexicon = get_tfm(items_d[:1000])
results = get_results_tf('fun times', tfm , lexicon)
print_results(results,10)
Explanation: To compute the document ranking score we used the function get_results_tf() with results from the term frequency matrix
End of explanation
def create_inverted_index(corpus):
idx={}
for i, document in enumerate(corpus):
for word in document.split():
if word in idx:
idx[word].append(i)
else:
idx[word] = [i]
## HIDE
return idx
test_corpus = ['mountain bike red','road bike carbon','bike helmet']
idx = create_inverted_index(test_corpus)
print(idx)
Explanation: Inverted Index
the inverted index maps terms to the documents in which they can be found
End of explanation
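For the toy corpus above, the printed index contains (key order may vary):
# {'mountain': [0], 'bike': [0, 1, 2], 'red': [0], 'road': [1], 'carbon': [1], 'helmet': [2]}
Each term maps to the list of document ids it appears in.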
idx = create_inverted_index(items_d)
print(set(idx['good']).intersection(set(idx['times'])))
print(items_d[2061])
Explanation: inverted index for document titles
End of explanation
def get_results_tf(qry, idx):
score = Counter()
for term in qry.split():
for doc in idx[term]:
score[doc] += 1
results=[]
for x in [[r[0],r[1]] for r in zip(score.keys(), score.values())]:
if x[1] > 0:
results.append([x[1],x[0]])
sorted_results = sorted(results, key=lambda t: t[0] * -1 )
return sorted_results;
idx = create_inverted_index(items_d)
results = get_results_tf('zombies', idx)
print_results(results,20)
Explanation: improve the ranking function
End of explanation
results = get_results_tf('ghouls and ghosts', idx)
print_results(results, 10)
import pandas as pd
from bokeh.plotting import output_notebook, show
from bokeh.charts import Bar
from bokeh.charts.attributes import CatAttr
#from bokeh.models import ColumnDataSource
df = pd.DataFrame({'term':[x for x in idx.keys()],'freq':[len(x) for x in idx.values()]})
output_notebook(hide_banner=True)
p = Bar(df.sort_values('freq', ascending=False)[:30], label=CatAttr(columns=['term'], sort=False), values='freq',
plot_width=800, plot_height=400)
show(p)
Explanation: enter different queries
End of explanation
import math
def idf(term, idx, n):
return math.log( float(n) / (1 + len(idx[term])))
print(idf('zombie',idx,len(items_d)))
print(idf('survival',idx,len(items_d)))
print(idf('invasions',idx,len(items_d)))
Explanation: TF-IDF
To implement TF-IDF we used the inverse document frequency, as implemented in idf() above:
$$
IDF = \log \left( \frac{N}{1 + n_t} \right)
$$
where N is the number of documents in the corpus and n_t is the number of documents containing the term.
End of explanation
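A quick worked example of the formula as implemented in idf() above: with N = 1000 documents, a term found in 99 of them gets log(1000 / (1 + 99)) = log(10) ≈ 2.30, while a term found in 999 of them gets log(1000 / 1000) = 0, so rare terms are weighted up and ubiquitous terms are weighted down.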
from bokeh.charts import vplot
idx = create_inverted_index(items_d)
df = pd.DataFrame({'term':[x for x in idx.keys()],'freq':[len(x) for x in idx.values()],
'idf':[idf(x, idx, len(items_t)) for x in idx.keys()]})
output_notebook(hide_banner=True)
p1 = Bar(df.sort_values('freq', ascending=False)[:30], label=CatAttr(columns=['term'], sort=False), values='freq',
plot_width=800, plot_height=400)
p2 = Bar(df.sort_values('freq', ascending=False)[:30], label=CatAttr(columns=['term'], sort=False), values='idf',
plot_width=800, plot_height=400)
p = vplot(p1, p2)
show(p)
Explanation: TF-IDF Intuition
End of explanation
def create_inverted_index(corpus):
idx={}
for i, doc in enumerate(corpus):
for word in doc.split():
if word in idx:
if i in idx[word]:
# Update document's frequency
idx[word][i] += 1
else:
# Add document
idx[word][i] = 1
else:
# Add term
idx[word] = {i:1}
return idx
def get_results_tfidf(qry, idx, n):
score = Counter()
for term in qry.split():
if term in idx:
i = idf(term, idx, n)
for doc in idx[term]:
score[doc] += idx[term][doc] * i
results=[]
for x in [[r[0],r[1]] for r in zip(score.keys(), score.values())]:
if x[1] > 0:
results.append([x[1],x[0]])
sorted_results = sorted(results, key=lambda t: t[0] * -1 )
return sorted_results
idx = create_inverted_index(items_d)
results = get_results_tfidf('lookout action bike zombie', idx, len(items_d))
print_results(results,10)
Explanation: TF-IDF Ranking
We then created an inverted index for the TF-IDF ranking
End of explanation
def get_results_tfidf_boost(qry, corpus):
idx = create_inverted_index(corpus)
n = len(corpus)
d = [len(x.split()) for x in corpus]
d_avg = float(sum(d)) / len(d)
score = Counter()
for term in qry.split():
if term in idx:
i = idf(term, idx, n)
for doc in idx[term]:
f = float(idx[term][doc])
score[doc] += i * ( f / (float(d[doc]) / d_avg) )
results=[]
for x in [[r[0],r[1]] for r in zip(score.keys(), score.values())]:
if x[1] > 0:
# output [0] score, [1] doc_id
results.append([x[1],x[0]])
sorted_results = sorted(results, key=lambda t: t[0] * -1 )
return sorted_results
from bokeh.charts import Scatter
results = get_results_tfidf_boost('zombie invasion', items_d)
print_results(results, 10)
# Plot score vs item length
df = pd.DataFrame({'score':[float(x[0]) for x in results],
'length':[len(items_d[x[1]].split()) for x in results]})
output_notebook()
p = Scatter(df, x='score', y='length')
show(p)
Explanation: Ideally we do not want scores to be the same for lots of documents. High TF-IDF scores in shorter documents should be more relevant - so we could try by boosting the score for documents that are shorter than average.
End of explanation
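Written out, the boost in get_results_tfidf_boost above divides each term's contribution by the document's relative length (a restatement of the code, not a new scoring function):
$$
score(D, q) = \sum_{t \in q} IDF(t) \cdot \frac{f_{t,D}}{|D| / d_{avg}}
$$
where f_{t,D} is the term count in document D, |D| its length in terms, and d_{avg} the average document length.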
def get_results_bm25(qry, corpus, k1=1.5, b=0.75):
idx = create_inverted_index(corpus)
# 1.Assign (integer) n to be the number of documents in the corpus
n = len(corpus)
# 2.Assign (list) d with elements corresponding to the number of terms in each document in the corpus
d = [len(x.split()) for x in corpus]
# 3.Assign (float) d_avg as the average document length of the documents in the corpus
d_avg = float(sum(d)) / len(d)
score = Counter()
for term in qry.split():
if term in idx:
i = idf(term, idx, n)
for doc in idx[term]:
# 4.Assign (float) f equal to the number of times the term appears in doc
f = float(idx[term][doc])
# 5.Assign (float) s the BM25 score for this (term, document) pair
s = i * (( f * (k1 + 1) ) / (f + k1 * (1 - b + (b * (float(d[doc]) / d_avg)))))
score[doc] += s
results=[]
for x in [[r[0],r[1]] for r in zip(score.keys(), score.values())]:
if x[1] > 0:
results.append([x[1],x[0]])
sorted_results = sorted(results, key=lambda t: t[0] * -1 )
return sorted_results
results = get_results_bm25('zombie apacolypse', items_d)
print_results(results, 10)
!pip install bokeh
from bokeh.charts import Scatter
results = get_results_bm25('zombie apacolypse', items_d, k1=1.5, b=0.75)
# Plot score vs item length
df = pd.DataFrame({'score':[float(x[0]) for x in results],
'length':[len(items_d[x[1]].split()) for x in results]})
output_notebook()
p = Scatter(df, x='score', y='length')
show(p)
Explanation: Implementing BM25
To implement BM25, we used the function get_results_bm25, which takes the query, the corpus, and the BM25 parameters k1 and b. We then printed the top results and plotted score against document length with a Bokeh chart.
End of explanation
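For reference, the per-term score inside get_results_bm25 above is the standard BM25 form (restated from the code):
$$
score(D, q) = \sum_{t \in q} IDF(t) \cdot \frac{f_{t,D} (k_1 + 1)}{f_{t,D} + k_1 (1 - b + b \frac{|D|}{d_{avg}})}
$$
with the defaults k_1 = 1.5 and b = 0.75 used in the calls above.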
import findspark
import os
findspark.init(os.getenv('HOME') + '/spark-1.6.0-bin-hadoop2.6')
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-csv_2.10:1.3.0 pyspark-shell'
import pyspark
try:
print(sc)
except NameError:
sc = pyspark.SparkContext()
print(sc)
from pyspark.sql import SQLContext
import os
sqlContext = SQLContext(sc)
df = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferSchema='true', delimiter='\t') \
    .load(os.path.join(os.getcwd(), 'data/MovieSummaries/plot_summaries.tsv'))
df.schema
df.dropna()
sqlContext.registerDataFrameAsTable(df,'dataset')
sqlContext.tableNames()
# NOTE: these label/feature column names come from the class example dataset, not from plot_summaries.tsv
data_full = sqlContext.sql("select label_relevanceBinary, feature_1, feature_2, feature_3, feature_4, \
                            feature_5, feature_6, feature_7, feature_8, feature_9, feature_10 \
from dataset").rdd
from pyspark.mllib.classification import SVMWithSGD, SVMModel
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.feature import StandardScaler
label = data_full.map(lambda row: row[0])
features = data_full.map(lambda row: row[1:])
model = StandardScaler().fit(features)
features_transform = model.transform(features)
# Now combine and convert back to labelled points:
transformedData = label.zip(features_transform)
transformedData = transformedData.map(lambda row: LabeledPoint(row[0],[row[1]]))
transformedData.take(5)
data_train, data_test = transformedData.randomSplit([.75,.25],seed=1973)
print('Training data records = ' + str(data_train.count()))
print('Training data records = ' + str(data_test.count()))
from pyspark.mllib.tree import RandomForest
model = RandomForest.trainClassifier(data_train, numClasses=2, categoricalFeaturesInfo={},
numTrees=400, featureSubsetStrategy="auto",
impurity='gini', maxDepth=10, maxBins=32)
Explanation: Implementing Random Forest Machine Learning
Using the example from class to implement a random forest ranking algorithm.
End of explanation |
3,317 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 9 - January 6, 2018
Problem 24 A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are
Step1: Day 8 - January 4, 2018
Problem 23 A perfect number is a number for which the sum of its proper divisors is exactly equal to the number. For example, the sum of the proper divisors of 28 would be 1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest number that can be written as the sum of two abundant numbers is 24. By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers.
Step2: Problem 22 Using names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score.
For example, when the list is sorted into alphabetical order, COLIN, which is worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would obtain a score of 938 × 53 = 49714.
What is the total of all the name scores in the file?
Step3: Problem 21 Let d(n) be defined as the sum of proper divisors of n (numbers less than n which divide evenly into n).
If d(a) = b and d(b) = a, where a ≠ b, then a and b are an amicable pair and each of a and b are called amicable numbers.
For example, the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110; therefore d(220) = 284. The proper divisors of 284 are 1, 2, 4, 71 and 142; so d(284) = 220.
Evaluate the sum of all the amicable numbers under 10000.
Note
Step4: Day 7 - January 3, 2018
Problem 20 n! means n × (n − 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
Find the sum of the digits in the number 100!
Step5: Problem 19 You are given the following information, but you may prefer to do some research for yourself.
1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.
How many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?
Note
Step6: Problem 18 and 67 By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Find the maximum total from top to bottom of the triangle below
Step7: Day 6 - January 2, 2018
Problem 17 If the numbers 1 to 5 are written out in words
Step8: Problem 16 2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26.
What is the sum of the digits of the number 2^1000?
Step9: Day 5 - January 1, 2018
Problem 15 Starting in the top left corner of a 2×2 grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner.
How many such routes are there through a 20×20 grid?
Note
Step10: Problem 14 The following iterative sequence is defined for the set of positive integers
Step11: Problem 13 Work out the first ten digits of the sum of the following one-hundred 50-digit numbers.
Step12: Day 4 - December 31
Problem 12 The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be
Step13: Problem 11
In the 20×20 grid below, four numbers along a diagonal line have been marked in red.
08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95
78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48
The product of these numbers is 26 × 63 × 78 × 14 = 1788696.
What is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20×20 grid?
Step14: Problem 10 The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17. Find the sum of all the primes below two million.
This took longer than I hoped. TODO - Think about optimizing
Step15: Day 3 - December 30
Problem 9 A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product abc.
Note
Step16: Problem 8 The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
Note
Step17: Problem 7 By listing the first six prime numbers
Step18: Day 2 - December ??
Problem 6 The sum of the squares of the first ten natural numbers is,
1^2 + 2^2 + ... + 10^2 = 385
The square of the sum of the first ten natural numbers is,
(1 + 2 + ... + 10)^2 = 55^2 = 3025
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
Step19: Problem 5 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
Note
Step20: Problem 4 A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
Step21: Day 1 - December 27
Problem 1 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
You are the 715697th person to have solved this problem.
Step22: Problem 2 Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be
Step23: Problem 3 The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
TODO | Python Code:
%%timeit
import itertools as i
a = [x for x in i.permutations(range(10))]
value = a[999999]   # the millionth permutation is at index 999999 (0-based)
moo = ''
for x in value:
moo = moo + str(x)
print(moo)
%%timeit
# with math
import math
alist = [0,1,2,3,4,5,6,7,8,9]
value = ''
remain = 999999   # 0-based index of the millionth permutation
for x in range(9,0,-1):
boo = math.factorial(x)
place = (math.floor(remain / boo))
value = value + str(alist[place])
alist.pop(place)
remain = remain % boo
value = value + str(alist[0])
# print(value)
Explanation: Day 9 - January 6, 2018
Problem 24 A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are:
012 021 102 120 201 210
What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9?
Note: What permutation changes the
End of explanation
smallest = 24 # 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16
biggest = 28123 # 28123 can be written as the sum of two abundant numbers
theabuns, allnum = set(), set()
total = 0
def isAbundant(num):
count, facty = 0, 1
divisors = set()
for x in range(1,num):
if num % x == 0:
divisors.add(x)
if sum(divisors) > num:
return(True)
return(False)
print(isAbundant(12))
for a in range(11, biggest):
if isAbundant(a):
theabuns.add(a)
for a in theabuns:
for b in theabuns:
allnum.add(a+b)
for a in range(1, biggest):
if a not in allnum:
total = total + a
print(total)
for a in range(1,smallest):
total = total + a
print(total)
Explanation: Day 8 - January 4, 2018
Problem 23 A perfect number is a number for which the sum of its proper divisors is exactly equal to the number. For example, the sum of the proper divisors of 28 would be 1 + 2 + 4 + 7 + 14 = 28, which means that 28 is a perfect number.
A number n is called deficient if the sum of its proper divisors is less than n and it is called abundant if this sum exceeds n.
As 12 is the smallest abundant number, 1 + 2 + 3 + 4 + 6 = 16, the smallest number that can be written as the sum of two abundant numbers is 24. By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit.
Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers.
End of explanation
path = 'p022_names.txt'
with open(path, 'r') as file:
content = file.read().rsplit(',')
for key, value in enumerate(content):
content[key] = value.strip('/"')
content = sorted(content)
#print(content)
total = 0
for key, value in enumerate(content):
namescore = 0
for x in value:
namescore = namescore + ord(x) - 64
# print("The value of ", value, " is ",namescore, (namescore * (key+1)) )
total = total + (namescore * (key+1))
print("Sum of the value of the names * order)", total)
Explanation: Problem 22 Using names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score.
For example, when the list is sorted into alphabetical order, COLIN, which is worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would obtain a score of 938 × 53 = 49714.
What is the total of all the name scores in the file?
End of explanation
def d(num):
boo = set()
total = 0
for x in range(1,num):
if num % x == 0:
boo.add(x)
for x in boo:
total = total + int(x)
return(total)
top = 10000
divisorsums = set()
sumofall = 0
for y in range(1,top):
temp = d(y)
divisorsums.add(temp) # there will be 10k-ish in here
if y in divisorsums:
if y == d(temp) and y != temp:
print(" We really found one",y, temp)
sumofall = sumofall + y + temp
print(sumofall)
Explanation: Problem 21 Let d(n) be defined as the sum of proper divisors of n (numbers less than n which divide evenly into n).
If d(a) = b and d(b) = a, where a ≠ b, then a and b are an amicable pair and each of a and b are called amicable numbers.
For example, the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110; therefore d(220) = 284. The proper divisors of 284 are 1, 2, 4, 71 and 142; so d(284) = 220.
Evaluate the sum of all the amicable numbers under 10000.
Note: https://en.wikipedia.org/wiki/Amicable_numbers
End of explanation
%%timeit
import math
num = 100
#num = 10
count = 0
for x in str(math.factorial(num)):
count = count + int(x)
#print(count)
%%timeit
num = 101
# num = 10
count, facty = 0, 1
for x in range(1,num):
facty = facty * x
for x in str(facty):
count = count + int(x)
#print(count)
Explanation: Day 7 - January 3, 2018
Problem 20 n! means n × (n − 1) × ... × 3 × 2 × 1
For example, 10! = 10 × 9 × ... × 3 × 2 × 1 = 3628800,
and the sum of the digits in the number 10! is 3 + 6 + 2 + 8 + 8 + 0 + 0 = 27.
Find the sum of the digits in the number 100!
End of explanation
import calendar as c
Sunday = 6
count = 0
print("Is 1 Jan a Monday (0)?", c.weekday(1900,1,1))
for year in range(1901,2001):
for month in range(1,13):
if c.weekday(year,month,1) == Sunday:
count = count + 1
print("Total Sunday on the 1st of the month(1901 to 2000)", count)
Explanation: Problem 19 You are given the following information, but you may prefer to do some research for yourself.
1 Jan 1900 was a Monday.
Thirty days has September,
April, June and November.
All the rest have thirty-one,
Saving February alone,
Which has twenty-eight, rain or shine.
And on leap years, twenty-nine.
A leap year occurs on any year evenly divisible by 4, but not on a century unless it is divisible by 400.
How many Sundays fell on the first of the month during the twentieth century (1 Jan 1901 to 31 Dec 2000)?
Note: Should I do this with no libraries?
End of explanation
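A sketch of the library-free approach hinted at in the note above (an alternative to the calendar-module solution, shown only for illustration): track the weekday of the first of each month by adding month lengths modulo 7.
def month_days(m, y):
    # month lengths with the leap-year rule stated in the problem
    if m == 2:
        return 29 if (y % 4 == 0 and y % 100 != 0) or y % 400 == 0 else 28
    return 30 if m in (4, 6, 9, 11) else 31

dow = 0        # 1 Jan 1900 was a Monday (Monday = 0, Sunday = 6)
sundays = 0
for y in range(1900, 2001):
    for m in range(1, 13):
        if y >= 1901 and dow == 6:
            sundays += 1
        dow = (dow + month_days(m, y)) % 7
print(sundays)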
import sys
path = 'p18b.txt'
total, most = 0, 0
tower,distance = [], []
with open(path, 'r') as file:
content = file.read().splitlines()
for x in content:
tower.append(x.rsplit())
# print(tower)
for row, value in enumerate(tower):
distance.append([0]* len(value))
for col, node in enumerate(value):
if col < len(value)-1 :
vmom = distance[row-1][col]
else:
vmom = 0
# print("There is no mom (to the right) ")
if (col - 1) >= 0:
vdad = distance[row-1][col-1]
else:
vdad = 0
# print("There is no dad (to the left) ")
distance[row][col] = max(vdad, vmom) + int(node)
# print(distance)
print(max(distance[len(distance)-1]))
Explanation: Problem 18 and 67 By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.
3
7 4
2 4 6
8 5 9 3
That is, 3 + 7 + 4 + 9 = 23.
Find the maximum total from top to bottom of the triangle below:
75
95 64
17 47 82
18 35 87 10
20 04 82 47 65
19 01 23 75 03 34
88 02 77 73 07 63 67
99 65 04 28 06 16 70 92
41 41 26 56 83 40 80 70 33
41 48 72 33 47 32 37 16 94 29
53 71 44 65 25 43 91 52 97 51 14
70 11 33 28 77 73 17 78 39 68 17 57
91 71 52 38 17 14 91 43 58 50 27 29 48
63 66 04 68 89 53 67 30 73 16 69 87 40 31
04 62 98 27 23 09 70 98 73 93 38 53 60 04 23
NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67, is the same challenge with a triangle containing one-hundred rows; it cannot be solved by brute force, and requires a clever method! ;o)
End of explanation
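A hand-worked check of the row-by-row accumulation the code above performs, using the small example triangle: the running distance rows come out as [3], [10, 7], [12, 14, 13] and [20, 19, 23, 16], and the maximum of the last row is 23, matching the stated best path 3 + 7 + 4 + 9.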
cardinal = {0: '',1:'one', 11:'eleven', 2:'two', 12:'twelve',40:'forty',3:'three', 13:'thirteen', 50:'fifty', 4:'four', 14:'fourteen',
60:'sixty',
5: 'five',
15: 'fifteen',
70:'seventy',
6: 'six',
16: 'sixteen',
80:'eighty',
7: 'seven',
17: 'seventeen',
90:'ninety',
8: 'eight',
18: 'eighteen',
100:'hundred',
9: 'nine',
19: 'nineteen',
1000:'onethousand',
10:'ten',
20:'twenty',
30:'thirty'}
print(cardinal)
num = 342
print(cardinal[((num - (num % 100))/100)])
top = 22
total = 0
def gettens(num):
if num in cardinal:
return(len(cardinal[num]))
onesplace = num % 10
tensplace = num - (num % 10)
sum =len(cardinal[tensplace]) + len(cardinal[onesplace])
return(sum)
def gethundreds(num):
sum = len(cardinal[((num - (num % 100))/100)])
sum = sum + len(cardinal[100])
tens = (num % 100)
if tens != 0:
sum = sum + 3 # and
sum = sum + gettens(tens)
return(sum)
bot = 1
top = 1000
total = 0
just = 0
for x in range(bot,top+1):
if x in cardinal:
total = total + len(cardinal[x])
if x == 100:
total = total + 3
just = len(cardinal[x])
elif x > 20 and x < 100:
total = total + gettens(x)
just = gettens(x)
elif x > 100 and x < 1000:
total = total + gethundreds(x)
just = gethundreds(x)
else:
print("oops, too high or too wrong", x)
print("Total: ", total)
print(cardinal)
cardinal = {1: 'one', 11: 'eleven', 2: 'two', 12: 'twelve', 40: 'forty',
            3: 'three', 13: 'thirteen', 50: 'fifty', 4: 'four', 14: 'fourteen',
            60: 'sixty', 5: 'five', 15: 'fifteen', 70: 'seventy', 6: 'six',
            16: 'sixteen', 80: 'eighty', 7: 'seven', 17: 'seventeen', 90: 'ninety',
            8: 'eight', 18: 'eighteen', 100: 'onehundred', 9: 'nine', 19: 'nineteen',
            1000: 'onethousand', 10: 'ten', 20: 'twenty', 30: 'thirty'}
Explanation: Day 6 - January 2, 2018
Problem 17 If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
Note: Feels like a dictionary - https://www.ego4u.com/en/cram-up/vocabulary/numbers/cardinal
Not: 21145, 21121
Answer = 21124
End of explanation
x = 1000
value = 2**x
print("Two raised to the ", x, " : ",value )
total = 0
for y in str(value):
total = total + int(y )
print(" The sum of the digits is ", total)
Explanation: Problem 16 2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26.
What is the sum of the digits of the number 2^1000?
End of explanation
import math
size = 20
count = 0
num = math.factorial(size*2)
dem = math.factorial(size) * math.factorial(size)
print(num/dem)
Explanation: Day 5 - January 1, 2018
Problem 15 Starting in the top left corner of a 2×2 grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner.
How many such routes are there through a 20×20 grid?
Note: Is this a recursion? Is this just math?
Steps = N * 2 (so 2 will have 4 steps and 20 will have 40 steps). Half will be over, half down. The order of the steps does not matter, only which of them are 'down'.
Thank you : https://www.youtube.com/watch?v=M8BYckxI8_U
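The same count can be read off directly as a binomial coefficient: choose which 20 of the 40 steps go across. A one-line sketch using math.comb (Python 3.8+), equivalent to the factorial version above:
import math
print(math.comb(40, 20))   # 137846528820, the same as 40! / (20! * 20!)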
End of explanation
# This is slow. Too slow
def getCollatz(num):
seq = set()
seq.add(num)
if num == 1:
return(seq)
else:
if num % 2 == 0:
temp = int(num/2)
else:
temp = int((3 * num) + 1 )
seq.update(getCollatz(temp))
return(seq)
seed = 999999
end = 99999
high, bigdog = 0 , 0
for x in range(seed,end,-2):
total = len(getCollatz(x))
if total > high:
bigDog = x
high = total
print(bigDog, high)
# For speed, find them once and save them. But which ones? A million
seed = 1000000
million = [0 for x in range(seed)]
million[1] = 1
million[0] = 1
high = 0
def getCollatzSize(num):
if num < seed:
if million[num] != 0:
return(million[num])
else:
if num % 2 == 0:
temp = int(num/2)
else:
temp = int((3 * num) + 1)
million[num] = getCollatzSize(temp) +1
return(million[num])
else:
if num % 2 == 0:
temp = int(num/2)
else:
temp = int((3 * num) + 1)
return(getCollatzSize(temp) +1 )
for x in range(seed):
getCollatzSize(x)
top = max(million)
print(million.index(top))
Explanation: Problem 14 The following iterative sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Using the rule above and starting with 13, we generate the following sequence:
13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.
Which starting number, under one million, produces the longest chain?
NOTE: Once the chain starts the terms are allowed to go above one million.
MyNOTE: Start from 1? Every number is in a set (i.e. 13 is always with 40, etc.). Use recursion to find the sets, but do we really have to ... A MILLION TIMES?
The right answer is 837,799 (from wikipedia) with 524 steps.
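A sketch of the same memoization idea using functools.lru_cache instead of the hand-rolled million list; the cache trades memory for speed and the logic stays identical.
from functools import lru_cache
import sys
sys.setrecursionlimit(10000)   # Collatz chains can recurse a few hundred levels deep
@lru_cache(maxsize=None)       # unbounded cache: fast, but memory-hungry
def chain_length(n):
    if n == 1:
        return 1
    nxt = n // 2 if n % 2 == 0 else 3 * n + 1
    return chain_length(nxt) + 1
best = max(range(1, 1000000), key=chain_length)
print(best, chain_length(best))   # 837799, with 525 terms (i.e. 524 steps)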
End of explanation
path = 'p13.txt'
total = 0
with open(path, 'r') as file:
content = file.read().splitlines()
for x in content:
value = int(x)
total = total + value
print(total)
print(str(total)[:10])
Explanation: Problem 13 Work out the first ten digits of the sum of the following one-hundred 50-digit numbers.
End of explanation
import math
stillLooking = True
x, trinum = 1, 0
divisor = 500
# divisor = 5
def numberofFactors(num):
upto = int(math.sqrt(num) + 1)
total = 0
temp = set()
for x in range(1,upto):
if num % x == 0:
temp.add(x)
temp.add(int(num/x))
return(len(temp))
while stillLooking:
trinum = int((x*(x+1))/2)
if numberofFactors(trinum) > divisor:
stillLooking = False
print("You did it! ")
else:
x = x + 1
print("The nth triangle number with over 500 is ", x, trinum)
Explanation: Day 4 - December 31
Problem 12 The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
Let us list the factors of the first seven triangle numbers:
1: 1
3: 1,3
6: 1,2,3,6
10: 1,2,5,10
15: 1,3,5,15
21: 1,3,7,21
28: 1,2,4,7,14,28
We can see that 28 is the first triangle number to have over five divisors.
What is the value of the first triangle number to have over five hundred divisors?
Rule: n(n+1)/2 is the value of the nth triangle number (5th = 5(5+1)/2 = 15, see https://www.mathsisfun.com/algebra/triangular-numbers.html)
Note: Every other one is a 'triangular number' (1, 6, 15, 28, ...)
Note: every triangular number is either divisible by three or has a remainder of 1 when divided by 9
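A sketch of a faster divisor count that leans on the n(n+1)/2 rule: n and n+1 are coprime, so after halving the even one the divisor counts of the two pieces simply multiply. It assumes the numberofFactors(...) helper defined in the code above.
def divisors_of_triangle(n):
    # T_n = n * (n + 1) / 2, and the two factors share no divisors
    if n % 2 == 0:
        return numberofFactors(n // 2) * numberofFactors(n + 1)
    return numberofFactors(n) * numberofFactors((n + 1) // 2)
n = 1
while divisors_of_triangle(n) <= 500:
    n += 1
print(n, n * (n + 1) // 2)   # prints the same triangle number as the loop above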
End of explanation
path = 'p11.txt'
grid = []
highest = 0
with open(path, 'r') as file:
content = file.read().splitlines()
for x in content:
grid.append(x.rsplit(' '))
print(grid[3][3])
for x in range(20):
    for y in range(20):
        try:
            # four in a row (to the right)
            temp = int(grid[x][y]) * int(grid[x][y+1]) * int(grid[x][y+2]) * int(grid[x][y+3])
            if temp > highest:
                highest = temp
                foo, bar = x, y
            # four in a column (downwards)
            temp = int(grid[x][y]) * int(grid[x+1][y]) * int(grid[x+2][y]) * int(grid[x+3][y])
            if temp > highest:
                highest = temp
                foo, bar = x, y
            # four on a down-right diagonal
            temp = int(grid[x][y]) * int(grid[x+1][y+1]) * int(grid[x+2][y+2]) * int(grid[x+3][y+3])
            if temp > highest:
                highest = temp
                foo, bar = x, y
            # four on an up-right diagonal; guard x >= 3 so negative indices
            # do not silently wrap around to the bottom of the grid
            if x >= 3:
                temp = int(grid[x][y]) * int(grid[x-1][y+1]) * int(grid[x-2][y+2]) * int(grid[x-3][y+3])
                if temp > highest:
                    highest = temp
                    foo, bar = x, y
        except IndexError:
            # runs that fall off the right or bottom edge are simply skipped
            pass
print(highest, foo, bar)
Explanation: Problem 11
In the 20×20 grid below, four numbers along a diagonal line have been marked in red.
08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08
49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00
81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65
52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91
22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80
24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50
32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70
67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21
24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72
21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95
78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92
16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57
86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58
19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40
04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66
88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69
04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36
20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16
20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54
01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48
The product of these numbers is 26 × 63 × 78 × 14 = 1788696.
What is the greatest product of four adjacent numbers in the same direction (up, down, left, right, or diagonally) in the 20×20 grid?
End of explanation
import math
def isPrime(anumber):
if anumber == 2:
return(True)
for x in range(2, int(math.sqrt(anumber))+1):
if anumber % x == 0:
return(False)
return(True)
top = 2000000
# top = 10
count, total = 1,2 # to account for the number 2
for x in range(3,top, 2):
if isPrime(x):
count +=1
total = total + x
print(count, total)
Explanation: Problem 10 The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17. Find the sum of all the primes below two million.
This took longer than I hoped. TODO - Think about optimizing; a sieve (sketched below) is the usual fix.
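A sieve of Eratosthenes sketch for the same sum; it marks composites once instead of trial-dividing every candidate, which is where the time went above.
def sum_primes_below(limit):
    is_prime = [True] * limit
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit, p):
                is_prime[multiple] = False
    return sum(i for i, flag in enumerate(is_prime) if flag)
print(sum_primes_below(2000000))   # 142913828922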
End of explanation
for m in range(100):
for n in range(m-1):
if (m* (m+n) == 500):
print("We made it! ", m, n)
if ((m*m) - (n*n)) + (2 * m * n) + ((m*m) + (n*n)) == 1000:
print("What what", m, n)
print("A = ", ((m*m) - (n*n)), "B = ", (2 * m * n), "C=",((m*m) + (n*n)) )
a = ((m*m) - (n*n))
b = (2 * m * n)
c = ((m*m) + (n*n))
print("Sum = ", a + b + c)
print("Product = ", a * b * c)
break
Explanation: Day 3 - December 30
Problem 9 A Pythagorean triplet is a set of three natural numbers, a < b < c, for which,
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000.
Find the product abc.
Note: 3+ 4+ 5 = 12 and 3x4x5 = 60
Note: https://www.youtube.com/watch?v=B2FLARYs3bo Pick any two numbers M and N (with M > N) and you get a Pythagorean Triple with A = M^2 - N^2, B = 2MN, C = M^2 + N^2
So .... (M^2 - N^2) + 2MN + (M^2 + N^2) = 1000, which simplifies to 2M(M + N) = 1000, i.e. M(M + N) = 500 (the condition the loop above checks).
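With that simplification the search collapses to a single loop over M (a sketch; it recovers the same triple as the nested loops above).
for m in range(2, 32):
    if 500 % m == 0:            # need m * (m + n) == 500
        n = 500 // m - m
        if 0 < n < m:
            a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
            print(a, b, c, a * b * c)   # 375 200 425 31875000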
End of explanation
test = '7316717653133062491922511967442657474235534919493496983520312774506326239578318016984801869478851843858615607891129494954595017379583319528532088055111254069874715852386305071569329096329522744304355766896648950445244523161731856403098711121722383113622298934233803081353362766142828064444866452387493035890729629049156044077239071381051585930796086670172427121883998797908792274921901699720888093776657273330010533678812202354218097512545405947522435258490771167055601360483958644670632441572215539753697817977846174064955149290862569321978468622482839722413756570560574902614079729686524145351004748216637048440319989000889524345065854122758866688116427171479924442928230863465674813919123162824586178664583591245665294765456828489128831426076900422421902267105562632111110937054421750694165896040807198403850962455444362981230987879927244284909188845801561660979191338754992005240636899125607176060588611646710940507754100225698315520005593572972571636269561882670428252483600823257530420752963450'
#test = '01234567899876543210'
length = len(test)
top = test[0:13]
topvalue = 0
for x in range(length):
temp = test[x:x+13]
product = 1
for y in temp:
product = product * int(y)
if product > topvalue:
topvalue = product
top = temp
print("Value ", topvalue, "string ", top )
Explanation: Problem 8 The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
Note: I think treating it like a string is the best
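A more compact sketch of the same string-based idea, sliding a 13-character window and using math.prod (Python 3.8+); test is the 1000-digit string defined in the code above.
import math
best = max(math.prod(int(d) for d in test[i:i + 13]) for i in range(len(test) - 12))
print(best)   # 23514624000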
End of explanation
import math
which = 10001
# which = 6
x, count = 1, 0
def isPrime(anumber):
if anumber == 2:
return(True)
for x in range(2, int(math.sqrt(anumber))+1):
if anumber % x == 0:
return(False)
return(True)
while count < which:
x += 1
if isPrime(x):
count += 1
print("The ", count, " prime is ", x)
Explanation: Problem 7 By listing the first six prime numbers: 2, 3, 5, 7, 11, and 13, we can see that the 6th prime is 13.
What is the 10 001st prime number?
End of explanation
num = 100
#num = 10
total1,total2 = 0,0
for x in range(num,0,-1):
total1 = total1 + x
total2 = total2 + (x*x)
print(total1* total1)
print(total2)
print((total1* total1) - total2)
Explanation: Day 2 - December ??
Problem 6 The sum of the squares of the first ten natural numbers is,
1^2 + 2^2 + ... + 10^2 = 385
The square of the sum of the first ten natural numbers is,
(1 + 2 + ... + 10)^2 = 55^2 = 3025
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 − 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
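The same difference also follows from two closed forms, 1 + 2 + ... + n = n(n+1)/2 and 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6, so no loop is needed (a sketch).
n = 100
square_of_sum = (n * (n + 1) // 2) ** 2
sum_of_squares = n * (n + 1) * (2 * n + 1) // 6
print(square_of_sum - sum_of_squares)   # 25164150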
End of explanation
import math
top = 20
# top = 10
total = 1
for x in range(top, 1, -1):
while total % x != 0:
foo = math.gcd(total, x)
total = int(total * (x/foo))
print(total)
Explanation: Problem 5 2520 is the smallest number that can be divided by each of the numbers from 1 to 10 without any remainder. What is the smallest positive number that is evenly divisible by all of the numbers from 1 to 20?
Note: Set of all the factors
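The answer is just lcm(1..20), so the same idea fits in one fold over math.gcd (a sketch; Python 3.9+ also has math.lcm built in).
from functools import reduce
from math import gcd
print(reduce(lambda a, b: a * b // gcd(a, b), range(1, 21), 1))   # 232792560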
End of explanation
tops = 999 * 999
bots = 698896 # 698896
digits = 3
# tops = 99 * 99
# bots = 676
# digits = 2
def isPalindromic(num):
string = str(num)
length = int(len(string)/2)
# print(" num, string, len ", num, string, length)
if string[length:][::-1] == string[:length]:
return(True)
return(False)
def isProductSize(num, digits):
results = set()
for i in range(1, int(num ** .5)+1):
div, mod = divmod(num, i)
if mod == 0:
results |= {i, div}
if len(str(i)) == digits and len(str(div)) == digits:
print(i, div)
return(True)
print('Tops', tops, 'bots ', bots, 'digits ', digits)
for x in range(tops, bots, -1 ):
if isPalindromic(x):
# print("Yep, a palidrom ", x)
if isProductSize(x, digits):
print("The Highest Palindrom is ", x)
break
isPalindromic(999999)
Explanation: Problem 4 A palindromic number reads the same both ways. The largest palindrome made from the product of two 2-digit numbers is 9009 = 91 × 99.
Find the largest palindrome made from the product of two 3-digit numbers.
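A sketch that drives the search from the 3-digit factors instead of counting down through candidates, which sidesteps the lower-bound guess (bots) entirely.
best = max(x * y
           for x in range(100, 1000)
           for y in range(x, 1000)
           if str(x * y) == str(x * y)[::-1])
print(best)   # 906609 = 913 * 993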
End of explanation
below = 1000
# below = 10
total = 0
for x in range(below):
if (x % 3 == 0) or (x % 5 == 0):
total = total + x
print(total)
Explanation: Day 1 - December 27
Problem 1 If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000.
You are the 715697th person to have solved this problem.
End of explanation
top = 4000000
# top = 89
first = 1
second = 2
total, last = 0 , 0
while last < top:
last = first + second
if last % 2 == 0:
total = total + last
first = second
second = last
print("Total evens ", total + 2)
print("last, second, first", last, second, first)
Explanation: Problem 2 Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
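A small sketch without the trailing + 2 fix-up: generate terms and accumulate the even ones as they appear.
a, b = 1, 2
total = 0
while b <= 4000000:
    if b % 2 == 0:
        total += b
    a, b = b, a + b
print(total)   # 4613732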
End of explanation
import math
mynum = 600851475143
# mynum = 13195
myprimes = []
top = math.floor(math.sqrt(mynum))  # floor is fine; note a number can have at most one prime factor above its square root, and this number does not
print("Can not be divisible by more than sqrt", top)
def isPrime(anumber):
if anumber < 2:
return(False)
for x in range(2, int(math.sqrt(anumber))+1): # the +1 is needed: range() excludes its stop value, so this way sqrt(anumber) itself gets tested
if anumber % x == 0:
return(False)
return(True)
for x in range(1, top, 2):
if isPrime(x):
if mynum % x == 0:
myprimes.append(x)
# print(myprimes)
print("The highest prime factor ", max(myprimes))
Explanation: Problem 3 The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ?
TODO: You could performance tune the hell out of this one, i.e. go through it in reverse until found, or strip factors out as you find them (sketched below).
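A sketch of the factor-stripping approach hinted at in the TODO: divide out each factor as soon as it is found, so whatever is left at the end is the largest prime factor and no separate primality test is needed.
def largest_prime_factor(n):
    factor = 2
    while factor * factor <= n:
        if n % factor == 0:
            n //= factor          # strip this factor out completely before moving on
        else:
            factor += 1
    return n
print(largest_prime_factor(600851475143))   # 6857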
End of explanation |
3,318 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filtering and plotting
Here we provide a quick example of how to filter and plot with your data, making the most of the capability provided by pyam's IamDataFrame.
Step1: Run MAGICC6.
Step2: Filter and plot results. | Python Code:
# NBVAL_IGNORE_OUTPUT
from pymagicc import MAGICC6
from pymagicc import rcp26
import matplotlib.pyplot as plt
plt.style.use("bmh")
Explanation: Filtering and plotting
Here we provide a quick example of how to filter and plot with your data, making the most of the capability provided by pyam's IamDataFrame.
End of explanation
with MAGICC6() as magicc:
results = magicc.run(rcp26)
# NBVAL_IGNORE_OUTPUT
# Note that row ordering may vary
results.head()
Explanation: Run MAGICC6.
End of explanation
plt.figure(figsize=(16, 9))
results.filter(variable="*Concentrations|CO2",).line_plot(x="time");
plt.figure(figsize=(16, 9))
results.filter(variable="*Temperature*").line_plot(x="time");
Explanation: Filter and plot results.
End of explanation |
3,319 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification of Organisms
Using a Digital Dichotomous Key
This Jupyter Notebook will allow you to search through different organisms based on their physical characteristics using a tool known as a Dichotomous Key. Start out by going to Kernel -> Restart & Run All to begin!
Step1: Pre-Questions
A Dichotomous Key is....
a tool that allows scientists to identify and classify organisms in the natural world. Based on their characteristics, scientists can narrow down species into groups such as trees, flowers, mammals, reptiles, rocks, and fish. A Dichotomous Key can help to understand how scientists have classified organisms using Binomial Nomenclature.
Dichotomous Key Video
Step2: PART 1
Step3: Importing the Organism Characteristics/Conditions
Step4: PART 2
Step5: Part 3 | Python Code:
# Import modules that contain functions we need
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Our data is the dichotomous key table and is defined as the word 'key'.
# key is set equal to the .csv file that is read by pandas.
# The .csv file must be in the same directory as the program.
#If the data is being pulled locally use the code that is commented out below
#key = pd.read_csv("Classification of Organisms- Jupyter Data.csv")
#key2 = pd.read_csv("Classification of Organisms- Jupyter Data KEY 2.csv")
key = pd.read_csv("https://gist.githubusercontent.com/GoodmanSciences/f4d51945a169ef3125234c57b878e058/raw/bebeaae8038f0b418ed37c2a98b82aa9d3cc38d1/Classification%2520of%2520Organisms-Jupyter%2520Data.csv")
key2 = pd.read_csv("https://gist.githubusercontent.com/GoodmanSciences/4060d993635e90cdcc46fe637c92ee37/raw/d9031747855b9762b239dea07a60254eaa6051f7/Classification%2520of%2520Organisms-%2520Jupyter%2520Data%2520KEY%25202.csv")
# This sets Organism as the index instead of numbers
#key = data.set_index("organism")
Explanation: Classification of Organisms
Using a Digital Dichotomous Key
This Jupyter Notebook will allow you to search through different organisms based on their physical characteristics using a tool known as a Dichotomous Key. Start out by going to Kernel -> Restart & Run All to begin!
End of explanation
# Here is a helpful image of a sample Dichotomous Key!
from IPython.display import Image
from IPython.core.display import HTML
Image(url= 'http://biology-igcse.weebly.com/uploads/1/5/0/7/15070316/8196495_orig.gif')
Explanation: Pre-Questions
A Dichotomous Key is....
a tool that allows scientists to identify and classify organisms in the natural world. Based on their characteristics, scientists can narrow down species into groups such as trees, flowers, mammals, reptiles, rocks, and fish. A Dichotomous Key can help to understand how scientists have classified organisms using Binomial Nomenclature.
Dichotomous Key Video
End of explanation
# Animal options in Dichotomous Key
# Displays all row titles as an array
key.organism
Explanation: PART 1: Sorting Organisms by One Characteristic
We will be looking at the characteristics of 75 unique organisms in our Dichotomous Key. The cell below will provide us with some of the possible organisms you may discover.
End of explanation
# Conditions/Questions for finding the correct animal
# Displays all column titles as an array
key.columns
Explanation: Importing the Organism Characteristics/Conditions
End of explanation
key[(key['decomposer'] == 'yes')]
# This conditional allows us to query a column and if the data within that cell matches it will display the animal(s).
#if you are unsure of what to put try making that column a comment by adding # in front of it.
key[
#physical characteristics
(key['fur'] == 'yes') & \
(key['feathers'] == 'no') & \
(key['poisonous'] == 'no') & \
(key['scales'] == 'no') & \
(key['multicellular'] == 'yes') & \
(key['fins'] == 'no') & \
(key['wings'] == 'no') & \
(key['vertebrate'] == 'yes') & \
#environmental characteristics
(key['marine'] == 'no') & \
(key['terrestrial'] == 'yes') & \
#feeding characteristics
#decomposers get their food by breaking down decaying organisms
(key['decomposer'] == 'no') & \
#carnivores get their food by eating animals
(key['carnivore'] == 'no') & \
#herbivores get their food by eating plants
(key['herbivore'] == 'yes') & \
#omnivores get their food by eating both plants and animals
(key['omnivore'] == 'no') & \
#photosynthesis is the process of making food using energy from sunlight
(key['photosynthesis'] == 'no') & \
#autotrophs are organisms that generate their own food inside themselves
(key['autotroph'] == 'no') & \
#possible kingdoms include: animalia, plantae, fungi
(key['kingdom'] == 'animalia') & \
#cell type
(key['eukaryotic'] == 'yes') & \
(key['prokaryotic'] == 'no')
]
Explanation: PART 2: Sorting Organisms by Many Characteristics
These are the conditions or characteristics by which certain answers are categorized for certain organisms. Each characteristic/condition has a yes/no answer except for the Kingdoms. Change the conditionals in the code below to change which organism(s) are displayed. For most, the only change needed is the 'yes' or 'no'.
Capitalization matters so be careful. You also must put in only allowed answers in every condition or the code will break!
End of explanation
#sort your organisms by their taxonomical classification
# This conditional allows us to query a column and if the data within that cell matches,
# it will display the corresponding animal(s)
key2[(key2['kingdom'] == 'animalia')]
#Done?? Insert a image for one of the organisms you found using the dichotomous key.
from IPython.display import Image
from IPython.core.display import HTML
Image(url= 'https://lms.mrc.ac.uk/wp-content/uploads/insert-pretty-picture-here1.jpg')
Explanation: Part 3: Scientific Classification of Organisms
End of explanation |
3,320 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting Example
Step1: Inject into the interpreter the functions.
Step2: Construct the histogram containing the input data
Step3: Create the function and try to fit it without setting any parameter
Step4: Less than optimal. Set parameters and fit again, draw histogram with error bars
Step5: Much better. Now time to beautify the plot. Construct a TF1 for the background and Lorentzian functions and draw them in the same canvas.
We save the fit results and set the parameters of the functions accordingly
Step6: We can now add a legend | Python Code:
import ROOT
Explanation: Fitting Example
End of explanation
%%cpp -d
//Define functions for fitting
// Quadratic background function
double background(double *x, double *par) {
return par[0] + par[1]*x[0] + par[2]*x[0]*x[0];
}
// Lorenzian Peak function
double lorentzianPeak(double *x, double *par) {
return (0.5*par[0]*par[1]/TMath::Pi()) /
TMath::Max(1.e-10,(x[0]-par[2])*(x[0]-par[2])
+ .25*par[1]*par[1]);
}
// Sum of background and peak function
double fitFunction(double *x, double *par) {
return background(x, par) + lorentzianPeak(x, &par[3]);
}
Explanation: Inject into the interpreter the functions.
End of explanation
nbins = 60
data = [ 6,1,10,12,6,13,23,22,15,21,
23,26,36,25,27,35,40,44,66,81,
75,57,48,45,46,41,35,36,53,32,
40,37,38,31,36,44,42,37,32,32,
43,44,35,33,33,39,29,41,32,44,
26,39,29,35,32,21,21,15,25,15 ]
xlow = 0
xup = 3
histo = ROOT.TH1F('histo', 'Lorentzian Peak on Quadratic Background', nbins, xlow, xup)
for i in range(nbins):
histo.SetBinContent(i+1, data[i])
Explanation: Construct the histogram containing the input data
End of explanation
%jsroot on
c = ROOT.TCanvas()
nparams = 6
fitFcn = ROOT.TF1('fitFcn', ROOT.fitFunction, xlow, xup, nparams)
histo.Fit(fitFcn)
c.Draw()
Explanation: Create the function and try to fit it without setting any parameter
End of explanation
fitFcn.SetParameter(4, 0.2); # width
fitFcn.SetParameter(5, 1); # peak
histo.Fit(fitFcn)
histo.Draw('E')
c.Draw()
Explanation: Less than optimal. Set parameters and fit again, draw histogram with error bars
End of explanation
pars = fitFcn.GetParameters()
backFcn = ROOT.TF1('backFcn', ROOT.background, xlow, xup, 3)
backFcn.SetLineColor(ROOT.kGreen)
backFcn.Draw('Same')
backFcn.SetParameters(pars[0], pars[1], pars[2])
c.Draw()
signalFcn = ROOT.TF1('signalFcn', ROOT.lorentzianPeak, xlow, xup, 3)
signalFcn.SetLineColor(ROOT.kBlue)
signalFcn.SetParameters(pars[3], pars[4], pars[5])
signalFcn.Draw('Same')
c.Draw()
Explanation: Much better. Now time to beautify the plot. Construct a TF1 for the background and Lorentzian functions and draw them in the same canvas.
We save the fit results and set the parameters of the functions accordingly
End of explanation
legend = ROOT.TLegend(0.45, 0.65, 0.73, 0.85)
legend.SetTextFont(72)
legend.SetTextSize(0.04)
legend.AddEntry(histo, 'Data', 'LE')
legend.AddEntry(backFcn, 'Background fit', 'L')
legend.AddEntry(signalFcn, 'Signal fit', 'L')
legend.AddEntry(fitFcn, 'Global Fit', 'L')
legend.Draw('Same')
c.Draw()
Explanation: We can now add a legend
End of explanation |
3,321 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Embedding a Bokeh server in a Notebook
This notebook shows how a Bokeh server application can be embedded inside a Jupyter notebook.
Step2: There are various application handlers that can be used to build up Bokeh documents. For example, there is a ScriptHandler that uses the code from a .py file to produce Bokeh documents. This is the handler that is used when we run bokeh serve app.py. In the notebook we can use a function to define a Bokeh application.
Here is the function bkapp(doc) that defines our app
Step3: Now we can display our application using show, which will automatically create an Application that wraps bkapp using FunctionHandler. The end result is that the Bokeh server will call bkapp to build new documents for every new sessions that is opened.
Note | Python Code:
import yaml
from bokeh.layouts import column
from bokeh.models import ColumnDataSource, Slider
from bokeh.plotting import figure
from bokeh.themes import Theme
from bokeh.io import show, output_notebook
from bokeh.sampledata.sea_surface_temperature import sea_surface_temperature
output_notebook()
Explanation: Embedding a Bokeh server in a Notebook
This notebook shows how a Bokeh server application can be embedded inside a Jupyter notebook.
End of explanation
def bkapp(doc):
df = sea_surface_temperature.copy()
source = ColumnDataSource(data=df)
plot = figure(x_axis_type='datetime', y_range=(0, 25),
y_axis_label='Temperature (Celsius)',
title="Sea Surface Temperature at 43.18, -70.43")
plot.line('time', 'temperature', source=source)
def callback(attr, old, new):
if new == 0:
data = df
else:
data = df.rolling('{0}D'.format(new)).mean()
source.data = ColumnDataSource.from_df(data)
slider = Slider(start=0, end=30, value=0, step=1, title="Smoothing by N Days")
slider.on_change('value', callback)
doc.add_root(column(slider, plot))
doc.theme = Theme(json=yaml.load("""
    attrs:
        Figure:
            background_fill_color: "#DDDDDD"
            outline_line_color: white
            toolbar_location: above
            height: 500
            width: 800
        Grid:
            grid_line_dash: [6, 4]
            grid_line_color: white
""", Loader=yaml.FullLoader))
Explanation: There are various application handlers that can be used to build up Bokeh documents. For example, there is a ScriptHandler that uses the code from a .py file to produce Bokeh documents. This is the handler that is used when we run bokeh serve app.py. In the notebook we can use a function to define a Bokeh application.
Here is the function bkapp(doc) that defines our app:
End of explanation
show(bkapp) # notebook_url="http://localhost:8888"
Explanation: Now we can display our application using show, which will automatically create an Application that wraps bkapp using FunctionHandler. The end result is that the Bokeh server will call bkapp to build new documents for every new session that is opened.
Note: If the current notebook is not displayed at the default URL, you must update the notebook_url parameter in the comment below to match, and pass it to show.
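For reference, the wrapping that show performs implicitly can be spelled out with Bokeh's application API (a sketch; the behavior is the same as show(bkapp)).
from bokeh.application import Application
from bokeh.application.handlers import FunctionHandler
app = Application(FunctionHandler(bkapp))
show(app)  # equivalent to show(bkapp); pass notebook_url here if needed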
End of explanation |
3,322 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a data-flickr-embed="true" href="https
Step1: You'll notice the hashing algorithm has already been applied by the time we import this JSON data. I'll be showing you the Python source code for scripts that aren't Jupyter Notebook scripts, for that part of the pipeline.
Step2: Remember how the .loc attribute uses enhanced slice notation ("enhanced" in the sense core Python does not support it). | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
dinos = pd.read_json("dino_hash.json")
Explanation: <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/27963484878/in/album-72157693427665102/" title="Barry at Large"><img src="https://farm1.staticflickr.com/969/27963484878_b38f0db42a_m.jpg" width="240" height="180" alt="Barry at Large"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
PYTHON FOR DATA SCIENCE
HASHING THE DINOS
This Notebook is admittedly a little bit weird in terms of the topics it mixes. We bring in a large number of dinosaur names, in the sense of species, as discovered from the fossil record, and perhaps from other records. However, this set of strings only serves to fuel the purely mathematical process of performing hashlib.sha256 magic on each one.
Think of dino names as passwords. We may consider these insecure, but let's not assume the game is that serious. For the purposes of today's exercise, they're secure enough.
However, just because you've picked a password does not mean a DBA needs to keep it in her database, where it might get stolen. Rather, a hash of your password serves as a deterministic fingerprint. Just save the fingerprint. No one with a wrong password will get through the gate.
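The fingerprinting step itself might look something like the sketch below; this is an assumption about the offline script (its real source lives elsewhere), and the name list and output file name are made up for illustration.
import hashlib
import json
def fingerprint(name):
    # deterministic fingerprint: same name always yields the same hex digest
    return hashlib.sha256(name.encode("utf-8")).hexdigest()
names = ["Tyrannosaurus", "Velociraptor", "Stegosaurus"]   # hypothetical stand-in list
with open("dino_hash_demo.json", "w") as f:
    json.dump({n: fingerprint(n) for n in names}, f, indent=2)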
End of explanation
dinos.head()
Explanation: You'll notice the hashing algorithm has already been applied by the time we import this JSON data. I'll be showing you the Python source code for scripts that aren't Jupyter Notebook scripts, for that part of the pipeline.
End of explanation
dinos.loc["Mo":"N"]
dinos.dtypes
dinos.index.is_unique
code = dinos.loc['Mtapaiasaurus'][0]
code
len(code)
dinos.info()
int(code, base=16)
0xafe4c2b017ed3996bf5f4f3b937f0ae22e649df2f620787e136ed6bd3ea32e2d
Explanation: Remember how the .loc attribute uses enhanced slice notation ("enhanced" in the sense core Python does not support it).
End of explanation |
3,323 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 3 - Learning PySpark
Resilient Distributed Datasets
Creating RDDs
There are two ways to create an RDD in PySpark. You can parallelize a list
Step1: or read from a repository (a file or a database)
Step2: Note, that to execute the code above you will have to change the path where the data is stored. The dataset can be downloaded from http
Step3: You can access the data in the object as you would normally do in Python.
Step4: Reading from files
When you read from a text file, each row from the file forms an element of an RDD.
Step5: User defined functions
You can create a longer method to transform your data instead of using the lambda expression.
Step6: Now, instead of using lambda we will use the extractInformation(...) method to split and convert our dataset.
Step7: Transformations
.map(...)
The method is applied to each element of the RDD
Step8: You can combine more columns.
Step9: .filter(...)
The .filter(...) method allows you to select elements of your dataset that fit specified criteria.
Step10: .flatMap(...)
The .flatMap(...) method works similarly to .map(...) but returns a flattened results instead of a list.
Step11: .distinct()
This method returns a list of distinct values in a specified column.
Step12: .sample(...)
The .sample() method returns a randomized sample from the dataset.
Step13: Let's confirm that we really got 10% of all the records.
Step14: .leftOuterJoin(...)
Left outer join, just like the SQL world, joins two RDDs based on the values found in both datasets, and returns records from the left RDD with records from the right one appended where the two RDDs match.
Step15: If we used .join(...) method instead we would have gotten only the values for 'a' and 'b' as these two values intersect between these two RDDs.
Step16: Another useful method is the .intersection(...) that returns the records that are equal in both RDDs.
Step17: .repartition(...)
Repartitioning the dataset changes the number of partitions the dataset is divided into.
Step18: Actions
.take(...)
The method returns n top rows from a single data partition.
Step19: If you want somewhat randomized records you can use .takeSample(...) instead.
Step20: .reduce(...)
Another action that processes your data, the .reduce(...) method reduces the elements of an RDD using a specified method.
Step21: If the reducing function is not associative and commutative you will sometimes get wrong results depending how your data is partitioned.
Step22: If we were to reduce the data in a manner where we divide the current result by the subsequent one, we would expect a value of 10
Step23: However, if you were to partition the data into 3 partitions, the result will be wrong.
Step24: The .reduceByKey(...) method works in a similar way to the .reduce(...) method but performs a reduction on a key-by-key basis.
Step25: .count()
The .count() method counts the number of elements in the RDD.
Step26: It has the same effect as the method below but does not require shifting the data to the driver.
Step27: If your dataset is in a form of a key-value you can use the .countByKey() method to get the counts of distinct keys.
Step28: .saveAsTextFile(...)
As the name suggests, the .saveAsTextFile() method takes the RDD and saves it to text files
Step29: To read it back, you need to parse it back since, as before, all the rows are treated as strings.
Step30: .foreach(...)
A method that applies the same function to each element of the RDD in an iterative way. | Python Code:
data = sc.parallelize(
[('Amber', 22), ('Alfred', 23), ('Skye',4), ('Albert', 12),
('Amber', 9)])
Explanation: Chapter 3 - Learning PySpark
Resilient Distributed Datasets
Creating RDDs
There are two ways to create an RDD in PySpark. You can parallelize a list
End of explanation
data_from_file = sc.\
textFile(
'/Users/drabast/Documents/PySpark_Data/VS14MORT.txt.gz',
4)
Explanation: or read from a repository (a file or a database)
End of explanation
data_heterogenous = sc.parallelize([('Ferrari', 'fast'), {'Porsche': 100000}, ['Spain','visited', 4504]]).collect()
data_heterogenous
Explanation: Note, that to execute the code above you will have to change the path where the data is stored. The dataset can be downloaded from http://tomdrabas.com/data/VS14MORT.txt.gz
Schema
RDDs are schema-less data structures.
End of explanation
data_heterogenous[1]['Porsche']
Explanation: You can access the data in the object as you would normally do in Python.
End of explanation
data_from_file.take(1)
Explanation: Reading from files
When you read from a text file, each row from the file forms an element of an RDD.
End of explanation
def extractInformation(row):
import re
import numpy as np
selected_indices = [
2,4,5,6,7,9,10,11,12,13,14,15,16,17,18,
19,21,22,23,24,25,27,28,29,30,32,33,34,
36,37,38,39,40,41,42,43,44,45,46,47,48,
49,50,51,52,53,54,55,56,58,60,61,62,63,
64,65,66,67,68,69,70,71,72,73,74,75,76,
77,78,79,81,82,83,84,85,87,89
]
'''
Input record schema
schema: n-m (o) -- xxx
n - position from
m - position to
o - number of characters
xxx - description
1. 1-19 (19) -- reserved positions
2. 20 (1) -- resident status
3. 21-60 (40) -- reserved positions
4. 61-62 (2) -- education code (1989 revision)
5. 63 (1) -- education code (2003 revision)
6. 64 (1) -- education reporting flag
7. 65-66 (2) -- month of death
8. 67-68 (2) -- reserved positions
9. 69 (1) -- sex
10. 70 (1) -- age: 1-years, 2-months, 4-days, 5-hours, 6-minutes, 9-not stated
11. 71-73 (3) -- number of units (years, months etc)
12. 74 (1) -- age substitution flag (if the age reported in positions 70-74 is calculated using dates of birth and death)
13. 75-76 (2) -- age recoded into 52 categories
14. 77-78 (2) -- age recoded into 27 categories
15. 79-80 (2) -- age recoded into 12 categories
16. 81-82 (2) -- infant age recoded into 22 categories
17. 83 (1) -- place of death
18. 84 (1) -- marital status
19. 85 (1) -- day of the week of death
20. 86-101 (16) -- reserved positions
21. 102-105 (4) -- current year
22. 106 (1) -- injury at work
23. 107 (1) -- manner of death
24. 108 (1) -- manner of disposition
25. 109 (1) -- autopsy
26. 110-143 (34) -- reserved positions
27. 144 (1) -- activity code
28. 145 (1) -- place of injury
29. 146-149 (4) -- ICD code
30. 150-152 (3) -- 358 cause recode
31. 153 (1) -- reserved position
32. 154-156 (3) -- 113 cause recode
33. 157-159 (3) -- 130 infant cause recode
34. 160-161 (2) -- 39 cause recode
35. 162 (1) -- reserved position
36. 163-164 (2) -- number of entity-axis conditions
37-56. 165-304 (140) -- list of up to 20 conditions
57. 305-340 (36) -- reserved positions
58. 341-342 (2) -- number of record axis conditions
59. 343 (1) -- reserved position
60-79. 344-443 (100) -- record axis conditions
80. 444 (1) -- reserve position
81. 445-446 (2) -- race
82. 447 (1) -- bridged race flag
83. 448 (1) -- race imputation flag
84. 449 (1) -- race recode (3 categories)
85. 450 (1) -- race recode (5 categories)
86. 461-483 (33) -- reserved positions
87. 484-486 (3) -- Hispanic origin
88. 487 (1) -- reserved
89. 488 (1) -- Hispanic origin/race recode
'''
record_split = re\
.compile(
r'([\s]{19})([0-9]{1})([\s]{40})([0-9\s]{2})([0-9\s]{1})([0-9]{1})([0-9]{2})' +
r'([\s]{2})([FM]{1})([0-9]{1})([0-9]{3})([0-9\s]{1})([0-9]{2})([0-9]{2})' +
r'([0-9]{2})([0-9\s]{2})([0-9]{1})([SMWDU]{1})([0-9]{1})([\s]{16})([0-9]{4})' +
r'([YNU]{1})([0-9\s]{1})([BCOU]{1})([YNU]{1})([\s]{34})([0-9\s]{1})([0-9\s]{1})' +
r'([A-Z0-9\s]{4})([0-9]{3})([\s]{1})([0-9\s]{3})([0-9\s]{3})([0-9\s]{2})([\s]{1})' +
r'([0-9\s]{2})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})([A-Z0-9\s]{7})' +
r'([A-Z0-9\s]{7})([\s]{36})([A-Z0-9\s]{2})([\s]{1})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})' +
r'([A-Z0-9\s]{5})([A-Z0-9\s]{5})([A-Z0-9\s]{5})([\s]{1})([0-9\s]{2})([0-9\s]{1})' +
r'([0-9\s]{1})([0-9\s]{1})([0-9\s]{1})([\s]{33})([0-9\s]{3})([0-9\s]{1})([0-9\s]{1})')
try:
rs = np.array(record_split.split(row))[selected_indices]
except:
rs = np.array(['-99'] * len(selected_indices))
return rs
# return record_split.split(row)
Explanation: User defined functions
You can create a longer method to transform your data instead of using the lambda expression.
End of explanation
data_from_file_conv = data_from_file.map(extractInformation)
data_from_file_conv.map(lambda row: row).take(1)
Explanation: Now, instead of using lambda we will use the extractInformation(...) method to split and convert our dataset.
End of explanation
data_2014 = data_from_file_conv.map(lambda row: int(row[16]))
data_2014.take(10)
Explanation: Transformations
.map(...)
The method is applied to each element of the RDD: in the case for the data_from_file_conv dataset you can think of this as a transformation of each row.
End of explanation
data_2014_2 = data_from_file_conv.map(lambda row: (row[16], int(row[16])))
data_2014_2.take(10)
Explanation: You can combine more columns.
End of explanation
data_filtered = data_from_file_conv.filter(lambda row: row[5] == 'F' and row[21] == '0')
data_filtered.count()
Explanation: .filter(...)
The .filter(...) method allows you to select elements of your dataset that fit specified criteria.
End of explanation
data_2014_flat = data_from_file_conv.flatMap(lambda row: (row[16], int(row[16]) + 1))
data_2014_flat.take(10)
Explanation: .flatMap(...)
The .flatMap(...) method works similarly to .map(...) but returns a flattened results instead of a list.
End of explanation
distinct_gender = data_from_file_conv.map(lambda row: row[5]).distinct().collect()
distinct_gender
Explanation: .distinct()
This method returns a list of distinct values in a specified column.
End of explanation
fraction = 0.1
data_sample = data_from_file_conv.sample(False, fraction, 666)
data_sample.take(1)
Explanation: .sample(...)
The .sample() method returns a randomized sample from the dataset.
End of explanation
print('Original dataset: {0}, sample: {1}'.format(data_from_file_conv.count(), data_sample.count()))
Explanation: Let's confirm that we really got 10% of all the records.
End of explanation
rdd1 = sc.parallelize([('a', 1), ('b', 4), ('c',10)])
rdd2 = sc.parallelize([('a', 4), ('a', 1), ('b', '6'), ('d', 15)])
rdd3 = rdd1.leftOuterJoin(rdd2)
rdd3.take(5)
Explanation: .leftOuterJoin(...)
Left outer join, just like the SQL world, joins two RDDs based on the values found in both datasets, and returns records from the left RDD with records from the right one appended where the two RDDs match.
End of explanation
rdd4 = rdd1.join(rdd2)
rdd4.collect()
Explanation: If we used .join(...) method instead we would have gotten only the values for 'a' and 'b' as these two values intersect between these two RDDs.
End of explanation
rdd5 = rdd1.intersection(rdd2)
rdd5.collect()
Explanation: Another useful method is the .intersection(...) that returns the records that are equal in both RDDs.
End of explanation
rdd1 = rdd1.repartition(4)
len(rdd1.glom().collect())
Explanation: .repartition(...)
Repartitioning the dataset changes the number of partitions the dataset is divided into.
End of explanation
data_first = data_from_file_conv.take(1)
data_first
Explanation: Actions
.take(...)
The method returns n top rows from a single data partition.
End of explanation
data_take_sampled = data_from_file_conv.takeSample(False, 1, 667)
data_take_sampled
Explanation: If you want somewhat randomized records you can use .takeSample(...) instead.
End of explanation
rdd1.map(lambda row: row[1]).reduce(lambda x, y: x + y)
Explanation: .reduce(...)
Another action that processes your data, the .reduce(...) method reduces the elements of an RDD using a specified method.
End of explanation
data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 1)
Explanation: If the reducing function is not associative and commutative you will sometimes get wrong results depending how your data is partitioned.
End of explanation
works = data_reduce.reduce(lambda x, y: x / y)
works
Explanation: If we were to reduce the data in a manner where we divide the current result by the subsequent one, we would expect a value of 10
End of explanation
data_reduce = sc.parallelize([1, 2, .5, .1, 5, .2], 3)
data_reduce.reduce(lambda x, y: x / y)
Explanation: However, if you were to partition the data into 3 partitions, the result will be wrong.
End of explanation
data_key = sc.parallelize([('a', 4),('b', 3),('c', 2),('a', 8),('d', 2),('b', 1),('d', 3)],4)
data_key.reduceByKey(lambda x, y: x + y).collect()
Explanation: The .reduceByKey(...) method works in a similar way to the .reduce(...) method but performs a reduction on a key-by-key basis.
End of explanation
data_reduce.count()
Explanation: .count()
The .count() method counts the number of elements in the RDD.
End of explanation
len(data_reduce.collect()) # WRONG -- DON'T DO THIS!
Explanation: It has the same effect as the method below but does not require shifting the data to the driver.
End of explanation
data_key.countByKey().items()
Explanation: If your dataset is in a form of a key-value you can use the .countByKey() method to get the counts of distinct keys.
End of explanation
data_key.saveAsTextFile('/Users/drabast/Documents/PySpark_Data/data_key.txt')
Explanation: .saveAsTextFile(...)
As the name suggests, the .saveAsTextFile() method takes the RDD and saves it to text files: each partition to a separate file.
End of explanation
def parseInput(row):
import re
pattern = re.compile(r'\(\'([a-z])\', ([0-9])\)')
row_split = pattern.split(row)
return (row_split[1], int(row_split[2]))
data_key_reread = sc \
.textFile('/Users/drabast/Documents/PySpark_Data/data_key.txt') \
.map(parseInput)
data_key_reread.collect()
Explanation: To read it back, you need to parse it back since, as before, all the rows are treated as strings.
End of explanation
def f(x):
print(x)
data_key.foreach(f)
Explanation: .foreach(...)
A method that applies the same function to each element of the RDD in an iterative way.
End of explanation |
3,324 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A1 Data Curation
The goal is to construct, analyze, and publish a dataset of monthly traffic on English Wikipedia from July 1 2008 - September 30 2017
The section below is to establish shared variables and library among different stages
Step1: Stage 1. Data Acquisition
Collect all months traffic data using two different Wikimedia REST API endpoints, Pagecounts and Pageviews, for both mobile and desktop (excluding spider bot).
Output is 5 JSON files
Step2: The first step is to query each Wikipedia Rest API endpoint using $wikipedia_query$ function with different parameter combination. Each response then saved to a json file using $store_to_json$ function. Print statements are used to debug the number of items the response returns with.
Two JSON files from two extra queries (all views for each API) are also produced to help with data processing in the next stage.
Stage 2. Data Processing
Process these data files to prepare them for analysis by combining each JSON file into a single CSV file with these headers
Step3: There are two basicways to combine number of views from pageviews and pagecounts API | Python Code:
import pprint
import requests
import json
# Global variables
pagecounts_url = 'https://wikimedia.org/api/rest_v1/metrics/legacy/{apiname}/aggregate/en.wikipedia.org/{access}/monthly/{start}/{end}'
pageviews_url = 'https://wikimedia.org/api/rest_v1/metrics/{apiname}/aggregate/en.wikipedia.org/{access}/{agent}/monthly/{start}/{end}'
my_github = 'reyadji'
my_email = '[email protected]'
default_params = {
'apiname': 'pageviews',
'access': 'mobile-web',
'agent': 'spider',
'start': '2008010100',
'end': '2016060100'}
json_files = []
def wikipedia_query(params=default_params):
headers = {
'User-Agent': 'https://github.com/{}'.format(my_github),
'From': my_email}
url = ''
if params['apiname'] is 'pageviews':
url = pageviews_url
params['start'] = '2015070100'
params['end'] = '2017100100'
elif params['apiname'] is 'pagecounts':
url = pagecounts_url
params['start'] = '2008010100'
params['end'] = '2016080100'
r = requests.get(url.format(**params), headers=headers)
# print('URL: {}'.format(r.uri))
print('Response status code: {}'.format(r.status_code))
# print('Response JSON: {}'.format(pprint.pprint(r.json())))
return r
def store_to_json(params, r):
params['firstmonth'] = params['start'][:-2]
params['lastmonth'] = params['end'][:-2]
filename = '{apiname}_{access}_{firstmonth}-{lastmonth}.json'.format(**params)
with open(filename, 'w+') as f:
json.dump(r.json()['items'], f, indent=4)
json_files.append(filename)
def load_json(file):
with open(file, 'r') as f:
return json.load(f)
Explanation: A1 Data Curation
The goal is to construct, analyze, and publish a dataset of monthly traffic on English Wikipedia from July 1 2008 - September 30 2017
The section below is to establish shared variables and library among different stages
End of explanation
pc_desk_params = {
'apiname': 'pagecounts',
'access': 'desktop-site',
'agent': ''}
r = wikipedia_query(pc_desk_params)
store_to_json(pc_desk_params, r)
print('Number of items: {}'.format(len(r.json()['items'])))
pc_mob_params = {
'apiname': 'pagecounts',
'access': 'mobile-site',
'agent': ''}
r = wikipedia_query(pc_mob_params)
store_to_json(pc_mob_params, r)
print('Number of items: {}'.format(len(r.json()['items'])))
pv_desk_params = {
'apiname': 'pageviews',
'access': 'desktop',
'agent': 'user'}
r = wikipedia_query(pv_desk_params)
store_to_json(pv_desk_params, r)
print('Number of items: {}'.format(len(r.json()['items'])))
pv_mobapp_params = {
'apiname': 'pageviews',
'access': 'mobile-app',
'agent': 'user'}
r = wikipedia_query(pv_mobapp_params)
store_to_json(pv_mobapp_params, r)
print('Number of items: {}'.format(len(r.json()['items'])))
pv_mobweb_params = {
'apiname': 'pageviews',
'access': 'mobile-web',
'agent': 'user'}
r = wikipedia_query(pv_mobweb_params)
store_to_json(pv_mobweb_params, r)
print('Number of items: {}'.format(len(r.json()['items'])))
print('JSON files: P{}'.format(json_files))
Explanation: Stage 1. Data Acquisition
Collect all months traffic data using two different Wikimedia REST API endpoints, Pagecounts and Pageviews, for both mobile and desktop (excluding spider bot).
Output is 5 JSON files:
1. pagecounts desktop
2. pagecounts mobile
3. pageview desktop
4. pageview mobile web
5. pageview mobile app
End of explanation
import csv
import pandas as pd
csv_file = 'en-wikipedia_traffic_200801-201709.csv'
headers = [
'timestamp',
'year',
'month',
'pagecounts_all_views',
'pagecounts_desktop_views',
'pagecounts_mobile_views',
'pageviews_all_views',
'pageviews_desktop_views',
'pageviews_mobile_views']
def load_df(file):
apiname = file.split('_')[0]
accesstype = file.split('_')[1].split('-')[0]
column = apiname + '_' + accesstype + '_views'
with open(file, 'r') as f:
views = json.load(f)
data = pd.DataFrame.from_dict(views)
if apiname == 'pageviews':
data = data.drop(['access','agent','granularity','project'], axis=1)
data = data.rename(columns = {'views': column})
# if 'mobile' in view['access']:
else:
data = data.drop(['access-site', 'granularity','project'], axis=1)
data = data.rename(columns = {'count': column})
return data
df = pd.DataFrame()
for i in json_files:
# Load json file to pandas dataframe
data = load_df(i)
if len(df) == 0:
df = data.copy(True)
else:
df = df.merge(data, on='timestamp', how='outer')
# Create year and month out of timestamp attribute
df = df.assign(year=df.timestamp.str[0:4])
df = df.assign(month=df.timestamp.str[4:6])
df.timestamp = df.timestamp.str[:-2]
# Combining two pageviews_mobile_views columns, one from mobile-app, and the other from mobile-web
df = df.assign(pageviews_mobile_views= lambda x: x.pageviews_mobile_views_x + x.pageviews_mobile_views_y)
df = df.drop(['pageviews_mobile_views_x', 'pageviews_mobile_views_y'], axis=1)
# Sum mobile and desktop to get all views
df = df.assign(pageviews_all_views= lambda x: x.pageviews_mobile_views + x.pageviews_desktop_views)
df = df.assign(pagecounts_all_views= lambda x: x.pagecounts_mobile_views + x.pagecounts_desktop_views)
df = df.fillna(value=0)
df.to_csv(csv_file, columns=headers, index = False)
Explanation: The first step is to query each Wikipedia REST API endpoint using the $wikipedia_query$ function with different parameter combinations. Each response is then saved to a JSON file using the $store_to_json$ function. Print statements are used to debug the number of items each response returns.
Two JSON files from two extra queries (all views for each API) are also produced to help with data processing in the next stage.
Stage 2. Data Processing
Process these data files to prepare them for analysis by combining each JSON file into a single CSV file with these headers:
- year
- month
- pagecount_all_views
- pagecount_desktop_views
- pagecount_mobile_views
- pageview_all_views
- pageview_desktop_views
- pageview_mobile_views
End of explanation
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('ggplot')
csv_file = 'en-wikipedia_traffic_200801-201709.csv'
# Read the CSV file to pandas dataframe in which the 'timestamp' column is the new index
df = pd.read_csv(csv_file, index_col=0, parse_dates=[0], infer_datetime_format=True)
# Drop year and month columns since it's not needed for plotting
df = df.drop(['year','month'], axis=1)
plt.figure()
df.plot()
plt.xlabel('datetime')
plt.ylabel('views (in 10 millions)')
plt.title('Wikipedia Traffic Data')
plt.legend()
plt.savefig('en-wikipedia_traffic_200801-201709.png')
plt.show()
Explanation: There are two basic ways to combine the number of views from the pageviews and pagecounts APIs: using the pandas library or using only Python's built-in library. I found it very difficult to pivot from the JSON files to year and month using only the built-in library. On the other hand, pandas provides a nice way to munge the data. Each JSON file is loaded into a pandas dataframe, all of which are merged on timestamp into a single dataframe. Year and month are derived from the timestamp, and pageviews' mobile app and mobile web numbers are combined. Mobile and desktop views are summed to get pagecounts' and pageviews' all views. Finally, I replaced all non-existent values with 0 before saving the dataframe to a CSV file.
Stage 3. Analysis
Visualize the dataset as a time series graph. This will include mobile, desktop, and combined (mobile+desktop) traffic.
End of explanation |
3,325 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook arguments
sigma (float)
Step1: Fitting models
Models used to fit the data.
1. Simple Exponential
In this model, we define the model function as an exponential transient
Step2: 2. Integrated Exponential
A more realistic model needs to take into account that each data point
is the result of an integration over a time window $w$
Step3: Generative model
These are the models used to generate the simulated (noisy) data.
1. Simple Exponential + Noise
In this simple model, we simulate random data $Y$ as an exponential decay plus
additive Gaussian noise
Step4: An ideal transient (no noise, no integration)
Step5: A simulated transient (including noise + integration)
Step6: Plot the computed curves
Step7: Fit data
Fit the "Integrated Exponential" model
Step8: Fit the "Simple Exponential" model
Step9: Print and plot fit results
Step10: Monte-Carlo Simulation
Here, having fixed the model parameters, we generate and fit several noisy datasets. Then, by plotting the distribution of the fitted parameters, we assess the stability and accuracy of the fit.
Parameters
The number of simulation cycles is defined by num_sim_cycles. Current value is
Step11: The fixed kinetic curve parameters are
Step12: While tau is varied, taking the following values
Step13: <div class="alert alert-info">
**NOTE**
Step14: Run Monte-Carlo simulation
Run the Monte-Carlo fit for a set of different time-constants (taus)
and save results in two DataFrames, one for each model.
Step15: <div class="alert alert-danger">
**WARNING**
Step16: Results2 - Integrated Exponential | Python Code:
%matplotlib inline
import numpy as np
import lmfit
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import models # custom module
Explanation: Notebook arguments
sigma (float): standard deviation of additive Gaussian noise to be simulated
time_window (float): seconds, integration window duration
time_step (float): seconds, time step for the moving integration window
time_start (float): seconds, start of time axis (kinetics starts at t = t0).
time_stop (float): seconds, stop of time axis (kinetics starts at t = t0).
t0_vary (bool): whether models should vary the curve origin (t0) during the fit
true_params (dict): parameters used to generate simulated kinetic curves
num_sim_cycles (int): number of times fit is repeated (Monte-Carlo)
taus (tuple): list of values for the time-constant tau simulated during repeated fits (Monte-Carlo).
Simulated Kinetic Curve Fit
<p class=lead>This notebook fits simulated exponential transients with additive Gaussian noise in order to study time-constant fitting accuracy.
In particular we compare a simple exponential model with a more realistic model
with integration window, checking the effect on the fit results.
<p>
You can either run this notebook directly, or run it through the [master notebook](Simulated Kinetic Curve Fit - Run-All.ipynb) for batch processing.
## Imports
End of explanation
labels = ('tau', 'init_value', 'final_value')
model = models.factory_model_exp(t0_vary=True)
Explanation: Fitting models
Models used to fit the data.
1. Simple Exponential
In this model, we define the model function as an exponential transient:
$$ y = f(t) = A \cdot e^{-t/\tau} + K$$
The python function implementing it is:
models.exp_func().
The next cell defines and initializes the fitting model (lmfit.model.Model) including the parameters' constraints:
End of explanation
modelw = models.factory_model_expwin(t_window=time_window, decimation=decimation, t0_vary=t0_vary)
Explanation: 2. Integrated Exponential
A more realistic model needs to take into account that each data point
is the result of an integration over a time window $w$:
$$f(t) = A \cdot e^{-t/\tau} + K$$
$$y(t) = \int_{t}^{t+w} f(t')\;dt'$$
In other words, when we process a measurement in time chunks, we are integrating
a non-stationary signal $f(t)$ over a time window $w$. This integration causes
a smoothing of $f(t)$, regardless of the fact that time is binned or
is swiped-through with a moving windows (overlapping chunks).
Numerically, $t$ is discretized with step equal to (time_step / decimation).
The python function implementing this model function is:
models.expwindec_func().
And, finally, we define and initialize the fitting model parameters' constraints:
End of explanation
t = np.arange(time_start, time_stop-time_window, time_step).astype(float)
t.size
Explanation: Generative model
These are the models used to generate the simulates (noisy) data.
1. Simple Exponential + Noise
In this simple model, we simulate random data $Y$ as an exponential decay plus
additive Gaussian noise:
$$ Y(t_k) = f(t_k) + N_k $$
$$ {N_k} \sim {\rm Normal}\{\mu=0;\ \sigma\}$$
$$ \Delta t = t_k - t_{k-1} = \texttt{time_step}$$
2. Integrated Exponential + Noise
For the "integrating window" model, we first define a finer time axis $\theta_i$
which oversamples $t_k$ by a factor $n$. Then we define the function $Y_f$
adding Gaussian noise $\sqrt{n}\,N_i$, with $n$ times larger variance:
$$ Y_f(\theta_i) = f(\theta_i) + \sqrt{n}\,N_i $$
$$ \Delta \theta = \theta_i - \theta_{i-1} = \texttt{time_step} \;/\; n$$
Finally, by averaging each time window, we compute the data on the coarse time axis $t_k$:
$$ Y_w(t_k) = \frac{1}{m}\sum_{i} Y_f(\theta_i)$$
Here, for each $t_k$, we compute the mean of $m$ consecutive $Y_f$ values. The number $m$
is chosen so that $m\, \Delta \theta$ is equal to the time window.
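As a rough illustration of this windowed-averaging step (my own sketch only: the oversampling factor and parameter values below are made up, and the real computation is done by models.expwindec_func()):
import numpy as np
n_over = 10                                         # hypothetical oversampling factor n
theta = np.arange(0.0, 100.0, 1.0 / n_over)         # fine time axis theta_i
y_fine = np.exp(-theta / 60.0) + np.sqrt(n_over) * 0.016 * np.random.randn(theta.size)
y_coarse = y_fine.reshape(-1, n_over).mean(axis=1)  # average m consecutive samples per window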
Noise amplitude
The amplitude of the additive noise ($\sigma$) is estimated from the experimental kinetic curves.
In particular we take the variance from the POST period (i.e. the steady state period after the transient).
The POST period has been chosen because it exhibits higher variance than the PRE period (i.e. the steady state period
before the transient). These values have been calculated in 8-spot bubble-bubble kinetics - Summary.
In both models we define the noise amplitude as sigma (see first cell):
sigma = 0.016
Time axis
We also define the parameters for the time axis $t$:
time_start = -900 # seconds
time_stop = 900 # seconds
time_step = 5 # seconds
Kinetic curve parameters
The simulated kinetic curve has the following parameters:
true_params = dict(
tau = 60, # time constant
init_value = 0.3, # initial value (for t < t0)
final_value = 0.8, # final value (for t -> +inf)
t0 = 0) # time origin
<div class="alert alert-info">
**NOTE**: All previous parameters are defined in the first notebook cell.
</div>
Single kinetic curve fit
Here we simulate one kinetic curve and fit it with the two models (simple exponential and integrated exponential).
Draw simulated data
Time axis for simulated data:
End of explanation
y = models.expwindec_func(t, t_window=time_window, **true_params)
y.shape
Explanation: An ideal transient (no noise, no integration):
End of explanation
time_window, time_step
yr = models.expwindec_func(t, t_window=time_window, sigma=sigma, **true_params)
yr.shape
Explanation: A simulated transient (including noise + integration):
End of explanation
plt.plot(t, y, '-', label='model')
plt.plot(t, yr, 'o', label='model + noise')
Explanation: Plot the computed curves:
End of explanation
#%%timeit
resw = modelw.fit(yr, t=t, tau=10, init_value=0.1, final_value=0.9, verbose=False)
Explanation: Fit data
Fit the "Integrated Exponential" model:
End of explanation
#%%timeit
res = model.fit(yr, t=t + 0.5*time_window, tau=10, init_value=0.1, final_value=0.9, verbose=False)
Explanation: Fit the "Simple Exponential" model:
End of explanation
fig = plt.figure(figsize=(14, 8))
res.plot(fig=fig)
ci = lmfit.conf_interval(res, res)
lmfit.report_fit(res)
print(lmfit.ci_report(ci, with_offset=False))
#plt.xlim(-300, 300)
fig = plt.figure(figsize=(14, 8))
resw.plot(fig=fig)
ci = lmfit.conf_interval(resw, resw)
lmfit.report_fit(resw)
print(lmfit.ci_report(ci, with_offset=False))
#plt.xlim(-300, 300)
Explanation: Print and plot fit results:
End of explanation
num_sim_cycles
Explanation: Monte-Carlo Simulation
Here, with the model parameters fixed, we generate and fit several noisy datasets. Then, by plotting the distribution of the fitted parameters, we assess the stability and accuracy of the fit.
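A minimal way to summarize one such distribution, once a results DataFrame is available, is to compare its mean and spread against the true value (a sketch using the df and true_params variables defined later in this notebook):
bias = df['tau'].astype(float).mean() - true_params['tau']
spread = df['tau'].astype(float).std()
print('tau bias = %.2f, spread = %.2f' % (bias, spread))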
Parameters
The number of simulation cycles is defined by num_sim_cycles. Current value is:
End of explanation
{k: v for k, v in true_params.items() if k != "tau"}
Explanation: The fixed kinetic curve parameters are:
End of explanation
taus
t0_vary
Explanation: While tau is varied, taking the following values:
End of explanation
def draw_samples_and_fit(true_params):
# Create the data
t = np.arange(time_start, time_stop-time_window, time_step).astype(float)
yr = models.expwindec_func(t, t_window=time_window, sigma=sigma, decimation=100, **true_params)
# Fit the model
tc = t + 0.5*time_window
kws = dict(fit_kws=dict(nan_policy='omit'), verbose=False)
res = model.fit(yr, t=tc, tau=90, method='nelder', **kws)
res = model.fit(yr, t=tc, **kws)
resw = modelw.fit(yr, t=t, tau=400, decimation=decimation, method='nelder', **kws)
resw = modelw.fit(yr, t=t, decimation=decimation, **kws)
return res, resw
def monte_carlo_sim(true_params, N):
df1 = pd.DataFrame(index=range(N), columns=labels)
df2 = df1.copy()
for i in range(N):
res1, res2 = draw_samples_and_fit(true_params)
for var in labels:
df1.loc[i, var] = res1.values[var]
df2.loc[i, var] = res2.values[var]
return df1, df2
Explanation: <div class="alert alert-info">
**NOTE**: All previous parameters are defined in the first notebook cell.
</div>
Functions
Here we define two functions:
draw_samples_and_fit() draws a set of data and fits it with both models
monte_carlo_sim() runs the Monte-Carlo simulation: it calls draw_samples_and_fit() many times.
NOTE: Global variables are used by previous functions.
End of explanation
mc_results1, mc_results2 = [], []
%%timeit -n1 -r1 # <-- prints execution time
for tau in taus:
true_params['tau'] = tau
df1, df2 = monte_carlo_sim(true_params, num_sim_cycles)
mc_results1.append(df1)
mc_results2.append(df2)
Explanation: Run Monte-Carlo simulation
Run the Monte-Carlo fit for a set of different time-constants (taus)
and save results in two DataFrames, one for each model.
End of explanation
for tau, df in zip(taus, mc_results1):
true_params['tau'] = tau
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
for i, var in enumerate(labels):
std = df[var].std()
df[var].hist(bins=30, ax=ax[i])
ax[i].set_title("%s = %.1f (%.3f)" % (var, true_params[var], std), fontsize=18)
ax[i].axvline(true_params[var], color='r', ls='--')
#print('True parameters: %s' % true_params)
Explanation: <div class="alert alert-danger">
**WARNING**: The previous cell can take a long time to execute. Execution time scales with **`num_sim_cycles * len(taus)`**.
</div>
Results1 - Simple Exponential
End of explanation
for tau, df in zip(taus, mc_results2):
true_params['tau'] = tau
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
for i, var in enumerate(labels):
std = df[var].std()
df[var].hist(bins=30, ax=ax[i])
ax[i].set_title("%s = %.1f (%.3f)" % (var, true_params[var], std), fontsize=18)
ax[i].axvline(true_params[var], color='r', ls='--')
#print('True parameters: %s' % true_params)
Explanation: Results2 - Integrated Exponential
End of explanation |
3,326 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to NumPy
Topics
Basic Synatx
creating vectors matrices
special
Step1: This code sets up Ipython Notebook environments (lines beginning with %), and loads several libraries and functions. The core scientific stack in python consists of a number of free libraries. The ones I have loaded above include
Step2: Vectors and Lists
The numpy library (we will reference it by np) is the workhorse library for linear algebra in python. To creat a vector simply surround a python list ($[1,2,3]$) with the np.array function
Step3: We could have done this by defining a python list and converting it to an array
Step4: Matrix Addition and Subtraction
Adding or subtracting a scalar value to a matrix
To learn the basics, consider a small matrix of dimension $2 \times 2$, where $2 \times 2$ denotes the number of rows $\times$ the number of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Consider adding a scalar value (e.g. 3) to the A.
$$
\begin{equation}
A+3=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}+3
=\begin{bmatrix}
a_{11}+3 & a_{12}+3 \
a_{21}+3 & a_{22}+3
\end{bmatrix}
\end{equation}
$$
The same basic principle holds true for A-3
Step5: Adding or subtracting two matrices
Consider two small $2 \times 2$ matrices, where $2 \times 2$ denotes the # of rows $\times$ the # of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$ and $B$=$\bigl( \begin{smallmatrix} b_{11} & b_{12} \ b_{21} & b_{22} \end{smallmatrix} \bigr)$. To find the result of $A-B$, simply subtract each element of A with the corresponding element of B
Step6: Matrix Multiplication
Multiplying a scalar value times a matrix
As before, let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Suppose we want to multiply A times a scalar value (e.g. $3 \times A$)
$$
\begin{equation}
3 \times A = 3 \times \begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}
=
\begin{bmatrix}
3a_{11} & 3a_{12} \
3a_{21} & 3a_{22}
\end{bmatrix}
\end{equation}
$$
is of dimension (2,2). Scalar multiplication is commutative, so that $3 \times A$=$A \times 3$. Notice that the product is defined for a matrix A of any dimension.
Similar to scalar addition and subtration, the code is simple
Step7: Multiplying two matricies
Now, consider the $2 \times 1$ vector $C=\bigl( \begin{smallmatrix} c_{11} \
c_{21}
\end{smallmatrix} \bigr)$
Consider multiplying matrix $A_{2 \times 2}$ and the vector $C_{2 \times 1}$. Unlike the addition and subtraction case, this product is defined. Here, conformability depends not on the row and column dimensions, but rather on the column dimensions of the first operand and the row dimensions of the second operand. We can write this operation as follows
$$
\begin{equation}
A_{2 \times 2} \times C_{2 \times 1} =
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}_{2 \times 2}
\times
\begin{bmatrix}
c_{11} \
c_{21}
\end{bmatrix}_{2 \times 1}
=
\begin{bmatrix}
a_{11}c_{11} + a_{12}c_{21} \
a_{21}c_{11} + a_{22}c_{21}
\end{bmatrix}_{2 \times 1}
\end{equation}
$$
Alternatively, consider a matrix C of dimension $2 \times 3$ and a matrix A of dimension $3 \times 2$
$$
\begin{equation}
A_{3 \times 2}=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}_{3 \times 2}
,
C_{2 \times 3} =
\begin{bmatrix}
c_{11} & c_{12} & c_{13} \
c_{21} & c_{22} & c_{23} \
\end{bmatrix}_{2 \times 3}
\end{equation}
$$
Here, A $\times$ C is
$$
\begin{align}
A_{3 \times 2} \times C_{2 \times 3}=&
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}_{3 \times 2}
\times
\begin{bmatrix}
c_{11} & c_{12} & c_{13} \
c_{21} & c_{22} & c_{23}
\end{bmatrix}_{2 \times 3} \
=&
\begin{bmatrix}
a_{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \
a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \
a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23}
\end{bmatrix}_{3 \times 3}
\end{align}
$$
So in general, $X_{r_x \times c_x} \times Y_{r_y \times c_y}$ we have two important things to remember
Step8: We will use the numpy dot operator to perform the these multiplications. You can use it two ways to yield the same result
Step9: Matrix Division
The term matrix division is actually a misnomer. To divide in a matrix algebra world we first need to invert the matrix. It is useful to consider the analog case in a scalar work. Suppose we want to divide the $f$ by $g$. We could do this in two different ways
Step10: Check that $C\times C^{-1} = I$
Step11: Transposing a Matrix
At times it is useful to pivot a matrix for conformability- that is in order to matrix divide or multiply, we need to switch the rows and column dimensions of matrices. Consider the matrix
$$
\begin{equation}
A_{3 \times 2}=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}_{3 \times 2}
\end{equation}
$$
The transpose of A (denoted as $A^{\prime}$) is
$$
\begin{equation}
A^{\prime}=\begin{bmatrix}
a_{11} & a_{21} & a_{31} \
a_{12} & a_{22} & a_{32} \
\end{bmatrix}_{2 \times 3}
\end{equation}
$$
Step12: One important property of transposing a matrix is the transpose of a product of two matrices. Let matrix A be of dimension $N \times M$ and let B of of dimension $M \times P$. Then
$$
\begin{equation}
(AB)^{\prime}=B^{\prime}A^{\prime}
\end{equation}
$$
For more information, see this http
Step13: Mechanics
Indexing and Slicing
examples from https
Step14: Logic, Comparison
Step15: Concatenate, Reshape
Step16: Random Numbers
Step17: Numpy load, save data files
Step18: Similarity
Step19: Example
Step20: Let's add the bias, i.e. a column of $1$s to the explanatory variables
Step21: Closed-form Linear Regression
And compute the parameters $\beta_0$ and $\beta_1$ according to
$$ \beta = (X^\prime X)^{-1} X^\prime y $$
Note
Step22: Multiple Linear Regression
Step23: Evaluation
Step24: Regularization, Ridge-Regression
Regularization, in mathematics and statistics and particularly in the fields of machine learning and inverse problems, is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.
In general, a regularization term $R(f)$ is introduced to a general loss function | Python Code:
%matplotlib inline
import math
import numpy as np
import matplotlib.pyplot as plt
##import seaborn as sbn
##from scipy import *
Explanation: Introduction to NumPy
Topics
Basic Synatx
creating vectors matrices
special: ones, zeros, identity eye
add, product, inverse
Mechanics: indexing, slicing, concatenating, reshape, zip
Numpy load, save data files
Random numbers $\rightarrow$ distributions
Similarity: Euclidean vs Cosine
Example Nearest Neighbor search
Example Linear Regression
References
Quick Start Tutorial https://docs.scipy.org/doc/numpy-dev/user/quickstart.html
NumPy Basic https://docs.scipy.org/doc/numpy-dev/user/basics.html
NumPy Reference https://docs.scipy.org/doc/numpy-dev/reference/index.html
Basics
this section uses content created by Rob Hicks http://rlhick.people.wm.edu/stories/linear-algebra-python-basics.html
Loading libraries
The python universe has a huge number of libraries that extend the capabilities of python. Nearly all of these are open source, unlike packages like stata or matlab where some key libraries are proprietary (and can cost lots of money). In lots of my code, you will see this at the top:
End of explanation
x = .5
print x
Explanation: This code sets up Ipython Notebook environments (lines beginning with %), and loads several libraries and functions. The core scientific stack in python consists of a number of free libraries. The ones I have loaded above include:
sympy: provides for symbolic computation (solving algebra problems)
numpy: provides for linear algebra computations
matplotlib.pyplot: provides for the ability to graph functions and draw figures
scipy: scientific python provides a plethora of capabilities
seaborn: makes matplotlib figures even prettier (another library like this is called bokeh). This is entirely optional and is purely for eye candy.
Creating arrays, scalars, and matrices in Python
Scalars can be created easily like this:
End of explanation
x_vector = np.array([1,2,3])
print x_vector
print type(x_vector)
Explanation: Vectors and Lists
The numpy library (we will reference it by np) is the workhorse library for linear algebra in python. To create a vector simply surround a python list ($[1,2,3]$) with the np.array function:
End of explanation
c_list = [1,2]
print "The list:",c_list
print "Has length:", len(c_list)
c_vector = np.array(c_list)
print "The vector:", c_vector
print "Has shape:",c_vector.shape
z = [5,6]
print "This is a list, not an array:",z
print type(z)
A = np.array([[0, 1, 2], [5, 6, 7]])
print A.shape
print type(A)
v = np.array([1,2,3,4])
print v.shape
print len(v)
Explanation: We could have done this by defining a python list and converting it to an array:
End of explanation
A
result = A + 3
#or
result = 3 + A
print result
Explanation: Matrix Addition and Subtraction
Adding or subtracting a scalar value to a matrix
To learn the basics, consider a small matrix of dimension $2 \times 2$, where $2 \times 2$ denotes the number of rows $\times$ the number of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Consider adding a scalar value (e.g. 3) to the A.
$$
\begin{equation}
A+3=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}+3
=\begin{bmatrix}
a_{11}+3 & a_{12}+3 \
a_{21}+3 & a_{22}+3
\end{bmatrix}
\end{equation}
$$
The same basic principle holds true for A-3:
$$
\begin{equation}
A-3=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}-3
=\begin{bmatrix}
a_{11}-3 & a_{12}-3 \
a_{21}-3 & a_{22}-3
\end{bmatrix}
\end{equation}
$$
Notice that we add (or subtract) the scalar value to each element in the matrix A. A can be of any dimension.
This is trivial to implement, now that we have defined our matrix A:
End of explanation
B = np.random.randn(2,2)
print B
A = np.array([[1,0], [0,1]])
A
A*B
Explanation: Adding or subtracting two matrices
Consider two small $2 \times 2$ matrices, where $2 \times 2$ denotes the # of rows $\times$ the # of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$ and $B$=$\bigl( \begin{smallmatrix} b_{11} & b_{12} \ b_{21} & b_{22} \end{smallmatrix} \bigr)$. To find the result of $A-B$, simply subtract each element of A with the corresponding element of B:
$$
\begin{equation}
A -B =
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix} -
\begin{bmatrix} b_{11} & b_{12} \
b_{21} & b_{22}
\end{bmatrix}
=
\begin{bmatrix}
a_{11}-b_{11} & a_{12}-b_{12} \
a_{21}-b_{21} & a_{22}-b_{22}
\end{bmatrix}
\end{equation}
$$
Addition works exactly the same way:
$$
\begin{equation}
A + B =
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix} +
\begin{bmatrix} b_{11} & b_{12} \
b_{21} & b_{22}
\end{bmatrix}
=
\begin{bmatrix}
a_{11}+b_{11} & a_{12}+b_{12} \
a_{21}+b_{21} & a_{22}+b_{22}
\end{bmatrix}
\end{equation}
$$
An important point to know about matrix addition and subtraction is that it is only defined when $A$ and $B$ are of the same size. Here, both are $2 \times 2$. Since operations are performed element by element, these two matrices must be conformable- and for addition and subtraction that means they must have the same numbers of rows and columns. I like to be explicit about the dimensions of matrices for checking conformability as I write the equations, so write
$$
A_{2 \times 2} + B_{2 \times 2}= \begin{bmatrix}
a_{11}+b_{11} & a_{12}+b_{12} \
a_{21}+b_{21} & a_{22}+b_{22}
\end{bmatrix}_{2 \times 2}
$$
Notice that the result of a matrix addition or subtraction operation is always of the same dimension as the two operands.
Let's define another matrix, B, that is also $2 \times 2$ and add it to A:
End of explanation
A * 3
Explanation: Matrix Multiplication
Multiplying a scalar value times a matrix
As before, let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Suppose we want to multiply A times a scalar value (e.g. $3 \times A$)
$$
\begin{equation}
3 \times A = 3 \times \begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}
=
\begin{bmatrix}
3a_{11} & 3a_{12} \
3a_{21} & 3a_{22}
\end{bmatrix}
\end{equation}
$$
is of dimension (2,2). Scalar multiplication is commutative, so that $3 \times A$=$A \times 3$. Notice that the product is defined for a matrix A of any dimension.
Similar to scalar addition and subtration, the code is simple:
End of explanation
# Let's redefine A and C to demonstrate matrix multiplication:
A = np.arange(6).reshape((3,2))
C = np.random.randn(2,2)
print A.shape
print C.shape
Explanation: Multiplying two matrices
Now, consider the $2 \times 1$ vector $C=\bigl( \begin{smallmatrix} c_{11} \
c_{21}
\end{smallmatrix} \bigr)$
Consider multiplying matrix $A_{2 \times 2}$ and the vector $C_{2 \times 1}$. Unlike the addition and subtraction case, this product is defined. Here, conformability depends not on the row and column dimensions, but rather on the column dimensions of the first operand and the row dimensions of the second operand. We can write this operation as follows
$$
\begin{equation}
A_{2 \times 2} \times C_{2 \times 1} =
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}_{2 \times 2}
\times
\begin{bmatrix}
c_{11} \
c_{21}
\end{bmatrix}_{2 \times 1}
=
\begin{bmatrix}
a_{11}c_{11} + a_{12}c_{21} \
a_{21}c_{11} + a_{22}c_{21}
\end{bmatrix}_{2 \times 1}
\end{equation}
$$
Alternatively, consider a matrix C of dimension $2 \times 3$ and a matrix A of dimension $3 \times 2$
$$
\begin{equation}
A_{3 \times 2}=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}_{3 \times 2}
,
C_{2 \times 3} =
\begin{bmatrix}
c_{11} & c_{12} & c_{13} \
c_{21} & c_{22} & c_{23} \
\end{bmatrix}_{2 \times 3}
\end{equation}
$$
Here, A $\times$ C is
$$
\begin{align}
A_{3 \times 2} \times C_{2 \times 3}=&
\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}_{3 \times 2}
\times
\begin{bmatrix}
c_{11} & c_{12} & c_{13} \
c_{21} & c_{22} & c_{23}
\end{bmatrix}_{2 \times 3} \
=&
\begin{bmatrix}
a_{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \
a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \
a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23}
\end{bmatrix}_{3 \times 3}
\end{align}
$$
So in general, $X_{r_x \times c_x} \times Y_{r_y \times c_y}$ we have two important things to remember:
For conformability in matrix multiplication, $c_x=r_y$, or the columns in the first operand must be equal to the rows of the second operand.
The result will be of dimension $r_x \times c_y$, or of dimensions equal to the rows of the first operand and columns equal to columns of the second operand.
Given these facts, you should convince yourself that matrix multiplication is not generally commutative, that the relationship $X \times Y = Y \times X$ does not hold in all cases.
For this reason, we will always be very explicit about whether we are pre multiplying ($X \times Y$) or post multiplying ($Y \times X$) the vectors/matrices $X$ and $Y$.
For more information on this topic, see this
http://en.wikipedia.org/wiki/Matrix_multiplication.
End of explanation
print A.dot(C)
print np.dot(A,C)
# What would happen to C.dot(A)? The shapes (2,2) and (3,2) are not conformable, so this raises an error:
C.dot(A)
Explanation: We will use the numpy dot operator to perform the these multiplications. You can use it two ways to yield the same result:
End of explanation
# note, we need a square matrix (# rows = # cols), use C:
C_inverse = np.linalg.inv(C)
print C_inverse
Explanation: Matrix Division
The term matrix division is actually a misnomer. To divide in a matrix algebra world we first need to invert the matrix. It is useful to consider the analog case in a scalar work. Suppose we want to divide the $f$ by $g$. We could do this in two different ways:
$$
\begin{equation}
\frac{f}{g}=f \times g^{-1}.
\end{equation}
$$
In a scalar setting, these are equivalent ways of solving the division problem. The second one requires two steps: first we invert g and then we multiply f times g^{-1}. In a matrix world, we need to think about this second approach. First we have to invert the matrix g and then we will need to pre or post multiply depending on the exact situation we encounter (this is intended to be vague for now).
Inverting a Matrix
As before, consider the square $2 \times 2$ matrix $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \ a_{21} & a_{22}\end{smallmatrix} \bigr)$. Let the inverse of matrix A (denoted as $A^{-1}$) be
$$
\begin{equation}
A^{-1}=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22}
\end{bmatrix}^{-1}=\frac{1}{a_{11}a_{22}-a_{12}a_{21}} \begin{bmatrix}
a_{22} & -a_{12} \
-a_{21} & a_{11}
\end{bmatrix}
\end{equation}
$$
The inverted matrix $A^{-1}$ has a useful property:
$$
\begin{equation}
A \times A^{-1}=A^{-1} \times A=I
\end{equation}
$$
where I, the identity matrix (the matrix equivalent of the scalar value 1), is
$$
\begin{equation}
I_{2 \times 2}=\begin{bmatrix}
1 & 0 \
0 & 1
\end{bmatrix}
\end{equation}
$$
furthermore, $A \times I = A$ and $I \times A = A$.
An important feature about matrix inversion is that it is undefined if (in the $2 \times 2$ case), $a_{11}a_{22}-a_{12}a_{21}=0$. If this relationship is equal to zero the inverse of A does not exist. If this term is very close to zero, an inverse may exist but $A^{-1}$ may be poorly conditioned meaning it is prone to rounding error and is likely not well identified computationally. The term $a_{11}a_{22}-a_{12}a_{21}$ is the determinant of matrix A, and for square matrices of size greater than $2 \times 2$, if equal to zero indicates that you have a problem with your data matrix (columns are linearly dependent on other columns). The inverse of matrix A exists if A is square and is of full rank (ie. the columns of A are not linear combinations of other columns of A).
For more information on this topic, see this
http://en.wikipedia.org/wiki/Matrix_inversion, for example, on inverting matrices.
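As a small added sketch (not in the original text), you can check the determinant before inverting to see whether the inverse exists:
M = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.linalg.det(M))      # 2*3 - 1*1 = 5, nonzero, so M is invertible
M_inv = np.linalg.inv(M)
print(M.dot(M_inv))          # approximately the 2 x 2 identity matrix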
End of explanation
print C.dot(C_inverse)
print "Is identical to:"
print C_inverse.dot(C)
Explanation: Check that $C\times C^{-1} = I$:
End of explanation
A = np.arange(6).reshape((6,1))
B = np.arange(6).reshape((1,6))
A.dot(B)
B.dot(A)
A = np.arange(6).reshape((3,2))
B = np.arange(8).reshape((2,4))
print "A is"
print A
print "The Transpose of A is"
print A.T
Explanation: Transposing a Matrix
At times it is useful to pivot a matrix for conformability- that is in order to matrix divide or multiply, we need to switch the rows and column dimensions of matrices. Consider the matrix
$$
\begin{equation}
A_{3 \times 2}=\begin{bmatrix}
a_{11} & a_{12} \
a_{21} & a_{22} \
a_{31} & a_{32}
\end{bmatrix}_{3 \times 2}
\end{equation}
$$
The transpose of A (denoted as $A^{\prime}$) is
$$
\begin{equation}
A^{\prime}=\begin{bmatrix}
a_{11} & a_{21} & a_{31} \
a_{12} & a_{22} & a_{32} \
\end{bmatrix}_{2 \times 3}
\end{equation}
$$
End of explanation
print B.T.dot(A.T)
print "Is identical to:"
print (A.dot(B)).T
B.shape
B[0, 3]
A = np.arange(12).reshape((3,4))
A
A[2,:].shape
A[:,1].reshape(1,3).shape
Explanation: One important property of transposing a matrix is the transpose of a product of two matrices. Let matrix A be of dimension $N \times M$ and let B of of dimension $M \times P$. Then
$$
\begin{equation}
(AB)^{\prime}=B^{\prime}A^{\prime}
\end{equation}
$$
For more information, see this http://en.wikipedia.org/wiki/Matrix_transposition on matrix transposition. This is also easy to implement:
End of explanation
a = np.arange(10)
s = slice(2,7,2)
print a[s]
a = np.arange(10)
b = a[2:7:2]
print b
a = np.arange(10)
b = a[5]
print b
a = np.arange(10)
print a
print a[2:5]
import numpy as np
a = np.arange(10)
print a[2:5]
a = np.array([[1,2,3],[3,4,5],[4,5,6]])
print a
# slice items starting from index
print 'Now we will slice the array from the index a[1:]'
print a[1:]
# array to begin with
a = np.array([[1,2,3],[3,4,5],[4,5,6]])
print 'Our array is:'
print a
print '\n'
# this returns array of items in the second column
print 'The items in the second column are:'
print a[...,1]
print '\n'
# Now we will slice all items from the second row
print 'The items in the second row are:'
print a[1,...]
print '\n'
# Now we will slice all items from column 1 onwards
print 'The items column 1 onwards are:'
print a[...,1:]
Explanation: Mechanics
Indexing and Slicing
examples from https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
End of explanation
A = np.random.rand(5,5)*10
print A
print (A < 5)
print A[A < 5]
A[A<5] = 0
A
A[A>=5] = 1
A
Explanation: Logic, Comparison
End of explanation
np.ones((10,5), int)
np.zeros((10,5), int)
np.eye(5, dtype="int")
Explanation: Concatenate, Reshape
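The cell above only shows the special constructors (ones, zeros, eye); as a short added sketch, concatenation and reshaping look like this:
a = np.arange(6).reshape((2, 3))           # reshape a flat range into 2 rows x 3 columns
b = np.ones((2, 3), int)
print(np.concatenate((a, b), axis=0))      # stack vertically  -> shape (4, 3)
print(np.concatenate((a, b), axis=1))      # stack horizontally -> shape (2, 6)
print(a.reshape(3, 2))                     # same data, different shape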
End of explanation
v1 = np.random.rand(100)
v2 = np.random.randn(100)
plt.plot(range(v1.shape[0]), v1, '.')
plt.plot(range(v2.shape[0]), v2, '.')
plt.hist(v1)
;
v2 = np.random.randn(10000)
plt.hist(v2, bins=100)
;
v3 = np.random.beta(3,2, 1000)
plt.hist(v3, bins=100)
;
Explanation: Random Numbers
End of explanation
ls -l HW03/
%%sh
./HW03/preprocess_data.sh HW03/Camera.csv HW03/Camera_cleaned.csv
head HW03/Camera_cleaned.csv
DATA = np.genfromtxt('HW03/Camera_cleaned.csv', delimiter=';')
DATA.shape   # inspect the dimensions of the loaded array
np.max(DATA[1:,2])
np.nanargmin(DATA[1:,2])   # index of the smallest non-NaN value in this column
Explanation: Numpy load, save data files
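Besides np.genfromtxt for text files, NumPy also has its own binary format; a brief added sketch (the file names here are hypothetical):
arr = np.random.rand(5, 3)
np.save('my_array.npy', arr)                       # binary .npy file
arr_back = np.load('my_array.npy')
np.savetxt('my_array.csv', arr, delimiter=';')     # plain-text export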
End of explanation
### Pure iterative Python ###
points = [[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]]
qPoint = [4,5,3]
minIdx = -1
minDist = -1
for idx, point in enumerate(points): # iterate over all points
print "index is %d, point is %s" % (idx, point)
dist = sum([(dp-dq)**2 for dp,dq in zip(point,qPoint)])**0.5 # compute the euclidean distance for each point to q
if dist < minDist or minDist < 0: # if necessary, update minimum distance and index of the corresponding point
minDist = dist
minIdx = idx
print 'Nearest point to q: ', points[minIdx]
zip(point, qPoint)
# # # Equivalent NumPy vectorization # # #
import numpy as np
points = np.array([[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]])
qPoint = np.array([4,5,3])
minIdx = np.argmin(np.linalg.norm(points-qPoint,axis=1)) # compute all euclidean distances at once and return the index of the smallest one
print 'Nearest point to q: ', points[minIdx]
print points.shape
print qPoint.shape
print points
print qPoint
print points-qPoint
from numpy.linalg import norm
norm(points-qPoint)
1.0-points[0,:].dot(qPoint)/(norm(points[0,:])*norm(qPoint))
Explanation: Similarity: Euclidian vs Cosine
Example Nearest Neighbor search
Nearest Neighbor search is a common technique in Machine Learning
End of explanation
n = 100 # numeber of samples
Xr = np.random.rand(n)*99.0
y = -7.3 + 2.5*Xr + np.random.randn(n)*27.0
plt.plot(Xr, y, "o", alpha=0.5)
Explanation: Example: Linear Regression
Linear regression is an approach for modeling the relationship between a scalar dependent variable $y$ and one or more explanatory variables (or independent variables) denoted $X$. The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression.$^1$ (This term is distinct from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.$^2$
We assume that the equation
$y_i = \beta_0 + \beta_1 X_i + \epsilon_i$ where $\epsilon_i \approx N(0, \sigma^2)$
$^1$ David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press. p. 26. A simple regression equation has on the right hand side an intercept and an explanatory variable with a slope coefficient. A multiple regression equation has two or more explanatory variables on the right hand side, each with its own slope coefficient
$^2$ Rencher, Alvin C.; Christensen, William F. (2012), "Chapter 10, Multivariate regression – Section 10.1, Introduction", Methods of Multivariate Analysis, Wiley Series in Probability and Statistics, 709 (3rd ed.), John Wiley & Sons, p. 19, ISBN 9781118391679.
End of explanation
X = np.vstack((np.ones(n), Xr)).T
print X.shape
X[0:10,:]
Explanation: Let's add the bias, i.e. a column of $1$s to the explanatory variables
End of explanation
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
yhat = X.dot(beta)
yhat.shape
plt.plot(X[:,1], y, "o", alpha=0.5)
plt.plot(X[:,1], yhat, "-", alpha=1, color="red")
Explanation: Closed-form Linear Regression
And compute the parameters $\beta_0$ and $\beta_1$ according to
$$ \beta = (X^\prime X)^{-1} X^\prime y $$
Note:
This not only looks elegant but can also be written compactly in code. However, matrix inversion $M^{-1}$ requires $O(d^3)$ operations for a $d\times d$ matrix.<br />
https://www.coursera.org/learn/ml-regression/lecture/jOVX8/discussing-the-closed-form-solution
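As an added side note (not from the original notebook), explicitly inverting $X^\prime X$ is rarely necessary; a numerically friendlier sketch uses a linear solver or least-squares routine and gives the same estimates:
beta_solve = np.linalg.solve(X.T.dot(X), X.T.dot(y))
beta_lstsq, _, _, _ = np.linalg.lstsq(X, y)
print(beta_solve)
print(beta_lstsq)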
End of explanation
n = 100 # number of samples
X1 = np.random.rand(n)*99.0
X2 = np.random.rand(n)*51.0 - 26.8
X3 = np.random.rand(n)*5.0 + 6.1
X4 = np.random.rand(n)*1.0 - 0.5
X5 = np.random.rand(n)*300.0
y_m = -7.3 + 2.5*X1 + -7.9*X2 + 1.5*X3 + 10.0*X4 + 0.13*X5 + np.random.randn(n)*7.0
plt.hist(y_m, bins=20)
;
X_m = np.vstack((np.ones(n), X1, X2, X3, X4, X5)).T
X_m.shape
beta_m = np.linalg.inv(X_m.T.dot(X_m)).dot(X_m.T).dot(y_m)
beta_m
yhat_m = X_m.dot(beta_m)
yhat_m.shape
Explanation: Multiple Linear Regression
End of explanation
import math
RSMD = math.sqrt(np.square(yhat_m-y_m).sum()/n)
print RSMD
Explanation: Evaluation: Root-mean-square Deviation
The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSD serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. RMSD is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent.$^1$
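In symbols (added here for clarity), the quantity computed in the cell above is
$$ \mathrm{RMSD} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(\hat{y}_i - y_i)^2} $$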
$^1$ Hyndman, Rob J. Koehler, Anne B.; Koehler (2006). "Another look at measures of forecast accuracy". International Journal of Forecasting. 22 (4): 679–688. doi:10.1016/j.ijforecast.2006.03.001.
End of explanation
p = X.shape[1] ## get number of parameters
lam = 10.0
p, lam
beta2 = np.linalg.inv(X.T.dot(X) + lam*np.eye(p)).dot(X.T).dot(y)
yhat2 = X.dot(beta2)
RSMD2 = math.sqrt(np.square(yhat2-y).sum()/n)
print RSMD2
##n = float(X.shape[0])
print " RMSE = ", math.sqrt(np.square(yhat-y).sum()/n)
print "Ridge RMSE = ", math.sqrt(np.square(yhat2-y).sum()/n)
plt.plot(X[:,1], y, "o", alpha=0.5)
plt.plot(X[:,1], yhat, "-", alpha=0.7, color="red")
plt.plot(X[:,1], yhat2, "-", alpha=0.7, color="green")
Explanation: Regularization, Ridge-Regression
Regularization, in mathematics and statistics and particularly in the fields of machine learning and inverse problems, is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.
In general, a regularization term $R(f)$ is introduced to a general loss function:
$$ \min_f \sum_{i=1}^{n} V(f(x_i), y_i) + \lambda R(f) $$
for a loss function $V$ that describes the cost of predicting $f(x)$ when the label is
$y$, such as the square loss or hinge loss, and for the term
$\lambda$ which controls the importance of the regularization term.
$R(f)$ is typically a penalty on the complexity of
$f$, such as restrictions for smoothness or bounds on the vector space norm.$^1$
A theoretical justification for regularization is that it attempts to impose Occam's razor on the solution. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters.
Regularization can be used to learn simpler models, induce models to be sparse, introduce group structure into the learning problem, and more.
We're going to add the L2 term $\lambda||\beta||_2^2$ to the regression equation, which yields$^2$
$$ \beta = (X^\prime X + \lambda I)^{-1} X^\prime y $$
$^1$ Bishop, Christopher M. (2007). Pattern recognition and machine learning (Corr. printing. ed.). New York: Springer. ISBN 978-0387310732.
$^2$ http://stats.stackexchange.com/questions/69205/how-to-derive-the-ridge-regression-solution
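A small added sketch of how the penalty strength changes the solution, sweeping a few values of $\lambda$ with the variables defined above:
for lam_try in (0.0, 1.0, 10.0, 100.0):
    b = np.linalg.inv(X.T.dot(X) + lam_try * np.eye(p)).dot(X.T).dot(y)
    print(b)   # coefficients shrink toward zero as lambda grows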
End of explanation |
3,327 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Image classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Import the required packages.
Step3: Simple End-to-End Example
Get the data path
Let's get some images to play with this simple end-to-end example. Hundreds of images is a good start for Model Maker while more data could achieve better accuracy.
Step4: You could replace image_path with your own image folders. As for uploading data to colab, you could find the upload button in the left sidebar shown in the image below with the red rectangle. Just have a try to upload a zip file and unzip it. The root file path is the current path.
<img src="https
Step5: Step 2. Customize the TensorFlow model.
Step6: Step 3. Evaluate the model.
Step7: Step 4. Export to TensorFlow Lite model.
Here, we export TensorFlow Lite model with metadata which provides a standard for model descriptions.
You could download it in the left sidebar same as the uploading part for your own use.
Step8: After this simple 4 steps, we could further use TensorFlow Lite model file and label file in on-device applications like in image classification reference app.
Detailed Process
Currently, we support several models such as EfficientNet-Lite* models, MobileNetV2, ResNet50 as pre-trained models for image classification. But it is very flexible to add new pre-trained models to this library with just a few lines of code.
The following walks through this end-to-end example step by step to show more detail.
Step 1
Step9: Use ImageClassifierDataLoader class to load data.
As for from_folder() method, it could load data from the folder. It assumes that the image data of the same class are in the same subdirectory and the subfolder name is the class name. Currently, JPEG-encoded images and PNG-encoded images are supported.
Step10: Split it to training data (80%), validation data (10%, optional) and testing data (10%).
Step11: Show 25 image examples with labels.
Step12: Step 2
Step13: Have a look at the detailed model structure.
Step14: Step 3
Step15: We could plot the predicted results in 100 test images. Predicted labels with red color are the wrong predicted results while others are correct.
Step16: If the accuracy doesn't meet the app requirement, one could refer to Advanced Usage to explore alternatives such as changing to a larger model, adjusting re-training parameters etc.
Step 4
Step17: The TensorFlow Lite model file and label file could be used in image classification reference app.
As for android reference app as an example, we could add flower_classifier.tflite and flower_label.txt in assets folder. Meanwhile, change label filename in code and TensorFlow Lite file name in code. Thus, we could run the retrained float TensorFlow Lite model on the android app.
You can also evalute the tflite model with the evaluate_tflite method.
Step18: Advanced Usage
The create function is the critical part of this library. It uses transfer learning with a pretrained model similiar to the tutorial.
The createfunction contains the following steps
Step19: Then we export TensorFlow Lite model with such configuration.
Step20: In Colab, you can download the model named model_quant.tflite from the left sidebar, same as the uploading part mentioned above.
Change the model
Change to the model that's supported in this library.
This library supports EfficientNet-Lite models, MobileNetV2, ResNet50 by now. EfficientNet-Lite are a family of image classification models that achieve state-of-the-art accuracy and are suitable for edge devices. The default model is EfficientNet-Lite0.
We could switch model to MobileNetV2 by just setting parameter model_spec to mobilenet_v2_spec in create method.
Step21: Evaluate the newly retrained MobileNetV2 model to see the accuracy and loss in testing data.
Step22: Change to the model in TensorFlow Hub
Moreover, we could also switch to other new models that inputs an image and outputs a feature vector with TensorFlow Hub format.
As Inception V3 model as an example, we could define inception_v3_spec which is an object of ImageModelSpec and contains the specification of the Inception V3 model.
We need to specify the model name name, the url of the TensorFlow Hub model uri. Meanwhile, the default value of input_image_shape is [224, 224]. We need to change it to [299, 299] for Inception V3 model.
Step23: Then, by setting parameter model_spec to inception_v3_spec in create method, we could retrain the Inception V3 model.
The remaining steps are exactly same and we could get a customized InceptionV3 TensorFlow Lite model in the end.
Change your own custom model
If we'd like to use the custom model that's not in TensorFlow Hub, we should create and export ModelSpec in TensorFlow Hub.
Then start to define ImageModelSpec object like the process above.
Change the training hyperparameters
We could also change the training hyperparameters like epochs, dropout_rate and batch_size that could affect the model accuracy. For instance,
epochs
Step24: Evaluate the newly retrained model with 10 training epochs. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install tflite-model-maker
Explanation: Image classification with TensorFlow Lite Model Maker
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_image_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Model Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications.
This notebook shows an end-to-end example that utilizes this Model Maker library to illustrate the adaption and conversion of a commonly-used image classification model to classify flowers on a mobile device.
Prerequisites
To run this example, we first need to install several required packages, including the Model Maker package that lives in the GitHub repo.
End of explanation
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import configs
from tflite_model_maker import image_classifier
from tflite_model_maker import ImageClassifierDataLoader
from tflite_model_maker import model_spec
import matplotlib.pyplot as plt
Explanation: Import the required packages.
End of explanation
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
Explanation: Simple End-to-End Example
Get the data path
Let's get some images to play with this simple end-to-end example. Hundreds of images is a good start for Model Maker while more data could achieve better accuracy.
End of explanation
data = ImageClassifierDataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
Explanation: You could replace image_path with your own image folders. As for uploading data to colab, you could find the upload button in the left sidebar shown in the image below with the red rectangle. Just have a try to upload a zip file and unzip it. The root file path is the current path.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_image_classification.png" alt="Upload File" width="800" hspace="100">
If you prefer not to upload your images to the cloud, you could try to run the library locally following the guide in github.
Run the example
The example just consists of 4 lines of code as shown below, each of which represents one step of the overall process.
Step 1. Load input data specific to an on-device ML app. Split it into training data and testing data.
End of explanation
model = image_classifier.create(train_data)
Explanation: Step 2. Customize the TensorFlow model.
End of explanation
loss, accuracy = model.evaluate(test_data)
Explanation: Step 3. Evaluate the model.
End of explanation
model.export(export_dir='.')
Explanation: Step 4. Export to TensorFlow Lite model.
Here, we export TensorFlow Lite model with metadata which provides a standard for model descriptions.
You could download it in the left sidebar same as the uploading part for your own use.
End of explanation
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
Explanation: After this simple 4 steps, we could further use TensorFlow Lite model file and label file in on-device applications like in image classification reference app.
Detailed Process
Currently, we support several models such as EfficientNet-Lite* models, MobileNetV2, ResNet50 as pre-trained models for image classification. But it is very flexible to add new pre-trained models to this library with just a few lines of code.
The following walks through this end-to-end example step by step to show more detail.
Step 1: Load Input Data Specific to an On-device ML App
The flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it.
The dataset has the following directory structure:
<pre>
<b>flower_photos</b>
|__ <b>daisy</b>
|______ 100080576_f52e8ee070_n.jpg
|______ 14167534527_781ceb1b7a_n.jpg
|______ ...
|__ <b>dandelion</b>
|______ 10043234166_e6dd915111_n.jpg
|______ 1426682852_e62169221f_m.jpg
|______ ...
|__ <b>roses</b>
|______ 102501987_3cdb8e5394_n.jpg
|______ 14982802401_a3dfb22afb.jpg
|______ ...
|__ <b>sunflowers</b>
|______ 12471791574_bb1be83df4.jpg
|______ 15122112402_cafa41934f.jpg
|______ ...
|__ <b>tulips</b>
|______ 13976522214_ccec508fe7.jpg
|______ 14487943607_651e8062a1_m.jpg
|______ ...
</pre>
End of explanation
data = ImageClassifierDataLoader.from_folder(image_path)
Explanation: Use ImageClassifierDataLoader class to load data.
As for from_folder() method, it could load data from the folder. It assumes that the image data of the same class are in the same subdirectory and the subfolder name is the class name. Currently, JPEG-encoded images and PNG-encoded images are supported.
End of explanation
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
Explanation: Split it to training data (80%), validation data (10%, optional) and testing data (10%).
End of explanation
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.dataset.take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
Explanation: Show 25 image examples with labels.
End of explanation
model = image_classifier.create(train_data, validation_data=validation_data)
Explanation: Step 2: Customize the TensorFlow Model
Create a custom image classifier model based on the loaded data. The default model is EfficientNet-Lite0.
End of explanation
model.summary()
Explanation: Have a look at the detailed model structure.
End of explanation
loss, accuracy = model.evaluate(test_data)
Explanation: Step 3: Evaluate the Customized Model
Evaluate the result of the model, get the loss and accuracy of the model.
End of explanation
# A helper function that returns 'red'/'black' depending on if its two input
# parameter matches or not.
def get_label_color(val1, val2):
if val1 == val2:
return 'black'
else:
return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the label provided label in "test"
# dataset, we will highlight it in red color.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.dataset.take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
predict_label = predicts[i][0][0]
color = get_label_color(predict_label,
test_data.index_to_label[label.numpy()])
ax.xaxis.label.set_color(color)
plt.xlabel('Predicted: %s' % predict_label)
plt.show()
Explanation: We could plot the predicted results in 100 test images. Predicted labels with red color are the wrong predicted results while others are correct.
End of explanation
model.export(export_dir='.')
Explanation: If the accuracy doesn't meet the app requirement, one could refer to Advanced Usage to explore alternatives such as changing to a larger model, adjusting re-training parameters etc.
Step 4: Export to TensorFlow Lite Model
Convert the existing model to TensorFlow Lite model format and save the image labels in label file. The default TFLite filename is model.tflite, the default label filename is label.txt.
End of explanation
model.evaluate_tflite('model.tflite', test_data)
Explanation: The TensorFlow Lite model file and label file could be used in image classification reference app.
As for android reference app as an example, we could add flower_classifier.tflite and flower_label.txt in assets folder. Meanwhile, change label filename in code and TensorFlow Lite file name in code. Thus, we could run the retrained float TensorFlow Lite model on the android app.
You can also evalute the tflite model with the evaluate_tflite method.
End of explanation
config = configs.QuantizationConfig.create_full_integer_quantization(representative_data=test_data, is_integer_only=True)
Explanation: Advanced Usage
The create function is the critical part of this library. It uses transfer learning with a pretrained model similiar to the tutorial.
The createfunction contains the following steps:
Split the data into training, validation, testing data according to parameter validation_ratio and test_ratio. The default value of validation_ratio and test_ratio are 0.1 and 0.1.
Download a Image Feature Vector as the base model from TensorFlow Hub. The default pre-trained model is EfficientNet-Lite0.
Add a classifier head with a Dropout Layer with dropout_rate between head layer and pre-trained model. The default dropout_rate is the default dropout_rate value from make_image_classifier_lib by TensorFlow Hub.
Preprocess the raw input data. Currently, preprocessing steps including normalizing the value of each image pixel to model input scale and resizing it to model input size. EfficientNet-Lite0 have the input scale [0, 1] and the input image size [224, 224, 3].
Feed the data into the classifier model. By default, the training parameters such as training epochs, batch size, learning rate, momentum are the default values from make_image_classifier_lib by TensorFlow Hub. Only the classifier head is trained.
In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters etc.
Post-training quantization on the TensorFLow Lite model
Post-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator latency, with little degradation in model accuracy. Thus, it's widely used to optimize the model.
Model Maker supports multiple post-training quantization options. Let's take full integer quantization as an instance. First, define the quantization config to enforce enforce full integer quantization for all ops including the input and output. The input type and output type are uint8 by default. You may also change them to other types like int8 by setting inference_input_type and inference_output_type in config.
End of explanation
model.export(export_dir='.', tflite_filename='model_quant.tflite', quantization_config=config)
Explanation: Then we export TensorFlow Lite model with such configuration.
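As an optional check (my addition, not part of the official tutorial), you could open the exported file with the TensorFlow Lite interpreter to confirm the input/output tensor types:
interpreter = tf.lite.Interpreter(model_path='model_quant.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['dtype'])    # expect uint8 (or int8 if configured above)
print(interpreter.get_output_details()[0]['dtype'])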
End of explanation
model = image_classifier.create(train_data, model_spec=model_spec.mobilenet_v2_spec, validation_data=validation_data)
Explanation: In Colab, you can download the model named model_quant.tflite from the left sidebar, same as the uploading part mentioned above.
Change the model
Change to the model that's supported in this library.
This library supports EfficientNet-Lite models, MobileNetV2, ResNet50 by now. EfficientNet-Lite are a family of image classification models that achieve state-of-the-art accuracy and are suitable for edge devices. The default model is EfficientNet-Lite0.
We could switch model to MobileNetV2 by just setting parameter model_spec to mobilenet_v2_spec in create method.
End of explanation
loss, accuracy = model.evaluate(test_data)
Explanation: Evaluate the newly retrained MobileNetV2 model to see the accuracy and loss in testing data.
End of explanation
inception_v3_spec = model_spec.ImageModelSpec(
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
Explanation: Change to the model in TensorFlow Hub
Moreover, we could also switch to other new models that inputs an image and outputs a feature vector with TensorFlow Hub format.
As Inception V3 model as an example, we could define inception_v3_spec which is an object of ImageModelSpec and contains the specification of the Inception V3 model.
We need to specify the model name name, the url of the TensorFlow Hub model uri. Meanwhile, the default value of input_image_shape is [224, 224]. We need to change it to [299, 299] for Inception V3 model.
End of explanation
model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)
Explanation: Then, by setting parameter model_spec to inception_v3_spec in create method, we could retrain the Inception V3 model.
The remaining steps are exactly same and we could get a customized InceptionV3 TensorFlow Lite model in the end.
Change your own custom model
If we'd like to use the custom model that's not in TensorFlow Hub, we should create and export ModelSpec in TensorFlow Hub.
Then start to define ImageModelSpec object like the process above.
Change the training hyperparameters
We could also change the training hyperparameters like epochs, dropout_rate and batch_size that could affect the model accuracy. For instance,
epochs: more epochs could achieve better accuracy until it converges but training for too many epochs may lead to overfitting.
dropout_rate: avoid overfitting.
batch_size: number of samples to use in one training step.
validation_data: the validation data; if None, the validation process is skipped.
For example, we could train with more epochs.
End of explanation
loss, accuracy = model.evaluate(test_data)
Explanation: Evaluate the newly retrained model with 10 training epochs.
End of explanation |
3,328 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Session 3
Step2: <a name="assignment-synopsis"></a>
Assignment Synopsis
In the last session we created our first neural network. We saw that in order to create a neural network, we needed to define a cost function which would allow gradient descent to optimize all the parameters in our network. We also saw how neural networks become much more expressive by introducing series of linearities followed by non-linearities, or activation functions. We then explored a fun application of neural networks using regression to learn to paint color values given x, y positions. This allowed us to build up a sort of painterly like version of an image.
In this session, we'll see how to construct a few more types of neural networks. First, we'll explore a generative network called autoencoders. This network can be extended in a variety of ways to include convolution, denoising, or a variational layer. In Part Two, you'll then use a general autoencoder framework to encode your own list of images. In Part three, we'll then explore a discriminative network used for classification, and see how this can be used for audio classification of music or speech.
One main difference between these two networks are the data that we'll use to train them. In the first case, we will only work with "unlabeled" data and perform unsupervised learning. An example would be a collection of images, just like the one you created for assignment 1. Contrast this with "labeled" data which allows us to make use of supervised learning. For instance, we're given both images, and some other data about those images such as some text describing what object is in the image. This allows us to optimize a network where we model a distribution over the images given that it should be labeled as something. This is often a much simpler distribution to train, but with the expense of it being much harder to collect.
One of the major directions of future research will be in how to better make use of unlabeled data and unsupervised learning methods.
<a name="part-one---autoencoders"></a>
Part One - Autoencoders
<a name="instructions"></a>
Instructions
Work with a dataset of images and train an autoencoder. You can work with the same dataset from assignment 1, or try a larger dataset. But be careful with the image sizes, and make sure to keep it relatively small (e.g. < 100 x 100 px).
Recall from the lecture that autoencoders are great at "compressing" information. The network's construction and cost function are just like what we've done in the last session. The network is composed of a series of matrix multiplications and nonlinearities. The only difference is the output of the network has exactly the same shape as what is input. This allows us to train the network by saying that the output of the network needs to be just like the input to it, so that it tries to "compress" all the information in that video.
Autoencoders have some great potential for creative applications, as they allow us to compress a dataset of information and even generate new data from that encoding. We'll see exactly how to do this with a basic autoencoder, and then you'll be asked to explore some of the extensions to produce your own encodings.
<a name="code"></a>
Code
We'll now go through the process of building an autoencoder just like in the lecture. First, let's load some data. You can use the first 100 images of the Celeb Net, your own dataset, or anything else approximately under 1,000 images. Make sure you resize the images so that they are <= 100x100 pixels, otherwise the training will be very slow, and the montages we create will be too large.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step3: We'll now make use of something I've written to help us store this data. It provides some interfaces for generating "batches" of data, as well as splitting the data into training, validation, and testing sets. To use it, we pass in the data and optionally its labels. If we don't have labels, we just pass in the data. In the second half of this notebook, we'll explore using a dataset's labels as well.
Step4: It allows us to easily find the mean
Step5: Or the deviation
Step6: Recall we can calculate the mean of the standard deviation across each color channel
Step7: All the input data we gave as input to our Datasets object, previously stored in Xs is now stored in a variable as part of our ds Datasets object, X
Step8: It takes a parameter, split at the time of creation, which allows us to create train/valid/test sets. By default, this is set to [1.0, 0.0, 0.0], which means to take all the data in the train set, and nothing in the validation and testing sets. We can access "batch generators" of each of these sets by saying
Step9: This returns X and y as a tuple. Since we're not using labels, we'll just ignore this. The next_batch method takes a parameter, batch_size, which we'll set appropriately to our batch size. Notice it runs for exactly 10 iterations to iterate over our 100 examples, then the loop exits. The order in which it iterates over the 100 examples is randomized each time you iterate.
Write two functions to preprocess (normalize) any given image, and to unprocess it, i.e. unnormalize it by removing the normalization. The preprocess function should perform exactly the task you learned to do in assignment 1
Step10: We're going to now work on creating an autoencoder. To start, we'll only use linear connections, like in the last assignment. This means, we need a 2-dimensional input
Step11: Let's create a list of how many neurons we want in each layer. This should be for just one half of the network, the encoder only. It should start large, then get smaller and smaller. We're also going to try an encode our dataset to an inner layer of just 2 values. So from our number of features, we'll go all the way down to expressing that image by just 2 values. Try a small network to begin with, then explore deeper networks
Step12: Now create a placeholder just like in the last session in the tensorflow graph that will be able to get any number (None) of n_features inputs.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step13: Now complete the function encode below. This takes as input our input placeholder, X, our list of dimensions, and an activation function, e.g. tf.nn.relu or tf.nn.tanh, to apply to each layer's output, and creates a series of fully connected layers. This works just like in the last session! We multiply our input, add a bias, then apply a non-linearity. Instead of having 20 neurons in each layer, we're going to use our dimensions list to tell us how many neurons we want in each layer.
One important difference is that we're going to also store every weight matrix we create! This is so that we can use the same weight matrices when we go to build our decoder. This is a very powerful concept that creeps up in a few different neural network architectures called weight sharing. Weight sharing isn't necessary to do of course, but can speed up training and offer a different set of features depending on your dataset. Explore trying both. We'll also see how another form of weight sharing works in convolutional networks.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step14: We now have a function for encoding an input X. Take note of which activation function you use as this will be important for the behavior of the latent encoding, z, later on.
Step15: Let's take a look at the graph
Step16: So we've created a few layers, encoding our input X all the way down to 2 values in the tensor z. We do this by multiplying our input X by a set of matrices shaped as
Step17: Resulting in a layer which is shaped as
Step18: Building the Decoder
Here is a helpful animation on what the matrix "transpose" operation does
Step19: Now we'll build the decoder. I've shown you how to do this. Read through the code to fully understand what it is doing
Step20: Let's take a look at the new operations we've just added. They will all be prefixed by "decoder" so we can use list comprehension to help us with this
Step21: And let's take a look at the output of the autoencoder
Step22: Great! So we should have a synthesized version of our input placeholder, X, inside of Y. This Y is the result of many matrix multiplications, first a series of multiplications in our encoder all the way down to 2 dimensions, and then back to the original dimensions through our decoder. Let's now create a pixel-to-pixel measure of error. This should measure the difference in our synthesized output, Y, and our input, X. You can use the $l_1$ or $l_2$ norm, just like in assignment 2. If you don't remember, go back to homework 2 where we calculated the cost function and try the same idea here.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step23: Now for the standard training code. We'll pass our cost to an optimizer, and then use mini batch gradient descent to optimize our network's parameters. We just have to be careful to make sure we're preprocessing our input and feed it in the right shape, a 2-dimensional matrix of [batch_size, n_features] in dimensions.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step24: Below is the training code for our autoencoder. Please go through each line of code to make sure you understand what is happening, and fill in the missing pieces. This will take awhile. On my machine, it takes about 15 minutes. If you're impatient, you can "Interrupt" the kernel by going to the Kernel menu above, and continue with the notebook. Though, the longer you leave this to train, the better the result will be.
What I really want you to notice is what the network learns to encode first, based on what it is able to reconstruct. It won't be able to reconstruct everything. At first, it will just be the mean image. Then, other major changes in the dataset. For the first 100 images of celeb net, this seems to be the background
Step25: Note that if you run into "InternalError" or "ResourceExhaustedError", it is likely that you have run out of memory! Try a smaller network! For instance, restart the notebook's kernel, and then go back to defining encoder_dimensions = [256, 2] instead. If you run into memory problems below, you can also try changing the batch_size to 50.
Step26: Let's take a look a the final reconstruction
Step27: <a name="visualize-the-embedding"></a>
Visualize the Embedding
Let's now try visualizing our dataset's inner most layer's activations. Since these are already 2-dimensional, we can use the values of this layer to position any input image in a 2-dimensional space. We hope to find similar looking images closer together.
We'll first ask for the inner most layer's activations when given our example images. This will run our images through the network, half way, stopping at the end of the encoder part of the network.
Step28: Recall that this layer has 2 neurons
Step29: Let's see what the activations look like for our 100 images as a scatter plot.
Step30: If you view this plot over time, and let the process train longer, you will see something similar to the visualization here on the right
Step31: To do this, we can use scipy and an algorithm for solving this assignment problem known as the hungarian algorithm. With a few points, this algorithm runs pretty fast. But be careful if you have many more points, e.g. > 1000, as it is not a very efficient algorithm!
Step32: The result tells us the matching indexes from our autoencoder embedding of 2 dimensions, to our idealized grid
Step33: In other words, this algorithm has just found the best arrangement of our previous zs as a grid. We can now plot our images using the order of our assignment problem to see what it looks like
Step34: <a name="2d-latent-manifold"></a>
2D Latent Manifold
We'll now explore the inner most layer of the network. Recall we go from the number of image features (the number of pixels), down to 2 values using successive matrix multiplications, back to the number of image features through more matrix multiplications. These inner 2 values are enough to represent our entire dataset (+ some loss, depending on how well we did). Let's explore how the decoder, the second half of the network, operates, from just these two values. We'll bypass the input placeholder, X, and the entire encoder network, and start from Z. Let's first get some data which will sample Z in 2 dimensions from -1 to 1. This range may be different for you depending on what your latent space's range of values are. You can try looking at the activations for your z variable for a set of test images, as we've done before, and look at the range of these values. Or try to guess based on what activation function you may have used on the z variable, if any.
Then we'll use this range to create a linear interpolation of latent values, and feed these values through the decoder network to have our synthesized images to see what they look like.
Step35: Now calculate the reconstructed images using our new zs. You'll want to start from the beginning of the decoder! That is the z variable! Then calculate the Y given our synthetic values for z stored in zs.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step36: And now we can plot the reconstructed montage representing our latent space
Step37: <a name="part-two---general-autoencoder-framework"></a>
Part Two - General Autoencoder Framework
There are a number of extensions we can explore w/ an autoencoder. I've provided a module under the libs folder, vae.py, which you will need to explore for Part Two. It has a function, VAE, to create an autoencoder, optionally with Convolution, Denoising, and/or Variational Layers. Please read through the documentation and try to understand the different parameters.
Step38: Included in the vae.py module is the train_vae function. This will take a list of file paths, and train an autoencoder with the provided options. This will spit out a bunch of images of the reconstruction and latent manifold created by the encoder/variational encoder. Feel free to read through the code, as it is documented.
Step39: I've also included three examples of how to use the VAE(...) and train_vae(...) functions. First look at the one using MNIST. Then look at the other two
Step40: <a name="part-three---deep-audio-classification-network"></a>
Part Three - Deep Audio Classification Network
<a name="instructions-2"></a>
Instructions
In this last section, we'll explore using a regression network, one that predicts continuous outputs, to perform classification, a model capable of predicting discrete outputs. We'll explore the use of one-hot encodings and using a softmax layer to convert our regression outputs to a probability which we can use for classification. In the lecture, we saw how this works for the MNIST dataset, a dataset of 28 x 28 pixel handwritten digits labeled from 0 - 9. We converted our 28 x 28 pixels into a vector of 784 values, and used a fully connected network to output 10 values, the one hot encoding of our 0 - 9 labels.
In addition to the lecture material, I find these two links very helpful to try to understand classification w/ neural networks
Step41: Inside the dst directory, we now have folders for music and speech. Let's get the list of all the wav files for music and speech
Step42: We now need to load each file. We can use the scipy.io.wavefile module to load the audio as a signal.
Audio can be represented in a few ways, including as floating point or short byte data (16-bit data). This dataset is the latter and so can range from -32768 to +32767. We'll use the function I've provided in the utils module to load and convert an audio signal to a -1.0 to 1.0 floating point datatype by dividing by the maximum absolute value. Let's try this with just one of the files we have
Step43: Now, instead of using the raw audio signal, we're going to use the Discrete Fourier Transform to represent our audio as matched filters of different sinuoids. Unfortunately, this is a class on Tensorflow and I can't get into Digital Signal Processing basics. If you want to know more about this topic, I highly encourage you to take this course taught by the legendary Perry Cook and Julius Smith
Step44: What we're seeing are the features of the audio (in columns) over time (in rows). We can see this a bit better by taking the logarithm of the magnitudes converting it to a psuedo-decibel scale. This is more similar to the logarithmic perception of loudness we have. Let's visualize this below, and I'll transpose the matrix just for display purposes
Step45: We could just take just a single row (or column in the second plot of the magnitudes just above, as we transposed it in that plot) as an input to a neural network. However, that just represents about an 80th of a second of audio data, and is not nearly enough data to say whether something is music or speech. We'll need to use more than a single row to get a decent length of time. One way to do this is to use a sliding 2D window from the top of the image down to the bottom of the image (or left to right). Let's start by specifying how large our sliding window is.
Step46: Now we can collect all the sliding windows into a list of Xs and label them based on being music as 0 or speech as 1 into a collection of ys.
Step47: The code below will perform this for us, as well as create the inputs and outputs to our classification network by specifying 0s for the music dataset and 1s for the speech dataset. Let's just take a look at the first sliding window, and see it's label
Step48: Since this was the first audio file of the music dataset, we've set it to a label of 0. And now the second one, which should have 50% overlap with the previous one, and still a label of 0
Step49: So hopefully you can see that the window is sliding down 250 milliseconds at a time, and since our window is 500 ms long, or half a second, it has 50% new content at the bottom. Let's do this for every audio file now
Step50: Just to confirm it's doing the same as above, let's plot the first magnitude matrix
Step51: Let's describe the shape of our input to the network
Step52: We'll now use the Dataset object I've provided for you under libs/datasets.py. This will accept the Xs, ys, a list defining our dataset split into training, validation, and testing proportions, and a parameter one_hot stating whether we want our ys to be converted to a one hot vector or not.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step53: Let's take a look at the batch generator this object provides. We can all any of the splits, the train, valid, or test splits as properties of the object. And each split provides a next_batch method which gives us a batch generator. We should have specified that we wanted one_hot=True to have our batch generator return our ys with 2 features, one for each possible class.
Step54: Let's take a look at the first element of the randomized batch
Step55: And the second one
Step56: So we have a randomized order in minibatches generated for us, and the ys are represented as a one-hot vector with each class, music and speech, encoded as a 0 or 1. Since the next_batch method is a generator, we can use it in a loop until it is exhausted to run through our entire dataset in mini-batches.
<a name="creating-the-network"></a>
Creating the Network
Let's now create the neural network. Recall our input X is 4-dimensional, with the same shape that we've just seen as returned from our batch generator above. We're going to create a deep convolutional neural network with a few layers of convolution and 2 finals layers which are fully connected. The very last layer must have only 2 neurons corresponding to our one-hot vector of ys, so that we can properly measure the cross-entropy (just like we did with MNIST and our 10 element one-hot encoding of the digit label). First let's create our placeholders
Step57: Let's now create our deep convolutional network. Start by first creating the convolutional layers. Try different numbers of layers, different numbers of filters per layer, different activation functions, and varying the parameters to get the best training/validation score when training below. Try first using a kernel size of 3 and a stride of 1. You can use the utils.conv2d function to help you create the convolution.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step58: We'll now connect our last convolutional layer to a fully connected layer of 100 neurons. This is essentially combining the spatial information, thus losing the spatial information. You can use the utils.linear function to do this, which will internally also reshape the 4-d tensor to a 2-d tensor so that it can be connected to a fully-connected layer (i.e. perform a matrix multiplication).
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step59: We'll now create our cost. Unlike the MNIST network, we're going to use a binary cross entropy as we only have 2 possible classes. You can use the utils.binary_cross_entropy function to help you with this. Remember, the final cost measure the average loss of your batches.
Step60: Just like in MNIST, we'll now also create a measure of accuracy by finding the prediction of our network. This is just for us to monitor the training and is not used to optimize the weights of the network! Look back to the MNIST network in the lecture if you are unsure of how this works (it is exactly the same)
Step61: We'll now create an optimizer and train our network
Step62: Now we're ready to train. This is a pretty simple dataset for a deep convolutional network. As a result, I've included code which demonstrates how to monitor validation performance. A validation set is data that the network has never seen, and is not used for optimizing the weights of the network. We use validation to better understand how well the performance of a network "generalizes" to unseen data.
You can easily run the risk of overfitting to the training set of this problem. Overfitting simply means that the number of parameters in our model are so high that we are not generalizing our model, and instead trying to model each individual point, rather than the general cause of the data. This is a very common problem that can be addressed by using less parameters, or enforcing regularization techniques which we didn't have a chance to cover (dropout, batch norm, l2, augmenting the dataset, and others).
For this dataset, if you notice that your validation set is performing worse than your training set, then you know you have overfit! You should be able to easily get 97+% on the validation set within < 10 epochs. If you've got great training performance, but poor validation performance, then you likely have "overfit" to the training dataset, and are unable to generalize to the validation set. Try varying the network definition, number of filters/layers until you get 97+% on your validation set!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step63: Let's try to inspect how the network is accomplishing this task, just like we did with the MNIST network. First, let's see what the names of our operations in our network are.
Step64: Now let's visualize the W tensor's weights for the first layer using the utils function montage_filters, just like we did for the MNIST dataset during the lecture. Recall from the lecture that this is another great way to inspect the performance of your network. If many of the filters look uniform, then you know the network is either under or overperforming. What you want to see are filters that look like they are responding to information such as edges or corners.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step65: We can also look at every layer's filters using a loop
Step66: In the next session, we'll learn some much more powerful methods of inspecting such networks.
<a name="assignment-submission"></a>
Assignment Submission
After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as | Python Code:
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n' \
'You should consider updating to Python 3.4.0 or ' \
'higher as the libraries built for this course ' \
'have only been tested in Python 3.4 and higher.\n')
print('Try installing the Python 3.5 version of anaconda '
'and then restart `jupyter notebook`:\n' \
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
import IPython.display as ipyd
except ImportError:
print('You are missing some packages! ' \
'We will try installing them before continuing!')
!pip install "numpy>=1.11.0" "matplotlib>=1.5.1" "scikit-image>=0.11.3" "scikit-learn>=0.17" "scipy>=0.17.0"
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
import IPython.display as ipyd
print('Done!')
# Import Tensorflow
try:
import tensorflow as tf
except ImportError:
print("You do not have tensorflow installed!")
print("Follow the instructions on the following link")
print("to install tensorflow before continuing:")
print("")
print("https://github.com/pkmital/CADL#installation-preliminaries")
# This cell includes the provided libraries from the zip file
# and a library for displaying images from ipython, which
# we will use to display the gif
try:
from libs import utils, gif, datasets, dataset_utils, vae, dft
except ImportError:
print("Make sure you have started notebook in the same directory" +
" as the provided zip file which includes the 'libs' folder" +
" and the file 'utils.py' inside of it. You will NOT be able"
" to complete this assignment unless you restart jupyter"
" notebook inside the directory created by extracting"
" the zip file or cloning the github repo.")
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
    padding: 2px 4px;
    color: #c7254e;
    background-color: #f9f2f4;
    border-radius: 4px;
} </style>""")
Explanation: Session 3: Unsupervised and Supervised Learning
<p class="lead">
Assignment: Build Unsupervised and Supervised Networks
</p>
<p class="lead">
Parag K. Mital<br />
<a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning w/ Tensorflow</a><br />
<a href="https://www.kadenze.com/partners/kadenze-academy">Kadenze Academy</a><br />
<a href="https://twitter.com/hashtag/CADL">#CADL</a>
</p>
<a name="learning-goals"></a>
Learning Goals
Learn how to build an autoencoder
Learn how to explore latent/hidden representations of an autoencoder.
Learn how to build a classification network using softmax and onehot encoding
Outline
<!-- MarkdownTOC autolink=true autoanchor=true bracket=round -->
Assignment Synopsis
Part One - Autoencoders
Instructions
Code
Visualize the Embedding
Reorganize to Grid
2D Latent Manifold
Part Two - General Autoencoder Framework
Instructions
Part Three - Deep Audio Classification Network
Instructions
Preparing the Data
Creating the Network
Assignment Submission
Coming Up
<!-- /MarkdownTOC -->
This next section will just make sure you have the right version of python and the libraries that we'll be using. Don't change the code here but make sure you "run" it (use "shift+enter")!
End of explanation
# See how this works w/ Celeb Images or try your own dataset instead:
imgs = ...
# Then convert the list of images to a 4d array (e.g. use np.array to convert a list to a 4d array):
Xs = ...
print(Xs.shape)
assert(Xs.ndim == 4 and Xs.shape[1] <= 100 and Xs.shape[2] <= 100)
Explanation: <a name="assignment-synopsis"></a>
Assignment Synopsis
In the last session we created our first neural network. We saw that in order to create a neural network, we needed to define a cost function which would allow gradient descent to optimize all the parameters in our network. We also saw how neural networks become much more expressive by introducing series of linearities followed by non-linearities, or activation functions. We then explored a fun application of neural networks using regression to learn to paint color values given x, y positions. This allowed us to build up a sort of painterly like version of an image.
In this session, we'll see how to construct a few more types of neural networks. First, we'll explore a generative network called autoencoders. This network can be extended in a variety of ways to include convolution, denoising, or a variational layer. In Part Two, you'll then use a general autoencoder framework to encode your own list of images. In Part three, we'll then explore a discriminative network used for classification, and see how this can be used for audio classification of music or speech.
One main difference between these two networks is the data that we'll use to train them. In the first case, we will only work with "unlabeled" data and perform unsupervised learning. An example would be a collection of images, just like the one you created for assignment 1. Contrast this with "labeled" data which allows us to make use of supervised learning. For instance, we're given both images, and some other data about those images such as some text describing what object is in the image. This allows us to optimize a network where we model a distribution over the images given that it should be labeled as something. This is often a much simpler distribution to train, but at the expense of the data being much harder to collect.
One of the major directions of future research will be in how to better make use of unlabeled data and unsupervised learning methods.
<a name="part-one---autoencoders"></a>
Part One - Autoencoders
<a name="instructions"></a>
Instructions
Work with a dataset of images and train an autoencoder. You can work with the same dataset from assignment 1, or try a larger dataset. But be careful with the image sizes, and make sure to keep it relatively small (e.g. < 100 x 100 px).
Recall from the lecture that autoencoders are great at "compressing" information. The network's construction and cost function are just like what we've done in the last session. The network is composed of a series of matrix multiplications and nonlinearities. The only difference is the output of the network has exactly the same shape as what is input. This allows us to train the network by saying that the output of the network needs to be just like the input to it, so that it tries to "compress" all the information in that video.
Autoencoders have some great potential for creative applications, as they allow us to compress a dataset of information and even generate new data from that encoding. We'll see exactly how to do this with a basic autoencoder, and then you'll be asked to explore some of the extensions to produce your own encodings.
<a name="code"></a>
Code
We'll now go through the process of building an autoencoder just like in the lecture. First, let's load some data. You can use the first 100 images of the Celeb Net, your own dataset, or anything else approximately under 1,000 images. Make sure you resize the images so that they are <= 100x100 pixels, otherwise the training will be very slow, and the montages we create will be too large.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
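If you are unsure how to start the loading cell above, here is one hedged sketch. It assumes the utils.get_celeb_imgs helper from the earlier sessions is available; any folder of images resized to 100 x 100 pixels or smaller would work just as well:
# Sketch only (get_celeb_imgs is assumed to behave as in the earlier sessions):
imgs = [imresize(img_i, (64, 64)) for img_i in utils.get_celeb_imgs()[:100]]
Xs = np.array(imgs)
print(Xs.shape)  # e.g. (100, 64, 64, 3)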
ds = datasets.Dataset(Xs)
# ds = datasets.CIFAR10(flatten=False)
Explanation: We'll now make use of something I've written to help us store this data. It provides some interfaces for generating "batches" of data, as well as splitting the data into training, validation, and testing sets. To use it, we pass in the data and optionally its labels. If we don't have labels, we just pass in the data. In the second half of this notebook, we'll explore using a dataset's labels as well.
End of explanation
mean_img = ds.mean().astype(np.uint8)
plt.imshow(mean_img)
# If your image comes out entirely black, try w/o the `astype(np.uint8)`
# that means your images are read in as 0-255, rather than 0-1 and
# this simply depends on the version of matplotlib you are using.
Explanation: It allows us to easily find the mean:
End of explanation
std_img = ds.std()
plt.imshow(std_img)
print(std_img.shape)
Explanation: Or the deviation:
End of explanation
std_img = np.mean(std_img, axis=2)
plt.imshow(std_img)
Explanation: Recall we can calculate the mean of the standard deviation across each color channel:
End of explanation
plt.imshow(ds.X[0])
print(ds.X.shape)
Explanation: All the input data we gave as input to our Datasets object, previously stored in Xs is now stored in a variable as part of our ds Datasets object, X:
End of explanation
for (X, y) in ds.train.next_batch(batch_size=10):
print(X.shape)
Explanation: It takes a parameter, split at the time of creation, which allows us to create train/valid/test sets. By default, this is set to [1.0, 0.0, 0.0], which means to take all the data in the train set, and nothing in the validation and testing sets. We can access "batch generators" of each of these sets by saying: ds.train.next_batch. A generator is a really powerful way of handling iteration in Python. If you are unfamiliar with the idea of generators, I recommend reading up a little bit on it, e.g. here: http://intermediatepythonista.com/python-generators - think of it as a for loop, but as a function. It returns one iteration of the loop each time you call it.
This generator will automatically handle the randomization of the dataset. Let's try looping over the dataset using the batch generator:
End of explanation
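If the idea of a generator is still fuzzy, here is a tiny, self-contained sketch in plain Python, unrelated to the dataset object itself:
# A toy generator: the function body is paused at each yield and
# resumed on the next iteration of the loop that consumes it.
def count_to(n):
    for i in range(n):
        yield i

for value in count_to(3):
    print(value)  # prints 0, 1, 2, one value per iteration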
# Write a function to preprocess/normalize an image, given its dataset object
# (which stores the mean and standard deviation!)
def preprocess(img, ds):
norm_img = (img - ...) / ...
return norm_img
# Write a function to undo the normalization of an image, given its dataset object
# (which stores the mean and standard deviation!)
def deprocess(norm_img, ds):
img = norm_img * ... + ...
return img
Explanation: This returns X and y as a tuple. Since we're not using labels, we'll just ignore this. The next_batch method takes a parameter, batch_size, which we'll set appropriately to our batch size. Notice it runs for exactly 10 iterations to iterate over our 100 examples, then the loop exits. The order in which it iterates over the 100 examples is randomized each time you iterate.
Write two functions to preprocess (normalize) any given image, and to unprocess it, i.e. unnormalize it by removing the normalization. The preprocess function should perform exactly the task you learned to do in assignment 1: subtract the mean, then divide by the standard deviation. The deprocess function should take the preprocessed image and undo the preprocessing steps. Recall that the ds object contains the mean and std functions for accessing the mean and standard deviation. We'll be using the preprocess and deprocess functions on the inputs and outputs of the network. Note, we could use Tensorflow to do this instead of numpy, but for the sake of clarity, I'm keeping this separate from the Tensorflow graph. One possible completion is sketched just below.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
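For reference, one possible completion of the TODO above is sketched here: subtract the dataset mean and divide by its standard deviation, then invert those two steps. Treat it as a hedged example rather than the official solution:
def preprocess(img, ds):
    # Normalize: subtract the dataset mean and divide by its standard deviation.
    norm_img = (img - ds.mean()) / ds.std()
    return norm_img

def deprocess(norm_img, ds):
    # Undo the normalization: scale back up by the deviation and add the mean.
    img = norm_img * ds.std() + ds.mean()
    return img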
# Calculate the number of features in your image.
# This is the total number of pixels, or (height x width x channels).
n_features = ...
print(n_features)
Explanation: We're going to now work on creating an autoencoder. To start, we'll only use linear connections, like in the last assignment. This means, we need a 2-dimensional input: Batch Size x Number of Features. We currently have a 4-dimensional input: Batch Size x Height x Width x Channels. We'll have to calculate the number of features we have to help construct the Tensorflow Graph for our autoencoder neural network. Then, when we are ready to train the network, we'll reshape our 4-dimensional dataset into a 2-dimensional one when feeding the input of the network. Optionally, we could create a tf.reshape as the first operation of the network, so that we can still pass in our 4-dimensional array, and the Tensorflow graph would reshape it for us. We'll try the former method, by reshaping manually, and then you can explore the latter method, of handling 4-dimensional inputs on your own.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
encoder_dimensions = [128, 2]
Explanation: Let's create a list of how many neurons we want in each layer. This should be for just one half of the network, the encoder only. It should start large, then get smaller and smaller. We're also going to try to encode our dataset to an inner layer of just 2 values. So from our number of features, we'll go all the way down to expressing that image by just 2 values. Try a small network to begin with, then explore deeper networks:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
X = tf.placeholder(...
assert(X.get_shape().as_list() == [None, n_features])
Explanation: Now create a placeholder just like in the last session in the tensorflow graph that will be able to get any number (None) of n_features inputs.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
def encode(X, dimensions, activation=tf.nn.tanh):
# We're going to keep every matrix we create so let's create a list to hold them all
Ws = []
# We'll create a for loop to create each layer:
for layer_i, n_output in enumerate(dimensions):
# TODO: just like in the last session,
# we'll use a variable scope to help encapsulate our variables
# This will simply prefix all the variables made in this scope
# with the name we give it. Make sure it is a unique name
# for each layer, e.g., 'encoder/layer1', 'encoder/layer2', or
# 'encoder/1', 'encoder/2',...
with tf.variable_scope(...)
# TODO: Create a weight matrix which will increasingly reduce
# down the amount of information in the input by performing
# a matrix multiplication. You can use the utils.linear function.
h, W = ...
# TODO: Apply an activation function (unless you used the parameter
# for activation function in the utils.linear call)
# Finally we'll store the weight matrix.
# We need to keep track of all
# the weight matrices we've used in our encoder
# so that we can build the decoder using the
# same weight matrices.
Ws.append(W)
# Replace X with the current layer's output, so we can
# use it in the next layer.
X = h
z = X
return Ws, z
Explanation: Now complete the function encode below. This takes as input our input placeholder, X, our list of dimensions, and an activation function, e.g. tf.nn.relu or tf.nn.tanh, to apply to each layer's output, and creates a series of fully connected layers. This works just like in the last session! We multiply our input, add a bias, then apply a non-linearity. Instead of having 20 neurons in each layer, we're going to use our dimensions list to tell us how many neurons we want in each layer.
One important difference is that we're going to also store every weight matrix we create! This is so that we can use the same weight matrices when we go to build our decoder. This is a very powerful concept that creeps up in a few different neural network architectures called weight sharing. Weight sharing isn't necessary to do of course, but can speed up training and offer a different set of features depending on your dataset. Explore trying both. We'll also see how another form of weight sharing works in convolutional networks.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
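If you get stuck on the encoder, here is one hedged sketch of how the whole function could look. It assumes utils.linear(x, n_output) returns the layer output and its weight matrix, which is what the h, W = ... placeholder in the template suggests; your own completion may differ:
def encode_sketch(X, dimensions, activation=tf.nn.tanh):
    # Hedged sketch, not the official solution.
    Ws = []
    for layer_i, n_output in enumerate(dimensions):
        with tf.variable_scope("encoder/layer/{}".format(layer_i)):
            # Fully connected layer (matrix multiply + bias) via utils.linear,
            # followed by the chosen non-linearity.
            h, W = utils.linear(X, n_output)
            h = activation(h)
            Ws.append(W)
            X = h
    return Ws, X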
# Then call the function
Ws, z = encode(X, encoder_dimensions)
# And just some checks to make sure you've done it right.
assert(z.get_shape().as_list() == [None, 2])
assert(len(Ws) == len(encoder_dimensions))
Explanation: We now have a function for encoding an input X. Take note of which activation function you use as this will be important for the behavior of the latent encoding, z, later on.
End of explanation
[op.name for op in tf.get_default_graph().get_operations()]
Explanation: Let's take a look at the graph:
End of explanation
[W_i.get_shape().as_list() for W_i in Ws]
Explanation: So we've created a few layers, encoding our input X all the way down to 2 values in the tensor z. We do this by multiplying our input X by a set of matrices shaped as:
End of explanation
z.get_shape().as_list()
Explanation: Resulting in a layer which is shaped as:
End of explanation
# We'll first reverse the order of our weight matrices
decoder_Ws = Ws[::-1]
# then reverse the order of our dimensions
# appending the last layer's number of inputs.
decoder_dimensions = encoder_dimensions[::-1][1:] + [n_features]
print(decoder_dimensions)
assert(decoder_dimensions[-1] == n_features)
Explanation: Building the Decoder
Here is a helpful animation on what the matrix "transpose" operation does:
Basically what is happening is that rows become columns, and vice-versa. We're going to use our existing weight matrices but transpose them so that we can go in the opposite direction. In order to build our decoder, we'll have to do the opposite of what we've just done, multiplying z by the transpose of our weight matrices, to get back to a reconstructed version of X. First, we'll reverse the order of our weight matrices, and then append to the list of dimensions the final output layer's shape to match our input:
End of explanation
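As a quick, concrete illustration of the transpose operation described above (plain NumPy, separate from the network itself):
A = np.array([[1, 2, 3],
              [4, 5, 6]])           # shape (2, 3)
print(A.T)                          # rows become columns
print(A.shape, '->', A.T.shape)     # (2, 3) -> (3, 2)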
def decode(z, dimensions, Ws, activation=tf.nn.tanh):
current_input = z
for layer_i, n_output in enumerate(dimensions):
# we'll use a variable scope again to help encapsulate our variables
# This will simply prefix all the variables made in this scope
# with the name we give it.
with tf.variable_scope("decoder/layer/{}".format(layer_i)):
# Now we'll grab the weight matrix we created before and transpose it
# So a 3072 x 784 matrix would become 784 x 3072
# or a 256 x 64 matrix, would become 64 x 256
W = tf.transpose(Ws[layer_i])
# Now we'll multiply our input by our transposed W matrix
h = tf.matmul(current_input, W)
# And then use a relu activation function on its output
current_input = activation(h)
# We'll also replace n_input with the current n_output, so that on the
# next iteration, our new number inputs will be correct.
n_input = n_output
Y = current_input
return Y
Y = decode(z, decoder_dimensions, decoder_Ws)
Explanation: Now we'll build the decoder. I've shown you how to do this. Read through the code to fully understand what it is doing:
End of explanation
[op.name for op in tf.get_default_graph().get_operations()
if op.name.startswith('decoder')]
Explanation: Let's take a look at the new operations we've just added. They will all be prefixed by "decoder" so we can use list comprehension to help us with this:
End of explanation
Y.get_shape().as_list()
Explanation: And let's take a look at the output of the autoencoder:
End of explanation
# Calculate some measure of loss, e.g. the pixel to pixel absolute difference or squared difference
loss = ...
# Now sum over every pixel and then calculate the mean over the batch dimension (just like session 2!)
# hint, use tf.reduce_mean and tf.reduce_sum
cost = ...
Explanation: Great! So we should have a synthesized version of our input placeholder, X, inside of Y. This Y is the result of many matrix multiplications, first a series of multiplications in our encoder all the way down to 2 dimensions, and then back to the original dimensions through our decoder. Let's now create a pixel-to-pixel measure of error. This should measure the difference in our synthesized output, Y, and our input, X. You can use the $l_1$ or $l_2$ norm, just like in assignment 2. If you don't remember, go back to homework 2 where we calculated the cost function and try the same idea here.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
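One hedged way to fill in the cost cell above uses a squared (l2-style) pixel difference; an l1 version would simply use tf.abs(Y - X) instead:
# Sketch only: per-pixel squared error, summed over pixels, averaged over the batch.
loss = tf.squared_difference(X, Y)
cost = tf.reduce_mean(tf.reduce_sum(loss, 1))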
learning_rate = ...
optimizer = tf.train.AdamOptimizer(...).minimize(...)
Explanation: Now for the standard training code. We'll pass our cost to an optimizer, and then use mini batch gradient descent to optimize our network's parameters. We just have to be careful to make sure we're preprocessing our input and feed it in the right shape, a 2-dimensional matrix of [batch_size, n_features] in dimensions.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
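And one plausible completion of the optimizer cell; the learning rate here is only an assumed starting point, so feel free to tune it:
learning_rate = 0.001  # assumed value, not a recommendation
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)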
# (TODO) Create a tensorflow session and initialize all of our weights:
sess = ...
sess.run(tf.global_variables_initializer())
Explanation: Below is the training code for our autoencoder. Please go through each line of code to make sure you understand what is happening, and fill in the missing pieces. This will take awhile. On my machine, it takes about 15 minutes. If you're impatient, you can "Interrupt" the kernel by going to the Kernel menu above, and continue with the notebook. Though, the longer you leave this to train, the better the result will be.
What I really want you to notice is what the network learns to encode first, based on what it is able to reconstruct. It won't be able to reconstruct everything. At first, it will just be the mean image. Then, other major changes in the dataset. For the first 100 images of celeb net, this seems to be the background: white, blue, black backgrounds. From this basic interpretation, you can reason that the autoencoder has learned a representation of the backgrounds, and is able to encode that knowledge of the background in its inner most layer of just two values. It then goes on to represent the major variations in skin tone and hair. Then perhaps some facial features such as lips. So the features it is able to encode tend to be the major things at first, then the smaller things.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Some parameters for training
batch_size = 100
n_epochs = 31
step = 10
# We'll try to reconstruct the same first 100 images and show how
# The network does over the course of training.
examples = ds.X[:100]
# We have to preprocess the images before feeding them to the network.
# I'll do this once here, so we don't have to do it every iteration.
test_examples = preprocess(examples, ds).reshape(-1, n_features)
# If we want to just visualize them, we can create a montage.
test_images = utils.montage(examples).astype(np.uint8)
# Store images so we can make a gif
gifs = []
# Now for our training:
for epoch_i in range(n_epochs):
# Keep track of the cost
this_cost = 0
# Iterate over the entire dataset in batches
for batch_X, _ in ds.train.next_batch(batch_size=batch_size):
# (TODO) Preprocess and reshape our current batch, batch_X:
this_batch = preprocess(..., ds).reshape(-1, n_features)
# Compute the cost, and run the optimizer.
this_cost += sess.run([cost, optimizer], feed_dict={X: this_batch})[0]
# Average cost of this epoch
avg_cost = this_cost / ds.X.shape[0] / batch_size
print(epoch_i, avg_cost)
# Let's also try to see how the network currently reconstructs the input.
# We'll draw the reconstruction every `step` iterations.
if epoch_i % step == 0:
# (TODO) Ask for the output of the network, Y, and give it our test examples
recon = sess.run(...
# Resize the 2d to the 4d representation:
rsz = recon.reshape(examples.shape)
# We have to unprocess the image now, removing the normalization
unnorm_img = deprocess(rsz, ds)
# Clip to avoid saturation
# TODO: Make sure this image is the correct range, e.g.
# for float32 0-1, you should clip between 0 and 1
# for uint8 0-255, you should clip between 0 and 255!
clipped = np.clip(unnorm_img, 0, 255)
# And we can create a montage of the reconstruction
recon = utils.montage(clipped)
# Store for gif
gifs.append(recon)
fig, axs = plt.subplots(1, 2, figsize=(10, 10))
axs[0].imshow(test_images)
axs[0].set_title('Original')
axs[1].imshow(recon)
axs[1].set_title('Synthesis')
fig.canvas.draw()
plt.show()
Explanation: Note that if you run into "InternalError" or "ResourceExhaustedError", it is likely that you have run out of memory! Try a smaller network! For instance, restart the notebook's kernel, and then go back to defining encoder_dimensions = [256, 2] instead. If you run into memory problems below, you can also try changing the batch_size to 50.
End of explanation
fig, axs = plt.subplots(1, 2, figsize=(10, 10))
axs[0].imshow(test_images)
axs[0].set_title('Original')
axs[1].imshow(recon)
axs[1].set_title('Synthesis')
fig.canvas.draw()
plt.show()
plt.imsave(arr=test_images, fname='test.png')
plt.imsave(arr=recon, fname='recon.png')
Explanation: Let's take a look a the final reconstruction:
End of explanation
zs = sess.run(z, feed_dict={X:test_examples})
Explanation: <a name="visualize-the-embedding"></a>
Visualize the Embedding
Let's now try visualizing our dataset's inner most layer's activations. Since these are already 2-dimensional, we can use the values of this layer to position any input image in a 2-dimensional space. We hope to find similar looking images closer together.
We'll first ask for the inner most layer's activations when given our example images. This will run our images through the network, half way, stopping at the end of the encoder part of the network.
End of explanation
zs.shape
Explanation: Recall that this layer has 2 neurons:
End of explanation
plt.scatter(zs[:, 0], zs[:, 1])
Explanation: Let's see what the activations look like for our 100 images as a scatter plot.
End of explanation
n_images = 100
idxs = np.linspace(np.min(zs) * 2.0, np.max(zs) * 2.0,
int(np.ceil(np.sqrt(n_images))))
xs, ys = np.meshgrid(idxs, idxs)
grid = np.dstack((ys, xs)).reshape(-1, 2)[:n_images,:]
fig, axs = plt.subplots(1,2,figsize=(8,3))
axs[0].scatter(zs[:, 0], zs[:, 1],
edgecolors='none', marker='o', s=2)
axs[0].set_title('Autoencoder Embedding')
axs[1].scatter(grid[:,0], grid[:,1],
edgecolors='none', marker='o', s=2)
axs[1].set_title('Ideal Grid')
Explanation: If you view this plot over time, and let the process train longer, you will see something similar to the visualization here on the right: https://vimeo.com/155061675 - the manifold is able to express more and more possible ideas, or put another way, it is able to encode more data. As it grows more expressive, with more data, and longer training, or deeper networks, it will fill in more of the space, and have different modes expressing different clusters of the data. With just 100 examples of our dataset, this is very small to try to model with such a deep network. In any case, the techniques we've learned up to now apply in exactly the same way, even if we had 1k, 100k, or even many millions of images.
Let's try to see how this minimal example, with just 100 images and just 100 epochs, looks when we use this embedding to sort our dataset, just like we tried to do in the 1st assignment, but now with our autoencoder's embedding.
<a name="reorganize-to-grid"></a>
Reorganize to Grid
We'll use these points to try to find an assignment to a grid. This is a well-known problem known as the "assignment problem": https://en.wikipedia.org/wiki/Assignment_problem - This is unrelated to the applications we're investigating in this course, but I thought it would be a fun extra to show you how to do. What we're going to do is take our scatter plot above, and find the best way to stretch and scale it so that each point is placed in a grid. We try to do this in a way that keeps nearby points close together when they are reassigned in their grid.
End of explanation
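To make the assignment problem concrete before applying it to our embedding, here is a tiny toy example with a 3 x 3 cost matrix (the values are arbitrary and only for illustration):
import numpy as np
from scipy.optimize import linear_sum_assignment

toy_cost = np.array([[4, 1, 3],
                     [2, 0, 5],
                     [3, 2, 2]])
rows, cols = linear_sum_assignment(toy_cost)
# cols[i] is the column assigned to row i; together they minimize the total cost.
print(cols, toy_cost[rows, cols].sum())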
from scipy.spatial.distance import cdist
cost = cdist(grid[:, :], zs[:, :], 'sqeuclidean')
from scipy.optimize import linear_sum_assignment  # public API; the private _hungarian module may move between SciPy versions
indexes = linear_sum_assignment(cost)
Explanation: To do this, we can use scipy and an algorithm for solving this assignment problem known as the hungarian algorithm. With a few points, this algorithm runs pretty fast. But be careful if you have many more points, e.g. > 1000, as it is not a very efficient algorithm!
End of explanation
indexes
plt.figure(figsize=(5, 5))
for i in range(len(zs)):
plt.plot([zs[indexes[1][i], 0], grid[i, 0]],
[zs[indexes[1][i], 1], grid[i, 1]], 'r')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
Explanation: The result tells us the matching indexes from our autoencoder embedding of 2 dimensions, to our idealized grid:
End of explanation
examples_sorted = []
for i in indexes[1]:
examples_sorted.append(examples[i])
plt.figure(figsize=(15, 15))
img = utils.montage(np.array(examples_sorted)).astype(np.uint8)
plt.imshow(img,
interpolation='nearest')
plt.imsave(arr=img, fname='sorted.png')
Explanation: In other words, this algorithm has just found the best arrangement of our previous zs as a grid. We can now plot our images using the order of our assignment problem to see what it looks like:
End of explanation
# This is a quick way to do what we could have done as
# a nested for loop:
zs = np.meshgrid(np.linspace(-1, 1, 10),
np.linspace(-1, 1, 10))
# Now we have 100 x 2 values of every possible position
# in a 2D grid from -1 to 1:
zs = np.c_[zs[0].ravel(), zs[1].ravel()]
Explanation: <a name="2d-latent-manifold"></a>
2D Latent Manifold
We'll now explore the inner most layer of the network. Recall we go from the number of image features (the number of pixels), down to 2 values using successive matrix multiplications, back to the number of image features through more matrix multiplications. These inner 2 values are enough to represent our entire dataset (+ some loss, depending on how well we did). Let's explore how the decoder, the second half of the network, operates, from just these two values. We'll bypass the input placeholder, X, and the entire encoder network, and start from Z. Let's first get some data which will sample Z in 2 dimensions from -1 to 1. This range may be different for you depending on what your latent space's range of values are. You can try looking at the activations for your z variable for a set of test images, as we've done before, and look at the range of these values. Or try to guess based on what activation function you may have used on the z variable, if any.
Then we'll use this range to create a linear interpolation of latent values, and feed these values through the decoder network to have our synthesized images to see what they look like.
End of explanation
recon = sess.run(Y, feed_dict={...})
# reshape the result to an image:
rsz = recon.reshape(examples.shape)
# Deprocess the result, unnormalizing it
unnorm_img = deprocess(rsz, ds)
# clip to avoid saturation
clipped = np.clip(unnorm_img, 0, 255)
# Create a montage
img_i = utils.montage(clipped).astype(np.uint8)
Explanation: Now calculate the reconstructed images using our new zs. You'll want to start from the beginning of the decoder! That is the z variable! Then calculate the Y given our synthetic values for z stored in zs.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
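If the TODO above is unclear, the intended idea is simply to bypass X and feed our synthetic latent values straight into z; a hedged sketch of one completion:
# Feed the synthetic latent grid directly into z and ask for the decoder's output.
recon = sess.run(Y, feed_dict={z: zs})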
plt.figure(figsize=(15, 15))
plt.imshow(img_i)
plt.imsave(arr=img_i, fname='manifold.png')
Explanation: And now we can plot the reconstructed montage representing our latent space:
End of explanation
help(vae.VAE)
Explanation: <a name="part-two---general-autoencoder-framework"></a>
Part Two - General Autoencoder Framework
There are a number of extensions we can explore w/ an autoencoder. I've provided a module under the libs folder, vae.py, which you will need to explore for Part Two. It has a function, VAE, to create an autoencoder, optionally with Convolution, Denoising, and/or Variational Layers. Please read through the documentation and try to understand the different parameters.
End of explanation
help(vae.train_vae)
Explanation: Included in the vae.py module is the train_vae function. This will take a list of file paths, and train an autoencoder with the provided options. This will spit out a bunch of images of the reconstruction and latent manifold created by the encoder/variational encoder. Feel free to read through the code, as it is documented.
End of explanation
# Get a list of jpg files (only JPG works!)
files = [os.path.join(some_dir, file_i) for file_i in os.listdir(some_dir) if file_i.endswith('.jpg')]
# Ensure that you have the latest TensorFlow version installed, otherwise you may encounter an
# 'rsz_shape' error because of a backward-incompatible API change.
# Train it! Change these parameters!
vae.train_vae(files,
input_shape,
learning_rate=0.0001,
batch_size=100,
n_epochs=50,
n_examples=10,
crop_shape=[64, 64, 3],
crop_factor=0.8,
n_filters=[100, 100, 100, 100],
n_hidden=256,
n_code=50,
convolutional=True,
variational=True,
filter_sizes=[3, 3, 3, 3],
dropout=True,
keep_prob=0.8,
activation=tf.nn.relu,
img_step=100,
save_step=100,
ckpt_name="vae.ckpt")
Explanation: I've also included three examples of how to use the VAE(...) and train_vae(...) functions. First look at the one using MNIST. Then look at the other two: one using the Celeb Dataset; and lastly one which will download Sita Sings the Blues, rip the frames, and train a Variational Autoencoder on it. This last one requires ffmpeg be installed (e.g. for OSX users, brew install ffmpeg, Linux users, sudo apt-get ffmpeg-dev, or else: https://ffmpeg.org/download.html). The Celeb and Sita Sings the Blues training require us to use an image pipeline, which I've mentioned briefly during the lecture. This does many things for us: it loads data from disk in batches, decodes the data as an image, resizes/crops the image, and uses a multithreaded graph to handle it all. It is very efficient and is the way to go when handling large image datasets.
The MNIST training does not use this. Instead, the entire dataset is loaded into the CPU memory, and then fed in minibatches to the graph using Python/Numpy. This is far less efficient, but will not be an issue for such a small dataset, e.g. 70k examples of 28x28 pixels = ~1.6 MB of data, easily fits into memory (in fact, it would really be better to use a Tensorflow variable with this entire dataset defined). When you consider the Celeb Net, you have 200k examples of 218x178x3 pixels = ~700 MB of data. That's just for the dataset. When you factor in everything required for the network and its weights, then you are pushing it. Basically this image pipeline will handle loading the data from disk, rather than storing it in memory.
<a name="instructions-1"></a>
Instructions
You'll now try to train your own autoencoder using this framework. You'll need to get a directory full of 'jpg' files. You'll then use the VAE framework and the vae.train_vae function to train a variational autoencoder on your own dataset. This accepts a list of files, and will output images of the training in the same directory. These are named "test_xs.png" as well as many images named prefixed by "manifold" and "reconstruction" for each iteration of the training. After you are happy with your training, you will need to create a forum post with the "test_xs.png" and the very last manifold and reconstruction image created to demonstrate how the variational autoencoder worked for your dataset. You'll likely need a lot more than 100 images for this to be successful.
Note that this will also create "checkpoints" which save the model! If you change the model, and already have a checkpoint by the same name, it will try to load the previous model and will fail. Be sure to remove the old checkpoint or specify a new name for ckpt_name! The default parameters shown below are what I have used for the celeb net dataset which has over 200k images. You will definitely want to use a smaller model if you do not have this many images! Explore!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
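For a small personal dataset, a scaled-down configuration is usually enough. The following is only a sketch: the directory name and the reduced sizes are assumptions for illustration, not recommendations from the course, and all parameter names come from the documented call above.
files = [os.path.join('my_images', f)                    # assumed directory of your own jpg files
         for f in os.listdir('my_images') if f.endswith('.jpg')]
input_shape = list(plt.imread(files[0]).shape)           # e.g. [height, width, 3] of your images
vae.train_vae(files,
              input_shape,
              batch_size=64,
              n_epochs=25,
              crop_shape=[64, 64, 3],
              n_filters=[32, 32, 32],                    # much smaller model for a small dataset
              n_hidden=128,
              n_code=16,
              convolutional=True,
              variational=True,
              filter_sizes=[3, 3, 3],
              ckpt_name='my-vae.ckpt')                   # new name so an old checkpoint isn't loaded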
dst = 'gtzan_music_speech'
if not os.path.exists(dst):
dataset_utils.gtzan_music_speech_download(dst)
Explanation: <a name="part-three---deep-audio-classification-network"></a>
Part Three - Deep Audio Classification Network
<a name="instructions-2"></a>
Instructions
In this last section, we'll explore using a regression network, one that predicts continuous outputs, to perform classification, a model capable of predicting discrete outputs. We'll explore the use of one-hot encodings and using a softmax layer to convert our regression outputs to a probability which we can use for classification. In the lecture, we saw how this works for the MNIST dataset, a dataset of 28 x 28 pixel handwritten digits labeled from 0 - 9. We converted our 28 x 28 pixels into a vector of 784 values, and used a fully connected network to output 10 values, the one hot encoding of our 0 - 9 labels.
In addition to the lecture material, I find these two links very helpful to try to understand classification w/ neural networks:
https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/
https://cs.stanford.edu/people/karpathy/convnetjs//demo/classify2d.html
The GTZAN Music and Speech dataset has 64 music and 64 speech files, each 30 seconds long, and each at a sample rate of 22050 Hz, meaning there are 22050 samplings of the audio signal per second. What we're going to do is use all of this data to build a classification network capable of knowing whether something is music or speech. So we will have audio as input, and a probability of 2 possible values, music and speech, as output. This is very similar to the MNIST network. We just have to decide on how to represent our input data, prepare the data and its labels, build batch generators for our data, create the network, and train it. We'll make use of the libs/datasets.py module to help with some of this.
<a name="preparing-the-data"></a>
Preparing the Data
Let's first download the GTZAN music and speech dataset. I've included a helper function to do this.
End of explanation
# Get the full path to the directory
music_dir = os.path.join(os.path.join(dst, 'music_speech'), 'music_wav')
# Now use list comprehension to combine the path of the directory with any wave files
music = [os.path.join(music_dir, file_i)
for file_i in os.listdir(music_dir)
if file_i.endswith('.wav')]
# Similarly, for the speech folder:
speech_dir = os.path.join(os.path.join(dst, 'music_speech'), 'speech_wav')
speech = [os.path.join(speech_dir, file_i)
for file_i in os.listdir(speech_dir)
if file_i.endswith('.wav')]
# Let's see all the file names
print(music, speech)
Explanation: Inside the dst directory, we now have folders for music and speech. Let's get the list of all the wav files for music and speech:
End of explanation
file_i = music[0]
s = utils.load_audio(file_i)
plt.plot(s)
Explanation: We now need to load each file. We can use the scipy.io.wavfile module to load the audio as a signal.
Audio can be represented in a few ways, including as floating point or short byte data (16-bit data). This dataset is the latter and so can range from -32768 to +32767. We'll use the function I've provided in the utils module to load and convert an audio signal to a -1.0 to 1.0 floating point datatype by dividing by the maximum absolute value. Let's try this with just one of the files we have:
End of explanation
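For reference, a rough sketch of what that normalization amounts to if you did it by hand with scipy; utils.load_audio is the helper actually used here and may differ in details.
from scipy.io import wavfile
sr, s_raw = wavfile.read(file_i)             # 16-bit PCM samples in [-32768, 32767]
s_manual = s_raw.astype(np.float32)
s_manual /= np.abs(s_manual).max()           # scaled to the range [-1.0, 1.0]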
# Parameters for our dft transform. Sorry we can't go into the
# details of this in this course. Please look into DSP texts or the
# course by Perry Cook linked above if you are unfamiliar with this.
fft_size = 512
hop_size = 256
re, im = dft.dft_np(s, hop_size=256, fft_size=512)
mag, phs = dft.ztoc(re, im)
print(mag.shape)
plt.imshow(mag)
Explanation: Now, instead of using the raw audio signal, we're going to use the Discrete Fourier Transform to represent our audio as matched filters of different sinusoids. Unfortunately, this is a class on Tensorflow and I can't get into Digital Signal Processing basics. If you want to know more about this topic, I highly encourage you to take this course taught by the legendary Perry Cook and Julius Smith: https://www.kadenze.com/courses/physics-based-sound-synthesis-for-games-and-interactive-systems/info - there is no one better to teach this content, and in fact, I myself learned DSP from Perry Cook almost 10 years ago.
After taking the DFT, this will return our signal as real and imaginary components (a Cartesian representation of the complex values), which we will convert to a polar representation that tells us what magnitudes and phases are in our signal.
End of explanation
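Conceptually, the conversion dft.ztoc performs is just the usual complex-to-polar change of coordinates; this is a sketch of the math, and the library routine may handle details differently.
mag_manual = np.sqrt(re ** 2 + im ** 2)      # magnitude of each frequency bin
phs_manual = np.arctan2(im, re)              # phase of each frequency bin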
plt.figure(figsize=(10, 4))
plt.imshow(np.log(mag.T))
plt.xlabel('Time')
plt.ylabel('Frequency Bin')
Explanation: What we're seeing are the features of the audio (in columns) over time (in rows). We can see this a bit better by taking the logarithm of the magnitudes, converting them to a pseudo-decibel scale. This is more similar to the logarithmic perception of loudness we have. Let's visualize this below, and I'll transpose the matrix just for display purposes:
End of explanation
# The sample rate from our audio is 22050 Hz.
sr = 22050
# We can calculate how many hops there are in a second
# which will tell us how many frames of magnitudes
# we have per second
n_frames_per_second = sr // hop_size
# We want 500 milliseconds of audio in our window
n_frames = n_frames_per_second // 2
# And we'll move our window by 250 ms at a time
frame_hops = n_frames_per_second // 4
# We'll therefore have this many sliding windows:
n_hops = (len(mag) - n_frames) // frame_hops
Explanation: We could just take just a single row (or column in the second plot of the magnitudes just above, as we transposed it in that plot) as an input to a neural network. However, that just represents about an 80th of a second of audio data, and is not nearly enough data to say whether something is music or speech. We'll need to use more than a single row to get a decent length of time. One way to do this is to use a sliding 2D window from the top of the image down to the bottom of the image (or left to right). Let's start by specifying how large our sliding window is.
End of explanation
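Plugging in the numbers used here (sr = 22050, hop_size = 256), the integer divisions above work out to:
# n_frames_per_second = 22050 // 256 = 86
# n_frames            = 86 // 2      = 43   (roughly 500 ms of audio per window)
# frame_hops          = 86 // 4      = 21   (windows advance by roughly 250 ms)
print(n_frames_per_second, n_frames, frame_hops)   # 86 43 21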
Xs = []
ys = []
for hop_i in range(n_hops):
# Creating our sliding window
frames = mag[(hop_i * frame_hops):(hop_i * frame_hops + n_frames)]
# Store them with a new 3rd axis and as a logarithmic scale
# We'll ensure that we aren't taking a log of 0 just by adding
# a small value, also known as epsilon.
Xs.append(np.log(np.abs(frames[..., np.newaxis]) + 1e-10))
# And then store the label
ys.append(0)
Explanation: Now we can collect all the sliding windows into a list of Xs and label them based on being music as 0 or speech as 1 into a collection of ys.
End of explanation
plt.imshow(Xs[0][..., 0])
plt.title('label:{}'.format(ys[1]))
Explanation: The code below will perform this for us, as well as create the inputs and outputs to our classification network by specifying 0s for the music dataset and 1s for the speech dataset. Let's just take a look at the first sliding window, and see it's label:
End of explanation
plt.imshow(Xs[1][..., 0])
plt.title('label:{}'.format(ys[1]))
Explanation: Since this was the first audio file of the music dataset, we've set it to a label of 0. And now the second one, which should have 50% overlap with the previous one, and still a label of 0:
End of explanation
# Store every magnitude frame and its label of being music: 0 or speech: 1
Xs, ys = [], []
# Let's start with the music files
for i in music:
# Load the ith file:
s = utils.load_audio(i)
# Now take the dft of it (take a DSP course!):
re, im = dft.dft_np(s, fft_size=fft_size, hop_size=hop_size)
# And convert the complex representation to magnitudes/phases (take a DSP course!):
mag, phs = dft.ztoc(re, im)
# This is how many sliding windows we have:
n_hops = (len(mag) - n_frames) // frame_hops
# Let's extract them all:
for hop_i in range(n_hops):
# Get the current sliding window
frames = mag[(hop_i * frame_hops):(hop_i * frame_hops + n_frames)]
# We'll take the log magnitudes, as this is a nicer representation:
this_X = np.log(np.abs(frames[..., np.newaxis]) + 1e-10)
# And store it:
Xs.append(this_X)
# And be sure that we store the correct label of this observation:
ys.append(0)
# Now do the same thing with speech (TODO)!
for i in speech:
# Load the ith file:
s = ...
# Now take the dft of it (take a DSP course!):
re, im = ...
# And convert the complex representation to magnitudes/phases (take a DSP course!):
mag, phs = ...
# This is how many sliding windows we have:
n_hops = (len(mag) - n_frames) // frame_hops
# Let's extract them all:
for hop_i in range(n_hops):
# Get the current sliding window
frames = mag[(hop_i * frame_hops):(hop_i * frame_hops + n_frames)]
# We'll take the log magnitudes, as this is a nicer representation:
this_X = np.log(np.abs(frames[..., np.newaxis]) + 1e-10)
# And store it:
Xs.append(this_X)
# Make sure we use the right label (TODO!)!
ys.append...
# Convert them to an array:
Xs = np.array(Xs)
ys = np.array(ys)
print(Xs.shape, ys.shape)
# Just to make sure you've done it right. If you've changed any of the
# parameters of the dft/hop size, then this will fail. If that's what you
# wanted to do, then don't worry about this assertion.
assert(Xs.shape == (15360, 43, 256, 1) and ys.shape == (15360,))
Explanation: So hopefully you can see that the window is sliding down 250 milliseconds at a time, and since our window is 500 ms long, or half a second, it has 50% new content at the bottom. Let's do this for every audio file now:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
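For reference, one way to fill in the elided speech branch, mirroring the music branch line for line; the only real difference is the label of 1.
for i in speech:
    s = utils.load_audio(i)
    re, im = dft.dft_np(s, fft_size=fft_size, hop_size=hop_size)
    mag, phs = dft.ztoc(re, im)
    n_hops = (len(mag) - n_frames) // frame_hops
    for hop_i in range(n_hops):
        frames = mag[(hop_i * frame_hops):(hop_i * frame_hops + n_frames)]
        Xs.append(np.log(np.abs(frames[..., np.newaxis]) + 1e-10))
        ys.append(1)   # speech is labeled 1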
plt.imshow(Xs[0][..., 0])
plt.title('label:{}'.format(ys[0]))
Explanation: Just to confirm it's doing the same as above, let's plot the first magnitude matrix:
End of explanation
n_observations, n_height, n_width, n_channels = Xs.shape
Explanation: Let's describe the shape of our input to the network:
End of explanation
ds = datasets.Dataset(Xs=..., ys=..., split=[0.8, 0.1, 0.1], one_hot=True)
Explanation: We'll now use the Dataset object I've provided for you under libs/datasets.py. This will accept the Xs, ys, a list defining our dataset split into training, validation, and testing proportions, and a parameter one_hot stating whether we want our ys to be converted to a one hot vector or not.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
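A possible completion, passing in the arrays built above and keeping the split and one-hot options from the template:
ds = datasets.Dataset(Xs=Xs, ys=ys, split=[0.8, 0.1, 0.1], one_hot=True)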
Xs_i, ys_i = next(ds.train.next_batch())
# Notice the shape this returns. This will become the shape of our input and output of the network:
print(Xs_i.shape, ys_i.shape)
assert(ys_i.shape == (100, 2))
Explanation: Let's take a look at the batch generator this object provides. We can all any of the splits, the train, valid, or test splits as properties of the object. And each split provides a next_batch method which gives us a batch generator. We should have specified that we wanted one_hot=True to have our batch generator return our ys with 2 features, one for each possible class.
End of explanation
plt.imshow(Xs_i[0, :, :, 0])
plt.title('label:{}'.format(ys_i[0]))
Explanation: Let's take a look at the first element of the randomized batch:
End of explanation
plt.imshow(Xs_i[1, :, :, 0])
plt.title('label:{}'.format(ys_i[1]))
Explanation: And the second one:
End of explanation
tf.reset_default_graph()
# Create the input to the network. This is a 4-dimensional tensor!
# Don't forget that we should use None as a shape for the first dimension
# Recall that we are using sliding windows of our magnitudes (TODO):
X = tf.placeholder(name='X', shape=..., dtype=tf.float32)
# Create the output to the network. This is our one hot encoding of 2 possible values (TODO)!
Y = tf.placeholder(name='Y', shape=..., dtype=tf.float32)
Explanation: So we have a randomized order in minibatches generated for us, and the ys are represented as a one-hot vector with each class, music and speech, encoded as a 0 or 1. Since the next_batch method is a generator, we can use it in a loop until it is exhausted to run through our entire dataset in mini-batches.
<a name="creating-the-network"></a>
Creating the Network
Let's now create the neural network. Recall our input X is 4-dimensional, with the same shape that we've just seen as returned from our batch generator above. We're going to create a deep convolutional neural network with a few layers of convolution and 2 finals layers which are fully connected. The very last layer must have only 2 neurons corresponding to our one-hot vector of ys, so that we can properly measure the cross-entropy (just like we did with MNIST and our 10 element one-hot encoding of the digit label). First let's create our placeholders:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
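One way to fill in the shapes, using the dimensions unpacked from Xs.shape earlier and 2 outputs for the one-hot labels:
X = tf.placeholder(name='X', shape=[None, n_height, n_width, n_channels], dtype=tf.float32)
Y = tf.placeholder(name='Y', shape=[None, 2], dtype=tf.float32)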
# TODO: Explore different numbers of layers, and sizes of the network
n_filters = [9, 9, 9, 9]
# Now let's loop over our n_filters and create the deep convolutional neural network
H = X
for layer_i, n_filters_i in enumerate(n_filters):
# Let's use the helper function to create our connection to the next layer:
# TODO: explore changing the parameters here:
H, W = utils.conv2d(
H, n_filters_i, k_h=3, k_w=3, d_h=2, d_w=2,
name=str(layer_i))
# And use a nonlinearity
# TODO: explore changing the activation here:
H = tf.nn.relu(H)
# Just to check what's happening:
print(H.get_shape().as_list())
Explanation: Let's now create our deep convolutional network. Start by first creating the convolutional layers. Try different numbers of layers, different numbers of filters per layer, different activation functions, and varying the parameters to get the best training/validation score when training below. Try first using a kernel size of 3 and a stride of 1. You can use the utils.conv2d function to help you create the convolution.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
# Connect the last convolutional layer to a fully connected network (TODO)!
fc, W = utils.linear(H, ...
# And another fully connected layer, now with just 2 outputs, the number of outputs that our
# one hot encoding has (TODO)!
Y_pred, W = utils.linear(fc, ...
Explanation: We'll now connect our last convolutional layer to a fully connected layer of 100 neurons. This is essentially combining the spatial information, thus losing the spatial information. You can use the utils.linear function to do this, which will internally also reshape the 4-d tensor to a 2-d tensor so that it can be connected to a fully-connected layer (i.e. perform a matrix multiplication).
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
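A sketch of one possible completion, assuming utils.linear(x, n_output, name=..., activation=...) returns the layer output and its weights as in the lecture code; the 100-unit size comes from the text above, and a softmax on the last layer gives probabilities suitable for the binary cross entropy used below.
fc, W = utils.linear(H, 100, name='fc', activation=tf.nn.relu)
Y_pred, W = utils.linear(fc, 2, name='pred', activation=tf.nn.softmax)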
loss = utils.binary_cross_entropy(Y_pred, Y)
cost = tf.reduce_mean(tf.reduce_sum(loss, 1))
Explanation: We'll now create our cost. Unlike the MNIST network, we're going to use a binary cross entropy as we only have 2 possible classes. You can use the utils.binary_cross_entropy function to help you with this. Remember, the final cost measure the average loss of your batches.
End of explanation
predicted_y = tf.argmax(...
actual_y = tf.argmax(...
correct_prediction = tf.equal(...
accuracy = tf.reduce_mean(...
Explanation: Just like in MNIST, we'll now also create a measure of accuracy by finding the prediction of our network. This is just for us to monitor the training and is not used to optimize the weights of the network! Look back to the MNIST network in the lecture if you are unsure of how this works (it is exactly the same):
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
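One possible completion, identical in spirit to the MNIST example from the lecture:
predicted_y = tf.argmax(Y_pred, 1)
actual_y = tf.argmax(Y, 1)
correct_prediction = tf.equal(predicted_y, actual_y)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))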
learning_rate = ...
optimizer = tf.train.AdamOptimizer(...).minimize(...)
Explanation: We'll now create an optimizer and train our network:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
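A reasonable starting point; the learning rate is just a common default, not a tuned value.
learning_rate = 0.001
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)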
# Explore these parameters: (TODO)
n_epochs = 10
batch_size = 200
# Create a session and init!
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# Now iterate over our dataset n_epoch times
for epoch_i in range(n_epochs):
print('Epoch: ', epoch_i)
# Train
this_accuracy = 0
its = 0
# Do our mini batches:
for Xs_i, ys_i in ds.train.next_batch(batch_size):
# Note here: we are running the optimizer so
# that the network parameters train!
this_accuracy += sess.run([accuracy, optimizer], feed_dict={
X:Xs_i, Y:ys_i})[0]
its += 1
print(this_accuracy / its)
print('Training accuracy: ', this_accuracy / its)
# Validation (see how the network does on unseen data).
this_accuracy = 0
its = 0
# Do our mini batches:
for Xs_i, ys_i in ds.valid.next_batch(batch_size):
# Note here: we are NOT running the optimizer!
# we only measure the accuracy!
this_accuracy += sess.run(accuracy, feed_dict={
X:Xs_i, Y:ys_i})
its += 1
print('Validation accuracy: ', this_accuracy / its)
Explanation: Now we're ready to train. This is a pretty simple dataset for a deep convolutional network. As a result, I've included code which demonstrates how to monitor validation performance. A validation set is data that the network has never seen, and is not used for optimizing the weights of the network. We use validation to better understand how well the performance of a network "generalizes" to unseen data.
You can easily run the risk of overfitting to the training set of this problem. Overfitting simply means that the number of parameters in our model is so high that we are not generalizing our model, and are instead trying to model each individual point, rather than the general cause of the data. This is a very common problem that can be addressed by using fewer parameters, or enforcing regularization techniques which we didn't have a chance to cover (dropout, batch norm, l2, augmenting the dataset, and others).
For this dataset, if you notice that your validation set is performing worse than your training set, then you know you have overfit! You should be able to easily get 97+% on the validation set within < 10 epochs. If you've got great training performance, but poor validation performance, then you likely have "overfit" to the training dataset, and are unable to generalize to the validation set. Try varying the network definition, number of filters/layers until you get 97+% on your validation set!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
g = tf.get_default_graph()
[op.name for op in g.get_operations()]
Explanation: Let's try to inspect how the network is accomplishing this task, just like we did with the MNIST network. First, let's see what the names of our operations in our network are.
End of explanation
g = tf.get_default_graph()
W = ...
assert(W.dtype == np.float32)
m = montage_filters(W)
plt.figure(figsize=(5, 5))
plt.imshow(m)
plt.imsave(arr=m, fname='audio.png')
Explanation: Now let's visualize the W tensor's weights for the first layer using the utils function montage_filters, just like we did for the MNIST dataset during the lecture. Recall from the lecture that this is another great way to inspect the performance of your network. If many of the filters look uniform, then you know the network is either under or overperforming. What you want to see are filters that look like they are responding to information such as edges or corners.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
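A possible way to fetch the first layer's kernels, assuming the first convolution was created with name='0' as in the loop above, so its weights live at '0/W:0':
W = sess.run(g.get_tensor_by_name('0/W:0'))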
g = tf.get_default_graph()
for layer_i in range(len(n_filters)):
W = sess.run(g.get_tensor_by_name('{}/W:0'.format(layer_i)))
plt.figure(figsize=(5, 5))
plt.imshow(montage_filters(W))
plt.title('Layer {}\'s Learned Convolution Kernels'.format(layer_i))
Explanation: We can also look at every layer's filters using a loop:
End of explanation
utils.build_submission('session-3.zip',
('test.png',
'recon.png',
'sorted.png',
'manifold.png',
'test_xs.png',
'audio.png',
'session-3.ipynb'))
Explanation: In the next session, we'll learn some much more powerful methods of inspecting such networks.
<a name="assignment-submission"></a>
Assignment Submission
After you've completed the notebook, create a zip file of the current directory using the code below. This code will make sure you have included this completed ipython notebook and the following files named exactly as:
<pre>
session-3/
session-3.ipynb
test.png
recon.png
sorted.png
manifold.png
test_xs.png
audio.png
</pre>
You'll then submit this zip file for your third assignment on Kadenze for "Assignment 3: Build Unsupervised and Supervised Networks"! Remember to post Part Two to the Forum to receive full credit! If you have any questions, remember to reach out on the forums and connect with your peers or with me.
To get assessed, you'll need to be a premium student! This will allow you to build an online portfolio of all of your work and receive grades. If you aren't already enrolled as a student, register now at http://www.kadenze.com/ and join the #CADL community to see what your peers are doing! https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info
Also, if you share any of the GIFs on Facebook/Twitter/Instagram/etc..., be sure to use the #CADL hashtag so that other students can find your work!
End of explanation |
3,329 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, we will work through a simulation of psychophysiological interaction
Step1: Load the data generated using the DCM forward model. In this model, there should be a significant PPI between roi 0 and rois 2 and 4 (see the B matrix in the DCM notebook)
Step2: Set up the PPI model, using ROI 0 as the seed
Step3: Let's plot the relation between the ROIs as a function of the task | Python Code:
import os,sys
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
sys.path.insert(0,'../')
from utils.mkdesign import create_design_singlecondition
from nipy.modalities.fmri.hemodynamic_models import spm_hrf,compute_regressor
from utils.make_data import make_continuous_data
from statsmodels.tsa.arima_process import arma_generate_sample
import scipy.stats
import seaborn as sns
sns.set_style("white")
results_dir = os.path.abspath("../results")
if not os.path.exists(results_dir):
os.mkdir(results_dir)
Explanation: In this notebook, we will work through a simulation of psychophysiological interaction
End of explanation
dcmdata=numpy.load(os.path.join(results_dir,'dcmdata.npz'))
data_conv=dcmdata['data']
# downsample to 1 second TR
data=data_conv[range(0,data_conv.shape[0],100)]
ntp=data.shape[0]
# create a blocked design
d,design=create_design_singlecondition(blockiness=1.0,deslength=ntp,blocklength=20,offset=20)
regressor,_=compute_regressor(design,'spm',numpy.arange(0,ntp))
for i in range(data.shape[1]):
plt.plot(data[:,i], label="ROI %d"%i)
plt.legend(loc="best")
plt.xlim([-50,300])
Explanation: Load the data generated using the DCM forward model. In this model, there should be a significant PPI between roi 0 and rois 2 and 4 (see the B matrix in the DCM notebook)
End of explanation
seed=0
X=numpy.vstack((regressor[:,0],data[:,seed],regressor[:,0]*data[:,seed],numpy.ones(data.shape[0]))).T
hat_mtx=numpy.linalg.inv(X.T.dot(X)).dot(X.T)
for i in range(data.shape[1]):
beta_hat=hat_mtx.dot(data[:,i])
resid=data[:,i] - X.dot(beta_hat)
sigma2hat=(resid.dot(resid))/(X.shape[0] - X.shape[1])
c=numpy.array([0,0,1,0]) # contrast for PPI
t=c.dot(beta_hat)/numpy.sqrt(c.dot(numpy.linalg.inv(X.T.dot(X)).dot(c))*sigma2hat)
print ('ROI %d:'%i, t, 1.0 - scipy.stats.t.cdf(t,X.shape[0] - X.shape[1]))
import seaborn as sns
sns.heatmap(X, vmin=0, xticklabels=["task", "seed", "task*seed", "mean"],
yticklabels=False)
Explanation: Set up the PPI model, using ROI 0 as the seed
End of explanation
on_tp=numpy.where(regressor>0.9)[0]
off_tp=numpy.where(regressor<0.01)[0]
roinum=4
plt.scatter(data[on_tp,0],data[on_tp,roinum], label="task ON")
fit = numpy.polyfit(data[on_tp,0],data[on_tp,roinum],1)
plt.plot(data[on_tp,0],data[on_tp,0]*fit[0] +fit[1])
plt.scatter(data[off_tp,0],data[off_tp,roinum],color='red', label="task OFF")
fit = numpy.polyfit(data[off_tp,0],data[off_tp,roinum],1)
plt.plot(data[off_tp,0],data[off_tp,0]*fit[0] +fit[1],color='red')
plt.xlabel("activation in ROI 0")
plt.ylabel("activation in ROI %d"%roinum)
plt.legend(loc="best")
Explanation: Let's plot the relation between the ROIs as a function of the task
End of explanation |
3,330 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Forecasting
In this tutorial, we will demonstrate how to build a model for time series forecasting in NumPyro. Specifically, we will replicate the Seasonal, Global Trend (SGT) model from the Rlgt
Step1: Data
First, lets import and take a look at the dataset.
Step2: The time series has a length of 114 (a data point for each year), and by looking at the plot, we can observe seasonality in this dataset, which is the recurrence of similar patterns at specific time periods. e.g. in this dataset, we observe a cyclical pattern every 10 years, but there is also a less obvious but clear spike in the number of trappings every 40 years. Let us see if we can model this effect in NumPyro.
In this tutorial, we will use the first 80 values for training and the last 34 values for testing.
Step3: Model
The model we are going to use is called Seasonal, Global Trend, which when tested on 3003 time series of the M-3 competition, has been known to outperform other models originally participating in the competition
Step4: Note that level and s are updated recursively while we collect the expected value at each time step. NumPyro uses JAX in the backend to JIT compile many critical parts of the NUTS algorithm, including the verlet integrator and the tree building process. However, doing so using Python's for loop in the model will result in a long compilation time for the model, so we use scan - which is a wrapper of lax.scan with supports for NumPyro primitives and handlers. A detailed explanation for using this utility can be found in NumPyro documentation. Here we use it to collect y values while the triple (level, s, moving_sum) plays the role of carrying state.
Another note is that instead of declaring the observation site y in transition_fn
python
numpyro.sample("y", dist.StudentT(nu, exp_val, omega), obs=y[t])
, we have used condition handler here. The reason is we also want to use this model for forecasting. In forecasting, future values of y are non-observable, so obs=y[t] does not make sense when t >= len(y) (caution
Step5: Now, let us run $4$ MCMC chains (using the No-U-Turn Sampler algorithm) with $5000$ warmup steps and $5000$ sampling steps per each chain. The returned value will be a collection of $20000$ samples.
Step6: Forecasting
Given samples from mcmc, we want to do forecasting for the testing dataset y_test. NumPyro provides a convenient utility Predictive to get predictive distribution. Let's see how to use it to get forecasting values.
Notice that in the sgt model defined above, there is a keyword future which controls the execution of the model - depending on whether future > 0 or future == 0. The following code predicts the last 34 values from the original time-series.
Step7: Let's get sMAPE, root mean square error of the prediction, and visualize the result with the mean prediction and the 90% highest posterior density interval (HPDI).
Step8: Finally, let's plot the result to verify that we get the expected one. | Python Code:
!pip install -q numpyro@git+https://github.com/pyro-ppl/numpyro
import os
import matplotlib.pyplot as plt
import pandas as pd
from IPython.display import set_matplotlib_formats
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.contrib.control_flow import scan
from numpyro.diagnostics import autocorrelation, hpdi
from numpyro.infer import MCMC, NUTS, Predictive
if "NUMPYRO_SPHINXBUILD" in os.environ:
set_matplotlib_formats("svg")
numpyro.set_host_device_count(4)
assert numpyro.__version__.startswith("0.9.2")
Explanation: Time Series Forecasting
In this tutorial, we will demonstrate how to build a model for time series forecasting in NumPyro. Specifically, we will replicate the Seasonal, Global Trend (SGT) model from the Rlgt: Bayesian Exponential Smoothing Models with Trend Modifications package. The time series data that we will use for this tutorial is the lynx dataset, which contains annual numbers of lynx trappings from 1821 to 1934 in Canada.
End of explanation
URL = "https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/datasets/lynx.csv"
lynx = pd.read_csv(URL, index_col=0)
data = lynx["value"].values
print("Length of time series:", data.shape[0])
plt.figure(figsize=(8, 4))
plt.plot(lynx["time"], data)
plt.show()
Explanation: Data
First, lets import and take a look at the dataset.
End of explanation
y_train, y_test = jnp.array(data[:80], dtype=jnp.float32), data[80:]
Explanation: The time series has a length of 114 (a data point for each year), and by looking at the plot, we can observe seasonality in this dataset, which is the recurrence of similar patterns at specific time periods. e.g. in this dataset, we observe a cyclical pattern every 10 years, but there is also a less obvious but clear spike in the number of trappings every 40 years. Let us see if we can model this effect in NumPyro.
In this tutorial, we will use the first 80 values for training and the last 34 values for testing.
End of explanation
def sgt(y, seasonality, future=0):
# heuristically, standard derivation of Cauchy prior depends on
# the max value of data
cauchy_sd = jnp.max(y) / 150
# NB: priors' parameters are taken from
# https://github.com/cbergmeir/Rlgt/blob/master/Rlgt/R/rlgtcontrol.R
nu = numpyro.sample("nu", dist.Uniform(2, 20))
powx = numpyro.sample("powx", dist.Uniform(0, 1))
sigma = numpyro.sample("sigma", dist.HalfCauchy(cauchy_sd))
offset_sigma = numpyro.sample(
"offset_sigma", dist.TruncatedCauchy(low=1e-10, loc=1e-10, scale=cauchy_sd)
)
coef_trend = numpyro.sample("coef_trend", dist.Cauchy(0, cauchy_sd))
pow_trend_beta = numpyro.sample("pow_trend_beta", dist.Beta(1, 1))
# pow_trend takes values from -0.5 to 1
pow_trend = 1.5 * pow_trend_beta - 0.5
pow_season = numpyro.sample("pow_season", dist.Beta(1, 1))
level_sm = numpyro.sample("level_sm", dist.Beta(1, 2))
s_sm = numpyro.sample("s_sm", dist.Uniform(0, 1))
init_s = numpyro.sample("init_s", dist.Cauchy(0, y[:seasonality] * 0.3))
def transition_fn(carry, t):
level, s, moving_sum = carry
season = s[0] * level**pow_season
exp_val = level + coef_trend * level**pow_trend + season
exp_val = jnp.clip(exp_val, a_min=0)
# use expected vale when forecasting
y_t = jnp.where(t >= N, exp_val, y[t])
moving_sum = (
moving_sum + y[t] - jnp.where(t >= seasonality, y[t - seasonality], 0.0)
)
level_p = jnp.where(t >= seasonality, moving_sum / seasonality, y_t - season)
level = level_sm * level_p + (1 - level_sm) * level
level = jnp.clip(level, a_min=0)
new_s = (s_sm * (y_t - level) / season + (1 - s_sm)) * s[0]
# repeat s when forecasting
new_s = jnp.where(t >= N, s[0], new_s)
s = jnp.concatenate([s[1:], new_s[None]], axis=0)
omega = sigma * exp_val**powx + offset_sigma
y_ = numpyro.sample("y", dist.StudentT(nu, exp_val, omega))
return (level, s, moving_sum), y_
N = y.shape[0]
level_init = y[0]
s_init = jnp.concatenate([init_s[1:], init_s[:1]], axis=0)
moving_sum = level_init
with numpyro.handlers.condition(data={"y": y[1:]}):
_, ys = scan(
transition_fn, (level_init, s_init, moving_sum), jnp.arange(1, N + future)
)
if future > 0:
numpyro.deterministic("y_forecast", ys[-future:])
Explanation: Model
The model we are going to use is called Seasonal, Global Trend, which when tested on 3003 time series of the M-3 competition, has been known to outperform other models originally participating in the competition:
\begin{align}
\text{exp-val}{t} &= \text{level}{t-1} + \text{coef-trend} \times \text{level}{t-1}^{\text{pow-trend}} + \text{s}_t \times \text{level}{t-1}^{\text{pow-season}}, \
\sigma_{t} &= \sigma \times \text{exp-val}{t}^{\text{powx}} + \text{offset}, \
y{t} &\sim \text{StudentT}(\nu, \text{exp-val}{t}, \sigma{t})
\end{align}
, where level and s follows the following recursion rules:
\begin{align}
\text{level-p} &=
\begin{cases}
y_t - \text{s}t \times \text{level}{t-1}^{\text{pow-season}} & \text{if } t \le \text{seasonality}, \
\text{Average} \left[y(t - \text{seasonality} + 1), \ldots, y(t)\right] & \text{otherwise},
\end{cases} \
\text{level}{t} &= \text{level-sm} \times \text{level-p} + (1 - \text{level-sm}) \times \text{level}{t-1}, \
\text{s}{t + \text{seasonality}} &= \text{s-sm} \times \frac{y{t} - \text{level}{t}}{\text{level}{t-1}^{\text{pow-trend}}}
+ (1 - \text{s-sm}) \times \text{s}_{t}.
\end{align}
A more detailed explanation for SGT model can be found in this vignette from the authors of the Rlgt package. Here we summarize the core ideas of this model:
Student's t-distribution, which has heavier tails than normal distribution, is used for the likelihood.
The expected value exp_val consists of a trending component and a seasonal component:
The trend is governed by the map $x \mapsto x + ax^b$, where $x$ is level, $a$ is coef_trend, and $b$ is pow_trend. Note that when $b \sim 0$, the trend is linear with $a$ is the slope, and when $b \sim 1$, the trend is exponential with $a$ is the rate. So that function can cover a large family of trend.
When time changes, level and s are updated to new values. Coefficients level_sm and s_sm are used to make the transition smoothly.
When powx is near $0$, the error $\sigma_t$ will be nearly constant while when powx is near $1$, the error will be propotional to the expected value.
There are several varieties of SGT. In this tutorial, we use generalized seasonality and seasonal average method.
We are ready to specify the model using NumPyro primitives. In NumPyro, we use the primitive sample(name, prior) to declare a latent random variable with a corresponding prior. These primitives can have custom interpretations depending on the effect handlers that are used by NumPyro inference algorithms in the backend. e.g. we can condition on specific values using the condition handler, or record values at these sample sites in the execution trace using the trace handler. Note that these details are not important for specifying the model, or running inference, but curious readers are encouraged to read the tutorial on effect handlers in Pyro.
End of explanation
print("Lag values sorted according to their autocorrelation values:\n")
print(jnp.argsort(autocorrelation(y_train))[::-1])
Explanation: Note that level and s are updated recursively while we collect the expected value at each time step. NumPyro uses JAX in the backend to JIT compile many critical parts of the NUTS algorithm, including the verlet integrator and the tree building process. However, doing so using Python's for loop in the model will result in a long compilation time for the model, so we use scan - which is a wrapper of lax.scan with supports for NumPyro primitives and handlers. A detailed explanation for using this utility can be found in NumPyro documentation. Here we use it to collect y values while the triple (level, s, moving_sum) plays the role of carrying state.
Another note is that instead of declaring the observation site y in transition_fn
python
numpyro.sample("y", dist.StudentT(nu, exp_val, omega), obs=y[t])
, we have used condition handler here. The reason is we also want to use this model for forecasting. In forecasting, future values of y are non-observable, so obs=y[t] does not make sense when t >= len(y) (caution: index out-of-bound errors do not get raised in JAX, e.g. jnp.arange(3)[10] == 2). Using condition, when the length of scan is larger than the length of the conditioned/observed site, unobserved values will be sampled from the distribution of that site.
Inference
First, we want to choose a good value for seasonality. Following the demo in Rlgt, we will set seasonality=38. Indeed, this value can be guessed by looking at the plot of the training data, where the second order seasonality effect has a periodicity around $40$ years. Note that $38$ is also one of the highest-autocorrelation lags.
End of explanation
%%time
kernel = NUTS(sgt)
mcmc = MCMC(kernel, num_warmup=5000, num_samples=5000, num_chains=4)
mcmc.run(random.PRNGKey(0), y_train, seasonality=38)
mcmc.print_summary()
samples = mcmc.get_samples()
Explanation: Now, let us run $4$ MCMC chains (using the No-U-Turn Sampler algorithm) with $5000$ warmup steps and $5000$ sampling steps per each chain. The returned value will be a collection of $20000$ samples.
End of explanation
predictive = Predictive(sgt, samples, return_sites=["y_forecast"])
forecast_marginal = predictive(random.PRNGKey(1), y_train, seasonality=38, future=34)[
"y_forecast"
]
Explanation: Forecasting
Given samples from mcmc, we want to do forecasting for the testing dataset y_test. NumPyro provides a convenient utility Predictive to get predictive distribution. Let's see how to use it to get forecasting values.
Notice that in the sgt model defined above, there is a keyword future which controls the execution of the model - depending on whether future > 0 or future == 0. The following code predicts the last 34 values from the original time-series.
End of explanation
y_pred = jnp.mean(forecast_marginal, axis=0)
sMAPE = jnp.mean(jnp.abs(y_pred - y_test) / (y_pred + y_test)) * 200
msqrt = jnp.sqrt(jnp.mean((y_pred - y_test) ** 2))
print("sMAPE: {:.2f}, rmse: {:.2f}".format(sMAPE, msqrt))
Explanation: Let's get sMAPE, root mean square error of the prediction, and visualize the result with the mean prediction and the 90% highest posterior density interval (HPDI).
End of explanation
plt.figure(figsize=(8, 4))
plt.plot(lynx["time"], data)
t_future = lynx["time"][80:]
hpd_low, hpd_high = hpdi(forecast_marginal)
plt.plot(t_future, y_pred, lw=2)
plt.fill_between(t_future, hpd_low, hpd_high, alpha=0.3)
plt.title("Forecasting lynx dataset with SGT model (90% HPDI)")
plt.show()
Explanation: Finally, let's plot the result to verify that we get the expected one.
End of explanation |
3,331 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Passive and active colloidal chemotaxis in a microfluidic channel
Step5: Below, we plot the mean-square displacement (MSD) of the dimer in cartesian coordinates.
There are thus three components. The z component saturates because of the confinement.
The x and y components result from a mixing of the parallel and transverse diffusion
coefficients.
The fit is for the long-time behaviour of the x-y MSD.
Step6: We use the velocity autocorrelation function (VACF) of the transverse and
parallel components of the velocity.
Integrating those functions yields the transverse and parallel diffusion
coefficients.
The integration is stopped when it reaches a plateau. This is done by setting
a limit in time, that is highlighted by reference lines in the plots.
We proceed in the same fashion for the planar angle diffusion coefficient. | Python Code:
%matplotlib inline
import h5py
import matplotlib.pyplot as plt
from matplotlib.figure import SubplotParams
import numpy as np
from scipy.signal import fftconvolve
from scipy.optimize import leastsq, curve_fit
from scipy.integrate import simps, cumtrapz
from glob import glob
plt.rcParams['figure.figsize'] = (12, 6)
plt.rcParams['figure.subplot.hspace'] = 0.25
plt.rcParams['figure.subplot.wspace'] = 0.25
plt.rcParams['figure.subplot.left'] = 0.17
plt.rcParams['axes.labelsize'] = 16
def expfitfunc(t, f0, tau):
Exponential fitting function
return f0*np.exp(-t/tau)
def fitfunc(p, t):
Linear fitting function
return p[0] + p[1]*t
def errfunc(p, t, y):
Error function for `fitfunc`
return fitfunc(p, t) - y
def get_block_data(group, name, dim=3):
Return the time and correlation function for the data
read from RMPCDMD output files.
block = group[name]['value'][:]
count = group[name]['count'][:]
block /= count.reshape((-1, 1, 1, 1))
t_data = [np.array([0])]
data = [block[0,:1,:,:].reshape((-1,dim))]
dt = group[name]['time'][()]
for i in range(block.shape[0]):
t = dt*np.arange(block.shape[1])*block.shape[1]**i
t_data.append(t[1:])
data.append(block[i,1:,:,:].reshape((-1,dim)))
return np.concatenate(t_data), np.concatenate(data)
# Collect simulation data
runs = glob('cceq_*.h5')
runs.sort()
msd_all = []
vacf_all = []
tvacf_all = []
pvacf_all = []
wacf_all = []
for f in runs:
a = h5py.File(f, 'r')
group = a['block_correlators']
msd_t, msd_data = get_block_data(group, 'mean_square_displacement')
msd_all.append(msd_data)
vacf_t, vacf_data = get_block_data(group, 'velocity_autocorrelation')
vacf_all.append(vacf_data)
do_pvacf = 'parallel_velocity_autocorrelation' in group
if do_pvacf:
pvacf_t, pvacf_data = get_block_data(group, 'parallel_velocity_autocorrelation')
pvacf_all.append(pvacf_data)
do_tvacf = 'transverse_velocity_autocorrelation' in group
if do_tvacf:
tvacf_t, tvacf_data = get_block_data(group, 'transverse_velocity_autocorrelation')
tvacf_all.append(tvacf_data)
do_wacf = 'planar_angular_velocity_autocorrelation' in group
if do_wacf:
wacf_t, w_data = get_block_data(group, 'planar_angular_velocity_autocorrelation', dim=1)
wacf_all.append(w_data.flatten())
a.close()
msd_all = np.array(msd_all)
vacf_all = np.array(vacf_all)
pvacf_all = np.array(pvacf_all)
tvacf_all = np.array(tvacf_all)
wacf_all = np.array(wacf_all)
Explanation: Passive and active colloidal chemotaxis in a microfluidic channel: mesoscopic and stochastic models
Author: Pierre de Buyl
Supplemental information to the article by L. Deprez and P. de Buyl
This notebook reports the characterization of the diffusion coefficients for a rigid dimer
confined between plates.
The data originates from the RMPCDMD simulation program. Please read its documentation and the
published paper for meaningful use of this notebook.
The correlation functions are computed online in RMPCDMD and stored in the H5MD files. They are read here
and integrated to obtain the diffusion coefficients. A time limit on the integral is set for all integrals,
and displayed in the figures, to obtain the value of the plateau of the running integral for D.
End of explanation
# Plot and fit the mean-squared displacement
plt.ylabel(r'$\langle (\mathbf{r}(\tau) - \mathbf{r}(0))^2 \rangle$')
m = msd_all.mean(axis=0)
# Plot all three components
plt.plot(msd_t, m, marker='o')
# Sum only xy components
m = m[...,:2].sum(axis=-1)
# Fit data to t>100
mask = msd_t>100
solution, ierr = leastsq(errfunc, [0, 0.1], args=(msd_t[mask], m[mask]))
intercept, D = solution
# MSD = 2 d D t = 4 D t -> The coefficient of the linear fit must be divided by 4
# as the diffusion in z is bounded by the confining plates.
D = D/4
plt.plot(msd_t, fitfunc((intercept, 2*D), msd_t))
plt.xlabel(r'$\tau$')
plt.loglog()
# Via the MSD, we can only access the sum of D_parallel and D_perp
print("D_parallel + D_perp = ", 2*D)
Explanation: Below, we plot the mean-square displacement (MSD) of the dimer in cartesian coordinates.
There are thus three components. The z component saturates because of the confinement.
The x and y components result from a mixing of the parallel and transverse diffusion
coefficients.
The fit is for the long-time behaviour of the x-y MSD.
End of explanation
# Integrate the VACF
limit = 800
params = SubplotParams(hspace=0.08, wspace=0.15)
plt.figure(figsize=(14,8), subplotpars=params)
# Transverse VACF
m = tvacf_all[...,:2].sum(axis=-1).mean(axis=0)
ax1 = plt.subplot(221)
plt.plot(tvacf_t, m, marker='o')
plt.axvline(limit)
plt.xscale('log')
plt.xticks([])
plt.ylabel(r'Transv. VACF')
# Integral of transverse VACF
ax1_int = plt.subplot(222)
plt.plot(tvacf_t, cumtrapz(m, tvacf_t, initial=0))
plt.axvline(limit)
plt.xscale('log')
plt.xticks([])
idx = np.searchsorted(tvacf_t, limit)
integrated_Dt = simps(m[:idx], tvacf_t[:idx])
plt.axhline(integrated_Dt)
ax1_int.yaxis.tick_right()
ax1_int.yaxis.set_label_position('right')
plt.ylabel(r'Integral of transv. VACF')
plt.ylim(-0.0002,0.0025)
# Parallel VACF
ax2 = plt.subplot(223)
m = pvacf_all[...,:2].sum(axis=-1).mean(axis=0)
plt.plot(pvacf_t, m, marker='o')
plt.axvline(limit)
plt.xscale('log')
plt.xlabel(r'$\tau$')
plt.ylabel(r'Parallel VACF')
# Integral of parallel VACF
ax2_int = plt.subplot(224)
plt.plot(pvacf_t, cumtrapz(m, pvacf_t, initial=0))
plt.axvline(limit)
plt.xscale('log')
plt.xlabel(r'$\tau$')
idx = np.searchsorted(pvacf_t, limit)
integrated_Dp = simps(m[:idx], pvacf_t[:idx])
plt.axhline(integrated_Dp)
plt.ylim(-0.0002,0.0025)
ax2_int.yaxis.tick_right()
ax2_int.yaxis.set_label_position('right')
plt.ylabel(r'Integral of parallel VACF')
print('Transverse D:', integrated_Dt)
print('Parallel D:', integrated_Dp)
print("Sum of the D's", integrated_Dt+integrated_Dp)
plt.figure(figsize=(14,4), subplotpars=params)
m = wacf_all.mean(axis=0)
s = wacf_all.std(axis=0)
ax1 = plt.subplot(121)
plt.xscale('log')
plt.plot(wacf_t, m, marker='o')
plt.axvline(limit)
plt.xlim(.5, 1e4)
plt.xlabel(r'$\tau$')
plt.ylabel(r'Orientational ACF')
ax2 = plt.subplot(122)
plt.xscale('log')
ax2.yaxis.tick_right()
ax2.yaxis.set_label_position('right')
plt.plot(wacf_t, cumtrapz(m, wacf_t, initial=0))
plt.xlim(.5, 1e4)
plt.ylim(-1e-6, 2e-4)
plt.xlabel(r'$\tau$')
plt.ylabel(r'Integral of orientational ACF')
limit = 800
idx = np.searchsorted(wacf_t, limit)
plt.axvline(limit)
D_integral = simps(m[:idx], wacf_t[:idx])
print('Integrated rotational diffusion coefficient', D_integral)
plt.axhline(D_integral)
plt.xlabel(r'$\tau$')
Explanation: We use the velocity autocorrelation function (VACF) of the transverse and
parallel components of the velocity.
Integrating those functions yields the transverse and parallel diffusion
coefficients.
The integration is stopped when it reaches a plateau. This is done by setting
a limit in time, that is highlighted by reference lines in the plots.
We proceed in the same fashion for the planar angle diffusion coefficient.
End of explanation |
3,332 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas and Datetimes
Pandas helps ease the pain of timezones, even as it provides many useful tools for generating DateTimeIndex based time Series.
Step1: Timestamp type (for individual datetimes)
Step2: LAB CHALLENGE
What time is it right now in | Python Code:
import pandas as pd
from pandas import DataFrame, Series
import numpy as np
rng = pd.date_range('3/9/2012 9:30', periods=6, freq='D')
rng
type(rng)
rng2 = pd.date_range('3/9/2012 9:30', periods=6, freq='M')
rng2
ts = Series(np.random.randn(len(rng)), index=rng)
type(ts)
ts
ts.index.tz
rng.tz
ts_utc = ts.tz_localize('UTC')
ts_utc.index.tz
ts_utc
ts_pacific = ts_utc.tz_convert('US/Pacific')
ts_pacific
from IPython.display import YouTubeVideo
YouTubeVideo("k4EUTMPuvHo")
ts_eastern = ts_pacific.tz_convert('US/Eastern')
ts_eastern
ts_berlin = ts_pacific.tz_convert('Europe/Berlin')
ts_berlin
Explanation: Pandas and Datetimes
Pandas helps ease the pain of timezones, even as it provides many useful tools for generating DateTimeIndex based time Series.
End of explanation
stamp = pd.Timestamp('2011-03-12 04:00')
stamp2 = pd.Timestamp('Wed May 23 11:35:54 2018') # will this work too?
type(stamp2)
stamp2_pac = stamp2.tz_localize('US/Pacific')
stamp2_pac
stamp2_pac.tz_convert('Europe/Moscow')
stamp2_pac.value # nanoseconds since the UNIX Epoch, Jan 1 1970
stamp2_pac.tz_convert('Europe/Moscow').value
stamp3 = pd.Timestamp('Wed May 23 11:35:54 1950')
stamp3.value # negative number because before the UNIX Epoch
ts
ts_sum = ts_eastern + ts_utc.tz_convert("Europe/Moscow")
ts_sum.index
Explanation: Timestamp type (for individual datetimes)
End of explanation
pd.Timestamp.now(tz='US/Pacific') # getting you started
Explanation: LAB CHALLENGE
What time is it right now in:
Moscow
Berlin
Tokyo
End of explanation |
3,333 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 3
Imports
Step1: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are
Step4: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
Step5: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
Step7: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even futher and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\theta \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
Step8: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
Step9: Use interact to explore the plot_pendulum function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
Explanation: Ordinary Differential Equations Exercise 3
Imports
End of explanation
g = 9.81 # m/s^2
l = 0.5 # length of pendulum, in meters
tmax = 50. # seconds
t = np.linspace(0, tmax, int(100*tmax))
Explanation: Damped, driven nonlinear pendulum
The equations of motion for a simple pendulum of mass $m$, length $l$ are:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta
$$
When a damping and periodic driving force are added the resulting system has much richer and interesting dynamics:
$$
\frac{d^2\theta}{dt^2} = \frac{-g}{\ell}\sin\theta - a \omega - b \sin(\omega_0 t)
$$
In this equation:
$a$ governs the strength of the damping.
$b$ governs the strength of the driving force.
$\omega_0$ is the angular frequency of the driving force.
When $a=0$ and $b=0$, the energy/mass is conserved:
$$E/m =g\ell(1-\cos(\theta)) + \frac{1}{2}\ell^2\omega^2$$
Basic setup
Here are the basic parameters we are going to use for this exercise:
End of explanation
def derivs(y, t, a, b, omega0):
Compute the derivatives of the damped, driven pendulum.
Parameters
----------
y : ndarray
The solution vector at the current time t[i]: [theta[i],omega[i]].
t : float
The current time t[i].
a, b, omega0: float
The parameters in the differential equation.
Returns
-------
dy : ndarray
The vector of derviatives at t[i]: [dtheta[i],domega[i]].
dtheta = y[1]
domega = -g/l*np.sin(y[0])-a*y[1]-b*np.sin(omega0*t)
return [dtheta, domega]
assert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])
def energy(y):
Compute the energy for the state array y.
The state array y can have two forms:
1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.
2. It could be an ndim=2 array where each row is the [theta,omega] at single
time.
Parameters
----------
y : ndarray, list, tuple
A solution vector
Returns
-------
E/m : float (ndim=1) or ndarray (ndim=2)
The energy per mass.
try:
if y.shape[1]:
return g*l*(1-np.cos(y[:,0]))+.5*l**2*y[:,1]**2
except:
Epm = g*l*(1-np.cos(y[0]))+.5*l**2*y[1]**2
return Epm
assert np.allclose(energy(np.array([np.pi,0])),g)
assert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))
Explanation: Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\vec{y}(t) = (\theta(t),\omega(t))$.
End of explanation
a=0
b=0
omega0=2
pend = odeint(derivs, [0,0], t, args=(a,b,omega0),atol=1e-3, rtol=1e-2)
f=plt.figure(figsize=(15,10))
ax = plt.subplot(311)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(t,pend[:,0]);
plt.title(r"$\theta$ vs. time")
plt.xlabel("Time")
plt.ylabel(r"$\theta$")
ax = plt.subplot(312)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(t,pend[:,1]);
plt.title(r"$\omega$ vs. time")
plt.xlabel("Time")
plt.ylabel(r"$\omega$")
ax = plt.subplot(313)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(t,energy(pend));
plt.title(r"Energy vs. time")
plt.xlabel("Time")
plt.ylabel("$Energy$")
plt.tight_layout()
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the two plots and their tuning of atol, rtol.
Explanation: Simple pendulum
Use the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.
Integrate the equations of motion.
Plot $E/m$ versus time.
Plot $\theta(t)$ and $\omega(t)$ versus time.
Tune the atol and rtol arguments of odeint until $E/m$, $\theta(t)$ and $\omega(t)$ are constant.
Anytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.
End of explanation
def plot_pendulum(a=0.0, b=0.0, omega0=0.0):
Integrate the damped, driven pendulum and make a phase plot of the solution.
pend = odeint(derivs, [-np.pi+0.1,0], t, args=(a,b,omega0),atol=1e-11, rtol=1e-10)
f=plt.figure(figsize=(15,10))
ax = plt.subplot(111)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(pend[:,0], pend[:,1]);
plt.xlim(-2*np.pi, 2*np.pi)
plt.ylim(-10, 10)
plt.title(r"$\theta$ vs. $\omega$")
plt.xlabel(r"$\omega$")
plt.ylabel(r"$\theta$")
Explanation: Damped pendulum
Write a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\omega_0]$.
Use the initial conditions $\theta(0)=-\pi + 0.1$ and $\omega=0$.
Decrease your atol and rtol even further and make sure your solutions have converged.
Make a parametric plot of $[\theta(t),\omega(t)]$ versus time.
Use the plot limits $\theta \in [-2 \pi,2 \pi]$ and $\omega \in [-10,10]$
Label your axes and customize your plot to make it beautiful and effective.
End of explanation
plot_pendulum(0.5, 0.0, 0.0)
Explanation: Here is an example of the output of your plot_pendulum function that should show a decaying spiral.
End of explanation
interact(plot_pendulum, a=(0.0,1.0,0.1), b=(0.0,10.0,0.1), omega0=(0.0,10.0,0.1))
Explanation: Use interact to explore the plot_pendulum function with:
a: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$.
b: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
omega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.
End of explanation |
3,334 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data analytics for home appliances identification
by Ayush Garg, Gabriel Vizcaino and Pradeep Somi Ganeshbabu
Table of content
Loading and processing the PLAID dataset
Saving or loading the processed dataset
Fitting the classifier
Testing the accuracy of the chosen classifiers
Identifying appliance type per house
Conclusions and future work
Step1: Loading and processing the PLAID dataset
We are analyzing the PLAID dataset available in this link. To parse the csv files, we use the following code, which is a snippet of the Copyright (C) 2015 script developed by Jingkun Gao ([email protected]) and available on the same website.
Step2: To run this notebook you can either download the PLAID dataset and run the above script to parse the data (this takes some time), or you may directly load the appData.pkl file (available here) which contains the required information using the code below.
Step3: To facilitate working with the data, we extract the data contained in the dictionary data and create the following variables
Step4: In order to identify patterns, it is useful to plot the data first. The following script plots the V-I profile of the last 10 cycles of five randomly picked appliances of each type.
Step5: Saving or loading the processed dataset
Here you can also directly load or save all of the above variables available in the Data_matrices.pkl file.
Step6: Preparing the data
From the V-I plots above we can conclude that, especially in the steady state, the combination of linear and non-linear elements within each appliance type produces a similar pattern of voltage vs. current across appliances of the same type. Though not perfectly consistent, we can harness this characteristic in order to build features that help us classify an appliance given its voltage and currents signals.
We explored different transformations to extract features from voltage and current signals like directly using the voltage and current values, calculating the Fourier transform of the current to identify harmonics, descriptive statistics (e.g. standard deviations and variation coefficients over a cycle) and printing images of V-I plots in order to extract the pixels’ characteristics. While all of them provide useful information to identify appliances, the latter (i.e. images) is the transformation that yields the highest predicting accuracy. Therefore, we stick with this approach.
Assuming that the power consumption of each appliance ends at steady state in the dataset, the following script extracts and produces standard plots of the last cycle of normalized currents and voltages for each appliance, and then saves those graphs as *.png files. The V-I pattern images saved as png files significantly use less memory than the raw data in csv files (~8 MB the whole folder).
Step7: To run the notebook hereafter you can either go through the process of printing the images and saving them in a folder, or you may directly load them from the "pics_505_1" folder using the following script.
Step8: After printing all the V-I patterns as images, the above script loads, crops, converts to grayscale, and transforms those images (see examples) into arrays, in order to create a new matrix, temp (1074x32400), which will become the matrix of features.
Fitting the classifier
To build a well-performing classifier that identifies the appliance type based on its voltage and current signals as inputs, particularly the V-I profile at steady state, we start by evaluating different multi-class classifiers on the features matrix. To prevent overfitting, the dataset is randomly divided into three sub-sets
Step9: Eight models are evaluated on the fractionated dataset. The function below fits the assigned model using the input training data and prints both, the score of the predictions on the input validation data and the fitting time. The score of the default classifier (i.e. a random prediction) is also printed for the sake of comparison.
Step10: In general, the evaluated classifiers remarkably improve over the default classifier - except for the Naive Bayes classifier using Bernoulli distributions (as expected given the input data). The one-vs-the-rest model, using a support vector machine estimator, is the one showing the highest accuracy on the validation subset. However, this classifier, along with the Gradient Boosting (which also presents a good performance), takes significantly more time to fit than the others. On the contrary, the K-nearest-neighbors and Random Forest classifiers also achieve high accuracy but much faster. For these reasons, we are going to fine tune the main parameters of the latter two classifiers, re-train them, and then test again their performance on the testing subset.
Step11: For the KNN classifier, the above graph suggests that the fewer the neighbors considered, the better the accuracy. Therefore, we are going to set this parameter to have only one neighbor in the KNN classifier.
Having set these new parameters, we re-train both classifiers using the training and validation sub-sets, and test the fitted models on the testing set.
Step12: Although the characteristic of the Random Forest classifier entails that the shape of the above graph changes every time it is run, the general behavior suggests that having more than 10 sub-trees notably improves the performance of the classifier. Progressively increasing the number of trees after this threshold slightly improves the performance further, up to a point, around 70-90, when the accuracy starts decreasing. Therefore, we are going to set this parameter at 80 sub-trees.
Step13: Both classifiers improved their performance after tuning their parameters. KNN even outperforms the one-vs-the-rest classifier. Although the score of the Random Forest classifier slightly lags behind KNN, its fitting time is about 8x shorter than KNN's.
Testing the accuracy of the chosen classifiers
To further test the performance of both classifiers, we now perform a random 10-fold cross-validation process on both models using the whole dataset.
Step14: The results from the 10-fold cross-validation are very promising. Both models present more than 92% average accuracy and though KNN scores slightly higher, the Random Forest still shows significantly lesser fitting time.
Identifying appliance type per house
One last step to test the performance of the KNN and Random Forest classifiers would be to predict or identify the type of appliances in particular house, based on the voltage and current signals, by training the model on the data from the rest of the houses. There are 55 homes surveyed and each appliance has a label indicating its corresponding house; hence, it is possible to split the data in this fashion. This is another kind of cross-validation.
Step15: The results of the cross-validation per home show a median accuracy above 80% for both classifiers. Out of the 55 home appliance predictions, 9 scored 100% accuracy and around 20 had scores above 90%. Only 3 and 2 houses had a score below 50% using KNN and RF respectively.
In general, the presented outcome suggests that the chosen classifiers work fairly well, although they perform poorly for certain homes. In order to identify why this is the case, it is worth plotting the predictions and actual type of a couple of those home appliances.
Step16: By running the above script over different wrong predictions we noticed that many of them correspond to signals either in transient or sub-transient state; which means that the shape of the V-I plot is not fully defined, so identifying the appliance type based on such an image is very hard even for the human eye. Furthermore, in several homes the list of associated appliances contains the same appliance sampled at different times. For example, in home46, in which we get an accuracy of 0%, the only signals correspond to a microwave whose V-I profile is very fuzzy. Therefore, in cases like this one, the classifiers are bound to fail repeatedly in a single house.
Conclusions and future work
The present notebook presents a data-driven approach to the problem of identifying home appliances type based on their corresponding electrical signals. Different multi-class classifiers are trained and tested on the PLAID dataset in order to identify the most accurate and less computationally expensive models. An image recognition approach of Voltage-Current profiles in steady state is used to model the inputs of the appliance classifiers. Based on the analyses undertaken we are able to identify some common patterns and draw conclusions about the two best performed classifiers identified in terms of time and accuracy, K-nearest-neighbors and Random Forest Decision Tree | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import pickle, time, seaborn, random, json, os, subprocess
import pandas as pd
%matplotlib inline
from sklearn import tree
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier, RandomForestClassifier
from sklearn.naive_bayes import GaussianNB, BernoulliNB
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from PIL import Image
from sklearn.neighbors import KNeighborsClassifier
Explanation: Data analytics for home appliances identification
by Ayush Garg, Gabriel Vizcaino and Pradeep Somi Ganeshbabu
Table of content
Loading and processing the PLAID dataset
Saving or loading the processed dataset
Fitting the classifier
Testing the accuracy of the chosen classifiers
Identifying appliance type per house
Conclusions and future work
End of explanation
#Setting up the path in the working directory
Data_path = 'PLAID/'
csv_path = Data_path + 'CSV/'
csv_files = os.listdir(csv_path)
#Load meta data
with open(Data_path + 'meta1.json') as data_file:
meta1 = json.load(data_file)
meta = [meta1]
#Functions to parse meta data stored in JSON format
def clean_meta(ist):
'''remove '' elements in Meta Data '''
clean_ist = ist.copy()
for k,v in ist.items():
if len(v) == 0:
del clean_ist[k]
return clean_ist
def parse_meta(meta):
'''parse meta data for easy access'''
M = {}
for m in meta:
for app in m:
M[int(app['id'])] = clean_meta(app['meta'])
return M
Meta = parse_meta(meta)
# Unique appliance types
types = list(set([x['type'] for x in Meta.values()]))
types.sort()
#print(Unq_type)
def read_data_given_id_limited(path,ids,val,progress=False,last_offset=0):
'''read data given a list of ids and CSV paths'''
n = len(ids)
if n == 0:
return {}
else:
data = {}
for (i,ist_id) in enumerate(ids, start=1):
if last_offset==0:
data[ist_id] = np.genfromtxt(path+str(ist_id)+'.csv',
delimiter=',',names='current,voltage',dtype=(float,float))
else:
p=subprocess.Popen(['tail','-'+str(int(last_offset)),path+
str(ist_id)+'.csv'],stdout=subprocess.PIPE)
data[ist_id] = np.genfromtxt(p.stdout,delimiter=',',
names='current,voltage',dtype=(float,float))
data[ist_id]=data[ist_id][-val:]
return data
#get all the data points
data={}
val=30000 # take only last 30,000 values as they are most likely to be in the steady state
Types = [Meta[i]['type'] for i in sorted(Meta)] # appliance type of each instance, ordered by id
ids_to_draw = {}
for (ii,t) in enumerate(types):
    t_ids = [i for i,j in enumerate(Types,start=1) if j == t]
    ids_to_draw[t] = t_ids
    data[t]=read_data_given_id_limited(csv_path, ids_to_draw[t], val, False)
Explanation: Loading and processing the PLAID dataset
We are analyzing the PLAID dataset available in this link. To parse the csv files, we use the following code, which is a snippet of the Copyright (C) 2015 script developed by Jingkun Gao ([email protected]) and available on the same website.
End of explanation
# Saving or loading the main dictionary pickle file
saving = False
if saving:
pickle_file = open('AppData.pkl','wb')
pickle.dump(data,pickle_file,protocol=2)
pickle_file.close()
else:
pkf = open('AppData.pkl','rb')
data = pickle.load(pkf)
pkf.close()
#get house number and ids for each CSV
houses=[]
org_ids=[]
for i in range(0,len(Meta)):
houses.append(Meta[i+1].get('location'))
org_ids.append(i+1)
houses = np.hstack([np.array(houses)[:,None],np.array(org_ids)[:,None]])
Explanation: To run this notebook you can either download the PLAID dataset and run the above script to parse the data (this takes some time), or you may directly load the appData.pkl file (available here) which contains the required information using the code below.
End of explanation
cycle = 30000; num_cycles = 1; till = -cycle*num_cycles
resh = int(-till/num_cycles); tot = np.sum([len(data[x]) for x in data]); org_ids,c = [], 0
V = np.empty([resh,tot]); I = np.empty([resh,tot]); y = np.zeros(tot)
for ap_num,ap in enumerate(types):
for i in data[ap]:
V[:,c] = np.mean(np.reshape(data[ap][i]['voltage'][till:],(-1,cycle)),axis=0)
I[:,c] = np.mean(np.reshape(data[ap][i]['current'][till:],(-1,cycle)),axis=0)
y[c] = ap_num
org_ids.append(i)
c += 1
pass
V_org = V.T; I_org = I.T; y_org = y # keep the labels under the name used in the rest of the notebook
Explanation: To facilitate working with the data, we extract the data contained in the dictionary data and create the following variables:
- V_org: Matrix of original voltage signals collected from every appliance (1074x30000)
- I_org: Matrix of original current signals collected from every appliance (1074x30000)
- types: List of the types of appliances available in the dataset in alphabetic order
- y_org: Array of numerical encoding for each appliance type (1074x1)
- org_ids: List of original identification number of each appliance in the dataset
- houses: Matrix of the identification number of each appliance and the corresponding house name
End of explanation
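As a quick sanity check of the dimensions listed above (a small sketch; it assumes the processing or pickle-loading cells have been run):
print('V_org:', V_org.shape, ' I_org:', I_org.shape, ' labels:', np.shape(y_org),
      ' ids:', len(org_ids), ' types:', len(types), ' houses table:', houses.shape)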
# plot V-I of last 10 steady state periods
num_figs = 5; fig, ax = plt.subplots(len(types),num_figs,figsize=(10,20)); till = -505*10
for (i,t) in enumerate(types):
j = 0; p = random.sample(list(data[t].keys()),num_figs)
for (k,v) in data[t].items():
if j > num_figs-1:
break
if k not in p:
continue
ax[i,j].plot(v['current'][till:],v['voltage'][till:],linewidth=1)
ax[i,j].set_title('Org_id: {}'.format(k),fontsize = 10); ax[i,j].set_xlabel('Current (A)',fontsize = 8)
ax[i,j].tick_params(axis='x', labelsize=5); ax[i,j].tick_params(axis='y', labelsize=8)
j += 1
ax[i,0].set_ylabel('{} (V)'.format(t), fontsize=10)
fig.tight_layout()
Explanation: In order to identify patterns, it is useful to plot the data first. The following script plots the V-I profile of the last 10 cycles of five randomly picked appliances of each type.
End of explanation
saving = False
if saving:
pickle_file = open('Data_matrices.pkl','wb')
pickle.dump([V_org,I_org,y_org,org_ids,houses,types],pickle_file,protocol=2)
pickle_file.close()
else:
pkf = open('Data_matrices.pkl','rb')
V_org,I_org,y_org,org_ids,houses,types = pickle.load(pkf)
pkf.close()
Explanation: Saving or loading the processed dataset
Here you can also directly load or save all of the above variables available in the Data_matrices.pkl file.
End of explanation
cycle = 505; num_cycles = 1; till = -cycle*num_cycles
V = np.empty((V_org.shape[0],cycle)); I = np.empty((V_org.shape[0],cycle)); y = y_org; c = 0
for i,val in enumerate(V_org):
V[i] = np.mean(np.reshape(V_org[i,till:],(-1,cycle)),axis=0)
I[i] = np.mean(np.reshape(I_org[i,till:],(-1,cycle)),axis=0)
V = (V-np.mean(V,axis=1)[:,None]) / np.std(V,axis=1)[:,None]; I = (I-np.mean(I,axis=1)[:,None]) / np.std(I,axis=1)[:,None]
Explanation: Preparing the data
From the V-I plots above we can conclude that, especially in the steady state, the combination of linear and non-linear elements within each appliance type produces a similar pattern of voltage vs. current across appliances of the same type. Though not perfectly consistent, we can harness this characteristic in order to build features that help us classify an appliance given its voltage and currents signals.
We explored different transformations to extract features from voltage and current signals like directly using the voltage and current values, calculating the Fourier transform of the current to identify harmonics, descriptive statistics (e.g. standard deviations and variation coefficients over a cycle) and printing images of V-I plots in order to extract the pixels’ characteristics. While all of them provide useful information to identify appliances, the latter (i.e. images) is the transformation that yields the highest predicting accuracy. Therefore, we stick with this approach.
Assuming that the power consumption of each appliance ends at steady state in the dataset, the following script extracts and produces standard plots of the last cycle of normalized currents and voltages for each appliance, and then saves those graphs as *.png files. The V-I pattern images saved as png files significantly use less memory than the raw data in csv files (~8 MB the whole folder).
End of explanation
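As an illustration of one of the alternative feature sets mentioned above (harmonic content of the steady-state current), a rough sketch could look like this; the 505-sample window matches the cycle length used elsewhere in this notebook:
I_last = I_org[:, -505:]                       # roughly one mains cycle per appliance
spec = np.abs(np.fft.rfft(I_last, axis=1))
odd_harmonics = spec[:, [1, 3, 5, 7]]          # fundamental, 3rd, 5th and 7th harmonic magnitudes
print(odd_harmonics.shape)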
print_images = False; seaborn.reset_orig()
m = V.shape[0]; j = 0
temp = np.empty((m,32400)); p = random.sample(range(m),3)
for i in range(m):
if print_images:
fig = plt.figure(figsize=(2,2))
plt.plot(I[i],V[i],linewidth=0.8,color='b'); plt.xlim([-4,4]); plt.ylim([-2,2]);
plt.savefig('pics_505_1/Ap_{}.png'.format(i))
plt.close()
else:
im = Image.open('pics_505_1/Ap_{}.png'.format(i)).crop((20,0,200,200-20))
im = im.convert('L')
temp[i] = np.array(im).reshape((-1,))
if i in p:
display(im)
j += 1
pass
seaborn.set()
%matplotlib inline
Explanation: To run the notebook hereafter you can either go through the process of printing the images and saving them in a folder, or you may directly load them from the "pics_505_1" folder using the following script.
End of explanation
X = temp; y = y_org
X_, X_test, y_, y_test = train_test_split(X,y, test_size=0.2)
X_train, X_cv, y_train, y_cv = train_test_split(X_, y_, test_size=0.2)
Explanation: After printing all the V-I patterns as images, the above script loads, crops, converts to grayscale, and transforms those images (see examples) into arrays, in order to create a new matrix, temp (1074x32400), which will become the matrix of features.
Fitting the classifier
To build a well-performing classifier that identifies the appliance type based on its voltage and current signals as inputs, particularly the V-I profile at steady state, we start by evaluating different multi-class classifiers on the features matrix. To prevent overfitting, the dataset is randomly divided into three sub-sets: training, validation, and test. The models are fitted using the training subset and then the accuracy is tested on the validation subset. After this evaluation the best models are fine tuned and then tested using the testing subset. Since the objective is to accurately identify the type of an appliance based on its electrical signals, the following formula is used to measure accuracy:
$$\text{Accuracy (Score)} = \frac{\text{Number of correct predictions}}{\text{Number of predictions}}$$
End of explanation
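For reference (a quick check, not part of the original notebook), the nested splits above leave roughly 64% of the appliances for training, 16% for validation and 20% for testing:
print(len(y_train), 'train /', len(y_cv), 'validation /', len(y_test), 'test of', len(y), 'total')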
def eval_cfls(models,X,y,X_te,y_te):
ss = []; tt = []
for m in models:
start = time.time()
m.fit(X,y)
ss.append(np.round(m.score(X_te,y_te),4))
print(str(m).split('(')[0],': {}'.format(ss[-1]),'...Time: {} s'.format(np.round(time.time()-start,3)))
tt.append(np.round(time.time()-start,3))
return ss,tt
models = [OneVsRestClassifier(LinearSVC(random_state=0)),tree.ExtraTreeClassifier(),tree.DecisionTreeClassifier(),GaussianNB(),
BernoulliNB(),GradientBoostingClassifier(), KNeighborsClassifier(),RandomForestClassifier()]
ss,tt = eval_cfls(models,X_train,y_train,X_cv,y_cv)
rand_guess = np.random.randint(0,len(set(y_train)),size=y_cv.shape[0])
print('Random Guess: {}'.format(np.round(np.mean(rand_guess == y_cv),4)))
Explanation: Eight models are evaluated on the fractionated dataset. The function below fits the assigned model using the input training data and prints both, the score of the predictions on the input validation data and the fitting time. The score of the default classifier (i.e. a random prediction) is also printed for the sake of comparison.
End of explanation
scores = []
for n in range(1,11,2):
clf = KNeighborsClassifier(n_neighbors=n,weights='distance')
clf.fit(X_train,y_train)
scores.append(clf.score(X_cv, y_cv))
plt.plot(range(1,11,2),scores); plt.xlabel('Number of neighbors'); plt.ylabel('Accuracy'); plt.ylim([0.8,1]);
plt.title('K-nearest-neighbors classifier');
Explanation: In general, the evaluated classifiers remarkably improve over the default classifier - except for the Naive Bayes classifier using Bernoulli distributions (as expected given the input data). The one-vs-the-rest model, using a support vector machine estimator, is the one showing the highest accuracy on the validation subset. However, this classifier, along with the Gradient Boosting (which also presents a good performance), takes significantly more time to fit than the others. On the contrary, the K-nearest-neighbors and Random Forest classifiers also achieve high accuracy but much faster. For these reasons, we are going to fine tune the main parameters of the latter two classifiers, re-train them, and then test again their performance on the testing subset.
End of explanation
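As an aside (a sketch, not part of the original analysis), the same kind of parameter sweep can be written more compactly with scikit-learn's GridSearchCV:
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(KNeighborsClassifier(weights='distance'),
                  {'n_neighbors': [1, 3, 5, 7, 9]}, cv=3)
gs.fit(X_train, y_train)
print(gs.best_params_, np.round(gs.best_score_, 3))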
scores = []
for n in range(5,120,10):
clf = RandomForestClassifier(n_estimators=n)
clf.fit(X_train,y_train)
scores.append(clf.score(X_cv, y_cv))
plt.plot(range(5,120,10),scores); plt.xlabel('Number of sub-trees'); plt.ylabel('Accuracy'); plt.ylim([0.8,1]);
plt.title('Random Forest classifier');
Explanation: For the KNN classifier, the above graph suggests that the fewer the neighbors considered, the better the accuracy. Therefore, we are going to set this parameter to have only one neighbor in the KNN classifier.
Having set these new parameters, we re-train both classifiers using the training and validation sub-sets, and test the fitted models on the testing set.
End of explanation
models = [KNeighborsClassifier(n_neighbors=1,weights='distance'),RandomForestClassifier(n_estimators=80)]
eval_cfls(models,np.vstack([X_train,X_cv]),np.hstack([y_train,y_cv]),X_test,y_test);
Explanation: Although the characteristic of the Random Forest classifier entails that the shape of the above graph changes every time it is run, the general behavior suggests that having more than 10 sub-trees notably improves the performance of the classifier. Progressively increasing the number of trees after this threshold slightly improves the performance further, up to a point, around 70-90, when the accuracy starts decreasing. Therefore, we are going to set this parameter at 80 sub-trees.
End of explanation
cv_scores = []; X = temp; y = y_org
p = np.random.permutation(X.shape[0])
X = X[p]; y = y[p];
for m in models:
start = time.time()
cv_scores.append(cross_val_score(m, X, y, cv=10))
print(str(m).split('(')[0],'average score: {}'.format(np.round(np.mean(cv_scores[-1]),3)),
'...10-fold CV Time: {} s'.format(np.round(time.time()-start,3)))
Explanation: Both classifiers improved their performance after tuning their parameters. KNN even outperforms the one-vs-the-rest classifier. Although the score of the Random Forest classifier slightly lags behind KNN, its fitting time is about 8x shorter than KNN's.
Testing the accuracy of the chosen classifiers
To further test the performance of both classifiers, we now perform a random 10-fold cross-validation process on both models using the whole dataset.
End of explanation
def held_house(name,houses):
ids_te = houses[np.where(houses[:,0] == name),1].astype(int);
ids_test,ids_train = [],[]
for i,ID in enumerate(org_ids):
if ID in ids_te:
ids_test.append(i)
else:
ids_train.append(i)
return ids_test,ids_train
X = temp; y = y_org; h_names = ['house{}'.format(i+1) for i in range(len(set(houses[:,0])))]
scores = np.zeros((len(h_names),2))
for i,m in enumerate(models):
ss = []
for h in h_names:
ids_test,ids_train = held_house(h,houses)
X_train, X_test = X[ids_train], X[ids_test];
y_train,y_test = y[ids_train],y[ids_test];
m.fit(X_train,y_train)
ss.append(m.score(X_test,y_test))
scores[:,i] = np.array(ss)
plt.figure(figsize = (12,3))
plt.bar(np.arange(len(h_names)),scores[:,i],width=0.8); plt.xlim([0,len(h_names)]); plt.yticks(np.arange(0.1,1.1,0.1));
plt.ylabel('Accuracy');
plt.title('{} cross-validation per home. Median accuracy: {}'.format(str(m).split('(')[0],
np.round(np.median(scores[:,i]),3)))
plt.xticks(np.arange(len(h_names))+0.4,h_names,rotation='vertical');
plt.show()
df = pd.DataFrame(np.array([np.mean(scores,axis=0),np.sum(scores == 1,axis=0),
np.sum(scores >= 0.9,axis=0),np.sum(scores < 0.8,axis=0),np.sum(scores < 0.5,axis=0)]),columns=['KNN','RF'])
df['Stats'] = ['Avg. accuracy','100% accuracy','Above 90%','Above 80%','Below 50%'];
df.set_index('Stats',inplace=True); df.head()
Explanation: The results from the 10-fold cross-validation are very promising. Both models present more than 92% average accuracy and though KNN scores slightly higher, the Random Forest still shows significantly lesser fitting time.
Identifying appliance type per house
One last step to test the performance of the KNN and Random Forest classifiers would be to predict or identify the type of appliances in particular house, based on the voltage and current signals, by training the model on the data from the rest of the houses. There are 55 homes surveyed and each appliance has a label indicating its corresponding house; hence, it is possible to split the data in this fashion. This is another kind of cross-validation.
End of explanation
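The same leave-one-house-out scheme can also be expressed with scikit-learn's grouped cross-validation utilities; the sketch below assumes X, y, houses and org_ids are defined as above (rows of houses are ordered by appliance id):
from sklearn.model_selection import LeaveOneGroupOut
groups = houses[np.asarray(org_ids, dtype=int) - 1, 0]        # house label for each row of X
logo_scores = cross_val_score(KNeighborsClassifier(n_neighbors=1, weights='distance'),
                              X, y, groups=groups, cv=LeaveOneGroupOut())
print('median per-house accuracy:', np.round(np.median(logo_scores), 3))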
X = temp; y = y_org;
ids_test, ids_train = held_house('house46',houses)
X_train, X_test = X[ids_train], X[ids_test]; y_train,y_test = y[ids_train],y[ids_test];
V_,V_test = V[ids_train],V[ids_test]; I_,I_test = I[ids_train],I[ids_test]; org_ids_test = np.array(org_ids)[ids_test]
models[1].fit(X_train,y_train)
pred = models[1].predict(X_test)
items = np.where(pred != y_test)[0]
print('Number of wrong predictions in house46: {}'.format(len(items)))
for ids in items[:2]:
print('Prediction: '+ types[int(pred[ids])],', Actual: '+types[int(y_test[ids])])
fig,ax = plt.subplots(1,3,figsize=(11,3))
ax[0].plot(I_test[ids],V_test[ids],linewidth=0.5); ax[0].set_title('Actual data. ID: {}'.format(org_ids_test[ids]));
ax[1].plot(I_[y_train==y_test[ids]].T,V_[y_train==y_test[ids]].T,linewidth=0.5);
ax[1].set_title('Profiles of {}'.format(types[int(y_test[ids])]))
ax[2].plot(I_[y_train==pred[ids]].T,V_[y_train==pred[ids]].T,linewidth=0.5);
ax[2].set_title('Profiles of {}'.format(types[int(pred[ids])]));
Explanation: The results of the cross-validation per home show a median accuracy above 80% for both classifiers. Out of the 55 home appliance predictions, 9 scored 100% accuracy and around 20 had scores above 90%. Only 3 and 2 houses had a score below 50% using KNN and RF respectively.
In general, the presented outcome suggests that the chosen classifiers work fairly well, although they perform poorly for certain homes. In order to identify why this is the case, it is worth plotting the predictions and actual type of a couple of those home appliances.
End of explanation
def plot_clf_samples(model,X,X_te,y,y_te,n):
model.fit(X[:n], y[:n])
return np.array([model.score(X[:n], y[:n]), model.score(X_te, y_te)])
X = temp; y = y_org;
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
models[1].fit(X_train,y_train)
models[1].score(X_test, y_test)
nsamples = [int(x) for x in np.linspace(10, X_train.shape[0], 20)]
errors = np.array([plot_clf_samples(models[1], X_train, X_test, y_train,y_test, n) for n in nsamples])
plt.plot(nsamples, errors[:,0], nsamples, errors[:,1]); plt.xlabel('Number of appliances'); plt.ylabel('Accuracy');
plt.ylim([0.4,1.1])
plt.legend(['Training accuracy','Test accuracy'],loc=4); plt.title('RF accuracy with respect to the number of samples');
Explanation: By running the above script over different wrong predictions we noticed that many of them correspond to signals either in transient or sub-transient state; which means that the shape of the V-I plot is not fully defined, so identifying the appliance type based on such an image is very hard even for the human eye. Furthermore, in several homes the list of associated appliances contains the same appliance sampled at different times. For example, in home46, in which we get an accuracy of 0%, the only signals correspond to a microwave whose V-I profile is very fuzzy. Therefore, in cases like this one, the classifiers are bound to fail repeatedly in a single house.
Conclusions and future work
The present notebook presents a data-driven approach to the problem of identifying home appliance types based on their corresponding electrical signals. Different multi-class classifiers are trained and tested on the PLAID dataset in order to identify the most accurate and least computationally expensive models. An image recognition approach of Voltage-Current profiles in steady state is used to model the inputs of the appliance classifiers. Based on the analyses undertaken we are able to identify some common patterns and draw conclusions about the two best-performing classifiers in terms of time and accuracy, K-nearest-neighbors and Random Forest Decision Tree:
- After fine tuning their corresponding parameters on a training sub-set, the average accuracy of KNN and RF, applying 10-fold cross-validation, is greater than 91%.
- The One-vs-the-rest and Gradient Boosting Decision Trees classifiers also show high accuracy; however, the fitting time is in the order of minutes (almost 15 min. for Gradient Boosting), whereas KNN and RF take seconds to do the job.
- Though KNN scores slightly higher than RF, the latter takes a significantly shorter fitting time (about 8x less).
- While high accuracy in both classifiers is achieved using traditional cross-validation techniques, when applying cross-validation per individual home, the accuracy decreased to 80% on average.
- While debugging the classifiers we noticed that many of the input signals of current and voltage do not reach steady state for several appliances. Therefore, their corresponding V-I profile is not well defined, which makes the prediction harder even for a human expert eye. We also noticed that in several homes, the list of associated appliances contains the same appliance sampled at different times. Therefore, in those cases the classifiers are bound to fail repeatedly in a single house.
The following tasks are proposed as future work in order to improve the performance of the trained appliance classifiers:
- Collect more data: The figure below shows the training and test accuracy evolution of the RF classifier with respect to the number of samples. While only slight increments are realized after 700-800 samples, it seems that there is still room for improvement in this sense.
End of explanation |
3,335 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Background information on filtering
Here we give some background information on filtering in general, and
how it is done in MNE-Python in particular.
Recommended reading for practical applications of digital
filter design can be found in Parks & Burrus (1987) [1] and
Ifeachor & Jervis (2002) [2], and for filtering in an
M/EEG context we recommend reading Widmann et al. (2015) [7]_.
To see how to use the default filters in MNE-Python on actual data, see
the tut-filter-resample tutorial.
Problem statement
Practical issues with filtering electrophysiological data are covered
in Widmann et al. (2012) [6]_, where they conclude with this statement
Step1: Take for example an ideal low-pass filter, which would give a magnitude
response of 1 in the pass-band (up to frequency $f_p$) and a magnitude
response of 0 in the stop-band (down to frequency $f_s$) such that
$f_p=f_s=40$ Hz here (shown to a lower limit of -60 dB for simplicity)
Step2: This filter hypothetically achieves zero ripple in the frequency domain,
perfect attenuation, and perfect steepness. However, due to the discontinuity
in the frequency response, the filter would require infinite ringing in the
time domain (i.e., infinite order) to be realized. Another way to think of
this is that a rectangular window in the frequency domain is actually a sinc_
function in the time domain, which requires an infinite number of samples
(and thus infinite time) to represent. So although this filter has ideal
frequency suppression, it has poor time-domain characteristics.
Let's try to naïvely make a brick-wall filter of length 0.1 s, and look
at the filter itself in the time domain and the frequency domain
Step3: This is not so good! Making the filter 10 times longer (1 s) gets us a
slightly better stop-band suppression, but still has a lot of ringing in
the time domain. Note the x-axis is an order of magnitude longer here,
and the filter has a correspondingly much longer group delay (again equal
to half the filter length, or 0.5 seconds)
Step4: Let's make the stop-band tighter still with a longer filter (10 s),
with a resulting larger x-axis
Step5: Now we have very sharp frequency suppression, but our filter rings for the
entire 10 seconds. So this naïve method is probably not a good way to build
our low-pass filter.
Fortunately, there are multiple established methods to design FIR filters
based on desired response characteristics. These include
Step6: Accepting a shallower roll-off of the filter in the frequency domain makes
our time-domain response potentially much better. We end up with a more
gradual slope through the transition region, but a much cleaner time
domain signal. Here again for the 1 s filter
Step7: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
use a shorter filter (5 cycles at 10 Hz = 0.5 s) and still get acceptable
stop-band attenuation
Step8: But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s),
our effective stop frequency gets pushed out past 60 Hz
Step9: If we want a filter that is only 0.1 seconds long, we should probably use
something more like a 25 Hz transition band (0.2 s = 5 cycles @ 25 Hz)
Step10: So far, we have only discussed non-causal filtering, which means that each
sample at each time point $t$ is filtered using samples that come
after ($t + \Delta t$) and before ($t - \Delta t$) the current
time point $t$.
In this sense, each sample is influenced by samples that come both before
and after it. This is useful in many cases, especially because it does not
delay the timing of events.
However, sometimes it can be beneficial to use causal filtering,
whereby each sample $t$ is filtered only using time points that came
after it.
Note that the delay is variable (whereas for linear/zero-phase filters it
is constant) but small in the pass-band. Unlike zero-phase filters, which
require time-shifting backward the output of a linear-phase filtering stage
(and thus becoming non-causal), minimum-phase filters do not require any
compensation to achieve small delays in the pass-band. Note that as an
artifact of the minimum phase filter construction step, the filter does
not end up being as steep as the linear/zero-phase version.
We can construct a minimum-phase filter from our existing linear-phase
filter with the scipy.signal.minimum_phase function, and note that the falloff is not as steep:
Step11: Applying FIR filters
Now lets look at some practical effects of these filters by applying
them to some data.
Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)
plus noise (random and line). Note that the original clean signal contains
frequency content in both the pass band and transition bands of our
low-pass filter.
Step12: Filter it with a shallow cutoff, linear-phase FIR (which allows us to
compensate for the constant filter delay)
Step13: Filter it with a different design method fir_design="firwin2", and also
compensate for the constant filter delay. This method does not produce
quite as sharp a transition compared to fir_design="firwin", despite
being twice as long
Step14: Let's also filter with the MNE-Python 0.13 default, which is a
long-duration, steep cutoff FIR that gets applied twice
Step15: Let's also filter it with the MNE-C default, which is a long-duration
steep-slope FIR filter designed using frequency-domain techniques
Step16: And now an example of a minimum-phase filter
Step18: Both the MNE-Python 0.13 and MNE-C filters have excellent frequency
attenuation, but it comes at a cost of potential
ringing (long-lasting ripples) in the time domain. Ringing can occur with
steep filters, especially in signals with frequency content around the
transition band. Our Morlet wavelet signal has power in our transition band,
and the time-domain ringing is thus more pronounced for the steep-slope,
long-duration filter than the shorter, shallower-slope filter
Step19: IIR filters
MNE-Python also offers IIR filtering functionality that is based on the
methods from scipy.signal.
Step20: The falloff of this filter is not very steep.
<div class="alert alert-info"><h4>Note</h4><p>Here we have made use of second-order sections (SOS)
by using scipy.signal.sosfilt and, under the hood, scipy.signal.zpk2sos
when passing the output='sos' keyword argument to scipy.signal.iirfilter.</p></div>
Step21: There are other types of IIR filters that we can use. For a complete list,
check out the documentation for scipy.signal.iirfilter.
Step22: If we can live with even more ripple, we can get it slightly steeper,
but the impulse response begins to ring substantially longer (note the
different x-axis scale)
Step23: Applying IIR filters
Now let's look at how our shallow and steep Butterworth IIR filters
perform on our Morlet signal from before
Step24: Some pitfalls of filtering
Multiple recent papers have noted potential risks of drawing
errant inferences due to misapplication of filters.
Low-pass problems
Filters in general, especially those that are non-causal (zero-phase), can
make activity appear to occur earlier or later than it truly did. As
mentioned in VanRullen (2011) [3], investigations of commonly (at the time)
used low-pass filters created artifacts when they were applied to simulated
data. However, such deleterious effects were minimal in many real-world
examples in Rousselet (2012) [5].
Perhaps more revealing, it was noted in Widmann & Schröger (2012) [6] that
the problematic low-pass filters from VanRullen (2011) [3]
Step25: Similarly, in a P300 paradigm reported by Kappenman & Luck (2010) [12]_,
they found that applying a 1 Hz high-pass decreased the probability of
finding a significant difference in the N100 response, likely because
the P300 response was smeared (and inverted) in time by the high-pass
filter such that it tended to cancel out the increased N100. However,
they nonetheless note that some high-passing can still be useful to deal
with drifts in the data.
Even though these papers generally advise a 0.1 Hz or lower frequency for
a high-pass, it is important to keep in mind (as most authors note) that
filtering choices should depend on the frequency content of both the
signal(s) of interest and the noise to be suppressed. For example, in
some of the MNE-Python examples involving the sample-dataset dataset,
high-pass values of around 1 Hz are used when looking at auditory
or visual N100 responses, because we analyze standard (not deviant) trials
and thus expect that contamination by later or slower components will
be limited.
Baseline problems (or solutions?)
In an evolving discussion, Tanner et al. (2015) [8] suggest using baseline
correction to remove slow drifts in data. However, Maess et al. (2016) [9]
suggest that baseline correction, which is a form of high-passing, does
not offer substantial advantages over standard high-pass filtering.
Tanner et al. (2016) [10]_ rebutted that baseline correction can correct
for problems with filtering.
To see what they mean, consider again our old simulated signal x from
before
Step26: In response, Maess et al. (2016) [11]_ note that these simulations do not
address cases of pre-stimulus activity that is shared across conditions, as
applying baseline correction will effectively copy the topology outside the
baseline period. We can see this if we give our signal x with some
consistent pre-stimulus activity, which makes everything look bad.
<div class="alert alert-info"><h4>Note</h4><p>An important thing to keep in mind with these plots is that they
are for a single simulated sensor. In multi-electrode recordings
the topology (i.e., spatial pattern) of the pre-stimulus activity
will leak into the post-stimulus period. This will likely create a
spatially varying distortion of the time-domain signals, as the
averaged pre-stimulus spatial pattern gets subtracted from the
sensor time courses.</p></div>
Putting some activity in the baseline period
Step27: Both groups seem to acknowledge that the choices of filtering cutoffs, and
perhaps even the application of baseline correction, depend on the
characteristics of the data being investigated, especially when it comes to | Python Code:
import numpy as np
from numpy.fft import fft, fftfreq
from scipy import signal
import matplotlib.pyplot as plt
from mne.time_frequency.tfr import morlet
from mne.viz import plot_filter, plot_ideal_filter
import mne
sfreq = 1000.
f_p = 40.
flim = (1., sfreq / 2.) # limits for plotting
Explanation: Background information on filtering
Here we give some background information on filtering in general, and
how it is done in MNE-Python in particular.
Recommended reading for practical applications of digital
filter design can be found in Parks & Burrus (1987) [1] and
Ifeachor & Jervis (2002) [2], and for filtering in an
M/EEG context we recommend reading Widmann et al. (2015) [7]_.
To see how to use the default filters in MNE-Python on actual data, see
the tut-filter-resample tutorial.
Problem statement
Practical issues with filtering electrophysiological data are covered
in Widmann et al. (2012) [6]_, where they conclude with this statement:
Filtering can result in considerable distortions of the time course
(and amplitude) of a signal as demonstrated by VanRullen (2011) [[3]_].
Thus, filtering should not be used lightly. However, if effects of
filtering are cautiously considered and filter artifacts are minimized,
a valid interpretation of the temporal dynamics of filtered
electrophysiological data is possible and signals missed otherwise
can be detected with filtering.
In other words, filtering can increase signal-to-noise ratio (SNR), but if it
is not used carefully, it can distort data. Here we hope to cover some
filtering basics so users can better understand filtering trade-offs and why
MNE-Python has chosen particular defaults.
Filtering basics
Let's get some of the basic math down. In the frequency domain, digital
filters have a transfer function that is given by:
\begin{align}H(z) &= \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \ldots + b_M z^{-M}}
{1 + a_1 z^{-1} + a_2 z^{-2} + \ldots + a_N z^{-N}} \\
&= \frac{\sum_{k=0}^M b_k z^{-k}}{1 + \sum_{k=1}^N a_k z^{-k}}\end{align}
In the time domain, the numerator coefficients $b_k$ and denominator
coefficients $a_k$ can be used to obtain our output data
$y(n)$ in terms of our input data $x(n)$ as:
\begin{align}
y(n) &= b_0 x(n) + b_1 x(n-1) + \ldots + b_M x(n-M)
- a_1 y(n-1) - a_2 y(n - 2) - \ldots - a_N y(n - N)\\
&= \sum_{k=0}^M b_k x(n-k) - \sum_{k=1}^N a_k y(n-k)\end{align}
In other words, the output at time $n$ is determined by a sum over
1. the numerator coefficients $b_k$, which get multiplied by
the previous input values $x(n-k)$, and
2. the denominator coefficients $a_k$, which get multiplied by
the previous output values $y(n-k)$.
Note that these summations correspond to (1) a weighted moving average and
(2) an autoregression.
Filters are broken into two classes: FIR_ (finite impulse response) and
IIR_ (infinite impulse response) based on these coefficients.
FIR filters use a finite number of numerator
coefficients $b_k$ ($\forall k, a_k=0$), and thus each output
value of $y(n)$ depends only on the $M$ previous input values.
IIR filters depend on the previous input and output values, and thus can have
effectively infinite impulse responses.
As outlined in Parks & Burrus (1987) [1]_, FIR and IIR have different
trade-offs:
* A causal FIR filter can be linear-phase -- i.e., the same time delay
across all frequencies -- whereas a causal IIR filter cannot. The phase
and group delay characteristics are also usually better for FIR filters.
* IIR filters can generally have a steeper cutoff than an FIR filter of
equivalent order.
* IIR filters are generally less numerically stable, in part due to
accumulating error (due to its recursive calculations).
In MNE-Python we default to using FIR filtering. As noted in Widmann et al.
(2015) [7]_:
Despite IIR filters often being considered as computationally more
efficient, they are recommended only when high throughput and sharp
cutoffs are required (Ifeachor and Jervis, 2002 [[2]_], p. 321)...
FIR filters are easier to control, are always stable, have a
well-defined passband, can be corrected to zero-phase without
additional computations, and can be converted to minimum-phase.
We therefore recommend FIR filters for most purposes in
electrophysiological data analysis.
When designing a filter (FIR or IIR), there are always trade-offs that
need to be considered, including but not limited to:
1. Ripple in the pass-band
2. Attenuation of the stop-band
3. Steepness of roll-off
4. Filter order (i.e., length for FIR filters)
5. Time-domain ringing
In general, the sharper something is in frequency, the broader it is in time,
and vice-versa. This is a fundamental time-frequency trade-off, and it will
show up below.
FIR Filters
First, we will focus on FIR filters, which are the default filters used by
MNE-Python.
Designing FIR filters
Here we'll try to design a low-pass filter and look at trade-offs in terms
of time- and frequency-domain filter characteristics. Later, in
tut_effect_on_signals, we'll look at how such filters can affect
signals when they are used.
First let's import some useful tools for filtering, and set some default
values for our data that are reasonable for M/EEG.
End of explanation
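As a quick illustration of the difference equation above (a sketch, not part of the original tutorial), the same operation can be written with scipy.signal.lfilter or as an explicit loop:
rng_demo = np.random.RandomState(0)
x_demo = rng_demo.randn(64)
b_demo, a_demo = np.array([0.5, 0.5]), np.array([1.0, -0.2])   # arbitrary example coefficients, a[0] = 1
y_ref = signal.lfilter(b_demo, a_demo, x_demo)
y_loop = np.zeros_like(x_demo)
for n_i in range(len(x_demo)):
    for k in range(len(b_demo)):                   # weighted moving average of the input
        if n_i - k >= 0:
            y_loop[n_i] += b_demo[k] * x_demo[n_i - k]
    for k in range(1, len(a_demo)):                # autoregression on past outputs
        if n_i - k >= 0:
            y_loop[n_i] -= a_demo[k] * y_loop[n_i - k]
assert np.allclose(y_ref, y_loop)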
nyq = sfreq / 2. # the Nyquist frequency is half our sample rate
freq = [0, f_p, f_p, nyq]
gain = [1, 1, 0, 0]
third_height = np.array(plt.rcParams['figure.figsize']) * [1, 1. / 3.]
ax = plt.subplots(1, figsize=third_height)[1]
plot_ideal_filter(freq, gain, ax, title='Ideal %s Hz lowpass' % f_p, flim=flim)
Explanation: Take for example an ideal low-pass filter, which would give a magnitude
response of 1 in the pass-band (up to frequency $f_p$) and a magnitude
response of 0 in the stop-band (down to frequency $f_s$) such that
$f_p=f_s=40$ Hz here (shown to a lower limit of -60 dB for simplicity):
End of explanation
n = int(round(0.1 * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq # center our sinc
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (0.1 s)', flim=flim, compensate=True)
Explanation: This filter hypothetically achieves zero ripple in the frequency domain,
perfect attenuation, and perfect steepness. However, due to the discontinuity
in the frequency response, the filter would require infinite ringing in the
time domain (i.e., infinite order) to be realized. Another way to think of
this is that a rectangular window in the frequency domain is actually a sinc_
function in the time domain, which requires an infinite number of samples
(and thus infinite time) to represent. So although this filter has ideal
frequency suppression, it has poor time-domain characteristics.
Let's try to naïvely make a brick-wall filter of length 0.1 s, and look
at the filter itself in the time domain and the frequency domain:
End of explanation
n = int(round(1. * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (1.0 s)', flim=flim, compensate=True)
Explanation: This is not so good! Making the filter 10 times longer (1 s) gets us a
slightly better stop-band suppression, but still has a lot of ringing in
the time domain. Note the x-axis is an order of magnitude longer here,
and the filter has a correspondingly much longer group delay (again equal
to half the filter length, or 0.5 seconds):
End of explanation
n = int(round(10. * sfreq))
n -= n % 2 - 1 # make it odd
t = np.arange(-(n // 2), n // 2 + 1) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, sfreq, freq, gain, 'Sinc (10.0 s)', flim=flim, compensate=True)
Explanation: Let's make the stop-band tighter still with a longer filter (10 s),
with a resulting larger x-axis:
End of explanation
trans_bandwidth = 10 # 10 Hz transition band
f_s = f_p + trans_bandwidth # = 50 Hz
freq = [0., f_p, f_s, nyq]
gain = [1., 1., 0., 0.]
ax = plt.subplots(1, figsize=third_height)[1]
title = '%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth)
plot_ideal_filter(freq, gain, ax, title=title, flim=flim)
Explanation: Now we have very sharp frequency suppression, but our filter rings for the
entire 10 seconds. So this naïve method is probably not a good way to build
our low-pass filter.
Fortunately, there are multiple established methods to design FIR filters
based on desired response characteristics. These include:
1. The Remez_ algorithm (:func:`scipy.signal.remez`, `MATLAB firpm`_)
2. Windowed FIR design (:func:`scipy.signal.firwin2`,
:func:`scipy.signal.firwin`, and `MATLAB fir2`_)
3. Least squares designs (:func:`scipy.signal.firls`, `MATLAB firls`_)
4. Frequency-domain design (construct filter in Fourier
domain and use an :func:`IFFT <numpy.fft.ifft>` to invert it)
<div class="alert alert-info"><h4>Note</h4><p>Remez and least squares designs have advantages when there are
"do not care" regions in our frequency response. However, we want
well controlled responses in all frequency regions.
Frequency-domain construction is good when an arbitrary response
is desired, but generally less clean (due to sampling issues) than
a windowed approach for more straightforward filter applications.
Since our filters (low-pass, high-pass, band-pass, band-stop)
are fairly simple and we require precise control of all frequency
regions, we will primarily use and explore windowed FIR design.</p></div>
If we relax our frequency-domain filter requirements a little bit, we can
use these functions to construct a lowpass filter that instead has a
transition band, or a region between the pass frequency $f_p$
and stop frequency $f_s$, e.g.:
End of explanation
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (1.0 s)',
flim=flim, compensate=True)
Explanation: Accepting a shallower roll-off of the filter in the frequency domain makes
our time-domain response potentially much better. We end up with a more
gradual slope through the transition region, but a much cleaner time
domain signal. Here again for the 1 s filter:
End of explanation
n = int(round(sfreq * 0.5)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.5 s)',
flim=flim, compensate=True)
Explanation: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
use a shorter filter (5 cycles at 10 Hz = 0.5 s) and still get acceptable
stop-band attenuation:
End of explanation
n = int(round(sfreq * 0.2)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.2 s)',
flim=flim, compensate=True)
Explanation: But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s),
our effective stop frequency gets pushed out past 60 Hz:
End of explanation
trans_bandwidth = 25
f_s = f_p + trans_bandwidth
freq = [0, f_p, f_s, nyq]
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, sfreq, freq, gain, 'Windowed 50 Hz transition (0.2 s)',
flim=flim, compensate=True)
Explanation: If we want a filter that is only 0.1 seconds long, we should probably use
something more like a 25 Hz transition band (0.2 s = 5 cycles @ 25 Hz):
End of explanation
h_min = signal.minimum_phase(h)
plot_filter(h_min, sfreq, freq, gain, 'Minimum-phase', flim=flim)
Explanation: So far, we have only discussed non-causal filtering, which means that each
sample at each time point $t$ is filtered using samples that come
after ($t + \Delta t$) and before ($t - \Delta t$) the current
time point $t$.
In this sense, each sample is influenced by samples that come both before
and after it. This is useful in many cases, especially because it does not
delay the timing of events.
However, sometimes it can be beneficial to use causal filtering,
whereby each sample $t$ is filtered only using time points that came
after it.
Note that the delay is variable (whereas for linear/zero-phase filters it
is constant) but small in the pass-band. Unlike zero-phase filters, which
require time-shifting backward the output of a linear-phase filtering stage
(and thus becoming non-causal), minimum-phase filters do not require any
compensation to achieve small delays in the pass-band. Note that as an
artifact of the minimum phase filter construction step, the filter does
not end up being as steep as the linear/zero-phase version.
We can construct a minimum-phase filter from our existing linear-phase
filter with the :func:scipy.signal.minimum_phase function, and note
that the falloff is not as steep:
End of explanation
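To see the small, frequency-dependent pass-band delay mentioned above, one can compare the group delays of the linear-phase filter h and its minimum-phase counterpart h_min (a rough sketch using the filters designed above):
w, gd_lin = signal.group_delay((h, [1.]), fs=sfreq)
_, gd_min = signal.group_delay((h_min, [1.]), fs=sfreq)
mask = w < f_p                                     # restrict to the pass-band
print('mean pass-band delay: linear-phase %0.1f ms, minimum-phase %0.1f ms'
      % (1e3 * gd_lin[mask].mean() / sfreq, 1e3 * gd_min[mask].mean() / sfreq))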
dur = 10.
center = 2.
morlet_freq = f_p
tlim = [center - 0.2, center + 0.2]
tticks = [tlim[0], center, tlim[1]]
flim = [20, 70]
x = np.zeros(int(sfreq * dur) + 1)
blip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20.
n_onset = int(center * sfreq) - len(blip) // 2
x[n_onset:n_onset + len(blip)] += blip
x_orig = x.copy()
rng = np.random.RandomState(0)
x += rng.randn(len(x)) / 1000.
x += np.sin(2. * np.pi * 60. * np.arange(len(x)) / sfreq) / 2000.
Explanation: Applying FIR filters
Now let's look at some practical effects of these filters by applying
them to some data.
Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)
plus noise (random and line). Note that the original clean signal contains
frequency content in both the pass band and transition bands of our
low-pass filter.
End of explanation
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin', verbose=True)
x_v16 = np.convolve(h, x)
# this is the linear->zero phase, causal-to-non-causal conversion / shift
x_v16 = x_v16[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.16 default', flim=flim,
compensate=True)
Explanation: Filter it with a shallow cutoff, linear-phase FIR (which allows us to
compensate for the constant filter delay):
End of explanation
transition_band = 0.25 * f_p
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent:
# filter_dur = 6.6 / transition_band # sec
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
fir_design='firwin2', verbose=True)
x_v14 = np.convolve(h, x)[len(h) // 2:]
plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.14 default', flim=flim,
compensate=True)
Explanation: Filter it with a different design method fir_design="firwin2", and also
compensate for the constant filter delay. This method does not produce
quite as sharp a transition compared to fir_design="firwin", despite
being twice as long:
End of explanation
transition_band = 0.5 # Hz
f_s = f_p + transition_band
filter_dur = 10. # sec
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
# This would be equivalent
# n = int(sfreq * filter_dur)
# h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
h_trans_bandwidth=transition_band,
filter_length='%ss' % filter_dur,
fir_design='firwin2', verbose=True)
x_v13 = np.convolve(np.convolve(h, x)[::-1], h)[::-1][len(h) - 1:-len(h) - 1]
# the effective h is one that is applied to the time-reversed version of itself
h_eff = np.convolve(h, h[::-1])
plot_filter(h_eff, sfreq, freq, gain, 'MNE-Python 0.13 default', flim=flim,
compensate=True)
Explanation: Let's also filter with the MNE-Python 0.13 default, which is a
long-duration, steep cutoff FIR that gets applied twice:
End of explanation
h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5)
x_mne_c = np.convolve(h, x)[len(h) // 2:]
transition_band = 5 # Hz (default in MNE-C)
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'MNE-C default', flim=flim, compensate=True)
Explanation: Let's also filter it with the MNE-C default, which is a long-duration
steep-slope FIR filter designed using frequency-domain techniques:
End of explanation
h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
phase='minimum', fir_design='firwin',
verbose=True)
x_min = np.convolve(h, x)
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, sfreq, freq, gain, 'Minimum-phase filter', flim=flim)
Explanation: And now an example of a minimum-phase filter:
End of explanation
axes = plt.subplots(1, 2)[1]
def plot_signal(x, offset):
Plot a signal.
t = np.arange(len(x)) / sfreq
axes[0].plot(t, x + offset)
axes[0].set(xlabel='Time (s)', xlim=t[[0, -1]])
X = fft(x)
freqs = fftfreq(len(x), 1. / sfreq)
mask = freqs >= 0
X = X[mask]
freqs = freqs[mask]
axes[1].plot(freqs, 20 * np.log10(np.maximum(np.abs(X), 1e-16)))
axes[1].set(xlim=flim)
yscale = 30
yticklabels = ['Original', 'Noisy', 'FIR-firwin (0.16)', 'FIR-firwin2 (0.14)',
'FIR-steep (0.13)', 'FIR-steep (MNE-C)', 'Minimum-phase']
yticks = -np.arange(len(yticklabels)) / yscale
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_v16, offset=yticks[2])
plot_signal(x_v14, offset=yticks[3])
plot_signal(x_v13, offset=yticks[4])
plot_signal(x_mne_c, offset=yticks[5])
plot_signal(x_min, offset=yticks[6])
axes[0].set(xlim=tlim, title='FIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-len(yticks) / yscale, 1. / yscale],
yticks=yticks, yticklabels=yticklabels)
for text in axes[0].get_yticklabels():
text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.tight_layout()
plt.show()
Explanation: Both the MNE-Python 0.13 and MNE-C filters have excellent frequency
attenuation, but it comes at a cost of potential
ringing (long-lasting ripples) in the time domain. Ringing can occur with
steep filters, especially in signals with frequency content around the
transition band. Our Morlet wavelet signal has power in our transition band,
and the time-domain ringing is thus more pronounced for the steep-slope,
long-duration filter than the shorter, shallower-slope filter:
End of explanation
sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(dict(sos=sos), sfreq, freq, gain, 'Butterworth order=2', flim=flim,
compensate=True)
x_shallow = signal.sosfiltfilt(sos, x)
del sos
Explanation: IIR filters
MNE-Python also offers IIR filtering functionality that is based on the
methods from :mod:scipy.signal. Specifically, we use the general-purpose
functions :func:scipy.signal.iirfilter and :func:scipy.signal.iirdesign,
which provide unified interfaces to IIR filter design.
Designing IIR filters
Let's continue with our design of a 40 Hz low-pass filter and look at
some trade-offs of different IIR filters.
Often the default IIR filter is a Butterworth filter_, which is designed
to have a maximally flat pass-band. Let's look at a few filter orders,
i.e., a few different number of coefficients used and therefore steepness
of the filter:
<div class="alert alert-info"><h4>Note</h4><p>Notice that the group delay (which is related to the phase) of
the IIR filters below are not constant. In the FIR case, we can
design so-called linear-phase filters that have a constant group
delay, and thus compensate for the delay (making the filter
non-causal) if necessary. This cannot be done with IIR filters, as
they have a non-linear phase (non-constant group delay). As the
filter order increases, the phase distortion near and in the
transition band worsens. However, if non-causal (forward-backward)
filtering can be used, e.g. with :func:`scipy.signal.filtfilt`,
these phase issues can theoretically be mitigated.</p></div>
End of explanation
iir_params = dict(order=8, ftype='butter')
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain, 'Butterworth order=8', flim=flim,
compensate=True)
x_steep = signal.sosfiltfilt(filt['sos'], x)
Explanation: The falloff of this filter is not very steep.
<div class="alert alert-info"><h4>Note</h4><p>Here we have made use of second-order sections (SOS)
by using :func:`scipy.signal.sosfilt` and, under the
hood, :func:`scipy.signal.zpk2sos` when passing the
``output='sos'`` keyword argument to
:func:`scipy.signal.iirfilter`. The filter definitions
given `above <tut_filtering_basics>` use the polynomial
numerator/denominator (sometimes called "tf") form ``(b, a)``,
which are theoretically equivalent to the SOS form used here.
In practice, however, the SOS form can give much better results
due to issues with numerical precision (see
:func:`scipy.signal.sosfilt` for an example), so SOS should be
used whenever possible.</p></div>
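As a quick illustration (a sketch, not part of the original tutorial), you can compare the two forms directly; the sample rate and cutoff below are assumed values chosen to match this tutorial:
```
import numpy as np
from scipy import signal

sfreq, f_p = 1000., 40.              # assumed sampling rate and low-pass cutoff
nyq = sfreq / 2.
b, a = signal.iirfilter(8, f_p / nyq, btype='low', ftype='butter', output='ba')
sos = signal.iirfilter(8, f_p / nyq, btype='low', ftype='butter', output='sos')
x = np.random.RandomState(0).randn(2000)
y_ba = signal.filtfilt(b, a, x)      # polynomial ("tf") form
y_sos = signal.sosfiltfilt(sos, x)   # second-order sections
print(np.max(np.abs(y_ba - y_sos)))  # small at this order, grows as the order increases
```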
Let's increase the order, and note that now we have better attenuation,
with a longer impulse response. Let's also switch to using the MNE filter
design function, which simplifies a few things and gives us some information
about the resulting filter:
End of explanation
iir_params.update(ftype='cheby1',
rp=1., # dB of acceptable pass-band ripple
)
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain,
'Chebychev-1 order=8, ripple=1 dB', flim=flim, compensate=True)
Explanation: There are other types of IIR filters that we can use. For a complete list,
check out the documentation for :func:scipy.signal.iirdesign. Let's
try a Chebychev (type I) filter, which trades off ripple in the pass-band
to get better attenuation in the stop-band:
End of explanation
iir_params['rp'] = 6.
filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p,
method='iir', iir_params=iir_params,
verbose=True)
plot_filter(filt, sfreq, freq, gain,
'Chebychev-1 order=8, ripple=6 dB', flim=flim,
compensate=True)
Explanation: If we can live with even more ripple, we can get it slightly steeper,
but the impulse response begins to ring substantially longer (note the
different x-axis scale):
End of explanation
axes = plt.subplots(1, 2)[1]
yticks = np.arange(4) / -30.
yticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
axes[0].set(xlim=tlim, title='IIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.125, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axes[0].get_yticklabels():
text.set(rotation=45, size=8)
axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
Explanation: Applying IIR filters
Now let's look at how our shallow and steep Butterworth IIR filters
perform on our Morlet signal from before:
End of explanation
x = np.zeros(int(2 * sfreq))
t = np.arange(0, len(x)) / sfreq - 0.2
onset = np.where(t >= 0.5)[0][0]
cos_t = np.arange(0, int(sfreq * 0.8)) / sfreq
sig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t)
x[onset:onset + len(sig)] = sig
iir_lp_30 = signal.iirfilter(2, 30. / sfreq, btype='lowpass')
iir_hp_p1 = signal.iirfilter(2, 0.1 / sfreq, btype='highpass')
iir_lp_2 = signal.iirfilter(2, 2. / sfreq, btype='lowpass')
iir_hp_2 = signal.iirfilter(2, 2. / sfreq, btype='highpass')
x_lp_30 = signal.filtfilt(iir_lp_30[0], iir_lp_30[1], x, padlen=0)
x_hp_p1 = signal.filtfilt(iir_hp_p1[0], iir_hp_p1[1], x, padlen=0)
x_lp_2 = signal.filtfilt(iir_lp_2[0], iir_lp_2[1], x, padlen=0)
x_hp_2 = signal.filtfilt(iir_hp_2[0], iir_hp_2[1], x, padlen=0)
xlim = t[[0, -1]]
ylim = [-2, 6]
xlabel = 'Time (sec)'
ylabel = r'Amplitude ($\mu$V)'
tticks = [0, 0.5, 1.3, t[-1]]
axes = plt.subplots(2, 2)[1].ravel()
for ax, x_f, title in zip(axes, [x_lp_2, x_lp_30, x_hp_2, x_hp_p1],
['LP$_2$', 'LP$_{30}$', 'HP$_2$', 'HP$_{0.1}$']):
ax.plot(t, x, color='0.5')
ax.plot(t, x_f, color='k', linestyle='--')
ax.set(ylim=ylim, xlim=xlim, xticks=tticks,
title=title, xlabel=xlabel, ylabel=ylabel)
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.show()
Explanation: Some pitfalls of filtering
Multiple recent papers have noted potential risks of drawing
errant inferences due to misapplication of filters.
Low-pass problems
Filters in general, especially those that are non-causal (zero-phase), can
make activity appear to occur earlier or later than it truly did. As
mentioned in VanRullen (2011) [3], investigations of commonly (at the time)
used low-pass filters created artifacts when they were applied to simulated
data. However, such deleterious effects were minimal in many real-world
examples in Rousselet (2012) [5].
Perhaps more revealing, it was noted in Widmann & Schröger (2012) [6] that
the problematic low-pass filters from VanRullen (2011) [3]:
Used a least-squares design (like :func:scipy.signal.firls) that
included "do-not-care" transition regions, which can lead to
uncontrolled behavior.
Had a filter length that was independent of the transition bandwidth,
which can cause excessive ringing and signal distortion.
High-pass problems
When it comes to high-pass filtering, using corner frequencies above 0.1 Hz
were found in Acunzo et al. (2012) [4]_ to:
"... generate a systematic bias easily leading to misinterpretations of
neural activity.”
In a related paper, Widmann et al. (2015) [7] also came to suggest a
0.1 Hz highpass. More evidence followed in Tanner et al. (2015) [8] of
such distortions. Using data from language ERP studies of semantic and
syntactic processing (i.e., N400 and P600), using a high-pass above 0.3 Hz
caused significant effects to be introduced implausibly early when compared
to the unfiltered data. From this, the authors suggested the optimal
high-pass value for language processing to be 0.1 Hz.
We can recreate a problematic simulation from Tanner et al. (2015) [8]_:
"The simulated component is a single-cycle cosine wave with an amplitude
of 5µV [sic], onset of 500 ms poststimulus, and duration of 800 ms. The
simulated component was embedded in 20 s of zero values to avoid
filtering edge effects... Distortions [were] caused by 2 Hz low-pass
and high-pass filters... No visible distortion to the original
waveform [occurred] with 30 Hz low-pass and 0.01 Hz high-pass filters...
Filter frequencies correspond to the half-amplitude (-6 dB) cutoff
(12 dB/octave roll-off)."
<div class="alert alert-info"><h4>Note</h4><p>This simulated signal contains energy not just within the
pass-band, but also within the transition and stop-bands -- perhaps
most easily understood because the signal has a non-zero DC value,
but also because it is a shifted cosine that has been
*windowed* (here multiplied by a rectangular window), which
makes the cosine and DC frequencies spread to other frequencies
(multiplication in time is convolution in frequency, so multiplying
by a rectangular window in the time domain means convolving a sinc
function with the impulses at DC and the cosine frequency in the
frequency domain).</p></div>
End of explanation
def baseline_plot(x):
all_axes = plt.subplots(3, 2)[1]
for ri, (axes, freq) in enumerate(zip(all_axes, [0.1, 0.3, 0.5])):
for ci, ax in enumerate(axes):
if ci == 0:
iir_hp = signal.iirfilter(4, freq / sfreq, btype='highpass',
output='sos')
x_hp = signal.sosfiltfilt(iir_hp, x, padlen=0)
else:
x_hp -= x_hp[t < 0].mean()
ax.plot(t, x, color='0.5')
ax.plot(t, x_hp, color='k', linestyle='--')
if ri == 0:
ax.set(title=('No ' if ci == 0 else '') +
'Baseline Correction')
ax.set(xticks=tticks, ylim=ylim, xlim=xlim, xlabel=xlabel)
ax.set_ylabel('%0.1f Hz' % freq, rotation=0,
horizontalalignment='right')
mne.viz.adjust_axes(axes)
mne.viz.tight_layout()
plt.suptitle(title)
plt.show()
baseline_plot(x)
Explanation: Similarly, in a P300 paradigm reported by Kappenman & Luck (2010) [12]_,
they found that applying a 1 Hz high-pass decreased the probability of
finding a significant difference in the N100 response, likely because
the P300 response was smeared (and inverted) in time by the high-pass
filter such that it tended to cancel out the increased N100. However,
they nonetheless note that some high-passing can still be useful to deal
with drifts in the data.
Even though these papers generally advise a 0.1 Hz or lower frequency for
a high-pass, it is important to keep in mind (as most authors note) that
filtering choices should depend on the frequency content of both the
signal(s) of interest and the noise to be suppressed. For example, in
some of the MNE-Python examples involving the sample-dataset dataset,
high-pass values of around 1 Hz are used when looking at auditory
or visual N100 responses, because we analyze standard (not deviant) trials
and thus expect that contamination by later or slower components will
be limited.
Baseline problems (or solutions?)
In an evolving discussion, Tanner et al. (2015) [8] suggest using baseline
correction to remove slow drifts in data. However, Maess et al. (2016) [9]
suggest that baseline correction, which is a form of high-passing, does
not offer substantial advantages over standard high-pass filtering.
Tanner et al. (2016) [10]_ rebutted that baseline correction can correct
for problems with filtering.
To see what they mean, consider again our old simulated signal x from
before:
End of explanation
n_pre = (t < 0).sum()
sig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre))
x[:n_pre] += sig_pre
baseline_plot(x)
Explanation: In response, Maess et al. (2016) [11]_ note that these simulations do not
address cases of pre-stimulus activity that is shared across conditions, as
applying baseline correction will effectively copy the topology outside the
baseline period. We can see this if we give our signal x some
consistent pre-stimulus activity, which makes everything look bad.
<div class="alert alert-info"><h4>Note</h4><p>An important thing to keep in mind with these plots is that they
are for a single simulated sensor. In multi-electrode recordings
the topology (i.e., spatial pattern) of the pre-stimulus activity
will leak into the post-stimulus period. This will likely create a
spatially varying distortion of the time-domain signals, as the
averaged pre-stimulus spatial pattern gets subtracted from the
sensor time courses.</p></div>
Putting some activity in the baseline period:
End of explanation
# Use the same settings as when calling e.g., `raw.filter()`
fir_coefs = mne.filter.create_filter(
data=None, # data is only used for sanity checking, not strictly needed
sfreq=1000., # sfreq of your data in Hz
l_freq=None,
h_freq=40., # assuming a lowpass of 40 Hz
method='fir',
fir_window='hamming',
fir_design='firwin',
verbose=True)
# See the printed log for the transition bandwidth and filter length.
# Alternatively, get the filter length through:
filter_length = fir_coefs.shape[0]
Explanation: Both groups seem to acknowledge that the choices of filtering cutoffs, and
perhaps even the application of baseline correction, depend on the
characteristics of the data being investigated, especially when it comes to:
The frequency content of the underlying evoked activity relative
to the filtering parameters.
The validity of the assumption of no consistent evoked activity
in the baseline period.
We thus recommend carefully applying baseline correction and/or high-pass
values based on the characteristics of the data to be analyzed.
Filtering defaults
Defaults in MNE-Python
Most often, filtering in MNE-Python is done at the :class:mne.io.Raw level,
and thus :func:mne.io.Raw.filter is used. This function under the hood
(among other things) calls :func:mne.filter.filter_data to actually
filter the data, which by default applies a zero-phase FIR filter designed
using :func:scipy.signal.firwin. In Widmann et al. (2015) [7]_, they
suggest a specific set of parameters to use for high-pass filtering,
including:
"... providing a transition bandwidth of 25% of the lower passband
edge but, where possible, not lower than 2 Hz and otherwise the
distance from the passband edge to the critical frequency.”
In practice, this means that for each high-pass value l_freq or
low-pass value h_freq below, you would get this corresponding
l_trans_bandwidth or h_trans_bandwidth, respectively,
if the sample rate were 100 Hz (i.e., Nyquist frequency of 50 Hz):
+------------------+-------------------+-------------------+
| l_freq or h_freq | l_trans_bandwidth | h_trans_bandwidth |
+==================+===================+===================+
| 0.01 | 0.01 | 2.0 |
+------------------+-------------------+-------------------+
| 0.1 | 0.1 | 2.0 |
+------------------+-------------------+-------------------+
| 1.0 | 1.0 | 2.0 |
+------------------+-------------------+-------------------+
| 2.0 | 2.0 | 2.0 |
+------------------+-------------------+-------------------+
| 4.0 | 2.0 | 2.0 |
+------------------+-------------------+-------------------+
| 8.0 | 2.0 | 2.0 |
+------------------+-------------------+-------------------+
| 10.0 | 2.5 | 2.5 |
+------------------+-------------------+-------------------+
| 20.0 | 5.0 | 5.0 |
+------------------+-------------------+-------------------+
| 40.0 | 10.0 | 10.0 |
+------------------+-------------------+-------------------+
| 50.0 | 12.5 | 12.5 |
+------------------+-------------------+-------------------+
MNE-Python has adopted this definition for its high-pass (and low-pass)
transition bandwidth choices when using l_trans_bandwidth='auto' and
h_trans_bandwidth='auto'.
To choose the filter length automatically with filter_length='auto',
the reciprocal of the shortest transition bandwidth is used to ensure
decent attenuation at the stop frequency. Specifically, the reciprocal
(in samples) is multiplied by 3.1, 3.3, or 5.0 for the Hann, Hamming,
or Blackman windows, respectively, as selected by the fir_window
argument for fir_design='firwin', and double these for
fir_design='firwin2' mode.
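For a concrete feel of these rules, here is a small stand-alone sketch (an approximation of the behaviour described above, not the exact MNE internals; rounding details may differ):
```
def auto_transition(edge, sfreq, kind='high'):
    # 25% of the passband edge, but not below 2 Hz, and never wider than
    # the distance from the edge to the critical frequency (0 Hz or Nyquist)
    limit = edge if kind == 'high' else sfreq / 2. - edge
    return min(max(edge * 0.25, 2.), limit)

def auto_length(trans_bw, sfreq, mult=3.3):  # 3.3 -> Hamming window, firwin design
    n = int(round(mult * sfreq / trans_bw))
    return n + 1 if n % 2 == 0 else n        # filter length is kept odd

sfreq = 100.
for l_freq in (0.01, 0.1, 1., 2., 4., 8., 10., 20., 40.):
    tb = auto_transition(l_freq, sfreq, kind='high')
    print(l_freq, tb, auto_length(tb, sfreq))
```
The printed transition bandwidths reproduce the l_trans_bandwidth column of the table above.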
<div class="alert alert-info"><h4>Note</h4><p>For ``fir_design='firwin2'``, the multiplicative factors are
doubled compared to what is given in Ifeachor & Jervis (2002) [2]_
(p. 357), as :func:`scipy.signal.firwin2` has a smearing effect
on the frequency response, which we compensate for by
increasing the filter length. This is why
``fir_design='firwin'`` is preferred to ``fir_design='firwin2'``.</p></div>
In 0.14, we default to using a Hamming window in filter design, as it
provides up to 53 dB of stop-band attenuation with small pass-band ripple.
<div class="alert alert-info"><h4>Note</h4><p>In band-pass applications, often a low-pass filter can operate
effectively with fewer samples than the high-pass filter, so
it is advisable to apply the high-pass and low-pass separately
when using ``fir_design='firwin2'``. For design mode
``fir_design='firwin'``, there is no need to separate the
operations, as the lowpass and highpass elements are constructed
separately to meet the transition band requirements.</p></div>
For more information on how to use the
MNE-Python filtering functions with real data, consult the preprocessing
tutorial on tut-filter-resample.
Defaults in MNE-C
MNE-C by default uses:
5 Hz transition band for low-pass filters.
3-sample transition band for high-pass filters.
Filter length of 8197 samples.
The filter is designed in the frequency domain, creating a linear-phase
filter such that the delay is compensated for as is done with the MNE-Python
phase='zero' filtering option.
Squared-cosine ramps are used in the transition regions. Because these
are used in place of more gradual (e.g., linear) transitions,
a given transition width will result in more temporal ringing but also more
rapid attenuation than the same transition width in windowed FIR designs.
The default filter length will generally have excellent attenuation
but long ringing for the sample rates typically encountered in M/EEG data
(e.g. 500-2000 Hz).
Defaults in other software
A good but possibly outdated comparison of filtering in various software
packages is available in Widmann et al. (2015) [7]_. Briefly:
EEGLAB
MNE-Python 0.14 defaults to behavior very similar to that of EEGLAB
(see the EEGLAB filtering FAQ_ for more information).
FieldTrip
By default FieldTrip applies a forward-backward Butterworth IIR filter
of order 4 (band-pass and band-stop filters) or 2 (for low-pass and
high-pass filters). Similar filters can be achieved in MNE-Python when
filtering with :meth:raw.filter(..., method='iir') <mne.io.Raw.filter>
(see also :func:mne.filter.construct_iir_filter for options).
For more information, see e.g. the
FieldTrip band-pass documentation <ftbp_>_.
Reporting Filters
On page 45 in Widmann et al. (2015) [7]_, there is a convenient list of
important filter parameters that should be reported with each publication:
Filter type (high-pass, low-pass, band-pass, band-stop, FIR, IIR)
Cutoff frequency (including definition)
Filter order (or length)
Roll-off or transition bandwidth
Passband ripple and stopband attenuation
Filter delay (zero-phase, linear-phase, non-linear phase) and causality
Direction of computation (one-pass forward/reverse, or two-pass forward
and reverse)
In the following, we will address how to deal with these parameters in MNE:
Filter type
Depending on the function or method used, the filter type can be specified.
To name an example, in :func:mne.filter.create_filter, the relevant
arguments would be l_freq, h_freq, method, and if the method is
FIR fir_window and fir_design.
Cutoff frequency
The cutoff of FIR filters in MNE is defined as half-amplitude cutoff in the
middle of the transition band. That is, if you construct a lowpass FIR filter
with h_freq = 40, the filter function will provide a transition
bandwidth that depends on the h_trans_bandwidth argument. The desired
half-amplitude cutoff of the lowpass FIR filter is then at
h_freq + transition_bandwidth/2..
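As a quick numeric check, using the 'auto' value from the table above (an illustrative sketch):
```
h_freq, h_trans_bandwidth = 40., 10.     # values taken from the table above
print(h_freq + h_trans_bandwidth / 2.)   # -> 45.0 Hz half-amplitude cutoff
```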
Filter length (order) and transition bandwidth (roll-off)
In the tut_filtering_in_python section, we have already talked about
the default filter lengths and transition bandwidths that are used when no
custom values are specified using the respective filter function's arguments.
If you want to find out about the filter length and transition bandwidth that
were used through the 'auto' setting, you can use
:func:mne.filter.create_filter to print out the settings once more:
End of explanation |
3,336 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic read and write operations
In this example we will explore how to create and read a simple Lightning Memory-Mapped Database (LMDB) using pyxis.
Writing data
Step1: Let's start by creating some data that we can store. 2000 images with shape (254, 254, 3) along with 2000 targets are generated.
Step2: The tenth image is set to be a completely red image.
Step3: In the next step we instantiate a pyxis writer to write the data we created above to a directory called data.
Step4: The map_size_limit is the size of the LMDB in MB. For file systems running ext4, there is no big cost associated with making this big. ram_gb_limit is the limit on how many GB of data we can push per write operation.
When the LMDB writer is set up, we can run the pyxis.put_samples function to write data. Ideally you should send large data blobs at once. Here we will write all the 2000 samples generated above at once. If the size of the data you have is larger than the size of your RAM, then you can perform multiple calls to pyxis.put_samples.
Step5: Data can be input in two different ways
Step6: Reading data
Step7: To read data from a LMDB we instantiate a pyxis reader using the directory of the database.
Step8: All addressable keys can be found by calling the pyxis.get_data_keys function
Step9: Python list indexing syntax is used to retrieve a whole sample from the LMDB. The pyxis.get_sample function can be used as well. The returned value is a Python dictionary, addressable by the keys used when writing the database.
Step10: The code snippet above retrieves the tenth sample, i.e. the one with the red image.
Step11: Slice objects are supported by the indexing method outlined above. For example
Step12: Python 2.7 tends to favour __getslice__ over __getitem__ when slicing; however, __getslice__ has been deprecated since Python 2.0. The functionality may vary when compared with Python 3. See the Python 2 Data model documentation for more information.
Data within a sample can be retrieved directly by using the pyxis.get_data_value function. Here we ask the LMDB for the hundredth y value.
Step13: Use len() to return the number of samples in the LMDB.
Step14: Just like when writing, we should make sure to close the LMDB environment after we are done reading. | Python Code:
from __future__ import print_function
import time
import numpy as np
import pyxis as px
np.random.seed(1234)
Explanation: Basic read and write operations
In this example we will explore how to create and read a simple Lightning Memory-Mapped Database (LMDB) using pyxis.
Writing data
End of explanation
nb_samples = 2000
X = np.zeros((nb_samples, 254, 254, 3), dtype=np.uint8)
y = np.arange(nb_samples, dtype=np.uint8)
Explanation: Let's start by creating some data that we can store. 2000 images with shape (254, 254, 3) along with 2000 targets are generated.
End of explanation
X[10, :, :, 0] = 255
Explanation: The tenth image is set to be a completely red image.
End of explanation
db = px.Writer(dirpath='data', map_size_limit=500, ram_gb_limit=2)
Explanation: In the next step we instantiate a pyxis writer to write the data we created above to a directory called data.
End of explanation
start = time.time()
db.put_samples('X', X, 'y', y)
print('Average time per image = {:.4f}s'.format((time.time() - start) / nb_samples))
Explanation: The map_size_limit is the size of the LMDB in MB. For file systems running ext4, there is no big cost associated with making this big. ram_gb_limit is the limit on how many GB of data we can push per write operation.
When the LMDB writer is set up, we can run the pyxis.put_samples function to write data. Ideally you should send large data blobs at once. Here we will write all the 2000 samples generated above at once. If the size of the data you have is larger than the size of your RAM, then you can perform multiple calls to pyxis.put_samples.
End of explanation
db.close()
Explanation: Data can be input in two different ways:
pyxis.put_samples('key1', value1, 'key2', value2, ...)
pyxis.put_samples({'key1': value1, 'key2': value2, ...})
The code example above uses the first option with alternating keys and values. The second option is simply a Python dictionary.
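For instance, the write above could equivalently have been issued with the dictionary form (a sketch, assuming the same db, X and y objects):
```
db.put_samples({'X': X, 'y': y})   # same effect as the alternating key/value call
```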
There are two requirements of data that is put in a pyxis database:
All values must have the type numpy.ndarray.
The first axis in all values must represent the sample number.
Make sure to close the LMDB environment after writing. The close operation makes sure to store the number of samples that has been written.
End of explanation
try:
%matplotlib inline
import matplotlib.pyplot as plt
except ImportError:
raise ImportError('Could not import the matplotlib library required to '
'plot images. Please refer to http://matplotlib.org/ '
'for installation instructions.')
Explanation: Reading data
End of explanation
db = px.Reader('data')
Explanation: To read data from a LMDB we instantiate a pyxis reader using the directory of the database.
End of explanation
db.get_data_keys()
Explanation: All addressable keys can be found by calling the pyxis.get_data_keys function:
End of explanation
sample = db[10]
print('X: ', sample['X'].shape, sample['X'].dtype)
print('y: ', sample['y'].shape, sample['y'].dtype)
Explanation: Python list indexing syntax is used to retrieve a whole sample from the LMDB. The pyxis.get_sample function can be used as well. The returned value is a Python dictionary, addressable by the keys used when writing the database.
End of explanation
plt.figure()
plt.imshow(sample['X'])
plt.axis('off')
plt.show()
Explanation: The code snippet above retrieves the tenth sample, i.e. the one with the red image.
End of explanation
samples = db[0:8:2]
for sample in samples:
print(sample['y'])
Explanation: Slice objects are supported by the indexing method outlined above. For example:
End of explanation
db.get_data_value(100, 'y')
Explanation: Python 2.7 tends to favour __getslice__ over __getitem__ when slicing; however, __getslice__ has been deprecated since Python 2.0. The functionality may vary when compared with Python 3. See the Python 2 Data model documentation for more information.
Data within a sample can be retrieved directly by using the pyxis.get_data_value function. Here we ask the LMDB for the hundredth y value.
End of explanation
len(db)
Explanation: Use len() to return the number of samples in the LMDB.
End of explanation
db.close()
Explanation: Just like when writing, we should make sure to close the LMDB environment after we are done reading.
End of explanation |
3,337 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook was created by Sergey Tomin ([email protected]). Source and license info is on GitHub. April 2020.
Tutorial N6. Coupler Kick.
Second order tracking with coupler kick in TESLA type cavity of the 200k particles.
As an example, we will use linac L1 of the European XFEL Injector.
The input coupler and the higher order mode couplers of the RF cavities distort the axial symmetry of the electromagnetic (EM) field and affect the electron beam. This effect can be calculated by direct tracking of the particles in the asymmetric (due to the couplers) 3D EM field using a tracking code (e.g. ASTRA). For fast estimation of the coupler effect a discrete coupler model (as described, for example in M. Dohlus et al, Coupler Kick for Very Short Bunches and its Compensation, Proc. of EPAC08, MOPP013 or T.Hellert and M.Dohlus, Detuning related coupler kick variation of a superconducting nine-cell 1.3 GHz cavity) was implemented in OCELOT. Coefficients for 1.3 GHz modules are given in M.Dohlus, Effects of RF coupler kicks in L1 of EXFEL. The 1st order part of the model includes time and offset dependency; the offset dependency has a skew component. To include effect of all couplers, the kicks are applied at the entrance and the exit of each cavity.
The zeroth and first order kick $\vec k$ on a bunch induced by a coupler can be expressed as
\begin{equation}
\vec k(x, y) \approx \frac{eV_0}{E_0} \Re \left{ \left(
\begin{matrix}
V_{x0}\
V_{y0}
\end{matrix} \right) + \left(
\begin{matrix}
V_{xx} & V_{xy} \
V_{yx} & V_{yy}
\end{matrix}\right)
\left(
\begin{matrix}
x\
y
\end{matrix} \right) e^{i \phi}\right}
\end{equation}
with $E_0$ being the beam energy at the corresponding coupler region, $V_0$ and $\phi$ the amplitude and phase of the accelerating field, respectively, $e$ the elementary charge and $x$ and $y$ the transverse beam position at the coupler location. From Maxwell equations it follows that $V_{yy} = −V_{xx}$ and $V_{xy} = V_{yx}$. Thus, coupler kicks are up to first order well described with four normalized coupler kick coefficients $[V_{0x}, V_{0y}, V_{xx}, V_{xy}]$.
In OCELOT one can define copler kick coefficients for upstream and downstream coplers.
python
Cavity(l=0., v=0., phi=0., freq=0., vx_up=0, vy_up=0, vxx_up=0, vxy_up=0,
vx_down=0, vy_down=0, vxx_down=0, vxy_down=0, eid=None)
This example will cover the following topics
Step1: Twiss parameters with and without coupler kick
Step2: Trajectories with Coupler Kick
Step3: Horizontal and vertical emittances
Before starting, we remove the zero-order terms (dipole kicks) from the coupler kick coefficients.
Then we check whether any asymmetry remains.
Step4: Tracking of the particles through the lattice with coupler kicks
Steps
Step5: Eigenemittance
As we can see, the projected emittances are not preserved, although all matrices are symplectic. The reason is that the coupler kicks introduce coupling between the $X$ and $Y$ planes, while the projected emittances are invariants only under linear uncoupled (with respect to the laboratory coordinate system) symplectic transport.
However, there are invariants under arbitrary (possibly coupled) linear symplectic transformations - eigenemittances. Details can be found here V. Balandin and N. Golubeva "Notes on Linear Theory of Coupled Particle Beams with Equal Eigenemittances" and V.Balandin et al "Twiss Parameters of Coupled Particle Beams with Equal Eigenemittances" | Python Code:
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
from time import time
# this python library provides generic shallow (copy)
# and deep copy (deepcopy) operations
from copy import deepcopy
# import from Ocelot main modules and functions
from ocelot import *
# extra function to track the Particle though a lattice
from ocelot.cpbd.track import lattice_track
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
# import lattice
from xfel_l1 import *
tws0 = Twiss()
tws0.E = 0.005
tws0.beta_x = 7.03383607232
tws0.beta_y = 4.83025657816
tws0.alpha_x = 0.981680481977
tws0.alpha_y = -0.524776086698
tws0.E = 0.1300000928
lat = MagneticLattice(cell_l1, start=bpmf_103_i1, stop=qd_210_b1)
# twiss parameters without coupler kick
tws1 = twiss(lat, tws0)
# adding coupler coefficients in [1/m]
for elem in lat.sequence:
if elem.__class__ == Cavity:
if not(".AH1." in elem.id):
# 1.3 GHz cavities
elem.vx_up = (-56.813 + 10.751j) * 1e-6
elem.vy_up = (-41.091 + 0.5739j) * 1e-6
elem.vxx_up = (0.99943 - 0.81401j) * 1e-3
elem.vxy_up = (3.4065 - 0.4146j) * 1e-3
elem.vx_down = (-24.014 + 12.492j) * 1e-6
elem.vy_down = (36.481 + 7.9888j) * 1e-6
elem.vxx_down = (-4.057 - 0.1369j) * 1e-3
elem.vxy_down = (2.9243 - 0.012891j) * 1e-3
else:
# AH1 cavity (3.9 GHz) module names are 'C3.AH1.1.1.I1', 'C3.AH1.1.2.I1', ...
# Modules with odd and even number X 'C3.AH1.1.X.I1' have different coefficients
module_number = float(elem.id.split(".")[-2])
if module_number % 2 == 1:
elem.vx_up = -0.00057076 - 1.3166e-05j
elem.vy_up = -3.5079e-05 + 0.00012636j
elem.vxx_up = -0.026045 - 0.042918j
elem.vxy_up = 0.0055553 - 0.023455j
elem.vx_down = -8.8766e-05 - 0.00024852j
elem.vy_down = 2.9889e-05 + 0.00014486j
elem.vxx_down = -0.0050593 - 0.013491j
elem.vxy_down = 0.0051488 + 0.024771j
else:
elem.vx_up = 0.00057076 + 1.3166e-05j
elem.vy_up = 3.5079e-05 - 0.00012636j
elem.vxx_up = -0.026045 - 0.042918j
elem.vxy_up = 0.0055553 - 0.023455j
elem.vx_down = 8.8766e-05 + 0.00024852j
elem.vy_down = -2.9889e-05 - 0.00014486j
elem.vxx_down = -0.0050593 - 0.013491j
elem.vxy_down = 0.0051488 + 0.024771j
# update transfer maps
lat.update_transfer_maps()
tws = twiss(lat, tws0)
Explanation: This notebook was created by Sergey Tomin ([email protected]). Source and license info is on GitHub. April 2020.
Tutorial N6. Coupler Kick.
Second order tracking with coupler kick in TESLA type cavity of the 200k particles.
As an example, we will use linac L1 of the European XFEL Injector.
The input coupler and the higher order mode couplers of the RF cavities distort the axial symmetry of the electromagnetic (EM) field and affect the electron beam. This effect can be calculated by direct tracking of the particles in the asymmetric (due to the couplers) 3D EM field using a tracking code (e.g. ASTRA). For fast estimation of the coupler effect a discrete coupler model (as described, for example in M. Dohlus et al, Coupler Kick for Very Short Bunches and its Compensation, Proc. of EPAC08, MOPP013 or T.Hellert and M.Dohlus, Detuning related coupler kick variation of a superconducting nine-cell 1.3 GHz cavity) was implemented in OCELOT. Coefficients for 1.3 GHz modules are given in M.Dohlus, Effects of RF coupler kicks in L1 of EXFEL. The 1st order part of the model includes time and offset dependency; the offset dependency has a skew component. To include effect of all couplers, the kicks are applied at the entrance and the exit of each cavity.
The zeroth and first order kick $\vec k$ on a bunch induced by a coupler can be expressed as
\begin{equation}
\vec k(x, y) \approx \frac{eV_0}{E_0} \Re \left\{ \left(
\begin{matrix}
V_{x0}\\
V_{y0}
\end{matrix} \right) + \left(
\begin{matrix}
V_{xx} & V_{xy} \\
V_{yx} & V_{yy}
\end{matrix}\right)
\left(
\begin{matrix}
x\\
y
\end{matrix} \right) e^{i \phi}\right\}
\end{equation}
with $E_0$ being the beam energy at the corresponding coupler region, $V_0$ and $\phi$ the amplitude and phase of the accelerating field, respectively, $e$ the elementary charge and $x$ and $y$ the transverse beam position at the coupler location. From Maxwell equations it follows that $V_{yy} = −V_{xx}$ and $V_{xy} = V_{yx}$. Thus, coupler kicks are up to first order well described with four normalized coupler kick coefficients $[V_{0x}, V_{0y}, V_{xx}, V_{xy}]$.
In OCELOT one can define coupler kick coefficients for the upstream and downstream couplers.
python
Cavity(l=0., v=0., phi=0., freq=0., vx_up=0, vy_up=0, vxx_up=0, vxy_up=0,
vx_down=0, vy_down=0, vxx_down=0, vxy_down=0, eid=None)
This example will cover the following topics:
Defining the coupler coefficients for Cavity
tracking of second order with Coupler Kick effect.
Details of implementation in the code
New in version 20.04.0
The coupler kicks are implemented in the code the same way as the Edge elements. At the moment of initialization of MagneticLattice, CouplerKick elements are created around each Cavity element: the CouplerKick placed before the Cavity uses the coefficients with the suffix "_up" (upstream), and the one placed after the Cavity uses the coefficients with the suffix "_down" (downstream). The CouplerKick elements are created even if the coupler kick coefficients are zero.
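To make the kick formula above concrete, here is a small stand-alone numerical sketch. The voltage, phase, energy and offsets are assumed illustrative values (not taken from the lattice); the coefficients are the 1.3 GHz upstream values used in this notebook:
```
import numpy as np

V0, phi, E0 = 0.0182, 0.0, 0.130      # assumed: GV, degrees, GeV
x, y = 1.0e-3, 0.5e-3                 # assumed transverse offsets [m]

vx0 = (-56.813 + 10.751j) * 1e-6      # V_x0, 1.3 GHz upstream coupler
vy0 = (-41.091 + 0.5739j) * 1e-6      # V_y0
vxx = (0.99943 - 0.81401j) * 1e-3     # V_xx
vxy = (3.4065 - 0.4146j) * 1e-3       # V_xy

rot = np.exp(1j * np.deg2rad(phi))
# evaluate the expression above, using V_yx = V_xy and V_yy = -V_xx
kx = V0 / E0 * np.real(vx0 + (vxx * x + vxy * y) * rot)
ky = V0 / E0 * np.real(vy0 + (vxy * x - vxx * y) * rot)
print(kx, ky)                         # kick angles in rad
```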
End of explanation
bx0 = [tw.beta_x for tw in tws1]
by0 = [tw.beta_y for tw in tws1]
s0 = [tw.s for tw in tws1]
bx = [tw.beta_x for tw in tws]
by = [tw.beta_y for tw in tws]
s = [tw.s for tw in tws]
fig, ax = plot_API(lat, legend=False)
ax.plot(s0, bx0, "b", lw=1, label=r"$\beta_x$")
ax.plot(s, bx, "b--", lw=1, label=r"$\beta_x$, CK")
ax.plot(s0, by0, "r", lw=1, label=r"$\beta_y$")
ax.plot(s, by, "r--", lw=1, label=r"$\beta_y$, CK")
ax.set_ylabel(r"$\beta_{x,y}$, m")
ax.legend()
plt.show()
Explanation: Twiss parameters with and without coupler kick
End of explanation
def plot_trajectories(lat):
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
for a in np.arange(-0.6, 0.6, 0.1):
cix_118_i1.angle = a*0.001
lat.update_transfer_maps()
p = Particle(px=0, E=0.130)
plist = lattice_track(lat, p)
s = [p.s for p in plist]
x = [p.x for p in plist]
y = [p.y for p in plist]
px = [p.px for p in plist]
py = [p.py for p in plist]
ax1.plot(s, x)
ax2.plot(s, y)
plt.xlabel("z [m]")
plt.show()
plot_trajectories(lat)
Explanation: Trajectories with Coupler Kick
End of explanation
for elem in lat.sequence:
if elem.__class__ == Cavity:
if not(".AH1." in elem.id):
# 1.3 GHz cavities
elem.vx_up = 0.
elem.vy_up = 0.
elem.vxx_up = (0.99943 - 0.81401j) * 1e-3
elem.vxy_up = (3.4065 - 0.4146j) * 1e-3
elem.vx_down = 0.
elem.vy_down = 0.
elem.vxx_down = (-4.057 - 0.1369j) * 1e-3
elem.vxy_down = (2.9243 - 0.012891j) * 1e-3
# update transfer maps
lat.update_transfer_maps()
# plot the trajectories
plot_trajectories(lat)
Explanation: Horizontal and vertical emittances
Before start we remove zero order terms (dipole kicks) from coupler kicks coefficients.
And check if we have any asymmetry.
End of explanation
# create ParticleArray with "one clice"
parray = generate_parray(sigma_tau=0., sigma_p=0.0, chirp=0.0)
print(parray)
# track the beam though the lattice
navi = Navigator(lat)
tws_track, _ = track(lat, parray, navi)
# plot emittances
emit_x = np.array([tw.emit_x for tw in tws_track])
emit_y = np.array([tw.emit_y for tw in tws_track])
gamma = np.array([tw.E for tw in tws_track])/m_e_GeV
s = [tw.s for tw in tws_track]
fig, ax = plot_API(lat, legend=False)
ax.plot(s, emit_x * gamma * 1e6, "b", lw=1, label=r"$\varepsilon_x$ [mm $\cdot$ mrad]")
ax.plot(s, emit_y * gamma * 1e6, "r", lw=1, label=r"$\varepsilon_y$ [mm $\cdot$ mrad]")
ax.set_ylabel(r"$\varepsilon_{x,y}$ [mm $\cdot$ mrad]")
ax.legend()
plt.show()
Explanation: Tracking of the particles through the lattice with coupler kicks
Steps:
* create ParticleArray with zero length and zero energy spread and chirp
* track the Particle array through the lattice
* plot the emittances
End of explanation
# plot emittances
emit_x = np.array([tw.eigemit_1 for tw in tws_track])
emit_y = np.array([tw.eigemit_2 for tw in tws_track])
gamma = np.array([tw.E for tw in tws_track])/m_e_GeV
s = [tw.s for tw in tws_track]
fig, ax = plot_API(lat, legend=False)
ax.plot(s, emit_x * gamma * 1e6, "b", lw=1, label=r"$\varepsilon_x$ [mm $\cdot$ mrad]")
ax.plot(s, emit_y * gamma * 1e6, "r", lw=1, label=r"$\varepsilon_y$ [mm $\cdot$ mrad]")
ax.set_ylabel(r"$\varepsilon_{x,y}$ [mm $\cdot$ mrad]")
ax.legend()
plt.show()
Explanation: Eigenemittance
As we can see, the projected emittances are not preserved, although all matrices are symplectic. The reason is that the coupler kicks introduce coupling between the $X$ and $Y$ planes, while the projected emittances are invariants only under linear uncoupled (with respect to the laboratory coordinate system) symplectic transport.
However, there are invariants under arbitrary (possibly coupled) linear symplectic transformations - eigenemittances. Details can be found here V. Balandin and N. Golubeva "Notes on Linear Theory of Coupled Particle Beams with Equal Eigenemittances" and V.Balandin et al "Twiss Parameters of Coupled Particle Beams with Equal Eigenemittances"
End of explanation |
3,338 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
\title{Digital Arithmetic Cells with myHDL}
\author{Steven K Armour}
\maketitle
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Compression-of-Number-System-Values" data-toc-modified-id="Compression-of-Number-System-Values-1"><span class="toc-item-num">1 </span>Compression of Number System Values</a></span></li><li><span><a href="#Half-Adder" data-toc-modified-id="Half-Adder-2"><span class="toc-item-num">2 </span>Half Adder</a></span></li><li><span><a href="#The-full-adder" data-toc-modified-id="The-full-adder-3"><span class="toc-item-num">3 </span>The full adder</a></span></li></ul></div>
Step1: Compression of Number System Values
Step2: Notice that when we add $1_2+1_2$ we get instead of just a single bit we also get a new bit in the next binary column which we call the carry bit. This yields the following truth table for what is called the Two Bit or Half adder
Half Adder
Step3: Looking at this truth table we can surmise the following
Step6: We can thus generate the following myHDL
Step7: Wich, we can then confirm via these results to the expected results via
Step9: HalfAdder RTL
<img src='HalfAdderRTL.png'>
HalfAdder Synthesis
<img src='HalfAdderSynth.png'>
Step10: Where the above warning can be ignored since we are using testx1 and testx2 as stores for the binay set of inputs to apply to x1 and x2
The full adder
If we assume that a carry is an input we can then extend the truth table for the two-bit adder to a three-bit adder yielding
Step11: ! need a way to make K-Maps in python
whereupon making and reducing the maps from the three-bit adder truth table we get the following equations
Step13: yielding the following next myHDL module
Step15: FullAdder RTL
<img src='FullAdderRTL.png'>
FullAdder Synthesis
<img src='FullAdderSynth.png'> | Python Code:
from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import random
#https://github.com/jrjohansson/version_information
%load_ext version_information
%version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog modual from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText)
return VerilogText
Explanation: \title{Digital Arithmetic Cells with myHDL}
\author{Steven K Armour}
\maketitle
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Compression-of-Number-System-Values" data-toc-modified-id="Compression-of-Number-System-Values-1"><span class="toc-item-num">1 </span>Compression of Number System Values</a></span></li><li><span><a href="#Half-Adder" data-toc-modified-id="Half-Adder-2"><span class="toc-item-num">2 </span>Half Adder</a></span></li><li><span><a href="#The-full-adder" data-toc-modified-id="The-full-adder-3"><span class="toc-item-num">3 </span>The full adder</a></span></li></ul></div>
End of explanation
ConversionTable=pd.DataFrame()
ConversionTable['Decimal']=np.arange(0, 21)
ConversionTable['Binary']=[bin(i, 3) for i in np.arange(0, 21)]
ConversionTable['hex']=[hex(i) for i in np.arange(0, 21)]
ConversionTable['oct']=[oct(i) for i in np.arange(0, 21)]
ConversionTable
binarySum=lambda a, b, bits=2: np.binary_repr(a+b, bits)
for i in [[0,0], [0,1], [1,0], [1,1]]:
print(f'{i[0]} + {i[1]} yields {binarySum(*i)}_2, {int(binarySum(*i), 2)}_10')
Explanation: Compression of Number System Values
End of explanation
TwoBitAdderTT=pd.DataFrame()
TwoBitAdderTT['x2']=[0,0,1,1]
TwoBitAdderTT['x1']=[0,1,0,1]
TwoBitAdderTT['Sum']=[0,1,1,0]
TwoBitAdderTT['Carry']=[0,0,0,1]
TwoBitAdderTT
Explanation: Notice that when we add $1_2+1_2$ we get instead of just a single bit we also get a new bit in the next binary column which we call the carry bit. This yields the following truth table for what is called the Two Bit or Half adder
Half Adder
End of explanation
x1, x2=symbols('x_1, x_2')
Sum, Carry=symbols(r'\text{Sum}, \text{Carry}')
HASumDef=x1^x2; HACarryDef=x1 & x2
HASumEq=Eq(Sum, HASumDef); HACarryDef=Eq(Carry, HACarryDef)
HASumEq, HACarryDef
Explanation: Looking at this truth table we can surmise the following
End of explanation
@block
def HalfAdder(x1, x2, Sum, Carry):
Half adder in myHDL
I/O:
x1 (bool): x1 input
x2 (bool): x2 input
Sum (bool): the sum Half Adder output
Carry (bool): the carry Half Adder output
@always_comb
def logic():
Sum.next=x1^x2
Carry.next=x1 & x2
return logic
Peeker.clear()
x1=Signal(bool(0)); Peeker(x1, 'x1')
x2=Signal(bool(0)); Peeker(x2, 'x2')
Sum=Signal(bool(0)); Peeker(Sum, 'Sum')
Carry=Signal(bool(0)); Peeker(Carry, 'Carry')
DUT=HalfAdder(x1, x2, Sum, Carry)
def HalfAdder_TB():
Half Adder Testbench for use in python only
@instance
def Stimules():
for _, row in TwoBitAdderTT.iterrows():
x1.next=row['x1']
x2.next=row['x2']
yield delay(1)
raise StopSimulation()
return instances()
sim = Simulation(DUT, HalfAdder_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('x2 x1 | Sum Carry', title='Half Adder Wave Form', tock=True)
HARes=Peeker.to_dataframe()
HARes=HARes[['x2', 'x1','Sum', 'Carry']]
HARes
Explanation: We can thus generate the following myHDL
End of explanation
TwoBitAdderTT==HARes
DUT.convert()
VerilogTextReader('HalfAdder');
Explanation: We can then confirm these results against the expected results via
End of explanation
@block
def HalfAdder_TBV():
Half Adder Testbench for use in Verilog
x1 = Signal(bool(0))
x2 = Signal(bool(0))
Sum = Signal(bool(0))
Carry = Signal(bool(0))
DUT = HalfAdder(x1, x2, Sum, Carry)
testx1=Signal(intbv(int("".join([str(i) for i in TwoBitAdderTT['x1'].tolist()]), 2))[4:])
testx2=Signal(intbv(int("".join([str(i) for i in TwoBitAdderTT['x2'].tolist()]), 2))[4:])
@instance
def Stimulus():
for i in range(len(testx1)):
x1.next = testx1[i]
x2.next = testx2[i]
yield delay(1)
raise StopSimulation()
@always_comb
def print_data():
print(x1, x2, Sum, Carry)
return instances()
# create instaince of TB
TB = HalfAdder_TBV()
# convert to verilog with reintilzed values
TB.convert(hdl="verilog", initial_values=True)
VerilogTextReader('HalfAdder_TBV');
Explanation: HalfAdder RTL
<img src='HalfAdderRTL.png'>
HalfAdder Synthesis
<img src='HalfAdderSynth.png'>
End of explanation
ThreeBitAdderTT=pd.DataFrame()
ThreeBitAdderTT['c1']=[0,0,0,0,1,1,1,1]
ThreeBitAdderTT['x2']=[0,0,1,1,0,0,1,1]
ThreeBitAdderTT['x1']=[0,1,0,1,0,1,0,1]
ThreeBitAdderTT['Sum']=[0,1,1,0,1,0,0,1]
ThreeBitAdderTT['Carry']=[0,0,0,1,0,1,1,1]
ThreeBitAdderTT
Explanation: The above warning can be ignored, since we are using testx1 and testx2 as stores for the binary set of inputs to apply to x1 and x2
The full adder
If we assume that a carry is an input we can then extend the truth table for the two-bit adder to a three-bit adder yielding
End of explanation
c1, x1, x2=symbols('c_1,x_1, x_2')
Sum, Carry=symbols(r'\text{Sum}, \text{Carry}')
FASumDef=x1^x2^c1; FACarryDef=x1&x2 | x1&c1 | x2&c1
FASumEq=Eq(Sum, FASumDef); FACarryDef=Eq(Carry, FACarryDef)
FASumEq, FACarryDef
Explanation: ! need a way to make K-Maps in python
whereupon making and reducing the maps from the three-bit adder truth table we get the following equations
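One possible stand-in for hand-drawn K-maps is sympy's SOPform, which minimizes a sum-of-products directly from the truth-table minterms (a sketch; variable order is [c1, x2, x1]):
```
from sympy import symbols
from sympy.logic import SOPform

c1, x2, x1 = symbols('c_1 x_2 x_1')
# rows of the three-bit adder truth table where each output is 1
sum_minterms   = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1]]
carry_minterms = [[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]]
print(SOPform([c1, x2, x1], sum_minterms))    # four-term SOP, equivalent to c1 ^ x2 ^ x1
print(SOPform([c1, x2, x1], carry_minterms))  # (c_1 & x_1) | (c_1 & x_2) | (x_1 & x_2)
```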
End of explanation
@block
def FullAdder(x1, x2, c1, Sum, Carry):
Full adder in myHDL
I/O:
x1 (bool): x1 input
x2 (bool): x2 input
c1 (bool): carry input
Sum (bool): the sum Full Adder output
Carry (bool): the carry Full Adder output
Note!:
There is something wrong on the HDL side at the moment
@always_comb
def logic():
Sum.next=x1 ^ x2 ^c1
Carry.next=(x1 & x2) | (x1 & c1) | (x2 & c1)
return logic
Peeker.clear()
x1=Signal(bool(0)); Peeker(x1, 'x1')
x2=Signal(bool(0)); Peeker(x2, 'x2')
c1=Signal(bool(0)); Peeker(c1, 'c1')
Sum=Signal(bool(0)); Peeker(Sum, 'Sum')
Carry=Signal(bool(0)); Peeker(Carry, 'Carry')
DUT=FullAdder(x1, x2, c1, Sum, Carry)
def FullAdder_TB():
@instance
def Stimules():
for _, row in ThreeBitAdderTT.iterrows():
x1.next=row['x1']
x2.next=row['x2']
c1.next=row['c1']
yield delay(1)
raise StopSimulation()
return instances()
sim = Simulation(DUT, FullAdder_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom('c1 x2 x1 | Sum Carry', title='Full Adder Wave Form', tock=True)
FARes=Peeker.to_dataframe()
FARes=FARes[['c1', 'x2', 'x1', 'Sum', 'Carry']]
FARes
ThreeBitAdderTT==FARes
DUT.convert()
VerilogTextReader('FullAdder');
Explanation: yielding the following next myHDL module
End of explanation
@block
def FullAdder_TBV():
Full Adder Testbench for use in Verilog
x1=Signal(bool(0))
x2=Signal(bool(0))
c1=Signal(bool(0))
Sum=Signal(bool(0))
Carry=Signal(bool(0))
DUT = FullAdder(x1, x2, c1, Sum, Carry)
testx1=Signal(intbv(int("".join([str(i) for i in ThreeBitAdderTT['x1'].tolist()]), 2))[4:])
testx2=Signal(intbv(int("".join([str(i) for i in ThreeBitAdderTT['x2'].tolist()]), 2))[4:])
testc1=Signal(intbv(int("".join([str(i) for i in ThreeBitAdderTT['c1'].tolist()]), 2))[4:])
@instance
def Stimulus():
for i in range(len(testx1)):
x1.next=testx1[i]
x2.next=testx2[i]
c1.next=testc1[i]
yield delay(1)
raise StopSimulation()
@always_comb
def print_data():
print(x1, x2, c1, Sum, Carry)
return instances()
# create instaince of TB
TB = FullAdder_TBV()
# convert to verilog with reintilzed values
TB.convert(hdl="verilog", initial_values=True)
VerilogTextReader('FullAdder_TBV');
Explanation: FullAdder RTL
<img src='FullAdderRTL.png'>
FullAdder Synthesis
<img src='FullAdderSynth.png'>
End of explanation |
3,339 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create an example dataframe
Step2: Rename Column Names | Python Code:
# Import modules
import pandas as pd
# Set ipython's max row display
pd.set_option('display.max_row', 1000)
# Set iPython's max column width to 50
pd.set_option('display.max_columns', 50)
Explanation: Title: Rename Multiple Pandas Dataframe Column Names At Once
Slug: pandas_rename_multiple_columns
Summary: Rename Multiple Pandas Dataframe Column Names At Once
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Preliminaries
End of explanation
# Create an example dataframe
data = {'Commander': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'Date': ['2012, 02, 08', '2012, 02, 08', '2012, 02, 08', '2012, 02, 08', '2012, 02, 08'],
'Score': [4, 24, 31, 2, 3]}
df = pd.DataFrame(data, index = ['Cochice', 'Pima', 'Santa Cruz', 'Maricopa', 'Yuma'])
df
Explanation: Create an example dataframe
End of explanation
df.columns = ['Leader', 'Time', 'Score']
df
df.rename(columns={'Leader': 'Commander'}, inplace=True)
df
Explanation: Rename Column Names
End of explanation |
3,340 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: wk4.0
Even more OOP
Step2: Pure functions
Step3: The function creates a new MyTime object and returns a reference to the new object. This is called a pure function because it does not modify any of the objects passed to it as parameters and it has no side effects, such as updating global variables, displaying a value, or getting user input.
```
current_time = MyTime(9, 14, 30)
bread_time = MyTime(3, 35, 0)
done_time = add_time(current_time, bread_time) # When does this break?
print(done_time)
12
Step4: Modifiers
There are times when it is useful for a function to modify one or more of the objects it gets as parameters. Usually, the caller keeps a reference to the objects it passes, so any changes the function makes are visible to the caller. Functions that work this way are called modifiers.
Step5: # Converting increment to a method
Step7: An “Aha!” insight
Often a high-level insight into the problem can make the programming much easier.
In this case, the insight is that a MyTime object is really a three-digit number in base 60! The second component is the ones column, the minute component is the sixties column, and the hour component is the thirty-six hundreds column.
When we wrote add_time and increment, we were effectively doing addition in base 60, which is why we had to carry from one column to the next.
Step9: In OOP we’re really trying to wrap together the data and the operations that apply to it. So we’d like to have this logic inside the MyTime class. A good solution is to rewrite the class initializer so that it can cope with initial values of seconds or minutes that are outside the normalized values. (A normalized time would be something like 3 hours 12 minutes and 20 seconds. The same time, but unnormalized could be 2 hours 70 minutes and 140 seconds.)
Step11: Generalization vs. Specification
Computer scientists vs. mathematicians
Often it may help to try to think about the problem from both points of view — “What would happen if I tried to reduce everything to very few primitive types?”, versus “What would happen if this thing had its own specialized type?”
Example
The after function should compare two times, and tell us whether the first time is strictly after the second, e.g.
```
t1 = MyTime(10, 55, 12)
t2 = MyTime(10, 48, 22)
after(t1, t2) # Is t1 after t2?
True
```
Step12: We invoke this method on one object and pass the other as an argument
Step14: The logic of the if statements deserve special attention here. Lines 11-18 will only be reached if the two hour fields are the same. Similarly, the test at line 16 is only executed if both times have the same hours and the same minutes.
Could we make this easier? Yes!
Step18: Operator overloading
Some languages, including Python, make it possible to have different meanings for the same operator when applied to different types. For example, + in Python means quite different things for integers and for strings. This feature is called operator overloading.
It is especially useful when programmers can also overload the operators for their own user-defined types.
For example, to override the addition operator +, we can provide a method named __add__
Step19: ```
t1 = MyTime(1, 15, 42)
t2 = MyTime(3, 50, 30)
t3 = t1 + t2
print(t3)
05:06:12
``` | Python Code:
class MyTime:
def __init__(self, hrs=0, mins=0, secs=0):
Create a MyTime object initialized to hrs, mins, secs
self.hours = hrs
self.minutes = mins
self.seconds = secs
def __str__(self):
return "{h}:{m}:{s}".format(h=self.hours, m=self.minutes, s=self.seconds)
tim1 = MyTime(11, 59, 30)
tim2 = MyTime()
print(tim1)
print(tim2)
Explanation: wk4.0
Even more OOP
End of explanation
def add_time(t1, t2):
h = t1.hours + t2.hours
m = t1.minutes + t2.minutes
s = t1.seconds + t2.seconds
sum_t = MyTime(h, m, s)
return sum_t
Explanation: Pure functions
End of explanation
current_time = MyTime(1, 10, 0)
bread_time = MyTime(0, 60, 0)
done_time = add_time(current_time, bread_time)
print(done_time)
def add_time(t1, t2): # are we good now?
h = t1.hours + t2.hours
m = t1.minutes + t2.minutes
s = t1.seconds + t2.seconds
if s >= 60:
s -= 60
m += 1
if m >= 60:
m -= 60
h += 1
sum_t = MyTime(h, m, s)
return sum_t
current_time = MyTime(1, 10, 0)
bread_time = MyTime(0, 120, 0)
done_time = add_time(current_time, bread_time)
print(done_time)
Explanation: The function creates a new MyTime object and returns a reference to the new object. This is called a pure function because it does not modify any of the objects passed to it as parameters and it has no side effects, such as updating global variables, displaying a value, or getting user input.
```
current_time = MyTime(9, 14, 30)
bread_time = MyTime(3, 35, 0)
done_time = add_time(current_time, bread_time) # When does this break?
print(done_time)
12:49:30
```
End of explanation
def increment(t, secs): # Is this good?
t.seconds += secs
if t.seconds >= 60:
t.seconds -= 60
t.minutes += 1
if t.minutes >= 60:
t.minutes -= 60
t.hours += 1
def increment(t, seconds): # How about now?
t.seconds += seconds
while t.seconds >= 60:
t.seconds -= 60
t.minutes += 1
while t.minutes >= 60:
t.minutes -= 60
t.hours += 1
Explanation: Modifiers
There are times when it is useful for a function to modify one or more of the objects it gets as parameters. Usually, the caller keeps a reference to the objects it passes, so any changes the function makes are visible to the caller. Functions that work this way are called modifiers.
End of explanation
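To see the side effect concretely, here is a minimal check (a sketch using the MyTime class and the increment function defined above); the caller's object is changed in place.
t = MyTime(9, 59, 30)
increment(t, 45)    # modifies t in place
print(t)            # the caller now sees 10:0:15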
class MyTime:
# Previous method definitions here...
def increment(self, seconds):
self.seconds += seconds
while self.seconds >= 60:
self.seconds -= 60
self.minutes += 1
while self.minutes >= 60:
self.minutes -= 60
self.hours += 1
Explanation: # Converting increment to a method
End of explanation
class MyTime:
# ...
def to_seconds(self):
Return the number of seconds represented
by this instance
return self.hours * 3600 + self.minutes * 60 + self.seconds
## ---- Core time conversion logic ----- #
hrs = tsecs // 3600
leftoversecs = tsecs % 3600
mins = leftoversecs // 60
secs = leftoversecs % 60
Explanation: An “Aha!” insight
Often a high-level insight into the problem can make the programming much easier.
In this case, the insight is that a MyTime object is really a three-digit number in base 60! The second component is the ones column, the minute component is the sixties column, and the hour component is the thirty-six hundreds column.
When we wrote add_time and increment, we were effectively doing addition in base 60, which is why we had to carry from one column to the next.
End of explanation
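The same split can be done with Python's built-in divmod, which returns quotient and remainder in one step (a small sketch of the base-60 idea, not from the original text):
totalsecs = 3*3600 + 12*60 + 20      # 3:12:20 written in seconds
mins, secs = divmod(totalsecs, 60)
hrs, mins = divmod(mins, 60)
print(hrs, mins, secs)               # 3 12 20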
class MyTime:
# ...
def __init__(self, hrs=0, mins=0, secs=0):
Create a new MyTime object initialized to hrs, mins, secs.
The values of mins and secs may be outside the range 0-59,
but the resulting MyTime object will be normalized.
# Calculate total seconds to represent
totalsecs = hrs*3600 + mins*60 + secs
self.hours = totalsecs // 3600 # Split in h, m, s
leftoversecs = totalsecs % 3600
self.minutes = leftoversecs // 60
self.seconds = leftoversecs % 60
def add_time(t1, t2):
secs = t1.to_seconds() + t2.to_seconds()
return MyTime(0, 0, secs)
Explanation: In OOP we’re really trying to wrap together the data and the operations that apply to it. So we’d like to have this logic inside the MyTime class. A good solution is to rewrite the class initializer so that it can cope with initial values of seconds or minutes that are outside the normalized values. (A normalized time would be something like 3 hours 12 minutes and 20 seconds. The same time, but unnormalized could be 2 hours 70 minutes and 140 seconds.)
End of explanation
class MyTime:
# Previous method definitions here...
def after(self, time2):
Return True if I am strictly greater than time2
if self.hours > time2.hours:
return True
if self.hours < time2.hours:
return False
if self.minutes > time2.minutes:
return True
if self.minutes < time2.minutes:
return False
if self.seconds > time2.seconds:
return True
return False
Explanation: Generalization vs. Specification
Computer scientists vs. mathematicians
Often it may help to try to think about the problem from both points of view — “What would happen if I tried to reduce everything to very few primitive types?”, versus “What would happen if this thing had its own specialized type?”
Example
The after function should compare two times, and tell us whether the first time is strictly after the second, e.g.
```
t1 = MyTime(10, 55, 12)
t2 = MyTime(10, 48, 22)
after(t1, t2) # Is t1 after t2?
True
```
End of explanation
if current_time.after(done_time):
print("The bread will be done before it starts!")
Explanation: We invoke this method on one object and pass the other as an argument:
End of explanation
class MyTime:
# Previous method definitions here...
def after(self, time2):
Return True if I am strictly greater than time2
return self.to_seconds() > time2.to_seconds()
Explanation: The logic of the if statements deserves special attention here. The minute comparisons are only reached if the two hour fields are equal, and the seconds comparison is only executed if both times also have the same minutes.
Could we make this easier? Yes!
End of explanation
class MyTime:
This makes the clock.
>>> timer = MyTime(5, 4, 3) # Makes a timer with time 5 hr ...
def __init__(self, hrs=0, mins=0, secs=0):
Create a new MyTime object initialized to hrs, mins, secs.
The values of mins and secs may be outside the range 0-59,
but the resulting MyTime object will be normalized.
# Calculate total seconds to represent
totalsecs = hrs*3600 + mins*60 + secs
self.hours = totalsecs // 3600 # Split in h, m, s
leftoversecs = totalsecs % 3600
self.minutes = leftoversecs // 60
self.seconds = leftoversecs % 60
def __str__(self):
return "{h}:{m}:{s}".format(h=self.hours, m=self.minutes, s=self.seconds)
def __add__(self, other):
return MyTime(0, 0, self.to_seconds() + other.to_seconds())
def to_seconds(self):
Return the number of seconds represented
by this instance
return self.hours * 3600 + self.minutes * 60 + self.seconds
t1 = MyTime(0, 0, 42000)
t2 = MyTime(3, 50, 30)
t3 = t1 + t2
print(t3)
help(MyTime())
Explanation: Operator overloading
Some languages, including Python, make it possible to have different meanings for the same operator when applied to different types. For example, + in Python means quite different things for integers and for strings. This feature is called operator overloading.
It is especially useful when programmers can also overload the operators for their own user-defined types.
For example, to override the addition operator +, we can provide a method named __add__:
End of explanation
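Comparison operators can be overloaded in the same way. As an illustration (not from the original text), a __gt__ method that reuses to_seconds lets us write t1 > t2 directly:
class MyTime:
    # Previous method definitions here...
    def __gt__(self, other):
        return self.to_seconds() > other.to_seconds()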
def front_and_back(front):
import copy
back = copy.copy(front)
back.reverse()
print(str(front) + str(back))
Explanation: ```
t1 = MyTime(1, 15, 42)
t2 = MyTime(3, 50, 30)
t3 = t1 + t2
print(t3)
05:06:12
```
Polymorphism
End of explanation |
3,341 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a data set like below: | Problem:
import pandas as pd
import numpy as np  # needed for np.nan below
df = pd.DataFrame({'name': ['matt', 'james', 'adam'],
'status': ['active', 'active', 'inactive'],
'number': [12345, 23456, 34567],
'message': ['[job: , money: none, wife: none]',
'[group: band, wife: yes, money: 10000]',
'[job: none, money: none, wife: , kids: one, group: jail]']})
import yaml
def g(df):
df.message = df.message.replace(['\[','\]'],['{','}'], regex=True).apply(yaml.safe_load)
df1 = pd.DataFrame(df.pop('message').values.tolist(), index=df.index)
result = pd.concat([df, df1], axis=1)
result = result.replace('', 'none')
result = result.replace(np.nan, 'none')
return result
result = g(df.copy()) |
3,342 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Shift-Reduce Parser for Arithmetic Expressions
In this notebook we implement a generic shift reduce parser. The parse table that we use will
implement the following grammar for arithmetic expressions
Step1: The function tokenize transforms the string s into a list of tokens. See below for an example.
Step2: Assume a grammar $G = \langle V, T, R, S \rangle$ is given. A shift-reduce parser
is defined as a 4-Tuple
$$P = \langle Q, q_0, \texttt{action}, \texttt{goto} \rangle$$
where
- $Q$ is the set of states of the shift-reduce parser.
For the purpose of the shift-reduce-parser, states are purely abstract.
- $q_0 \in Q$ is the start state.
- $\texttt{action}$ is a function taking two arguments. The first argument is a state $q \in Q$
and the second argument is a terminal $t \in T$. The result of this function is an element from the set
$$\texttt{Action} :=
\bigl\{ \langle\texttt{shift}, q\rangle \mid q \in Q \bigr\} \cup
\bigl\{ \langle\texttt{reduce}, r\rangle \mid r \in R \bigr\} \cup
\bigl\{ \texttt{accept} \bigr\} \cup
\bigl\{ \texttt{error} \bigr\}.
$$
Step3: Testing | Python Code:
import re
Explanation: A Shift-Reduce Parser for Arithmetic Expressions
In this notebook we implement a generic shift reduce parser. The parse table that we use will
implement the following grammar for arithmetic expressions:
$$
\begin{eqnarray*}
\mathrm{expr} & \rightarrow & \mathrm{expr}\;\;\texttt{'+'}\;\;\mathrm{product} \\
              & \mid        & \mathrm{expr}\;\;\texttt{'-'}\;\;\mathrm{product} \\
              & \mid        & \mathrm{product} \\[0.2cm]
\mathrm{product} & \rightarrow & \mathrm{product}\;\;\texttt{'*'}\;\;\mathrm{factor} \\
              & \mid        & \mathrm{product}\;\;\texttt{'/'}\;\;\mathrm{factor} \\
              & \mid        & \mathrm{factor} \\[0.2cm]
\mathrm{factor} & \rightarrow & \texttt{'('} \;\;\mathrm{expr} \;\;\texttt{')'} \\
              & \mid        & \texttt{NUMBER}
\end{eqnarray*}
$$
Implementing a Scanner
In order to parse, we need a scanner. We will use the same scanner that we have already used for our top down parser that was presented in the notebook Top-Down-Parser.ipynb.
End of explanation
def tokenize(s):
'''Transform the string s into a list of tokens. The string s
is supposed to represent an arithmetic expression.
'''
lexSpec = r'''([ \t\n]+) | # blanks and tabs
([1-9][0-9]*|0) | # number
([()]) | # parentheses
([-+*/]) | # arithmetical operators
(.) # unrecognized character
'''
tokenList = re.findall(lexSpec, s, re.VERBOSE)
result = []
for ws, number, parenthesis, operator, error in tokenList:
if ws: # skip blanks and tabs
continue
elif number:
result += [ 'NUMBER' ]
elif parenthesis:
result += [ parenthesis ]
elif operator:
result += [ operator ]
else:
result += [ f'ERROR({error})']
return result
tokenize('1 + 2 * (3 - 4)')
Explanation: The function tokenize transforms the string s into a list of tokens. See below for an example.
End of explanation
class ShiftReduceParser():
def __init__(self, actionTable, gotoTable):
self.mActionTable = actionTable
self.mGotoTable = gotoTable
def parse(self, TL):
index = 0 # points to next token
Symbols = [] # stack of symbols
States = ['s0'] # stack of states, s0 is start state
TL += ['EOF']
while True:
q = States[-1]
t = TL[index]
print('Symbols:', ' '.join(Symbols + ['|'] + TL[index:]).strip())
action = self.mActionTable.get((q, t), 'error') # undefined interpreted as error
if action == 'error':
return False
elif action == 'accept':
return True
elif action[0] == 'shift': # action = ('shift', s)
s = action[1]
Symbols += [t]
States += [s]
index += 1
elif action[0] == 'reduce': # action = ('reduce', r)
head, body = action[1]
n = len(body)
Symbols = Symbols[:-n]
States = States [:-n]
Symbols = Symbols + [head]
state = States[-1]
States += [ self.mGotoTable[state, head] ]
ShiftReduceParser.parse = parse
del parse
%run Parse-Table.ipynb
Explanation: Assume a grammar $G = \langle V, T, R, S \rangle$ is given. A shift-reduce parser
is defined as a 4-Tuple
$$P = \langle Q, q_0, \texttt{action}, \texttt{goto} \rangle$$
where
- $Q$ is the set of states of the shift-reduce parser.
For the purpose of the shift-reduce-parser, states are purely abstract.
- $q_0 \in Q$ is the start state.
- $\texttt{action}$ is a function taking two arguments. The first argument is a state $q \in Q$
and the second argument is a terminal $t \in T$. The result of this function is an element from the set
$$\texttt{Action} :=
\bigl\{ \langle\texttt{shift}, q\rangle \mid q \in Q \bigr\} \cup
\bigl\{ \langle\texttt{reduce}, r\rangle \mid r \in R \bigr\} \cup
\bigl\{ \texttt{accept} \bigr\} \cup
\bigl\{ \texttt{error} \bigr\}.
$$
Here shift, reduce, accept, and error are strings that serve to
distinguish the different kinds of result of the function
action. Therefore the signature of the function \texttt{action} is given as follows:
$$\texttt{action}: Q \times T \rightarrow \texttt{Action}.$$
- goto is a function that takes a state $q \in Q$ and a syntactical variable
$v \in V$ and computes a new state. Therefore the signature of goto is as follows:
$$\texttt{goto}: Q \times V \rightarrow Q.$$
The class ShiftReduceParser maintains two tables as dictionaries:
- mActionTable encodes the function $\texttt{action}: Q \times T \rightarrow \texttt{Action}$.
- mGotoTable encodes the function $\texttt{goto}: Q \times V \rightarrow Q$.
End of explanation
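To make the encoding concrete, hypothetical table entries (purely illustrative; the real tables are constructed in Parse-Table.ipynb) could look like this:
# actionTable[('s0', 'NUMBER')]  ->  ('shift', 's4')
# actionTable[('s4', '+')]       ->  ('reduce', ('factor', ['NUMBER']))
# actionTable[('s1', 'EOF')]     ->  'accept'
# gotoTable[('s0', 'expr')]      ->  's1'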
def test(s):
parser = ShiftReduceParser(actionTable, gotoTable)
TL = tokenize(s)
print(f'tokenlist: {TL}\n')
if parser.parse(TL):
print('Parse successful!')
else:
print('Parse failed!')
test('(1 + 2) * 3')
test('1 * 2 + 3 * (4 - 5) / 2')
test('11+22*(33-44)/(5-10*5/(4-3))')
test('1+2*3-')
Explanation: Testing
End of explanation |
3,343 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring extracting data from the EPA web service
https
Step1: Example REST query
Step2: I can use this to extract the table data
Step3: Before we go further, we'll want to know the size of each table
Step4: Someone needs to provide me with a full list of tables, but for now I'm focusing on the ones listed at https
Step5: Performance Profiling.
Are there limits to how much data we can pull per request? How fast we can retrieve it?
Let's try a reasonably big table
Step6: So retrieval time seems to go up significantly once we go past 500 rows.
So let's write an iterative retrieval function
Step7: Let's compare this with our previous method
Step8: So we can use this technique to extract a table piecemeal. Let's try it on a bigger table with a larger stride
The ERM_ANALYSIS is significantly bigger than ERM_ANALYTE | Python Code:
import requests
import io
import pandas
from itertools import chain
Explanation: Exploring extracting data from the EPA web service
https://www.epa.gov/enviro/web-services
I am using the Python Requests (http://docs.python-requests.org/en/master/) library to scrape quantitative data from the EPA web service. The REST service itself is reasonably well documented at https://www.epa.gov/enviro/web-services
End of explanation
url='https://iaspub.epa.gov/enviro/efservice/tri_facility/state_abbr/VA/count/json'
requests.get(url).text
Explanation: Example REST query:
End of explanation
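Since the URL asks for JSON output, the same response can also be parsed directly instead of reading the raw text (a quick sketch):
requests.get(url).json()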
def makeurl(tablename,start,end):
return "https://iaspub.epa.gov/enviro/efservice/{tablename}/JSON/rows/{start}:{end}".format_map(locals())
url=makeurl( 't_design_for_environment',1,19)
out=requests.get(url)
pandas.DataFrame(out.json())
Explanation: I can use this to extract the table data:
End of explanation
def table_count(tablename):
url= "https://iaspub.epa.gov/enviro/efservice/{tablename}/COUNT/JSON".format_map(locals())
print(url)
return requests.get(url).json()[0]['TOTALQUERYRESULTS']
table_count('erm_project')
table_count('t_design_for_environment')
Explanation: Before we go further, we'll want to know the size of each table:
End of explanation
tablenames=[
'ERM_RESULT',
'ERM_ANALYSIS',
'ERM_COUNT',
'ERM_SAMPLE',
'ERM_MATRIX',
'ERM_LOCATION',
'ERM_PROJECT',
'ERM_STUDY',
'ERM_ANALYTE',
'ERM_ANA_PROC',
'ERM_DET_TYPE'
]
# there are many more to be added to this list.
#
def table_data(tablename,start=0,end=100):
url=makeurl(tablename,start,end)
print(url)
data=requests.get(url).json()
return pandas.DataFrame(data)
erm_result=table_data('ERM_RESULT')
erm_result.head()
erm_analyte=table_data('ERM_ANALYTE')
erm_analyte.head()
Explanation: Someone needs to provide me with a full list of tables, but for now I'm focusing on the ones listed at https://www.epa.gov/enviro/radnet-model
We're building a complete list at https://docs.google.com/spreadsheets/d/1LDDH-qxJunBqqkS1EfG2mhwgwFi7PylXtz3GYsGjDzA/edit#gid=52614242
End of explanation
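As a quick sketch, the size of every table in the working list above can be collected in one loop with the table_count helper:
for name in tablenames:
    print(name, table_count(name))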
table_count('ERM_ANALYSIS')
%%time
erm_analysis = table_data('ERM_ANALYSIS',0,10)
print(len(erm_analysis),"rows retrieved")
erm_analysis.head()
%%time
erm_analysis = table_data('ERM_ANALYSIS',0,100)
print(len(erm_analysis),"rows retrieved")
%%time
erm_analysis = table_data('ERM_ANALYSIS',0,200)
print(len(erm_analysis),"rows retrieved")
%%time
erm_analysis = table_data('ERM_ANALYSIS',0,400)
print(len(erm_analysis),"rows retrieved")
%%time
erm_analysis = table_data('ERM_ANALYSIS',0,800)
print(len(erm_analysis),"rows retrieved")
%%time
erm_analysis = table_data('ERM_ANALYSIS',0,1000)
print(len(erm_analysis),"rows retrieved")
%%time
erm_analysis = table_data('ERM_ANALYSIS',0,500)
print(len(erm_analysis),"rows retrieved")
%%time
erm_analysis = table_data('ERM_ANALYSIS',0,600)
print(len(erm_analysis),"rows retrieved")
Explanation: Performance Profiling.
Are there limits to how much data we can pull per request? How fast we can retrieve it?
Let's try a reasonably big table: ERM_ANALYSIS
End of explanation
def collect(tablename, rowcount=100, limit = 1000):
'''
The API understands start/end to be INCLUSIVE
'''
count =table_count(tablename)
def inner():
start =0
end = rowcount-1
while start <=count:
end=min(end,limit-1)
url = makeurl(tablename,start,end)
print(url)
yield requests.get(url).json()
start+= rowcount
end += rowcount
if start>limit-1:return
return pandas.DataFrame(list(chain.from_iterable(inner())))
erm_analyte=collect('ERM_ANALYTE',rowcount=20,limit=35)
len(erm_analyte)
Explanation: So retrieval time seems to go up significantly once we go past 500 rows.
So let's write an iterative retrieval function
End of explanation
erm_analyte_=table_data('ERM_ANALYTE')
erm_analyte.head()
erm_analyte_.head()
Explanation: Let's compare this with our previous method
End of explanation
table_count('ERM_ANALYSIS')
%%time
erm_analysis=collect('ERM_ANALYSIS',rowcount=100,limit=1000)
Explanation: So we can use this technique to extract a table piecemeal. Let's try it on a bigger table with a larger stride
The ERM_ANALYSIS is significantly bigger than ERM_ANALYTE
End of explanation |
3,344 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading the data
When I explore/analyse data, the first thing I always do is
Step1: For information/as a reminder,
Step2: To read a CSV file, we use the aptly named function...
Step3: Ah, yes, I figured your Excel sheet would be exported with ";" as the separator character, since some fields are more likely to contain commas...
Step4: Good. It looks like unknown/missing values are marked with "99". So let's specify that in the options of the wonderful read_csv() function.
Step5: We will give this "data frame" (a very handy data structure) a name: it will be enfants.
Step6: We can access the garde data with the following (intuitive) syntax
Step7: or alternatively enfants.loc[
Step8: By default, the first entry is kept (see the documentation). We then lose the information contained in the other entries. We would rather group them.
Step9: By default, sum() concatenates. For better readability, we may want to apply a home-made function.
Step10: Let's write a function (to apply to the data frame).
Step11: Writing out the result
Step12: Other manipulations
What if we wanted to keep going... | Python Code:
import pandas as pd
Explanation: Reading the data
When I explore/analyse data, the first thing I always do is:
End of explanation
pd.__version__
Explanation: For information/as a reminder,
End of explanation
pd.read_csv('data/enfants.csv')
Explanation: To read a CSV file, we use the aptly named function...
End of explanation
pd.read_csv('data/enfants.csv', sep=';')
Explanation: Ah, yes, I figured your Excel sheet would be exported with ";" as the separator character, since some fields are more likely to contain commas...
End of explanation
pd.read_csv('data/enfants.csv', sep=';', na_values=99)
Explanation: Good. It looks like unknown/missing values are marked with "99". So let's specify that in the options of the wonderful read_csv() function.
End of explanation
enfants = pd.read_csv('data/enfants.csv', sep=';', na_values='99')
Explanation: We will give this "data frame" (a very handy data structure) a name: it will be enfants.
End of explanation
enfants['garde']
Explanation: We can access the garde data with the following (intuitive) syntax:
End of explanation
enfants.drop_duplicates(subset=['prénom', 'nom'])
Explanation: or alternatively enfants.loc[:, 'garde'].
Removing duplicates
Note that we have two entries for Toto Le Magnifique. If we only want to keep one entry (row) per child, we can use the drop_duplicates() method.
End of explanation
enfants.groupby(by=['prénom', 'nom'])['garde'].sum()
Explanation: By default, the first entry is kept (see the documentation). We then lose the information contained in the other entries. We would rather group them.
End of explanation
enfants.groupby(by=['prénom', 'nom'])['garde'].apply(lambda x: '%s' % ', '.join(x.astype(str)))
Explanation: By default, sum() concatenates. For better readability, we may want to apply a home-made function.
End of explanation
def groupe_garde(x):
return pd.Series(dict(age = x['âge'].mean(), garde_complete = '%s' % ', '.join(x['garde'].astype(str))))
enfants.groupby(by=['prénom', 'nom']).apply(groupe_garde)
Explanation: Let's write a function (to apply to the data frame).
End of explanation
enfants.groupby(by=['prénom', 'nom']).apply(groupe_garde).to_csv('results/enfants_cleanup.csv', sep=';', na_rep='nan')
Explanation: Writing out the result
End of explanation
pd.read_csv('results/enfants_cleanup.csv', sep=';', na_values='nan')
Explanation: Other manipulations
What if we wanted to keep going...
End of explanation |
3,345 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing dead time 2 source method
Techniques for Nuclear and Particle Physics Experiments
A How-to Approach
Authors
Step1: Generate some data
Step2: So what are the errors in each measurement? | Python Code:
%matplotlib inline
from pprint import pprint
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as mc
import spacepy.toolbox as tb
import spacepy.plot as spp
import tqdm
from scipy import stats
import seaborn as sns
sns.set()
%matplotlib inline
Explanation: Computing dead time 2 source method
Techniques for Nuclear and Particle Physics Experiments
A How-to Approach
Authors: Leo, William R.
We measure rate1 rate2 and rate12
$n1 = \frac{R1}{1-R1\tau}, n2 = \frac{R2}{1-R2\tau}$
$n1+n2 = \frac{R12}{1-R12\tau}$
this then simplifies to:
$\tau = \frac{R1R2 - [R1R2(R12-R1)(R12-R2)]^{1/2}}{R1R2R12}$
End of explanation
np.random.seed(8675309)
strength1 = 100
strength2 = 70
n_exp = 10
R1 = np.random.poisson(strength1, size=n_exp)
R2 = np.random.poisson(strength2, size=n_exp)
R12 = np.random.poisson(strength1+strength2, size=n_exp)
Rate = np.vstack((R1, R2, R12))
print(Rate)
print(Rate.shape)
print(Rate[0])
print(Rate[1])
print(Rate[2])
print(R1, R2, R12)
print((R1*R2 - np.sqrt(R1*R2*(R12-R1)*(R12-R2))) / (R1*R2*R12))
def tau_fn(R1, R2, R12):
return (R1*R2 - np.sqrt(R1*R2*(R12-R1)*(R12-R2))) / (R1*R2*R12)  # parenthesize the numerator so it matches the formula above
print(tau_fn(R1, R2, R12))
matplotlib.pyplot.rc('figure', figsize=(10,10))
matplotlib.pyplot.rc('lines', lw=3)
plt.plot(Rate.T)
Explanation: Generate some data
End of explanation
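As a quick sanity check (a sketch, not part of the original analysis), the two-source formula can be applied directly to the mean measured rates before fitting the model:
tau_hat = tau_fn(R1.mean(), R2.mean(), R12.mean())
print('point estimate of tau:', tau_hat)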
with mc.Model() as model:
mu1 = mc.Uniform('mu1', 0, 1000) # true counting rate
mu2 = mc.Uniform('mu2', 0, 1000) # true counting rate
mu12 = mc.Uniform('mu12', 0, 1000) # true counting rate
R1 = mc.Poisson('R1', mu=mu1, observed=Rate[0]) # measured
R2 = mc.Poisson('R2', mu=mu2, observed=Rate[1]) # measured
R12 = mc.Poisson('R12', mu=mu12, observed=Rate[2]) # measured
tau = mc.Deterministic('tau', tau_fn(R1, R2, R12))
start = mc.find_MAP()
trace = mc.sample(10000, start=start, njobs=4)
mc.summary(trace)
mc.traceplot(trace, combined=True)
tau_fn(trace['R1'][100:110], trace['R2'][100:110], trace['R12'][100:110])
Explanation: So what are the errors in each measurement?
End of explanation |
3,346 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
Step1: OCR for reading a volunteer list
Step2: Play with Maps | Python Code:
from PIL import Image
import pytesseract
pytesseract.pytesseract.tesseract_cmd = 'c:/Tesseract-OCR/tesseract'
path = 'c:/learnPython/tess/mm_address.jpg'
path2 = 'mm_address.jpg'
img = Image.open(path2)
text = pytesseract.image_to_string(img)
print(text)
name = text.splitlines()[0]
street = text.splitlines()[2]
city = text.splitlines()[3]
customer = street + ', ' + city
print('The street is: ', customer)
Explanation: Overview: Need a mobile app that will take, as input, pictures of client addresses and output a recommended map to deliver them.
Use: OCR (Optical Character Recognition) + Google Maps
Use Tesseract
Play with OCR
End of explanation
from PIL import Image
import pytesseract
import sys
pytesseract.pytesseract.tesseract_cmd = 'c:/Tesseract-OCR/tesseract'
path = 'c:/Users/ginny/Desktop/running/thtrb.jpg'
path2 = 'thtrb.jpg'
img = Image.open(path2)
text = pytesseract.image_to_string(img)
print(text)
Explanation: OCR for reading a volunteer list
End of explanation
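A small follow-up sketch: when the scanned sheet has one entry per line, the raw OCR output can be split into non-empty lines before any further parsing.
lines = [line.strip() for line in text.splitlines() if line.strip()]
print(lines)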
import googlemaps
from datetime import datetime
gmapsC = googlemaps.Client(key='Deleted')
myHome = "2403 Englewood Ave, Durham, NC"
now = datetime.now()
wp=['300 N Roxboro St, Durham, NC 27701','911 W Cornwallis Rd, Durham, NC 27707', '345 W Main Street, Durham, NC 27701' ]
directions = gmapsC.directions("2403 Englewood Ave, Durham, NC",
"Duke Memorial",
mode="driving",
waypoints=wp,
optimize_waypoints=True,
departure_time=now)
print(directions[0].keys())
print(directions[0]['summary'])
print(directions[0]['waypoint_order'])
print(directions[0]['legs'])
import gmplot  # required for this illustrative snippet; the coordinate lists used below must also be defined
gmap = gmplot.GoogleMapPlotter(37.428, -122.145, 16)
gmap.plot(latitudes, longitudes, 'cornflowerblue', edge_width=10)
gmap.scatter(more_lats, more_lngs, '#3B0B39', size=40, marker=False)
gmap.scatter(marker_lats, marker_lngs, 'k', marker=True)
gmap.heatmap(heat_lats, heat_lngs)
gmap.draw("mymap.html")
# Needed the Google Maps JavaScript API to run
import googlemaps
from datetime import datetime
import gmaps
gmapsC = googlemaps.Client(key='deleted')  # use the googlemaps module imported above
gmaps.configure(api_key='deleted')
homeLong = '2403 Englewood Ave, Durham, NC 27705'
# lat, lng = gmapsC.address_to_latlng(home)
# print(lat, ' + ', lng)
home = (36.0160282, -78.9321707)
foodLion = (36.0193147,-78.9603636)
church = (35.9969749, -78.9091543)
m = gmaps.Map()
dl = gmaps.directions_layer(home, church, waypoints=[foodLion])
m.add_layer(dl)
m
# How do you use optimize:true with jupyter gmaps?
Explanation: Play with Maps
End of explanation |
3,347 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An IPython Introduction to Using TEA for C. elegans researchers
All of the code below was written by David Angeles-Albores. Should you find any errors, typos, or just have general comments, please contact
Step1: Now let's import our dataset. Here, I will use a dataset I obtained from Engelmann et al, 2011 (PLOS One).
Specifically, this is data from an RNA-seq experiment they performed. Briefly, young adult worms were placed in D. coniospora fungus for 24, cleaned and then RNA-seq'ed.
Step2: Let's visualize the first five lines of the dataframe to see what's in it
Step3: Ok. Clearly we can see the dataframe has a few different columns. Of particular interest to use are the columns 'Infection_upregulated' and 'Infection_downregulated', since these are the genes they identified as significantly altered by the treatment relative to an OP50 control. Let's analyze the genes that are upregulated first and see what they can do.
Before we can analyze anything, notice that they don't list WBIDs anywhere. We need to turn the names into WBIDs before we can continue.
To do this, I will load another file containing all the WBID-human readable relationships into a new dataframe called names
Step4: Let's take a look at it
Step5: The Engelmann names look like they are GeneNames.
Next, I'm going to generate a lambda function. This function will take a single argument 'x'. 'x' should be the column containing the names we want to convert into WBIDs. Once we provide 'x', this function will look in the GeneName column of the names dataframe to see whether a particular entry can be found in the GeneName column.
For every entry it can find, g returns True. Else, it returns False
Step6: Let's try our new function out!
Step7: Great! Now we can get the WBIDs by simple indexing
Step8: Hmmm. We lost quite a few genes. Let's quickly check to make sure those aren't important
Step9: A quick search in WormBase shows that these genes have been merged into other genes. Hmmmm.. This could be a problem.
To figure out if it really is a problem, let's look at how many of those genes are upregulated during infection.
Step10: Great! So there's almost no loss in our gene name conversion. Now we can go ahead and extract all the IDs that we can find to use for our enrichment analysis
Step11: See how the list changed from before? Great! Now we can put this into TEA
Calling TEA
TEA works by comparing your gene-list to a reference tissue expression ''dictionary''. In order for us to run TEA, we first need to fetch the dictionary. That's done easily enough
Step12: Quick technical note
Step13: Voila! We got our results. Great! But what if we didn't want to show them?'
Step14: We could still look at the results by typing df_res.head()
Step15: What about the unused genes? Let's see how many of those there are
Step16: Ouch! That's a lot! Don't like it? Make GFP reporters and let WormBase know where they are expressed. Seriously. Do it! You'd be helping the whole community a lot!
Now let's plot the results
Step17: Voila! We've analyzed our data! Yay! | Python Code:
import tissue_enrichment_analysis as tea #the main library for this tutorial
import pandas as pd
import os
import importlib as imp
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
#to make IPython plot inline, not req'd if you're not working with an Ipython notebook
%matplotlib inline
Explanation: An IPython Introduction to Using TEA for C. elegans researchers
All of the code below was written by David Angeles-Albores. Should you find any errors, typos, or just have general comments, please contact:
dangeles at caltech dt edu
The work here was submitted and accepted for publication on ....
Please cite Tissue Enrichment Analysis for C. elegans Genomics if this notebook was useful for you in your research.
Please note: I have tried to make this tutorial as complete as possible, with a brief introduction to Pandas dataframes and showing how I typically prepare my dataframes for analysis. Experienced users will want to skip this and go straight to Calling TEA. However, this tutorial is by no means a complete introduction to Python or Pandas - in fact, it's more like a super fast crash course. I apologize for this, and in the future I will consider improving the tutorial.
Best of Luck!
-- David Angeles-Albores
Introduction
What is TEA meant for?
TEA is meant to provide straightforward analysis of large gene lists for C. elegans researchers.
We hope that TEA will function as a hypothesis generator, or alternatively, as a way of understanding the biology behind a dataset.
How is TEA different from GO?
Great question. GO is primarily a molecular/cellular ontology, whereas TEA works from TO, the C. elegans tissue ontology. I believe tissues are, in some senses, fundamental units in biology. Often, it is the case that tissues, not cells, have been studied for considerably longer time, and as a result we have a better intuition for what the function of a tissue is, as compared to the molecular function of a list of genes. In other words, I think GO analysis and TEA are similar, but my guess is that the results from TEA will be easier to interpret, and as a result easier to use for hypotheiss generation.
What TEA is not:
TEA is NOT meant to be used as a quantitative tool!
At best, TEA is a very good guess about what tissues are being affected in your dataset. At worst, TEA is a guess about what tissues are being affected in your dataset. TEA is working directly from the WormBase-curated expression dataset. As a result, we have the very best, most up to date annotations in the world. On the other hand, please remember these annotations suffer from bias. For example, the ASE, ASK and ASI neurons have been very well studied and are quite well annotated, but the individual intestinal cells have not been generally well studied! Thus, our annotations are significantly biased by the research community's interests.
Please use TEA carefully, and always use it as a guiding tool for your research, and never as the final say on anything.
What do you need to do to run this tool?
The gist of the algorithm is:
Get your gene list into WBIDs
Call our analysis function
Call the plotting function
Done.
Batch users:
This script runs on Python > 3.5.
Dependencies: scipy (all), pandas, numpy, matplotlib and seaborn
If you have pip, do
pip install tissue_enrichment_analysis
to install the library in your computer.
Import the module. You may find that the numpy and pandas modules are also often very useful.
For the purposes of this journal, the file structure I'm working with is the following:
src - the folder this file lives in
input - a folder that contains all my input files. Also contains
Engelmann - folder containing the files i will be using
End of explanation
dfDcon= pd.read_csv('../input/Engelmann/coniospora_Engelmann_2011.csv') #Don't forget to change the path in your script!
dfLum = pd.read_csv('../input/Engelmann/luminescens_Engelmann_2011.csv')
dfMarc = pd.read_csv('../input/Engelmann/marcescens_Engelmann_2011.csv')
dfFaec = pd.read_csv('../input/Engelmann/faecalis_Engelmann_2011.csv')
# dfDcon[dfDcon.GenePublicName =='C41A3.1']
# dfLum[dfLum.GenePublicName =='C41A3.1']
dfMarc[dfMarc.GenePublicName =='C41A3.1']
dfFaec[dfFaec.GenePublicName =='C41A3.1']
Explanation: Now let's import our dataset. Here, I will use a dataset I obtained from Engelmann et al, 2011 (PLOS One).
Specifically, this is data from an RNA-seq experiment they performed. Briefly, young adult worms were placed in D. coniospora fungus for 24, cleaned and then RNA-seq'ed.
End of explanation
print('This dataframe has {0} columns and {1} rows'.format(dfDcon.shape[1], dfLum.shape[0]))
dfDcon.head()
Explanation: Let's visualize the first five lines of the dataframe to see what's in it
End of explanation
names= pd.read_csv('../input/Engelmann/c_elegans.PRJNA13758.WS241.livegeneIDs.unmaprm.txt',
sep= '\t',comment= '#')
Explanation: Ok. Clearly we can see the dataframe has a few different columns. Of particular interest to use are the columns 'Infection_upregulated' and 'Infection_downregulated', since these are the genes they identified as significantly altered by the treatment relative to an OP50 control. Let's analyze the genes that are upregulated first and see what they can do.
Before we can analyze anything, notice that they don't list WBIDs anywhere. We need to turn the names into WBIDs before we can continue.
To do this, I will load another file containing all the WBID-human readable relationships into a new dataframe called names
End of explanation
print('The length of this dataframe is:{0}'.format(len(names)))
names.head()
Explanation: Let's take a look at it:
End of explanation
g= lambda x: (names.GeneName.isin(x))
Explanation: The Engelmann names look like they are GeneNames.
Next, I'm going to generate a lambda function. This function will take a single argument 'x'. 'x' should be the column containing the names we want to convert into WBIDs. Once we provide 'x', this function will look in the GeneName column of the names dataframe to see whether a particular entry can be found in the GeneName column.
For every entry it can find, g returns True. Else, it returns False
End of explanation
#Remember, dfLum is the dataframe. dfLum['SequenceNameGene'] is the column we want.#
#We store the result in a variable called 'translate'
translate= g(dfDcon['SequenceNameGene'])
#I only want to show the first 5 rows, so I'm going to add [0:5] after translate, since 'g' returns a Series object
print(translate[0:5])
Explanation: Let's try our new function out!
End of explanation
wbids= names[translate].WBID # names[translate] gets rows for every gene name that was found by 'translate'
#The .WBID after names[] tells the computer to get the WBID colum
print('wbids has {} gene IDS. The original dataframe has {} genes'.format(len(wbids), dfLum.shape[0]))
wbids.head() #let's see what we found
Explanation: Great! Now we can get the WBIDs by simple indexing:
End of explanation
not_found= dfDcon[~dfDcon.SequenceNameGene.isin(names[translate].GeneName)]
not_found.head()
Explanation: Hmmm. We lost quite a few genes. Let's quickly check to make sure those aren't important
End of explanation
print('There are {0} upregulated genes, of which {1} can\'t be found in the names dictionary'.format(
dfDcon[dfDcon.Infection_upregulated == 1].shape[0], not_found[not_found.Infection_upregulated == 1].shape[0]))
print('{0:.2}% could not be found'.format(
not_found[not_found.Infection_upregulated == 1].shape[0]/dfDcon[dfDcon.Infection_upregulated == 1].shape[0]))
Explanation: A quick search in WormBase shows that these genes have been merged into other genes. Hmmmm.. This could be a problem.
To figure out if it really is a problem, let's look at how many of those genes are upregulated during infection.
End of explanation
translate= g(dfLum[dfLum.Infection_downregulated == 1]['SequenceNameGene'])
wbids= names[translate].WBID
print(wbids.head())
Explanation: Great! So there's almost no loss in our gene name conversion. Now we can go ahead and extract all the IDs that we can find to use for our enrichment analysis
End of explanation
tissue_df= tea.fetch_dictionary() #this downloads the tissue dictionary we want
tissue_df.head()
Explanation: See how the list changed from before? Great! Now we can put this into TEA
Calling TEA
TEA works by comparing your gene-list to a reference tissue expression ''dictionary''. In order for us to run TEA, we first need to fetch the dictionary. That's done easily enough:
End of explanation
df_res, unused= tea.enrichment_analysis(wbids, tissue_df, show= True, save= False)
Explanation: Quick technical note: We could have placed the dictionary inside the other functions and call them from the inside, but we want you to be able to access the dictionary. Why? Well, you might imagine that you want to get all the genes that are specifically expressed in a tissue, or you may want to take a look at what tissues are included, etc...
In other words, we want you to be able to get your hands on this data! It's up to date, it's easy and it works beautifully.
Now that we have the dictionary, we can run the enrichment analysis. Just so you know what's going on when you call it, the function has the following args.:
enrichment_analysis(gene_list, tissue_df, alpha= 0.05, aname= '', save= False, show= True)
Most of these you can ignore. Mainly, you'll want to assign:
gene_list = your gene list
tissue_df = the result from fetch_dictionary()
alpha= your desired q-value threshold
aname= if you want to save the result, give it a name and complete path
save= if you want to save your file, you must set this to True
This function returns 2 things:
df_res -- a dataframe with all the results
unused -- a list of all the genes that were discarded from the analysis
For now, let's just run the analysis and show it here:
End of explanation
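For instance (a sketch based on the argument list above, with a hypothetical output path), a stricter q-value cutoff with the result written to disk might look like:
df_res_strict, unused_strict = tea.enrichment_analysis(wbids, tissue_df, alpha=0.01,
                                                       aname='../output/tea_results.csv',  # hypothetical path
                                                       save=True, show=False)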
df_res, unused= tea.enrichment_analysis(wbids, tissue_df, show= False, save= False)
Explanation: Voila! We got our results. Great! But what if we didn't want to show them?'
End of explanation
df_res.head()
Explanation: We could still look at the results by typing df_res.head():
End of explanation
print('{0} were discarded from the analysis'.format(len(unused)))
Explanation: What about the unused genes? Let's see how many of those there are:
End of explanation
tea.plot_enrichment_results(df_res, title= 'Exercise', save= False)
Explanation: Ouch! That's a lot! Don't like it? Make GFP reporters and let WormBase know where they are expressed. Seriously. Do it! You'd be helping the whole community a lot!
Now let's plot the results
End of explanation
tea.plot_enrichment_results(df_res, title= 'Exercise', save= True, dirGraphs= 'example_graph_dir/')
#This will save the graph in the corresponding directory. If no directory is specified, the graphs will be saved
#to the current working directory.
Explanation: Voila! We've analyzed our data! Yay! :D
If we wanted to save our plot, we would type:
End of explanation |
3,348 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TODO
Step1: Plot histogram of metabolites at m/z
give bin size of x ppm
Step2: Repeat at 5ppm
Step3: Let's try to plot histogram # of isomers vs. m/z | Python Code:
# namespace - at the top of file. fucks with every tag.
# very annoying, so name all tags ns + tag
ns = '{http://www.hmdb.ca}'
nsmap = {None : ns}
# If you're within a metabolite tag
count = 0
seen_mass = 0
d = {}
for event, element in etree.iterparse(xml_file, tag=ns+'metabolite'):
tree = etree.ElementTree(element)
# Aggregate info into a dictionary of
# {HMDB_ID: iso_mass}
accession = []
# Get accession number and masses for each metabolite
# Could be multiple accessions. Grab all of them,
# sort to make unique identifier
for elem in tree.iter():
if elem.tag == ns+'accession':
accession.append(elem.text)
# If you just saw a 'mono_mass' entry,
# get the mass value and reset, saying you
# havent seen 'mono_mass' in the text of next metabolite
if (elem.tag == ns+'value') & (seen_mass == 1):
mass = float(elem.text)
seen_mass = 0
if elem.text == 'mono_mass':
seen_mass = 1
elem.clear()
# sort accession numbers and join with '_'
accession_key = '_'.join(sorted(accession))
# add to dictionary
if mass:
d[accession_key] = mass
# reset mass - only add feature if mass listed
mass = None
# reset accession numbers
accession = []
element.clear()
count += 1
if count % 1000 == 0:
print('Made it through ' + str(count) + ' metabolites')
#pickle.dump(d, open('serumdb_dict.p', 'wb'))
print 'Number of metabolites: %s' % len(d.keys())
# write to file
pickle.dump(d, open(pickle_path, 'wb'))
hmdb_dict = pickle.load(open(pickle_path, 'rb'))
# masses are entries of dict, yes?
hmdb_masses = pd.Series(hmdb_dict, dtype='float32')
Explanation: TODO: make all this so that it checks if these files are present - Claire sent an email about a fxn that helps do this in the past
End of explanation
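One way to address that TODO (a sketch, assuming pickle_path is defined as above): only re-run the XML parsing when the pickled dictionary is not already on disk.
import os
if os.path.exists(pickle_path):
    hmdb_dict = pickle.load(open(pickle_path, 'rb'))
else:
    # fall back to the XML parsing loop above, then:
    # pickle.dump(d, open(pickle_path, 'wb'))
    pass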
def plot_mz_hist(series, ppm):
worst_mz_bin = (ppm * series.max() * 10**-6)
#print 'Worst bin size: %s for %s mass' % (worst_mz_bin,
# series.max())
median_mz_bin = (ppm*series.median() * 10**-6)
#print 'Median bin size: %s for %s mass' % (median_mz_bin,
# series.median())
median_bins = np.arange(series.min(), series.max(), median_mz_bin)
worst_bins = np.arange(series.min(), series.max(),
worst_mz_bin)
print 'median bins:', median_bins.shape
print 'worst bins:', worst_bins.shape
#sns.distplot(series, kde=False)
#plt.show()
#plt.hist(series, bins=median_mz_bin)
sns.distplot(series, kde=False, norm_hist=False,
bins=worst_bins)
plt.ylim([0,10])
plt.title('mz overlaps at 30ppm, worst possible binsize (deltaX at highest m/z)')
plt.show()
# norm_hist=False, kde=False)
#plt.show()
return plt
ppm = 30
import copy
my_plot = plot_mz_hist(hmdb_masses, 30)
30*hmdb_masses.max() * 10**-6
hmdb_masses[0:5]
ppm_matrix = combine_mz.ppm_matrix(hmdb_masses, hmdb_masses)
# write to file
np_path = local+path+'/hmdb_serumdb_20170813_ppm_matrix.npy'
np.save(np_path, ppm_matrix)
# reload it
ppm_matrix = np.load(np_path)
# Convert to upper triangular matrix
idx_ppm = np.tril_indices(ppm_matrix.shape[0])
ppm_matrix[idx_ppm] = np.nan
# get indices whose ppm falls below cutoff
# Ignore runtime warning - means we're ignoring NaN values
isomer_indices = np.argwhere(ppm_matrix < ppm)
isomer_indices.shape
print isomer_indices[0:10]
# write isomer indices to file
np.save(local+path+'/hmdb_serumdb_20170813_isomer_indices_%s_ppm.npy' % ppm, isomer_indices)
isomer_indices = np.load(local+path+'/hmdb_serumdb_20170813_isomer_indices_%s_ppm.npy' % ppm)
# TODO - fix this - it takes too long
# 7 seconds for 25,000 molecules ends up being
# 48 hours of run-time
def isomers_from_ppm_matrix(ppm_matrix, ppm):
'''
Only tested on square matrices for now
INPUT - numpy array of ppm values
OUTPUT - list of arrays - position in list is
same as row in matrix, values in each list-entry are
indices along column of array
'''
bool_idx = ppm_matrix < ppm
# Get indices where you have an isomer
# for each row
iso_indices = [np.argwhere(x) for x in bool_idx]
return iso_indices
toy_ppm = np.array([
[0, 20, 15, 50],
[100, 0, 90, 10 ],
[15, 90, 0, 10]],
) # not additive ppms
print 'Input:\n', toy_ppm
isomers_from_ppm_matrix(toy_ppm, 30)
ppm = 30
isomers = isomers_from_ppm_matrix(ppm_matrix, ppm)
np.save(local+path+'isomer_index_per_feature.npy', isomers)
num_isomers = [len(x) for x in isomers]
sns.distplot(num_isomers,)
plt.title('Overlapping features at 30ppm')
plt.xlabel('Number of isomers')
plt.show()
plt.hist(num_isomers, bins=1000)
plt.xlabel('Number of isomers')
plt.title('Overlapping features at 30ppm')
plt.show()
single_isomers = sum([i <= 1 for i in num_isomers])
print ("Number of metabolites from HMDB with no " +
'isomers: {num} out of {hmdb}: {per:.2f} percent'.format(
num=single_isomers, hmdb=ppm_matrix.shape[0],
per = float(single_isomers) / ppm_matrix.shape[0]*100))
Explanation: Plot histogram of metabolites at m/z
give bin size of x ppm
End of explanation
ppm = 3
isomers = isomers_from_ppm_matrix(ppm_matrix, ppm)
np.save(local+path+'isomer_index_per_feature_%s_ppm.npy' % ppm,
isomers)
num_isomers = [len(x) for x in isomers]
sns.distplot(num_isomers,)
plt.title('Overlapping features at %sppm' % ppm)
plt.xlabel('Number of isomers')
plt.show()
plt.hist(num_isomers, bins=1000)
plt.xlabel('Number of isomers')
plt.title('Overlapping features at %sppm' % ppm)
plt.show()
ppm = 1
isomers = isomers_from_ppm_matrix(ppm_matrix, ppm)
np.save(local+path+'isomer_index_per_feature_%s_ppm.npy' % ppm,
isomers)
num_isomers = [len(x) for x in isomers]
sns.distplot(num_isomers,)
plt.title('Overlapping features at %sppm' % ppm)
plt.xlabel('Number of isomers')
plt.show()
plt.hist(num_isomers, bins=1000)
plt.xlabel('Number of isomers')
plt.title('Overlapping features at %sppm' % ppm)
plt.show()
Explanation: Repeat at 5ppm
End of explanation
# First make isomer indices as before
# Use 30, because it's the worst
ppms = [30,3]
isomers = isomers_from_ppm_matrix(ppm_matrix, ppm)
# get the number of isomers at that position
num_isomers = [len(x) for x in isomers]
print isomers[0:5]
print num_isomers[0:5]
# Next get y'values (mz) corresponding to those indices
plt.scatter(x=hmdb_masses, y=num_isomers, s=1)
plt.ylim([0,400])
plt.show()
Explanation: Let's try to plot histogram # of isomers vs. m/z
End of explanation |
3,349 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time series prediction
This is a full example of how to do time series prediction with TensorFlow high level APIs. We'll use the weather dataset available at big query and can be generated with the following query
Step1: Describing the data set and the model
We're using a weather dataset....[DESCRIBE DATA SET, HOW THE DATA WAS GENERATED AND OTHER DETAILS].
The goal is
Step2: Separating training, evaluation and a small test data
The test data is going to be very small (10 sequences) and is being used just for visualization.
Step3: What we want to predict
This is the plot of the labels from the test data.
Step4: Defining Input functions
Step5: RNN Model
Step6: Running model
Step7: Training
Step8: Evaluating
Step9: Testing
Step10: Visualizing predictions | Python Code:
# tensorflow
import tensorflow as tf
# rnn common functions
from tensorflow.contrib.learn.python.learn.estimators import rnn_common
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
# helpers
import numpy as np
import pandas as pd
import csv
# enable tensorflow logs
tf.logging.set_verbosity(tf.logging.INFO)
Explanation: Time series prediction
This is a full example of how to do time series prediction with TensorFlow high level APIs. We'll use the weather dataset available at big query and can be generated with the following query:
SELECT year, mo, da,
avg(temp) as avg_tmp,
avg(dewp) as avg_dewp,
avg(slp) as avg_slp
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE temp <> 9999.9 and dewp <> 9999.9 and slp <> 9999.9
GROUP BY year, mo, da
ORDER BY year asc, mo asc, da asc
You can download the data from here.
We'll implement a RNN model using the Estimators API that given the average temperature of 10 days can predict the average temperature of the following day (11th).
Dependencies
End of explanation
df = pd.read_csv('weather.csv')
number_of_rows = len(df)
print('number of rows in the dataset:', number_of_rows)
print('how a row looks like:')
print(df.head(11))
print()
print("we don't the year mo da columns, so let's forget about them")
df = df[['avg_tmp', 'avg_dewp', 'avg_slp']]
print(df.head(11))
SEQ_LEN = 10
VALID_ROWS = number_of_rows - SEQ_LEN - 1
NUM_FEATURES = 3
# then we can use indexes to access rows easily
df = np.asarray(df)
# sequences will have shape: [VALID_ROWS, SEQ_LEN, NUM_FEATURES]
sequences = np.zeros((VALID_ROWS, SEQ_LEN, NUM_FEATURES), dtype=np.float32)
labels = np.zeros((VALID_ROWS, 1))
# sequences are 10 days
# label is the avg_tmp for the following day (11th)
for i in range(VALID_ROWS):
sequences[i] = df[i: i + SEQ_LEN]
labels[i] = df[i + SEQ_LEN][0]
print('-' * 20)
print('Example')
print('-' * 20)
print('sequence:')
print(sequences[0])
print('prediction:', labels[0])
Explanation: Describing the data set and the model
We're using a weather dataset....[DESCRIBE DATA SET, HOW THE DATA WAS GENERATED AND OTHER DETAILS].
The goal is: based on features from the past days predict the avg temperature in the next day. More specifically we'll use the data from 10 days in sequence to predict the avg temperature in the next day.
Preparing the data
First let's prepare the data for the RNN, the RNN input is:
x = sequence of features
y = what we want to predict/classify from x, in our case we want to predict the next avg temperature
Then we have to separate training data from test data.
End of explanation
# these values are based on the number of valid rows which is 32083
TRAIN_SIZE = 30000
EVAL_SIZE = 2073
TEST_SIZE = 10
# TODO(@monteirom): suffle
train_seq = sequences[:TRAIN_SIZE]
train_label = np.asarray(labels[:TRAIN_SIZE], dtype=np.float32)
eval_seq = sequences[TRAIN_SIZE: TRAIN_SIZE + EVAL_SIZE]
eval_label = np.asarray(labels[TRAIN_SIZE:TRAIN_SIZE + EVAL_SIZE], dtype=np.float32)
test_seq = sequences[TRAIN_SIZE + EVAL_SIZE: ]
test_label = np.asarray(labels[TRAIN_SIZE + EVAL_SIZE: ], dtype=np.float32)
print('train shape:', train_seq.shape)
print('eval shape:', eval_seq.shape)
print('test shape:', test_seq.shape)
Explanation: Separating training, evaluation and a small test data
The test data is going to be very small (10 sequences) and is being used just for visualization.
End of explanation
# getting test labels
test_plot_data = [test_label[i][0] for i in range(TEST_SIZE)]
# plotting
sns.tsplot(test_plot_data)
plt.show()
Explanation: What we want to predict
This is the plot of the labels from the test data.
End of explanation
BATCH_SIZE = 64
FEATURE_KEY = 'x'
SEQ_LEN_KEY = 'sequence_length'
def make_dict(x):
d = {}
d[FEATURE_KEY] = x
# [SIZE OF DATA SET, 1]
# where the second dimesion contains the sequence of each
# sequence in the data set
d[SEQ_LEN_KEY] = np.asarray(x.shape[0] * [SEQ_LEN], dtype=np.int32)
return d
# Make input function for training:
# num_epochs=None -> will cycle through input data forever
# shuffle=True -> randomize order of input data
train_input_fn = tf.estimator.inputs.numpy_input_fn(x=make_dict(train_seq),
y=train_label,
batch_size=BATCH_SIZE,
shuffle=True,
num_epochs=None)
# Make input function for evaluation:
# shuffle=False -> do not randomize input data
eval_input_fn = tf.estimator.inputs.numpy_input_fn(x=make_dict(eval_seq),
y=eval_label,
batch_size=BATCH_SIZE,
shuffle=False)
# Make input function for testing:
# shuffle=False -> do not randomize input data
test_input_fn = tf.estimator.inputs.numpy_input_fn(x=make_dict(test_seq),
y=test_label,
batch_size=1,
shuffle=False)
Explanation: Defining Input functions
End of explanation
N_OUTPUTS = 1 # 1 prediction
NUM_FEATURES = 3
def get_model_fn(rnn_cell_sizes,
label_dimension,
dnn_layer_sizes=[],
optimizer='SGD',
learning_rate=0.01):
def model_fn(features, labels, mode, params):
x = features[FEATURE_KEY]
sequence_length = features[SEQ_LEN_KEY]
# 1. configure the RNN
# Each RNN layer will consist of a LSTM cell
rnn_layers = [tf.nn.rnn_cell.LSTMCell(size) for size in rnn_cell_sizes]
# Construct the layers
multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(rnn_layers)
outputs, _ = tf.nn.dynamic_rnn(multi_rnn_cell, x, dtype=tf.float32)
# Slice to keep only the last cell of the RNN
last_activations = rnn_common.select_last_activations(outputs,
sequence_length)
# Construct dense layers on top of the last cell of the RNN
for units in dnn_layer_sizes:
last_activations = tf.layers.dense(last_activations,
units,
activation=tf.nn.relu)
# Final dense layer for prediction
predictions = tf.layers.dense(last_activations, label_dimension)
# 2. Define the loss function for training/evaluation
loss = None
eval_metric_ops = None
train_op = None
# if predicting labels can be None
if mode != tf.estimator.ModeKeys.PREDICT:
loss = tf.losses.mean_squared_error(labels, predictions)
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(labels, predictions)
}
# 3. Define the training operation/optimizer
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=learning_rate,
optimizer=optimizer)
# 4. Create predictions
predictions_dict = {"predicted": predictions}
# 5. return ModelFnOps
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
return model_fn
Explanation: RNN Model
End of explanation
model_fn = get_model_fn(rnn_cell_sizes=[64], # size of the hidden layers
label_dimension=1, # since is just 1 prediction
dnn_layer_sizes=[32], # size of units in the dense layers on top of the RNN
optimizer='Adam',
learning_rate=0.001)
estimator = tf.estimator.Estimator(model_fn=model_fn)
Explanation: Running model
End of explanation
estimator.train(input_fn=train_input_fn, steps=10000)
Explanation: Training
End of explanation
ev = estimator.evaluate(input_fn=eval_input_fn)
print(ev)
Explanation: Evaluating
End of explanation
preds = list(estimator.predict(input_fn=test_input_fn))
predictions = []
for p in preds:
print(p)
predictions.append(p["predicted"][0])
Explanation: Testing
End of explanation
# plotting real values in black
sns.tsplot(test_plot_data, color="black")
# plotting predictions in red
sns.tsplot(predictions, color="red")
plt.show()
Explanation: Visualizing predictions
End of explanation |
3,350 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fast Lomb-Scargle Periodograms in Python
The Lomb-Scargle Periodogram is a well-known method of finding periodicity in irregularly-sampled time-series data.
The common implementation of the periodogram is relatively slow
Step1: To begin, let's make a function which will create $N$ noisy, irregularly-spaced data points containing a periodic signal, and plot one realization of that data
Step2: From this, our algorithm should be able to identify any periodicity that is present.
Choosing the Frequency Grid
The Lomb-Scargle Periodogram works by evaluating a power for a set of candidate frequencies $f$. So the first question is, how many candidate frequencies should we choose?
It turns out that this question is very important. If you choose the frequency spacing poorly, it may lead you to miss strong periodic signal in the data!
Frequency spacing
First, let's think about the frequency spacing we need in our grid. If you're asking about a candidate frequency $f$, then data with range $T$ contains $T \cdot f$ complete cycles. If our error in frequency is $\delta f$, then $T\cdot\delta f$ is the error in number of cycles between the endpoints of the data.
If this error is a significant fraction of a cycle, this will cause problems. This gives us the criterion
$$
T\cdot\delta f \ll 1
$$
Commonly, we'll choose some oversampling factor around 5 and use $\delta f = (5T)^{-1}$ as our frequency grid spacing.
Frequency limits
Next, we need to choose the limits of the frequency grid. On the low end, $f=0$ is suitable, but causes some problems – we'll go one step away and use $\delta f$ as our minimum frequency.
But on the high end, we need to make a choice
Step3: Now let's use the gatspy tools to plot the periodogram
Step4: The algorithm finds a strong signal at a period of 2.5.
To demonstrate explicitly that the Nyquist rate doesn't apply in irregularly-sampled data, let's use a period below the averaged sampling rate and show that we can find it
Step5: With a data sampling rate of approximately $1$ time unit, we easily find a period of $0.3$ time units. The averaged Nyquist limit clearly does not apply for irregularly-spaced data!
Nevertheless, short of a full analysis of the temporal window function, it remains a useful milepost in estimating the upper limit of frequency.
Scaling with $N$
With these rules in mind, we see that the size of the frequency grid is approximately
$$
N_f = \frac{f_{max}}{\delta f} \propto \frac{N/(2T)}{1/T} \propto N
$$
So for $N$ data points, we will require some multiple of $N$ frequencies (with a constant of proportionality typically on order 10) to suitably explore the frequency space.
This is the source of the $N^2$ scaling of the typical periodogram
Step6: Here, to illustrate the different computational scalings, we'll evaluate the computational time for a number of inputs, using LombScargleAstroML (a fast implementation of the $O[N^2]$ algorithm) and LombScargleFast, which is the fast FFT-based implementation | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# use seaborn's default plotting styles for matplotlib
import seaborn; seaborn.set()
Explanation: Fast Lomb-Scargle Periodograms in Python
The Lomb-Scargle Periodogram is a well-known method of finding periodicity in irregularly-sampled time-series data.
The common implementation of the periodogram is relatively slow: for $N$ data points, a frequency grid of $\sim N$ frequencies is required and the computation scales as $O[N^2]$.
In a 1989 paper, Press and Rybicki presented a faster technique which makes use of fast Fourier transforms to reduce this cost to $O[N\log N]$ on a regular frequency grid.
The gatspy package implement this in the LombScargleFast object, which we'll explore below.
But first, we'll motivate why this algorithm is needed at all.
We'll start this notebook with some standard imports:
End of explanation
def create_data(N, period=2.5, err=0.1, rseed=0):
rng = np.random.RandomState(rseed)
t = np.arange(N, dtype=float) + 0.3 * rng.randn(N)
y = np.sin(2 * np.pi * t / period) + err * rng.randn(N)
return t, y, err
t, y, dy = create_data(100, period=20)
plt.errorbar(t, y, dy, fmt='o');
Explanation: To begin, let's make a function which will create $N$ noisy, irregularly-spaced data points containing a periodic signal, and plot one realization of that data:
End of explanation
def freq_grid(t, oversampling=5, nyquist_factor=3):
T = t.max() - t.min()
N = len(t)
df = 1. / (oversampling * T)
fmax = 0.5 * nyquist_factor * N / T
N = int(fmax // df)
return df + df * np.arange(N)
Explanation: From this, our algorithm should be able to identify any periodicity that is present.
Choosing the Frequency Grid
The Lomb-Scargle Periodogram works by evaluating a power for a set of candidate frequencies $f$. So the first question is, how many candidate frequencies should we choose?
It turns out that this question is very important. If you choose the frequency spacing poorly, it may lead you to miss strong periodic signal in the data!
Frequency spacing
First, let's think about the frequency spacing we need in our grid. If you're asking about a candidate frequency $f$, then data with range $T$ contains $T \cdot f$ complete cycles. If our error in frequency is $\delta f$, then $T\cdot\delta f$ is the error in number of cycles between the endpoints of the data.
If this error is a significant fraction of a cycle, this will cause problems. This gives us the criterion
$$
T\cdot\delta f \ll 1
$$
Commonly, we'll choose some oversampling factor around 5 and use $\delta f = (5T)^{-1}$ as our frequency grid spacing.
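As a concrete illustration of that spacing rule (the numbers here are arbitrary, not taken from the data above):
```python
T = 100.0                      # time baseline of the observations
oversampling = 5
df = 1.0 / (oversampling * T)  # grid spacing
print(df)                      # 0.002 cycles per unit time
```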
Frequency limits
Next, we need to choose the limits of the frequency grid. On the low end, $f=0$ is suitable, but causes some problems – we'll go one step away and use $\delta f$ as our minimum frequency.
But on the high end, we need to make a choice: what's the highest frequency we'd trust our data to be sensitive to?
At this point, many people are tempted to mis-apply the Nyquist-Shannon sampling theorem, and choose some version of the Nyquist limit for the data.
But this is entirely wrong! The Nyquist frequency applies for regularly-sampled data, but irregularly-sampled data can be sensitive to much, much higher frequencies, and the upper limit should be determined based on what kind of signals you are looking for.
Still, a common (if dubious) rule-of-thumb is that the high frequency is some multiple of what Press & Rybicki call the "average" Nyquist frequency,
$$
\hat{f}_{Ny} = \frac{N}{2T}
$$
With this in mind, we'll use the following function to determine a suitable frequency grid:
End of explanation
t, y, dy = create_data(100, period=2.5)
freq = freq_grid(t)
print(len(freq))
from gatspy.periodic import LombScargle
model = LombScargle().fit(t, y, dy)
period = 1. / freq
power = model.periodogram(period)
plt.plot(period, power)
plt.xlim(0, 5);
Explanation: Now let's use the gatspy tools to plot the periodogram:
End of explanation
t, y, dy = create_data(100, period=0.3)
period = 1. / freq_grid(t, nyquist_factor=10)
model = LombScargle().fit(t, y, dy)
power = model.periodogram(period)
plt.plot(period, power)
plt.xlim(0, 1);
Explanation: The algorithm finds a strong signal at a period of 2.5.
To demonstrate explicitly that the Nyquist rate doesn't apply in irregularly-sampled data, let's use a period below the averaged sampling rate and show that we can find it:
End of explanation
from gatspy.periodic import LombScargleFast
help(LombScargleFast.periodogram_auto)
from gatspy.periodic import LombScargleFast
t, y, dy = create_data(100)
model = LombScargleFast().fit(t, y, dy)
period, power = model.periodogram_auto()
plt.plot(period, power)
plt.xlim(0, 5);
Explanation: With a data sampling rate of approximately $1$ time unit, we easily find a period of $0.3$ time units. The averaged Nyquist limit clearly does not apply for irregularly-spaced data!
Nevertheless, short of a full analysis of the temporal window function, it remains a useful milepost in estimating the upper limit of frequency.
Scaling with $N$
With these rules in mind, we see that the size of the frequency grid is approximately
$$
N_f = \frac{f_{max}}{\delta f} \propto \frac{N/(2T)}{1/T} \propto N
$$
So for $N$ data points, we will require some multiple of $N$ frequencies (with a constant of proportionality typically on order 10) to suitably explore the frequency space.
This is the source of the $N^2$ scaling of the typical periodogram: finding periods in $N$ datapoints requires a grid of $\sim 10N$ frequencies, and $O[N^2]$ operations.
When $N$ gets very, very large, this becomes a problem.
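A rough back-of-the-envelope sketch of that scaling (rule-of-thumb constants only):
```python
N = 10000
N_f = 10 * N          # ~10 N candidate frequencies, as argued above
print(N * N_f)        # ~1e9 evaluations for the naive O[N^2] approach
```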
Fast Periodograms with LombScargleFast
Finally we get to the meat of this discussion.
In a 1989 paper, Press and Rybicki proposed a clever method whereby a Fast Fourier Transform is used on a grid extirpolated from the original data, such that this problem can be solved in $O[N\log N]$ time. The gatspy package contains a pure-Python implementation of this algorithm, and we'll explore it here.
If you're interested in seeing how the algorithm works in Python, check out the code in the gatspy source.
It's far more readable and understandable than the Fortran source presented in Press et al.
For convenience, the implementation has a periodogram_auto method which automatically selects a frequency/period range based on an oversampling factor and a nyquist factor:
End of explanation
from time import time
from gatspy.periodic import LombScargleAstroML, LombScargleFast
def get_time(N, Model):
t, y, dy = create_data(N)
model = Model().fit(t, y, dy)
t0 = time()
model.periodogram_auto()
t1 = time()
result = t1 - t0
# for fast operations, we should do several and take the median
if result < 0.1:
N = min(50, 0.5 / result)
times = []
for i in range(5):
t0 = time()
model.periodogram_auto()
t1 = time()
times.append(t1 - t0)
result = np.median(times)
return result
N_obs = list(map(int, 10 ** np.linspace(1, 4, 5)))
times1 = [get_time(N, LombScargleAstroML) for N in N_obs]
times2 = [get_time(N, LombScargleFast) for N in N_obs]
plt.loglog(N_obs, times1, label='Naive Implementation')
plt.loglog(N_obs, times2, label='FFT Implementation')
plt.xlabel('N observations')
plt.ylabel('t (sec)')
plt.legend(loc='upper left');
Explanation: Here, to illustrate the different computational scalings, we'll evaluate the computational time for a number of inputs, using LombScargleAstroML (a fast implementation of the $O[N^2]$ algorithm) and LombScargleFast, which is the fast FFT-based implementation:
End of explanation |
3,351 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<img src="../img/ods_stickers.jpg">
Open Machine Learning Course
</center>
Author of the material
Step1: Let's see Seaborn in action right away on the Playboy Playmate of the Month data.
Step2: Histograms. The method <a href="https
Step3: The method <a href="https
Step4: The method <a href="http
Step5: The method <a href="http
Step6: The jointplot method
Step7: An example of visual data analysis with Pandas and Seaborn
In 1968, an article titled "Correlation of Performance Test Scores with Tissue Concentration of Lysergic Acid Diethylamide in Human Subjects" was published.
The article comes with a small dataset of just 7 observations.
Step8: The table is already sorted by the Drugs column; let's sort it by Score.
Step9: Plots
Step10: A trend is visible...
Step11: We don't recommend fitting a regression on 7 observations
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10, 6)
Explanation: <center>
<img src="../img/ods_stickers.jpg">
Open Machine Learning Course
</center>
Author of the material: Yury Kashnitsky, research programmer at Mail.ru Group and senior lecturer at the Faculty of Computer Science, HSE
<center>Topic 2. Visual data analysis with Python
<center>Part 1. Overview of the Seaborn library
Seaborn is a Matplotlib add-on with an API both for quickly building attractive plots and for fine-grained customization of figures for presentations.
End of explanation
girls = pd.read_csv('../data/girls.csv')
girls.head(10)
girls.describe(include='all')
Explanation: Let's see Seaborn in action right away on the Playboy Playmate of the Month data.
End of explanation
girls['Waist'].hist(bins=15);
sns.distplot(girls['Waist'], kde=True);
ax = sns.distplot(girls['Height'], kde=False)
ax.set(xlabel='Playboy girls height', ylabel='Frequency')
sns.set_style('darkgrid')
Explanation: Histograms. The <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.distplot.html">distplot</a> method
End of explanation
def weight_category(weight):
return 'heavier' if weight > 54\
else 'lighter' if weight < 49 else 'median'
girls['weight_cat'] = girls['Weight'].apply(weight_category)
sns.boxplot(x='weight_cat', y='Height', data=girls);
Explanation: The <a href="https://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.boxplot.html">boxplot</a> method
All features in this dataset are numeric, so let's create a "weight_cat" category with 3 weight types.
End of explanation
sns.set_palette(sns.color_palette("RdBu"))
sns.pairplot(girls[['Bust', 'Waist', 'Hips', 'Height', 'Weight']]);
girls.corr()
girls.head()
Explanation: The <a href="http://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.pairplot.html">pairplot</a> method
End of explanation
def height_category(height):
return 'high' if height > 175\
else 'small' if height < 160 else 'median'
girls['height_cat'] = girls['Height'].apply(height_category)
sns.countplot(x='height_cat', hue='weight_cat', data=girls);
pd.crosstab(girls['weight_cat'], girls['height_cat'])
Explanation: The <a href="http://stanford.edu/~mwaskom/software/seaborn/generated/seaborn.countplot.html">countplot</a> method
End of explanation
sns.jointplot(x='Weight', y='Height',
data=girls, kind='reg');
Explanation: The jointplot method
End of explanation
data_types = {'Drugs': float,
'Score': float}
df = pd.read_csv('../data/drugs-and-math.csv',
index_col=0, sep=',', dtype=data_types)
print(df.shape)
print(df.columns)
print(df.index)
df
Explanation: An example of visual data analysis with Pandas and Seaborn
In 1968, an article titled "Correlation of Performance Test Scores with Tissue Concentration of Lysergic Acid Diethylamide in Human Subjects" was published.
The article comes with a small dataset of just 7 observations.
End of explanation
df.sort_values('Score',
ascending=False,
inplace=True)
df.describe().T # Sometimes this view is better
Explanation: The table is already sorted by the Drugs column; let's sort it by Score.
End of explanation
df.plot(kind='box');
df.plot(x='Drugs', y='Score', kind='bar');
df.plot(x='Drugs', y='Score', kind='scatter');
Explanation: Plots
End of explanation
df.corr(method='pearson')
Explanation: A trend is visible...
End of explanation
sns.jointplot(x='Drugs', y='Score',
data=df, kind='reg');
Explanation: We don't recommend fitting a regression on 7 observations :)
End of explanation |
3,352 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
INF-482, v0.01, Claudio Torres, [email protected]. DI-UTFSM
Textbook
Step1: Mairhuber-Curtis Theorem
Step2: Halton points vs pseudo-random points in 2D
Step3: Interpolation with Distance Matrix from Halton points
Step4: Defining a test function
Step5: Let's look at $f$
Step6: The interpolation with distance matrix itself
Step7: RBF interpolation | Python Code:
import numpy as np
import ghalton
import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact
from scipy.spatial import distance_matrix
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from ipywidgets import IntSlider
import sympy as sym
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
sym.init_printing()
M=8
def plot_matrices_with_values(ax,M):
N=M.shape[0]
cmap = plt.get_cmap('GnBu')
ax.matshow(M, cmap=cmap)
for i in np.arange(0, N):
for j in np.arange(0, N):
ax.text(i, j, '{:.2f}'.format(M[i,j]), va='center', ha='center', color='r')
Explanation: INF-482, v0.01, Claudio Torres, [email protected]. DI-UTFSM
Textbook: Gregory E. Fasshauer, Meshfree Approximation Methods with MatLab, Interdisciplinary Mathematical Sciences - Vol. 6, World Scientific Publishers, Singapore, 2007. Link: http://www.math.iit.edu/~fass/
Mairhuber-Curtis Theorem, Halton, Distance Matrix and RBF Interpolation
End of explanation
# Initializing a R^2
sequencer = ghalton.Halton(2)
sequencer.reset()
xH=np.array(sequencer.get(9))
print(xH)
def show_MC_theorem(s_local=0):
i=3
j=4
NC=40
sequencer.reset()
xH=np.array(sequencer.get(9))
phi1= lambda s: (s-0.5)*(s-1)/((0-0.5)*(0-1))
phi2= lambda s: (s-0)*(s-1)/((0.5-0)*(0.5-1))
phi3= lambda s: (s-0)*(s-0.5)/((1-0)*(1-0.5))
C1=lambda s: xH[i,:]*phi1(s)+np.array([0.45,0.55])*phi2(s)+xH[j,:]*phi3(s)
C2=lambda s: xH[j,:]*phi1(s)+np.array([0.15,0.80])*phi2(s)+xH[i,:]*phi3(s)
C1v=np.vectorize(C1,otypes=[np.ndarray])
C2v=np.vectorize(C2,otypes=[np.ndarray])
ss=np.linspace(0,1,NC).reshape((-1, 1))
C1o=np.array(C1v(ss))
C2o=np.array(C2v(ss))
C1plot=np.zeros((NC,2))
C2plot=np.zeros((NC,2))
for k in np.arange(0,NC):
C1plot[k,0]=C1o[k][0][0]
C1plot[k,1]=C1o[k][0][1]
C2plot[k,0]=C2o[k][0][0]
C2plot[k,1]=C2o[k][0][1]
plt.figure(figsize=(2*M,M))
plt.subplot(121)
plt.plot(C1plot[:,0],C1plot[:,1],'r--')
plt.plot(C2plot[:,0],C2plot[:,1],'g--')
plt.scatter(xH[:,0], xH[:,1], s=300, c="b", alpha=1.0, marker='.',
label="Halton")
plt.scatter(C1(s_local)[0], C1(s_local)[1], s=300, c="r", alpha=1.0, marker='d')
plt.scatter(C2(s_local)[0], C2(s_local)[1], s=300, c="g", alpha=1.0, marker='d')
plt.axis([0,1,0,1])
plt.title(r'Quasi-random points (Halton)')
plt.grid(True)
xHm=np.copy(xH)
xHm[i,:]=C1(s_local)
xHm[j,:]=C2(s_local)
R=distance_matrix(xHm, xH)
det_s_local=np.linalg.det(R)
plt.subplot(122)
plt.title(r'det(R_fixed)='+str(det_s_local))
det_s=np.zeros_like(ss)
for k, s in enumerate(ss):
xHm[i,:]=C1plot[k,:]
xHm[j,:]=C2plot[k,:]
R=distance_matrix(xHm, xH)
det_s[k]=np.linalg.det(R)
plt.plot(ss,det_s,'-')
plt.plot(s_local,det_s_local,'dk',markersize=16)
plt.grid(True)
plt.show()
interact(show_MC_theorem,s_local=(0,1,0.1))
Explanation: Mairhuber-Curtis Theorem
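A tiny standalone illustration of the idea behind the interactive demo above (a sketch with three arbitrary points, not part of the original notebook): swapping two interpolation points flips the sign of det(R), so by continuity some intermediate configuration must make the distance matrix singular.
```python
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Q = P[[1, 0, 2], :]                            # same points, two of them swapped
print(np.linalg.det(distance_matrix(P, P)))    # nonzero
print(np.linalg.det(distance_matrix(Q, P)))    # same magnitude, opposite sign
```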
End of explanation
def plot_random_vs_Halton(n=100):
# Number of points to be generated
# n=1000
# I am reseting the sequence everytime I generated just to get the same points
sequencer.reset()
xH=np.array(sequencer.get(n))
np.random.seed(0)
xR=np.random.rand(n,2)
plt.figure(figsize=(2*M,M))
plt.subplot(121)
plt.scatter(xR[:,0], xR[:,1], s=100, c="r", alpha=1.0, marker='.',
label="Random", edgecolors='None')
plt.axis([0,1,0,1])
plt.title(r'Pseudo-random points')
plt.grid(True)
plt.subplot(122)
plt.scatter(xH[:,0], xH[:,1], s=100, c="b", alpha=1.0, marker='.',
label="Halton")
plt.axis([0,1,0,1])
plt.title(r'Quasi-random points (Halton)')
plt.grid(True)
plt.show()
interact(plot_random_vs_Halton,n=(20,500,20))
Explanation: Halton points vs pseudo-random points in 2D
End of explanation
def show_R(mH=10):
fig= plt.figure(figsize=(2*M*mH/12,M*mH/12))
ax = plt.gca()
sequencer.reset()
X=np.array(sequencer.get(mH))
R=distance_matrix(X, X)
plot_matrices_with_values(ax,R)
interact(show_R,mH=(2,20,1))
Explanation: Interpolation with Distance Matrix from Halton points
End of explanation
# The function to be interpolated
f=lambda x,y: 16*x*(1-x)*y*(1-y)
def showing_f(n=10, elev=40, azim=230):
fig = plt.figure(figsize=(2*M,M))
# Creating regular mesh
Xr = np.linspace(0, 1, n)
Xm, Ym = np.meshgrid(Xr,Xr)
Z = f(Xm,Ym)
# Wireframe
plt.subplot(221,projection='3d')
ax = fig.gca()
ax.plot_wireframe(Xm, Ym, Z)
ax.view_init(elev,azim)
# imshow
plt.subplot(222)
#plt.imshow(Z,interpolation='none', extent=[0, 1, 0, 1])
plt.contourf(Xm, Ym, Z, 20)
plt.ylabel('$y$')
plt.xlabel('$x$')
plt.axis('equal')
plt.xlim(0,1)
plt.colorbar()
# Contour plot
plt.subplot(223)
plt.contour(Xm, Ym, Z, 20)
plt.axis('equal')
plt.colorbar()
# Surface
plt.subplot(224,projection='3d')
ax = fig.gca()
surf = ax.plot_surface(Xm, Ym, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf)
ax.view_init(elev,azim)
plt.show()
Explanation: Defining a test function
End of explanation
elev_widget = IntSlider(min=0, max=180, step=10, value=40)
azim_widget = IntSlider(min=0, max=360, step=10, value=230)
interact(showing_f,n=(5,50,5),elev=elev_widget,azim=azim_widget)
def eval_interp_distance_matrix(C,X,x,y):
R=distance_matrix(X, np.array([[x,y]]))
return np.dot(C,R)
def showing_f_interpolated(n=10, mH=10, elev=40, azim=230):
fig = plt.figure(figsize=(2*M,M))
## Building distance matrix and solving linear system
sequencer.reset()
X=np.array(sequencer.get(mH))
R=distance_matrix(X, X)
Zs=f(X[:,0],X[:,1])
C=np.linalg.solve(R,Zs)
# f interpolated with distance function
fIR=np.vectorize(eval_interp_distance_matrix, excluded=[0,1])
# Creating regular mesh
Xr = np.linspace(0, 1, n)
Xm, Ym = np.meshgrid(Xr,Xr)
Z = f(Xm,Ym)
# Contour plot - Original Data
plt.subplot(221)
plt.contour(Xm, Ym, Z, 20)
plt.colorbar()
plt.axis('equal')
plt.title(r'$f(x,y)$')
# Surface - Original Data
plt.subplot(222,projection='3d')
ax = fig.gca()
surf = ax.plot_surface(Xm, Ym, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf)
ax.view_init(elev,azim)
plt.title(r'$f(x,y)$')
# Contour plot - Interpolated Data
plt.subplot(223)
plt.contour(Xm, Ym, fIR(C,X,Xm,Ym), 20)
plt.axis('equal')
plt.colorbar()
plt.scatter(X[:,0], X[:,1], s=100, c="r", alpha=0.5, marker='.',
label="Random", edgecolors='None')
plt.title(r'$fIR(x,y)$')
# Surface - Interpolated Data
plt.subplot(224,projection='3d')
ax = fig.gca()
surf = ax.plot_surface(Xm, Ym, fIR(C,X,Xm,Ym), rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf)
ax.view_init(elev,azim)
ax.set_zlim(0,1)
plt.title(r'$fIR(x,y)$')
plt.show()
Explanation: Let's look at $f$
End of explanation
interact(showing_f_interpolated,n=(5,50,5),mH=(5,80,5),elev=elev_widget,azim=azim_widget)
Explanation: The interpolation with distance matrix itself
End of explanation
# Some RBF's
linear_rbf = lambda r,eps: r
gaussian_rbf = lambda r,eps: np.exp(-(eps*r)**2)
MQ_rbf = lambda r,eps: np.sqrt(1+(eps*r)**2)
IMQ_rbf = lambda r,eps: 1./np.sqrt(1+(eps*r)**2)
# The chosen one! But please try all of them!
rbf = lambda r,eps: MQ_rbf(r,eps)
def eval_interp_rbf(C,X,x,y,eps):
A=rbf(distance_matrix(X, np.array([[x,y]])),eps)
return np.dot(C,A)
def showing_f_interpolated_rbf(n=10, mH=10, elev=40, azim=230, eps=1):
fig = plt.figure(figsize=(2*M,M))
# Creating regular mesh
Xr = np.linspace(0, 1, n)
Xm, Ym = np.meshgrid(Xr,Xr)
Z = f(Xm,Ym)
########################################################
## Pseudo-random
## Building distance matrix and solving linear system
np.random.seed(0)
X=np.random.rand(mH,2)
R=distance_matrix(X,X)
A=rbf(R,eps)
Zs=f(X[:,0],X[:,1])
C=np.linalg.solve(A,Zs)
# f interpolated with distance function
fIR=np.vectorize(eval_interp_rbf, excluded=[0,1,4])
# Contour plot - Original Data
plt.subplot(231)
plt.contour(Xm, Ym, fIR(C,X,Xm,Ym,eps), 20)
plt.colorbar()
plt.scatter(X[:,0], X[:,1], s=100, c="r", alpha=0.5, marker='.',
label="Random", edgecolors='None')
plt.title(r'$f(x,y)_{rbf}$ with Pseudo-random points')
# Surface - Original Data
plt.subplot(232,projection='3d')
ax = fig.gca()
surf = ax.plot_surface(Xm, Ym, fIR(C,X,Xm,Ym,eps), rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf)
ax.view_init(elev,azim)
ax.set_zlim(0,1)
plt.title(r'$f(x,y)_{rbf}$ with Pseudo-random points')
# Contour plot - Original Data
plt.subplot(233)
plt.contourf(Xm, Ym, np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)), 20)
#plt.imshow(np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)),interpolation='none', extent=[0, 1, 0, 1])
plt.axis('equal')
plt.xlim(0,1)
plt.colorbar()
plt.scatter(X[:,0], X[:,1], s=100, c="k", alpha=0.8, marker='.',
label="Random", edgecolors='None')
plt.title(r'Error with Pseudo-random points')
########################################################
## HALTON (Quasi-random)
## Building distance matrix and solving linear system
sequencer.reset()
X=np.array(sequencer.get(mH))
R=distance_matrix(X,X)
A=rbf(R,eps)
Zs=f(X[:,0],X[:,1])
C=np.linalg.solve(A,Zs)
# f interpolated with distance function
fIR=np.vectorize(eval_interp_rbf, excluded=[0,1,4])
# Contour plot - Interpolated Data
plt.subplot(234)
plt.contour(Xm, Ym, fIR(C,X,Xm,Ym,eps), 20)
plt.colorbar()
plt.scatter(X[:,0], X[:,1], s=100, c="r", alpha=0.5, marker='.',
label="Random", edgecolors='None')
plt.title(r'$f_{rbf}(x,y)$ with Halton points')
# Surface - Interpolated Data
plt.subplot(235,projection='3d')
ax = fig.gca()
surf = ax.plot_surface(Xm, Ym, fIR(C,X,Xm,Ym,eps), rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
fig.colorbar(surf)
ax.view_init(elev,azim)
ax.set_zlim(0,1)
plt.title(r'$f_{rbf}(x,y)$ with Halton points')
# Contour plot - Original Data
plt.subplot(236)
plt.contourf(Xm, Ym, np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)), 20)
#plt.imshow(np.abs(f(Xm,Ym)-fIR(C,X,Xm,Ym,eps)),interpolation='none', extent=[0, 1, 0, 1])
plt.axis('equal')
plt.xlim(0,1)
plt.colorbar()
plt.scatter(X[:,0], X[:,1], s=100, c="k", alpha=0.8, marker='.',
label="Random", edgecolors='None')
plt.title(r'Error with Halton points')
plt.show()
interact(showing_f_interpolated_rbf,n=(5,50,5),mH=(5,80,5),elev=elev_widget,azim=azim_widget,eps=(0.1,50,0.1))
Explanation: RBF interpolation
End of explanation |
3,353 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This Notebook implements the TensorFlow advanced tutorial which uses a Multilayer Convolutional Network on the MNIST dataset
Step1: Import MNIST Data
Step2: Look at sizes of training, validation and test sets Each image is 28 X 28 pixels Labels are in one hot encoding for use with softmax
Step3: Declare Variables
Step4: Weight Initialization for ReLU nodes
Step5: Convolution and Pooling operations
Step6: First Convolutional Layer
The convolution will compute 32 features for each 5x5 patch. The max_pool_2x2 method will reduce the image size to 14x14.
Step7: Second Convolutional Layer The second layer will have 64 features for each 5x5 patch.
Step8: Densely Connected Layer The image size has been reduced to 7x7 and we add a fully-connected layer with 1024 neurons to allow processing on the entire image.
Step9: Dropout
Step10: Readout Layer
Step11: Implement Model
Step12: Define function that runs the model for given number of batches and returns the training time and accuracy on the validations and test data sets.
Step13: Try different number of epochs | Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
import time
Explanation: This Notebook implements the TensorFlow advanced tutorial which uses a Multilayer Convolutional Network on the MNIST dataset
End of explanation
mnist = input_data.read_data_sets("../datasets/MNIST/", one_hot=True)
Explanation: Import MNIST Data
End of explanation
print(mnist.train.num_examples)
print(mnist.validation.num_examples)
print(mnist.test.num_examples)
plt.imshow(mnist.train.images[10004].reshape(28,28),cmap="Greys")
plt.show()
print (mnist.train.labels[10004])
Explanation: Look at sizes of training, validation and test sets Each image is 28 X 28 pixels Labels are in one hot encoding for use with softmax
End of explanation
x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
Explanation: Declare Variables
End of explanation
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
Explanation: Weight Initialization for ReLU nodes
End of explanation
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
Explanation: Convolution and Pooling operations
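A small shape trace (a sketch using the helpers defined above, TensorFlow 1.x as in the rest of the notebook): 'SAME' padding with stride 1 keeps the 28x28 size, and each 2x2 max-pool halves it.
```python
dummy = tf.zeros([1, 28, 28, 1])
w = weight_variable([5, 5, 1, 32])
print(conv2d(dummy, w).shape)                  # (1, 28, 28, 32)
print(max_pool_2x2(conv2d(dummy, w)).shape)    # (1, 14, 14, 32)
```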
End of explanation
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x, [-1, 28, 28, 1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
Explanation: First Convolutional Layer
The convolution will compute 32 features for each 5x5 patch. The max_pool_2x2 method will reduce the image size to 14x14.
End of explanation
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
Explanation: Second Convolutional Layer The second layer will have 64 features for each 5x5 patch.
End of explanation
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
Explanation: Densely Connected Layer The image size has been reduced to 7x7 and we add a fully-connected layer with 1024 neurons to allow processing on the entire image.
End of explanation
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
Explanation: Dropout
End of explanation
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
Explanation: Readout Layer
End of explanation
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: Implement Model
End of explanation
def train_and_test_model(batches,batches_per_epoch,verbose=False):
start = time.time()
epoch = 1
results = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(batches):
batch = mnist.train.next_batch(50)
if i % 100 == 0 and verbose:
train_accuracy = accuracy.eval(feed_dict={
x: batch[0], y_: batch[1], keep_prob: 1.0})
print('step %d, training accuracy %g, elapsedtime %g' % (i, train_accuracy, time.time() - start))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
if (i+1) % batches_per_epoch == 0:
test_accuracy = accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
validation_accuracy = accuracy.eval(feed_dict={
x: mnist.validation.images, y_: mnist.validation.labels, keep_prob: 1.0})
if verbose:
print('Done with test/val accuracy elapsed time %g' % (time.time() - start))
train_accuracy = accuracy.eval(feed_dict={
x: mnist.train.images[0:10000], y_: mnist.train.labels[0:10000], keep_prob: 1.0})
if verbose:
print('Done with train accuracy elapsed time %g' % (time.time() - start))
time_elapsed = time.time() - start
if verbose:
print(epoch,i+1, time_elapsed, train_accuracy, validation_accuracy, test_accuracy)
results.append((epoch,time_elapsed, train_accuracy, validation_accuracy, test_accuracy))
epoch += 1
return results
Explanation: Define function that runs the model for given number of batches and returns the training time and accuracy on the validations and test data sets.
End of explanation
results=train_and_test_model(7700,1100,verbose=True)
for r in results:
print(r)
Explanation: Try different number of epochs
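A possible follow-up (sketch): plot how test accuracy evolves across epochs using the tuples returned above (epoch, elapsed time, train, validation, test accuracy).
```python
epochs = [r[0] for r in results]
test_acc = [r[4] for r in results]
plt.plot(epochs, test_acc, marker='o')
plt.xlabel('epoch')
plt.ylabel('test accuracy')
plt.show()
```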
End of explanation |
3,354 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Best practices
Let's start with pep8 (https
Step1: Pivot Tables w/ pandas
http
Step2: Keyboard shortcuts
Step3: Floating Table of Contents
Creates a new button on the toolbar that pops up a table of contents that you can navigate by.
In your documentation if you indent by 4 spaces, you get monospaced code-style code so you can embed in a Markdown cell | Python Code:
# Best practice for loading libraries?
# Couldn't find what to do with 'magic' imports at the top
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format='retina'
from __future__ import division
from itertools import combinations
import string
from IPython.display import IFrame, HTML, YouTubeVideo
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy as sp
import seaborn as sns; sns.set();
plt.rcParams['figure.figsize'] = (12, 8)
sns.set_style("darkgrid")
sns.set_context("poster", font_scale=1.3)
Explanation: Best practices
Let's start with pep8 (https://www.python.org/dev/peps/pep-0008/)
Imports should be grouped in the following order:
standard library imports
related third party imports
local application/library specific imports
You should put a blank line between each group of imports.
Put any relevant `__all__` specification after the imports.
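A minimal sketch of that grouping (module names are illustrative):
```python
# 1. standard library
import os
import sys

# 2. related third party
import numpy as np

# 3. local application / library specific
# from my_package import my_module   # hypothetical local import
```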
End of explanation
YouTubeVideo("ZbrRrXiWBKc", width=800, height=600)
!pip install pivottablejs
df = pd.read_csv("../data/mps.csv")
df.head()
from pivottablejs import pivot_ui
pivot_ui(df)
# Province, Party, Average, Age, Heatmap
Explanation: Pivot Tables w/ pandas
http://nicolas.kruchten.com/content/2015/09/jupyter_pivottablejs/
End of explanation
# in select mode, shift j/k (to select multiple cells at once)
# split cell with ctrl shift -
first = 1
second = 2
third = 3
Explanation: Keyboard shortcuts
End of explanation
import rpy2
%load_ext rpy2.ipython
X = np.array([0,1,2,3,4])
Y = np.array([3,5,4,6,7])
%%R -i X,Y -o XYcoef
XYlm = lm(Y~X)
XYcoef = coef(XYlm)
print(summary(XYlm))
par(mfrow=c(2,2))
plot(XYlm)
Explanation: Floating Table of Contents
Creates a new button on the toolbar that pops up a table of contents that you can navigate by.
In your documentation if you indent by 4 spaces, you get monospaced code-style code so you can embed in a Markdown cell:
$ mkdir toc
$ cd toc
$ wget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.js
$ wget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.css
$ cd ..
$ jupyter-nbextension install --user toc
$ jupyter-nbextension enable toc/toc
You can also get syntax highlighting if you tell it the language that you're including:
```bash
mkdir toc
cd toc
wget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.js
wget https://raw.githubusercontent.com/minrk/ipython_extensions/master/nbextensions/toc.css
cd ..
jupyter-nbextension install --user toc
jupyter-nbextension enable toc/toc
```
R
pyRserve
rpy2
End of explanation |
3,355 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Database
RDBMS (Relational Database Management System)
Open Source
- MySQL (PHP and web applications)
- PostgreSQL (large web applications)
- SQLite (Android applications)
Proprietary
- MSSQL
- Oracle
https
Step1: Database for a library management system
import sqlite3
#import the driver
## psycopg2 for PostgreSQL
# pymysql for MySQL
conn = sqlite3.connect('example.sqlite3')
#connecting to sqlite 3 and makes a new database file if file not already present
cur = conn.cursor()
#makes a file cursor we can make multiple cursors as well
cur.execute('CREATE TABLE countries (id integer, name text, iso3 text)')
#creating a new table
cur.execute('SELECT * FROM countries')
#
cur.fetchall()
#fetching the whole
cur.execute('INSERT INTO countries (id,name,iso3) VALUES (1, "Nepal", "NEP")')
cur.execute('SELECT * FROM countries')
cur.fetchall()
sql = '''INSERT INTO countries (id,name,iso3) VALUES (?,?,?)'''
cur.executemany(sql , [(2, 'India','INA'),
(3, 'Bhutan','BHU'),
(4, 'Afghanistan','AFG')])
cur.execute('SELECT * FROM countries')
cur.fetchall()
sql = 'INSERT INTO countries (id,name,iso3) VALUES (4, "PAKISTAN", "PAK")'
cur.execute(sql)
cur.execute('SELECT * FROM countries')
cur.fetchall()
sql = 'UPDATE countries SET id = 5 WHERE iso3 = "4"'
cur.execute(sql)
cur.execute('SELECT * FROM countries')
cur.fetchall()
sql = 'UPDATE countries '
conn.commit()
cur.execute('SELECT * FROM countries WHERE id > 2 ')
cur.fetchall()
cur.execute('SELECT *FROM countries WHERE name LIKE "%an"')
cur.fetchall()
cur.execute('SELECT *FROM countries WHERE name LIKE "%an%"')
cur.fetchall()
cur.execute('SELECT *FROM countries WHERE name LIKE "Pa%"')
cur.fetchall()
cur.execute('DELETE FROM countries')
cur.execute('SELECT *FROM countries')
cur.fetchall()
conn.commit()
import csv
sql = 'INSERT INTO countries (id,name,iso3) VALUES(?,?,?)'
_id = 1
with open('countries.txt','r') as datafile:
csvfile = csv.DictReader(datafile)
for row in csvfile:
if row ['Common Name'] and row['ISO 3166-1 3 Letter Code']:
cur.execute(sql, (_id, row['Common Name'], row['ISO 3166-1 3 Letter Code']))
_id +=1
conn.commit()
cur.execute('DELETE FROM country_list')
cur.execute('SELECT *FROM country_list')
cur.fetchall()
sql = '''CREATE TABLE
country_list(id integer primary key autoincrement,
country_name text not null,
iso3 text not null unique)'''
cur.execute(sql)
sql = 'INSERT INTO country_list (country_name,iso3) VALUES(?,?)'
with open('countries.txt','r') as datafile:
csvfile = csv.DictReader(datafile)
for row in csvfile:
if row ['Formal Name'] and row['Formal Name']:
cur.execute(sql, (row['Formal Name'], row['Formal Name']))
conn.commit()
cur.execute('SELECT *FROM country_list')
cur.fetchall()
Explanation: Database
RDBMS (Relational Database Management System)
Open Source
- MySQL (PHP and web applications)
- PostgreSQL (large web applications)
- SQLite (Android applications)
Proprietary
- MSSQL
- Oracle
https://docs.python.org/3.6/library/sqlite3.html
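A small aside not used in the notebook above: the connection object is also a context manager, so each with-block commits on success or rolls back on error.
```python
import sqlite3

with sqlite3.connect(':memory:') as demo:
    demo.execute('CREATE TABLE t (id integer, name text)')
    demo.execute('INSERT INTO t (id, name) VALUES (?, ?)', (1, 'Nepal'))
    print(demo.execute('SELECT * FROM t').fetchall())   # [(1, 'Nepal')]
```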
End of explanation
connn = sqlite3.connect("Library Management System.txt")
curs = connn.cursor()
sql = '''CREATE TABLE
books(book_id text,
isbn integer not null unique,
book_name text)'''
curs.execute(sql)
sql2 = '''CREATE TABLE
Student( roll_number integer not null unique,
name text not null,
faculty text)'''
curs.execute(sql2)
sql3 = '''CREATE TABLE
Teacher(name text not null,
faculty text)
'''
curs.execute(sql3)
Explanation: Database for a library management system
End of explanation |
3,356 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The utils Package
As the name says, this package brings some extra functionalities that you might need while using Maybrain.
Let's start by importing it and initialising a Brain
Step1: Information about Percentages
Imagine that you want to know what would be the ratio between the edges on adjMat above a certain threshold value and the total possible edges of adjMat (the ones different from nan). This might be useful for you to decide which threshold you might apply later.
In our specific matrix, we can verify that we have 124750 possible edges in adjMat, and if we applied a threshold of 0.6, we would get 3387 edges
Step2: While threshold_to_percentage() is based on values from adjMat, we also have another method to calculate a similar ratio from values of the G object. This method is percent_connected(), and it returns the ratio of the current number of edges in our G object and the total number of possible connections.
You can see this difference with other aspects. For example, if adjMat has NaNs, they are not counted in the result of threshold_to_percentage(). On the other hand, percent_connected() calculates the number of total possible connections, using the following formula for an undirected graph
Step3: The previous ratio is equal to 1 because we applied a threshold where we included all the possible edges from adjMat, thus everything is connected.
We can reach the same ratio value we showed before with threshold_to_percentage() if we apply a threshold of 0.6
Step4: Highlighting Brains
Properties of a Brain can be filtered using highlights. A Highlight is simply a list of nodes and/or edges.
In order to be easier to see the highlighting features, we will be importing a shorter matrix with just 4 nodes (link here) and properties about colours which we already used before (link here).
Step5: The main function to create the highlights is highlight_from_conds(brain, prop, rel, val, mode, label). This function creates a highlight by asking if the property prop is related to val by rel.
The highlight is then stored in a dictionary utils.highlights, where the keys are the ones passed previously in the parameter label. If you don't define a label, it will create one automatically.
The rel parameter can be
Step6: In the next example we use the relation in in different ways. In the first case we want to find the edges whose weights are between 0.6 and 0.8 (exclusive). In the second case we show that it is possible to highlight at the same time nodes and edges, by finding the edges/nodes whose colour property is red or grey.
Step7: Just a last detail about this function. If you put the property x/X/y/Y/z/Z, it will look out for the respective value from the property ct.XYZ in the nodes.
Finally, if you already have your set of nodes and/or edges as your own highlights, you can just store them using make_highlight() | Python Code:
from maybrain import utils
from maybrain import resources as rr
from maybrain import brain as mbt
a = mbt.Brain()
a.import_adj_file(rr.DUMMY_ADJ_FILE_500)
a.import_spatial_info(rr.MNI_SPACE_COORDINATES_500)
a.apply_threshold()
Explanation: The utils Package
As the name says, this package brings some extra functionalities that you might need while using Maybrain.
Let's start by importing it and initialising a Brain:
End of explanation
print("Ratio:", utils.threshold_to_percentage(a, 0.6))
## Checking the previous result
# Creating all possible edges
a.apply_threshold()
print("Total possible edges: ", a.G.number_of_edges())
# Getting the edges thresholded with 0.6
a.apply_threshold(threshold_type="tVal", value=0.6)
print("Number of edges from a threshold of 0.6: ", a.G.number_of_edges())
print("(3387/124750 = ", 3387/124750, ")")
Explanation: Information about Percentages
Imagine that you want to know what would be the ratio between the edges on adjMat above a certain threshold value and the total possible edges of adjMat (the ones different from nan). This might be useful for you to decide which threshold you might apply later.
In our specific matrix, we can verify that we have 124750 possible edges in adjMat, and if we applied a threshold of 0.6, we would get 3387 edges:
End of explanation
a.apply_threshold()
print("Ratio: ", utils.percent_connected(a))
Explanation: While threshold_to_percentage() is based on values from adjMat, we also have another method to calculate a similar ratio from values of the G object. This method is percent_connected(), and it returns the ratio of the current number of edges in our G object and the total number of possible connections.
You can see this difference with other aspects. For example, if adjMat has NaNs, they are not counted in the result of threshold_to_percentage(). On the other hand, percent_connected() calculates the number of total possible connections, using the following formula for an undirected graph:
$$\left (nodes \times \left (nodes - 1 \right ) \right ) / 2$$
This is equivalent to the upper right side of an adjacency matrix, including possible NaNs that it might have.
For a directed graph, the formula is:
$$nodes \times \left (nodes - 1 \right ) $$
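A quick arithmetic check of the undirected formula for the 500-node example matrix used above:
```python
nodes = 500
print(nodes * (nodes - 1) / 2)   # 124750.0, matching the edge count quoted earlier
```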
End of explanation
a.apply_threshold(threshold_type="tVal", value=0.6)
print("Ratio in a.G after thresholding of 0.6: ", utils.percent_connected(a))
## Checking the previous value
nodes = a.G.number_of_nodes()
print("Total possible edges from G: ", (nodes * (nodes-1)) / 2)
print("Number of edges in G: ", a.G.number_of_edges())
print("(3387/124750 = ", 3387/124750, ")")
Explanation: The previous ratio is equal to 1 because we applied a threshold where we included all the possible edges from adjMat, thus everything is connected.
We can reach the same ratio value we showed before with threshold_to_percentage() if we apply a threshold of 0.6:
End of explanation
from maybrain import constants as ct
b = mbt.Brain()
b.import_adj_file("data/3d_grid_adj.txt")
b.apply_threshold()
b.import_properties("data/3d_grid_properties.txt")
# Checking the properties we have for the nodes and edges, to confirm the next function calls
for edge in b.G.edges(data=True):
print(edge)
for node in b.G.nodes(data=True):
print(node)
Explanation: Highlighting Brains
Properties of a Brain can be filtered using highlights. A Highlight is simply a list of nodes and/or edges.
In order to be easier to see the highlighting features, we will be importing a shorter matrix with just 4 nodes (link here) and properties about colours which we already used before (link here).
End of explanation
utils.highlight_from_conds(b, 'colour', 'eq', 'green', mode='node', label=1)
# Getting the highlight with the label 1
highlight = utils.highlights[1]
# Printing the results of the highlight
print(highlight.edges) # Empty because we chose to highlight just the nodes
print(highlight.nodes) # Empty because there is no node with the property `colour` as `green`
utils.highlight_from_conds(b, 'colour', 'eq', 'green', mode='edge', label=2)
# Getting the highlight with the label 1
highlight = utils.highlights[2]
# Printing the results of the highlight
print(highlight.edges) # We have edges with the property `colour` as `green`
print(highlight.nodes) # Empty again
Explanation: The main function to create the highlights is highlight_from_conds(brain, prop, rel, val, mode, label). This function creates a highlight by asking if the property prop is related to val by rel.
The highlight is then stored in a dictionary utils.highlights, where the keys are the ones passed previously in the parameter label. If you don't define a label, it will create one automatically.
The rel parameter can be:
'geq' - greater than or equal to
'leq' - less than or equal to
'gt' - strictly greater than
'lt' - strictly less than
'eq' - equal to (i.e. exactly)
'in()', 'in[)', 'in(]', 'in[]' - within an interval, in this case `val` is a list of two numbers
"[" and "]" means inclusive, "(" and ")" means exclusive
'in' - in `val`
In the next example you can see this function in action. In the first case, we filter the brain by getting the nodes which have the property colour equals to green, and we save in the highlights dictionary with the label 1. The second case is the same filtering, but applied to the edges, and saved in the highlights dictionary with the label 2.
End of explanation
utils.highlight_from_conds(b, ct.WEIGHT, 'in()', (0.6, 0.8), mode='edge', label=3)
utils.highlight_from_conds(b, 'colour', 'in', ['red','grey'], mode='node|edge', label=4)
# Getting the highlights and printing them
high3 = utils.highlights[3]
high4 = utils.highlights[4]
print(high3.edges)
print(high3.nodes)
#
print(high4.edges)
print(high4.nodes)
Explanation: In the next example we use the relation in in different ways. In the first case we want to find the edges whose weights are between 0.6 and 0.8 (exclusive). In the second case we show that it is possible to highlight at the same time nodes and edges, by finding the edges/nodes whose colour property is red or grey.
End of explanation
utils.make_highlight(edge_inds=[(1,2), (2,3)], nodes_inds=[3], label='custom1')
print(utils.highlights['custom1'].nodes)
print(utils.highlights['custom1'].edges)
Explanation: Just a last detail about this function. If you put the property x/X/y/Y/z/Z, it will look out for the respective value from the property ct.XYZ in the nodes.
Finally, if you already have your set of nodes and/or edges as your own highlights, you can just store them using make_highlight():
End of explanation |
3,357 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Given the following LP
$\begin{gather}
\min\quad -x_1 - 4x_2\\
\begin{aligned}
s.t.
2x_1 - x_2 &\geq 0\\
x_1 - 3x_2 &\leq 0 \\
x_1 + x_2 &\leq 4 \\
\quad x_1, x_2 & \geq 0 \\
\end{aligned}
\end{gather}$
Step1: LP in standard form
$\begin{gather}
\min \quad -x_1 - 4x_2\\
\begin{aligned}
s.t.
2x_1 - x_2 -x_3 &= 0\\
x_1 - 3x_2 +x_4 &= 0 \\
x_1 + x_2+x_5 &= 4 \\
\quad x_1, x_2, x_3, x_4, x_5 & \geq 0 \\
\end{aligned}
\end{gather}$
We see ($x_1,x_2$)=(1,1) is an interior point so we choose it as initial point x_0
x_0=$\begin{bmatrix}1\\1\\1\\2\\2\end{bmatrix}$
A= $\begin{bmatrix}2& -1 & -1 & 0 & 0\\
1& -3 & 0 & 1 & 0\\
1 & 1 & 0 & 0 & 1\\
\end{bmatrix}$
Initial Solution z=-5
Step2: Iteration 1
Step3: Iteration 2
Step4: Now let's write a function to run $n$ iterations
x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
Explanation: Given the following LP
$\begin{gather}
\min\quad -x_1 - 4x_2\\
\begin{aligned}
s.t.
2x_1 - x_2 &\geq 0\\
x_1 - 3x_2 &\leq 0 \\
x_1 + x_2 &\leq 4 \\
\quad x_1, x_2 & \geq 0 \\
\end{aligned}
\end{gather}$
End of explanation
x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
plt.scatter([1],[1],color='black')
plt.annotate('x_0',(1.05,1.05))
Explanation: LP in standard form
$\begin{gather}
\min \quad -x_1 - 4x_2\\
\begin{aligned}
s.t.
2x_1 - x_2 -x_3 &= 0\\
x_1 - 3x_2 +x_4 &= 0 \\
x_1 + x_2+x_5 &= 4 \\
\quad x_1, x_2, x_3, x_4, x_5 & \geq 0 \\
\end{aligned}
\end{gather}$
We see ($x_1,x_2$)=(1,1) is an interior point so we choose it as initial point x_0
x_0=$\begin{bmatrix}1\\1\\1\\2\\2\end{bmatrix}$
A= $\begin{bmatrix}2& -1 & -1 & 0 & 0\\
1& -3 & 0 & 1 & 0\\
1 & 1 & 0 & 0 & 1\\
\end{bmatrix}$
Initial Solution z=-5
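A quick arithmetic check of that initial objective value (self-contained; it only assumes numpy is imported as np, as in the cells above):
```python
c0 = np.array([-1, -4, 0, 0, 0])
x0 = np.array([1, 1, 1, 2, 2])
print(np.dot(c0, x0))   # -5
```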
End of explanation
mu = 100
gamma = 0.8
A = np.array([[2,-1,-1,0,0],[1,-3,0,1,0],[1,1,0,0,1]])
X = np.array([[1,0,0,0,0],[0,1,0,0,0],[0,0,1,0,0],[0,0,0,2,0],[0,0,0,0,2]])
vector_1 = np.ones((5,1))
c = np.array([[-1],[-4],[0],[0],[0]])
x_0 = np.array([[1],[1],[1],[2],[2]]) # Initial point
# SOLVE EQUATION 4
#------ Left-hand side
izq_ec_4 = np.matmul( A, np.matmul( np.power(X,2),A.T ) )
#------ Right-hand side
# -mu*A*X*1 + AX^2c
der_ec_4 = -mu*np.matmul( A,np.matmul( X,vector_1 ) ) + np.matmul( A,np.matmul( np.power(X,2),c ) )
#------ Determine dy
dy = np.linalg.solve(izq_ec_4, der_ec_4)
# SOLVE EQUATION 3
ds = np.matmul(-1*A.T,dy) #ds=-A^T*dy
# SOLVE EQUATION 1
izq_ec_1 = mu*np.power(np.linalg.inv(X),2) #mu*X^-2
der_ec_1 = mu*np.matmul(np.linalg.inv(X),vector_1)-c-ds #mu*X^-1*1-c-ds
dx = np.linalg.solve(izq_ec_1,der_ec_1)
# UPDATE x_0
x_1 = x_0 + dx
x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
plt.scatter([1],[1],color='black')
plt.scatter(x_1[0,0],x_1[1,0],color='black') # plot x_1
plt.annotate('x_0',(1.05,1.05))
plt.annotate('x_1',(x_1[0,0]+0.05,x_1[1,0]+0.05)) # annotate x_1
Explanation: Iteration 1
End of explanation
mu = mu*gamma
X = np.array([[x_1[0,0],0,0,0,0],[0,x_1[1,0],0,0,0],
[0,0,x_1[2,0],0,0],[0,0,0,x_1[3,0],0],[0,0,0,0,x_1[4,0]]])
# SOLVE EQUATION 4
#------ Left-hand side
izq_ec_4 = np.matmul( A, np.matmul( np.power(X,2),A.T ) )
#------ Right-hand side
# -mu*A*X*1 + AX^2c
der_ec_4 = -mu*np.matmul( A,np.matmul( X,vector_1 ) ) + np.matmul( A,np.matmul( np.power(X,2),c ) )
#------ Determine dy
dy = np.linalg.solve(izq_ec_4, der_ec_4)
# SOLVE EQUATION 3
ds = np.matmul(-1*A.T,dy) #ds=-A^T*dy
# SOLVE EQUATION 1
izq_ec_1 = mu*np.power(np.linalg.inv(X),2) #mu*X^-2
der_ec_1 = mu*np.matmul(np.linalg.inv(X),vector_1)-c-ds #mu*X^-1*1-c-ds
dx = np.linalg.solve(izq_ec_1,der_ec_1)
# UPDATE x_1
x_2 = x_1 + dx
x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
plt.scatter([1],[1],color='black')
plt.scatter(x_1[0,0],x_1[1,0],color='black') # plot x_1
plt.scatter(x_2[0,0],x_2[1,0],color='black') # plot x_2
plt.annotate('x_0',(1.05,1.05))
plt.annotate('x_1',(x_1[0,0]+0.05,x_1[1,0]+0.05)) # annotate x_1
plt.annotate('x_2',(x_2[0,0]+0.05,x_2[1,0]+0.05)) # annotate x_2
Explanation: Iteration 2
End of explanation
mu = 100
gamma = 0.8
A = np.array([[2,-1,-1,0,0],[1,-3,0,1,0],[1,1,0,0,1]])
vector_1 = np.ones((5,1))
c = np.array([[-1],[-4],[0],[0],[0]])
x = np.array([[1],[1],[1],[2],[2]]) # Initial point
x1s = [] # Empty list to store x_1 values
x2s = [] # Empty list to store x_2 values
x1s.append(x[0,0])
x2s.append(x[1,0])
for iteracion in range(100):
X = np.array([[x[0,0],0,0,0,0],[0,x[1,0],0,0,0],
[0,0,x[2,0],0,0],[0,0,0,x[3,0],0],[0,0,0,0,x[4,0]]])
# SOLVE EQUATION 4
#------ Left-hand side
izq_ec_4 = np.matmul( A, np.matmul( np.power(X,2),A.T ) )
#------ Right-hand side
# -mu*A*X*1 + AX^2c
der_ec_4 = -mu*np.matmul( A,np.matmul( X,vector_1 ) ) + np.matmul( A,np.matmul( np.power(X,2),c ) )
#------ Determine dy
dy = np.linalg.solve(izq_ec_4, der_ec_4)
# SOLVE EQUATION 3
ds = np.matmul(-1*A.T,dy) #ds=-A^T*dy
# SOLVE EQUATION 1
izq_ec_1 = mu*np.power(np.linalg.inv(X),2) #mu*X^-2
der_ec_1 = mu*np.matmul(np.linalg.inv(X),vector_1)-c-ds #mu*X^-1*1-c-ds
dx = np.linalg.solve(izq_ec_1,der_ec_1)
# UPDATE vector x
x = x + dx
mu = mu*gamma
x1s.append( x[0,0] )
x2s.append( x[1,0] )
x = np.linspace(0, 4, 100)
y1 = 2*x
y2 = x/3
y3 = 4 - x
plt.figure(figsize=(8, 6))
plt.plot(x, y1)
plt.plot(x, y2)
plt.plot(x, y3)
plt.xlim((0, 3.5))
plt.ylim((0, 4))
plt.xlabel('x1')
plt.ylabel('x2')
y5 = np.minimum(y1, y3)
plt.fill_between(x[:-25], y2[:-25], y5[:-25], color='red', alpha=0.5)
for iteracion in range(100):
plt.scatter(x1s[iteracion],x2s[iteracion],color='black')
if iteracion % 10 == 0:
nombre = 'x_'+str(iteracion)
plt.annotate(nombre,(x1s[iteracion]+0.05,x2s[iteracion]+0.05))
Explanation: Now let's write a function to run $n$ iterations
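A possible quick check after the loop finishes (sketch): print the last iterate and its objective value.
```python
print(x1s[-1], x2s[-1])
print(-x1s[-1] - 4 * x2s[-1])   # objective value at the final iterate
```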
End of explanation |
3,358 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization using matplotlib and seaborn
Visualization strategy
Step1: Visualization for a single continuous variable
Step2: Visualization for single categorical variable - frequency plot
Step3: Bar plot using matplotlib visualization
Step4: Association plot between two continuous variables
Continuous vs continuous
Step5: Scatter plot using pandas dataframe plot function
Step6: Continuous vs Categorical
Step7: Show the boxplot of MPG by year
Step8: Association plot between 2 categorical variables
Step9: Classification plot
Step10: QQ Plot for normality test | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.mlab import normpdf
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
df = pd.read_csv("http://www-bcf.usc.edu/~gareth/ISL/Auto.data", sep=r"\s+")
df.head(10)
df.info()
df["year"].unique()
df.sample(10)
Explanation: Visualization using matplotlib and seaborn
Visualization strategy:
- Single variable
- numeric continuous variable
- histogram: distribution of values
- boxplot: outlier analysis
- Categorical (string or discrete numeric)
- frequency plot
- Association plot
- continuous vs continuous: scatter plot
- continuous vs categorical: vertical bar and boxplot (regression problems)
- categorical vs continuous: horizontal bar (classification problems)
 - categorical vs categorical: heatmap (a small dispatch helper sketching this strategy is included below)
End of explanation
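# --- Added sketch (not in the original notebook): a tiny helper that applies the
# --- visualization strategy above, picking a univariate plot from the column dtype.
import pandas as pd
import matplotlib.pyplot as plt

def plot_single_variable(s, bins=30):
    """Histogram + boxplot for numeric columns, frequency bar plot for categorical ones."""
    if pd.api.types.is_numeric_dtype(s) and s.nunique() > 15:
        fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 6))
        s.plot.hist(bins=bins, ax=ax1, title="Distribution of %s" % s.name)
        s.plot.box(vert=False, ax=ax2, title="Outlier check for %s" % s.name)
    else:
        s.value_counts().sort_index().plot.bar(title="Frequency of %s" % s.name)
    plt.tight_layout()

# Example usage (hypothetical): plot_single_variable(df["mpg"]); plot_single_variable(df["year"])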
plt.hist(df["mpg"], bins = 30)
plt.title("Histogram plot of mpg")
plt.xlabel("mpg")
plt.ylabel("Frequency")
plt.boxplot(df["mpg"])
plt.title("Boxplot of mpg\n ")
plt.ylabel("mpg")
#plt.figure(figsize = (10, 6))
plt.subplot(2, 1, 1)
n, bins, patches = plt.hist(df["mpg"], bins = 50, normed = True)
plt.title("Histogram plot of mpg")
plt.xlabel("MPG")
pdf = normpdf(bins, df["mpg"].mean(), df["mpg"].std())
plt.plot(bins, pdf, color = "red")
plt.subplot(2, 1, 2)
plt.boxplot(df["mpg"], vert=False)
plt.title("Boxplot of mpg")
plt.tight_layout()
plt.xlabel("MPG")
normpdf(bins, df["mpg"].mean(), df["mpg"].std())
# using pandas plot function
plt.figure(figsize = (10, 6))
df.mpg.plot.hist(bins = 50, normed = True)
plt.title("Histogram plot of mpg")
plt.xlabel("mpg")
Explanation: Visualization for a single continuous variable
End of explanation
counts = df["year"].value_counts().sort_index()
plt.figure(figsize = (10, 4))
plt.bar(range(len(counts)), counts, align = "center")
plt.xticks(range(len(counts)), counts.index)
plt.xlabel("Year")
plt.ylabel("Frequency")
plt.title("Frequency distribution by year")
Explanation: Visualization for single categorical variable - frequency plot
End of explanation
plt.figure(figsize = (10, 4))
df.year.value_counts().sort_index().plot.bar()
Explanation: Bar plot using matplotlib visualization
End of explanation
corr = np.corrcoef(df["weight"], df["mpg"])[0, 1]
plt.scatter(df["weight"], df["mpg"])
plt.xlabel("Weight")
plt.ylabel("Mpg")
plt.title("Mpg vs Weight, correlation: %.2f" % corr)
Explanation: Association plot between two continuous variables
Continuous vs continuous
End of explanation
df.plot.scatter(x= "weight", y = "mpg")
plt.title("Mpg vs Weight, correlation: %.2f" % corr)
Explanation: Scatter plot using pandas dataframe plot function
End of explanation
mpg_by_year = df.groupby("year")["mpg"].agg([np.median, np.std])
mpg_by_year.head()
mpg_by_year["median"].plot.bar(yerr = mpg_by_year["std"], ecolor = "red")
plt.title("MPG by year")
plt.xlabel("year")
plt.ylabel("MPG")
Explanation: Continuous vs Categorical
End of explanation
plt.figure(figsize=(10, 5))
sns.boxplot("year", "mpg", data = df)
Explanation: Show the boxplot of MPG by year
End of explanation
plt.figure(figsize=(10, 8))
sns.heatmap(df.corr(), cmap=sns.color_palette("RdBu", 10), annot=True)
plt.figure(figsize=(10, 8))
aggr = df.groupby(["year", "cylinders"])["mpg"].agg(np.mean).unstack()
sns.heatmap(aggr, cmap=sns.color_palette("Blues", n_colors= 10), annot=True)
Explanation: Association plot between 2 categorical variables
End of explanation
iris = pd.read_csv("https://raw.githubusercontent.com/abulbasar/data/master/iris.csv")
iris.head()
fig, ax = plt.subplots()
x1, x2 = "SepalLengthCm", "PetalLengthCm"
cmap = sns.color_palette("husl", n_colors=3)
for i, c in enumerate(iris.Species.unique()):
iris[iris.Species == c].plot.scatter(x1, x2, color = cmap[i], label = c, ax = ax)
plt.legend()
Explanation: Classification plot
End of explanation
import scipy.stats as stats
p = stats.probplot(df["mpg"], dist="norm", plot=plt)
Explanation: QQ Plot for normality test
End of explanation |
3,359 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring Climate Data
Step1: Above
Step2: One way to interface with the GDP is with the interactive web interface, shown below. In this interface, you can upload a shapefile or draw on the screen to define a polygon region, then you specify the statistics and datasets you want to use via dropdown menus.
Step3: Here we use the python interface to the GDP, called PyGDP, which allows for scripting. You can get the code and documentation at https
Step4: Now just to show that we can access more than climate model time series, let's extract precipitation data from a dry winter (1936-1937) and a normal winter (2009-2010) for Texas County and look at the spatial patterns.
We'll use the netCDF4-Python library, which allows us to open OPeNDAP datasets just as if they were local NetCDF files. | Python Code:
from IPython.core.display import Image
Image('http://www-tc.pbs.org/kenburns/dustbowl/media/photos/s2571-lg.jpg')
Explanation: Exploring Climate Data: Past and Future
Roland Viger, Rich Signell, USGS
First presented at the 2012 Unidata Workshop: Navigating Earth System Science Data, 9-13 July.
What if you were watching Ken Burns's "The Dust Bowl", saw the striking image below, and wondered: "How much precipitation was there really back in the Dust Bowl years?" How easy is it to access and manipulate climate data in a scientific analysis? Here we'll show some powerful tools that make it easy.
End of explanation
import numpy as np
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import urllib
import os
from IPython.core.display import HTML
import time
import datetime
import pandas as pd
%matplotlib inline
import pyGDP
import numpy as np
import matplotlib.dates as mdates
import owslib
owslib.__version__
pyGDP.__version__
Explanation: Above: Dust storm hits Hooker, OK, June 4, 1937.
To find out how much rainfall there was during the dust bowl years, we can use the USGS/CIDA GeoDataPortal (GDP), which can compute statistics of a gridded field within specified shapes, such as county outlines. Hooker is in Texas County, Oklahoma, so here we use the GDP to compute a historical time series of mean precipitation in Texas County using the PRISM dataset. We then compare it to climate forecast projections to see whether similar droughts are predicted to occur in the future, and what the impact of different climate scenarios might be.
End of explanation
HTML('<iframe src=http://screencast.com/t/K7KTcaFrSUc width=800 height=600></iframe>')
Explanation: One way to interface with the GDP is with the interactive web interface, shown below. In this interface, you can upload a shapefile or draw on the screen to define a polygon region, then you specify the statistics and datasets you want to use via dropdown menus.
End of explanation
# Create a pyGDP object
myGDP = pyGDP.pyGDPwebProcessing()
# Let's see what shapefiles are already available on the GDP server
# this changes with time, since uploaded shapefiles are kept for a few days
shapefiles = myGDP.getShapefiles()
print 'Available Shapefiles:'
for s in shapefiles:
print s
# Is our shapefile there already?
# If not, upload it.
OKshapeFile = 'upload:OKCNTYD'
if not OKshapeFile in shapefiles:
shpfile = myGDP.uploadShapeFile('OKCNTYD.zip')
# Let's check the attributes of the shapefile
attributes = myGDP.getAttributes(OKshapeFile)
print "Shapefile attributes:"
for a in attributes:
print a
# In this particular example, we are interested in attribute = 'DESCRIP',
# which provides the County names for Oklahoma
user_attribute = 'DESCRIP'
values = myGDP.getValues(OKshapeFile, user_attribute)
print "Shapefile attribute values:"
for v in values:
print v
# we want Texas County, Oklahoma, which is where Hooker is located
user_value = 'Texas'
# Let's see what gridded datasets are available for the GDP to operate on
dataSets = myGDP.getDataSetURI()
print "Available gridded datasets:"
for d in dataSets:
print d[0]
dataSets[0][0]
df = pd.DataFrame(dataSets[1:],columns=['title','abstract','urls'])
df.head()
print df['title']
df.ix[20].urls
# If you choose a DAP URL, use the "dods:" prefix, even
# if the list above has a "http:" prefix.
# For example: dods://cida.usgs.gov/qa/thredds/dodsC/prism
# Let's see what data variables are in our dataset
dataSetURI = 'dods://cida.usgs.gov/thredds/dodsC/prism'
dataTypes = myGDP.getDataType(dataSetURI)
print "Available variables:"
for d in dataTypes:
print d
# Let's see what the available time range is for our data variable
variable = 'ppt' # precip
timeRange = myGDP.getTimeRange(dataSetURI, variable)
for t in timeRange:
print t
timeBegin = '1900-01-01T00:00:00Z'
timeEnd = '2012-08-01T00:00:00Z'
# Once we have our shapefile, attribute, value, dataset, datatype, and timerange as inputs, we can go ahead
# and submit our request.
name1='gdp_texas_county_prism.csv'
if not os.path.exists(name1):
url_csv = myGDP.submitFeatureWeightedGridStatistics(OKshapeFile, dataSetURI, variable,
timeBegin, timeEnd, user_attribute, user_value, delim='COMMA', stat='MEAN' )
f = urllib.urlretrieve(url_csv,name1)
# load historical PRISM precip
df1=pd.read_csv(name1,skiprows=3,parse_dates=True,index_col=0,
names=['date','observed precip'])
df1.plot(figsize=(12,2),
title='Average Precip for Texas County, Oklahoma, calculated via GDP using PRISM data ');
df1 = pd.stats.moments.rolling_mean(df1,36,center=True)
df1.plot(figsize=(12,2),
title='Average Precip for Texas County, Oklahoma, calculated via GDP using PRISM data ');
HTML('<iframe src=http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html width=900 height=350></iframe>')
#hayhoe_URI ='dods://cida-eros-thredds1.er.usgs.gov:8082/thredds/dodsC/dcp/conus_grid.w_meta.ncml'
dataset ='dods://cida.usgs.gov/thredds/dodsC/maurer/maurer_brekke_w_meta.ncml'
variable = 'sresa2_gfdl-cm2-1_1_Prcp'
timeRange = myGDP.getTimeRange(dataset, variable)
timeRange
# retrieve the GFDL model A2 more "Business-as-Usual" scenario:
time0=time.time();
name2='sresa2_gfdl-cm2-1_1_Prcp.csv'
if not os.path.exists(name2):
variable = 'sresa2_gfdl-cm2-1_1_Prcp'
result2 = myGDP.submitFeatureWeightedGridStatistics(OKshapeFile, dataset, variable,
timeRange[0],timeRange[1],user_attribute,user_value, delim='COMMA', stat='MEAN' )
f = urllib.urlretrieve(result2,name2)
print('elapsed time=%d s' % (time.time()-time0))
# now retrieve the GFDL model B1 "Eco-Friendly" scenario:
time0=time.time();
name3='sresb1_gfdl-cm2-1_1_Prcp.csv'
if not os.path.exists(name3):
variable = 'sresb1_gfdl-cm2-1_1_Prcp'
result3 = myGDP.submitFeatureWeightedGridStatistics(OKshapeFile, dataset, variable,
timeRange[0],timeRange[1],user_attribute,user_value, delim='COMMA', stat='MEAN' )
f = urllib.urlretrieve(result3,name3)
print('elapsed time=%d s' % (time.time()-time0))
# Load the GDP result for: "Business-as-Usual" scenario:
# load historical PRISM precip
df2=pd.read_csv(name2,skiprows=3,parse_dates=True,index_col=0,
names=['date','GFDL A2'])
# Load the GDP result for: "Eco-Friendly" scenario:
df3=pd.read_csv(name3,skiprows=3,parse_dates=True,index_col=0,
names=['date','GFDL B1'])
# convert mm/day to mm/month (approximate):
ts_rng = pd.date_range(start='1/1/1900',end='1/1/2100',freq='30D')
ts = pd.DataFrame(index=ts_rng)
df2['GFDL B1'] = df3['GFDL B1']*30.
df2['GFDL A2'] = df2['GFDL A2']*30.
df2 = pd.stats.moments.rolling_mean(df2,36,center=True)
df2 = pd.concat([df2,ts],axis=1).interpolate(limit=1)
df2['OBS'] = pd.concat([df1,ts],axis=1).interpolate(limit=1)['observed precip']
# interpolate
ax=df2.plot(figsize=(12,2),legend=False,
title='Average Precip for Texas County, Oklahoma, calculated via GDP using PRISM data ');
ax.legend(loc='upper right');
Explanation: Here we use the python interface to the GDP, called PyGDP, which allows for scripting. You can get the code and documentation at https://github.com/USGS-CIDA/pyGDP.
End of explanation
import netCDF4
url='http://cida.usgs.gov/thredds/dodsC/prism'
box = [-102,36.5,-100.95,37] # Bounding box for Texas County, Oklahoma
#box = [-104,36.,-100,39.0] # Bounding box for larger dust bowl region
# define a mean precipitation function, here hard-wired for the PRISM data
def mean_precip(nc,bbox=None,start=None,stop=None):
lon=nc.variables['lon'][:]
lat=nc.variables['lat'][:]
tindex0=netCDF4.date2index(start,nc.variables['time'],select='nearest')
tindex1=netCDF4.date2index(stop,nc.variables['time'],select='nearest')
bi=(lon>=box[0])&(lon<=box[2])
bj=(lat>=box[1])&(lat<=box[3])
p=nc.variables['ppt'][tindex0:tindex1,bj,bi]
latmin=np.min(lat[bj])
p=np.mean(p,axis=0)
lon=lon[bi]
lat=lat[bj]
return p,lon,lat
nc = netCDF4.Dataset(url)
p,lon,lat = mean_precip(nc,bbox=box,start=datetime.datetime(1936,11,1,0,0),
stop=datetime.datetime(1937,4,1,0,0))
p2,lon,lat = mean_precip(nc,bbox=box,start=datetime.datetime(1940,11,1,0,0),
stop=datetime.datetime(1941,4,1,0,0))
latmin = np.min(lat)
import cartopy.crs as ccrs
import cartopy.feature as cfeature
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none')
fig = plt.figure(figsize=(12,5))
ax = fig.add_axes([0.1, 0.15, 0.3, 0.8],projection=ccrs.PlateCarree())
pc = ax.pcolormesh(lon, lat, p, cmap=plt.cm.jet_r,vmin=0,vmax=40)
plt.title('Precip in Dust Bowl Region: Winter 1936-1937')
ax.add_feature(states_provinces,edgecolor='gray')
ax.text(-101,36.86,'Hooker')
ax.plot(-101,36.86,'o')
cb = plt.colorbar(pc, orientation='horizontal')
cb.set_label('Precip (mm/month)')
ax2 = fig.add_axes([0.6, 0.15, 0.3, 0.8],projection=ccrs.PlateCarree())
pc2 = ax2.pcolormesh(lon, lat, p2, cmap=plt.cm.jet_r,vmin=0,vmax=40)
plt.title('Precip in Dust Bowl Region: Winter 1940-1941')
ax2.add_feature(states_provinces,edgecolor='gray')
ax2.text(-101,36.86,'Hooker')
ax2.plot(-101,36.86,'o')
cb2 = plt.colorbar(pc2, orientation='horizontal')
cb2.set_label('Precip (mm/month)')
plt.show()
Explanation: Now just to show that we can access more than climate model time series, let's extract precipitation data from a dry winter (1936-1937) and a normal winter (2009-2010) for Texas County and look at the spatial patterns.
We'll use the netCDF4-Python library, which allows us to open OPeNDAP datasets just as if they were local NetCDF files.
End of explanation |
3,360 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notes on variable names and data contents
CENSUS
Step1: Aggregate counts by contract period and municipality
Step2: Aggregate counts by contract period and regional block
Step3: Histogram
Price (raw level)
Step4: Price (natural log)
Step5: Building age
Step6: Plot
Trend in the number of transactions
Step7: Main Analysis
OLS part
Step8: Blue shows the OLS errors; green shows the errors from combining OLS with deep learning. | Python Code:
print(data['CITY_NAME'].value_counts())
Explanation: Notes on variable names and data contents
CENSUS: municipality code (9 digits)
P: contract price
S: floor area
L: land area
R: number of rooms
RW: frontal road width
CY: year built
A: building age (at time of contract)
TS: distance to the nearest station
TT: travel time to Tokyo Station
ACC: travel time to a terminal station
WOOD: wooden construction dummy
SOUTH: south-facing dummy
RSD: residential zone dummy
CMD: commercial zone dummy
IDD: industrial zone dummy
FAR: building coverage ratio
FLR: floor area ratio
TDQ: contract date (quarter)
X: latitude
Y: longitude
CITY_CODE: municipality code (5 digits)
CITY_NAME: municipality name
BLOCK: regional block name (an English label map for these codes is sketched below)
Aggregate counts by municipality
End of explanation
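# --- Added sketch (not in the original notebook): English labels for the variable
# --- codes listed in the memo above, e.g. handy for axis titles in later plots.
VAR_LABELS = {
    'P': 'contract price', 'S': 'floor area', 'L': 'land area', 'R': 'number of rooms',
    'RW': 'frontal road width', 'CY': 'year built', 'A': 'building age',
    'TS': 'distance to nearest station', 'TT': 'time to Tokyo Station',
    'ACC': 'time to terminal station', 'WOOD': 'wooden construction dummy',
    'SOUTH': 'south-facing dummy', 'RSD': 'residential zone dummy',
    'CMD': 'commercial zone dummy', 'IDD': 'industrial zone dummy',
    'FAR': 'building coverage ratio', 'FLR': 'floor area ratio',
    'TDQ': 'contract quarter', 'X': 'latitude', 'Y': 'longitude',
}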
print(data.pivot_table(index=['TDQ'], columns=['CITY_NAME']))
Explanation: Aggregate counts by contract period and municipality
End of explanation
print(data.pivot_table(index=['TDQ'], columns=['BLOCK']))
Explanation: Aggregate counts by contract period and regional block
End of explanation
data['P'].hist()
Explanation: Histogram
Price (raw level)
End of explanation
(np.log(data['P'])).hist()
Explanation: Price (natural log)
End of explanation
data['A'].hist()
plt.figure(figsize=(20,8))
plt.subplot(4, 2, 1)
data['P'].hist()
plt.title(u"成約価格")
plt.subplot(4, 2, 2)
data['S'].hist()
plt.title("専有面積")
plt.subplot(4, 2, 3)
data['L'].hist()
plt.title("土地面積")
plt.subplot(4, 2, 4)
data['R'].hist()
plt.title("部屋数")
plt.subplot(4, 2, 5)
data['A'].hist()
plt.title("建築後年数")
plt.subplot(4, 2, 6)
data['RW'].hist()
plt.title("前面道路幅員")
plt.subplot(4, 2, 7)
data['TS'].hist()
plt.title("最寄駅までの距離")
plt.subplot(4, 2, 8)
data['TT'].hist()
plt.title(u"東京駅までの時間")
Explanation: Building age
End of explanation
plt.figure(figsize=(20,8))
data['TDQ'].value_counts().plot(kind='bar')
plt.figure(figsize=(20,8))
data['CITY_NAME'].value_counts().plot(kind='bar') # counts by municipality
Explanation: Plot
Trend in the number of transactions
End of explanation
vars = ['P', 'S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR', 'X', 'Y']
eq = fml_build(vars)
y, X = dmatrices(eq, data=data, return_type='dataframe')
CITY_NAME = pd.get_dummies(data['CITY_NAME'])
TDQ = pd.get_dummies(data['TDQ'])
X = pd.concat((X, CITY_NAME, TDQ), axis=1)
datas = pd.concat((y, X), axis=1)
datas = datas[datas['12世田谷区'] == 1][0:5000]
datas.head()
vars = ['S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR']
#vars += vars + list(TDQ.columns)
class CAR(Chain):
def __init__(self, unit1, unit2, unit3, col_num):
self.unit1 = unit1
self.unit2 = unit2
self.unit3 = unit3
super(CAR, self).__init__(
l1 = L.Linear(col_num, unit1),
l2 = L.Linear(self.unit1, self.unit1),
l3 = L.Linear(self.unit1, self.unit2),
l4 = L.Linear(self.unit2, self.unit3),
l5 = L.Linear(self.unit3, self.unit3),
l6 = L.Linear(self.unit3, 1),
)
def __call__(self, x, y):
fv = self.fwd(x, y)
loss = F.mean_squared_error(fv, y)
return loss
def fwd(self, x, y):
h1 = F.sigmoid(self.l1(x))
h2 = F.sigmoid(self.l2(h1))
h3 = F.sigmoid(self.l3(h2))
h4 = F.sigmoid(self.l4(h3))
h5 = F.sigmoid(self.l5(h4))
h6 = self.l6(h5)
return h6
class OLS_DLmodel(object):
def __init__(self, data, vars, bs=200, n=1000):
self.vars = vars
eq = fml_build(vars)
y, X = dmatrices(eq, data=datas, return_type='dataframe')
self.y_in = y[:-n]
self.X_in = X[:-n]
self.y_ex = y[-n:]
self.X_ex = X[-n:]
self.logy_in = np.log(self.y_in)
self.logy_ex = np.log(self.y_ex)
self.bs = bs
def OLS(self):
X_in = self.X_in
X_in = X_in.drop(['X', 'Y'], axis=1)
model = sm.OLS(self.logy_in, X_in, intercept=False)
self.reg = model.fit()
print(self.reg.summary())
df = (pd.DataFrame(self.reg.params)).T
df['X'] = 0
df['Y'] = 0
self.reg.params = pd.Series((df.T)[0])
def directDL(self, ite=100, bs=200, add=False):
logy_in = np.array(self.logy_in, dtype='float32')
X_in = np.array(self.X_in, dtype='float32')
y = Variable(logy_in)
x = Variable(X_in)
num, col_num = X_in.shape
if add is False:
self.model1 = CAR(15, 15, 5, col_num)
optimizer = optimizers.SGD()
optimizer.setup(self.model1)
for j in range(ite):
sffindx = np.random.permutation(num)
for i in range(0, num, bs):
x = Variable(X_in[sffindx[i:(i+bs) if (i+bs) < num else num]])
y = Variable(logy_in[sffindx[i:(i+bs) if (i+bs) < num else num]])
self.model1.zerograds()
loss = self.model1(x, y)
loss.backward()
optimizer.update()
if j % 1000 == 0:
loss_val = loss.data
print('epoch:', j)
print('train mean loss={}'.format(loss_val))
print(' - - - - - - - - - ')
y_ex = np.array(self.y_ex, dtype='float32').reshape(len(self.y_ex))
X_ex = np.array(self.X_ex, dtype='float32')
X_ex = Variable(X_ex)
logy_pred = self.model1.fwd(X_ex, X_ex).data
y_pred = np.exp(logy_pred)
error = y_ex - y_pred.reshape(len(y_pred),)
plt.hist(error[:])
def DL(self, ite=100, bs=200, add=False):
y_in = np.array(self.y_in, dtype='float32').reshape(len(self.y_in))
resid = y_in - np.exp(self.reg.predict())
resid = np.array(resid, dtype='float32').reshape(len(resid),1)
X_in = np.array(self.X_in, dtype='float32')
y = Variable(resid)
x = Variable(X_in)
num, col_num = X_in.shape
if add is False:
self.model1 = CAR(10, 10, 3, col_num)
optimizer = optimizers.Adam()
optimizer.setup(self.model1)
for j in range(ite):
sffindx = np.random.permutation(num)
for i in range(0, num, bs):
x = Variable(X_in[sffindx[i:(i+bs) if (i+bs) < num else num]])
y = Variable(resid[sffindx[i:(i+bs) if (i+bs) < num else num]])
self.model1.zerograds()
loss = self.model1(x, y)
loss.backward()
optimizer.update()
if j % 1000 == 0:
loss_val = loss.data
print('epoch:', j)
print('train mean loss={}'.format(loss_val))
print(' - - - - - - - - - ')
def predict(self):
y_ex = np.array(self.y_ex, dtype='float32').reshape(len(self.y_ex))
X_ex = np.array(self.X_ex, dtype='float32')
X_ex = Variable(X_ex)
resid_pred = self.model1.fwd(X_ex, X_ex).data
print(resid_pred[:10])
self.logy_pred = np.matrix(self.X_ex)*np.matrix(self.reg.params).T
self.error1 = np.array(y_ex - np.exp(self.logy_pred.reshape(len(self.logy_pred),)))[0]
self.pred = np.exp(self.logy_pred) + resid_pred
self.error2 = np.array(y_ex - self.pred.reshape(len(self.pred),))[0]
def compare(self):
plt.hist(self.error1)
plt.hist(self.error2)
vars = ['P', 'S', 'L', 'R', 'RW', 'A', 'TS', 'TT', 'WOOD', 'SOUTH', 'CMD', 'IDD', 'FAR', 'X', 'Y']
#vars += vars + list(TDQ.columns)
model = OLS_DLmodel(datas, vars)
model.OLS()
model.DL(ite=10, bs=200)
model.predict()
model.DL(ite=20000, bs=200, add=True)
model.DL(ite=10000, bs=200, add=True)
model.predict()
Explanation: Main Analysis
OLS part
End of explanation
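# --- Added sketch (not in the original notebook): the same two-stage idea expressed with
# --- scikit-learn -- OLS on log price, a second model on the residuals, combined prediction.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

def fit_two_stage(X_train, y_train):
    # Stage 1: linear fit on log price; Stage 2: a network fit on the level-space residuals.
    ols = LinearRegression().fit(X_train, np.log(y_train))
    resid = y_train - np.exp(ols.predict(X_train))
    nn = MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=2000).fit(X_train, resid)
    return ols, nn

def predict_two_stage(ols, nn, X_new):
    return np.exp(ols.predict(X_new)) + nn.predict(X_new)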
model.compare()
print(np.mean(model.error1))
print(np.mean(model.error2))
print(np.mean(np.abs(model.error1)))
print(np.mean(np.abs(model.error2)))
print(max(np.abs(model.error1)))
print(max(np.abs(model.error2)))
print(np.var(model.error1))
print(np.var(model.error2))
fig = plt.figure()
ax = fig.add_subplot(111)
errors = [model.error1, model.error2]
bp = ax.boxplot(errors)
plt.grid()
plt.ylim([-5000,5000])
plt.title('分布の箱ひげ図')
plt.show()
X = model.X_ex['X'].values
Y = model.X_ex['Y'].values
e = model.error2
import numpy
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig=plt.figure()
ax=Axes3D(fig)
ax.scatter3D(X, Y, e)
plt.show()
t
plt.hist(Xs)
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import numpy as np
Xs = np.linspace(min(X),max(X),10)
Ys = np.linspace(min(Y),max(Y),10)
error = model.error1
Xgrid, Ygrid = np.meshgrid(Xs, Ys)
Z = LL(X, Y, Xs, Ys, error)
fig = plt.figure()
ax = Axes3D(fig)
ax.plot_wireframe(Xgrid,Ygrid,Z) #<---ここでplot
plt.show()
fig = plt.figure()
ax = Axes3D(fig)
ax.set_zlim(-100, 500)
ax.plot_surface(Xgrid,Ygrid,Z) #<---ここでplot
plt.show()
h = 10
(0.9375*(1-((X-1)/h)**2)**2)*(0.9375*(1-((Y-2)/h)**2)**2)
def LL(X, Y, Xs, Ys, error):
n = len(X)
h = 0.1
error = model.error2
mean_of_error = np.zeros((len(Xs), len(Ys)))
for i in range(len(Xs)):
for j in range(len(Ys)):
u1 = ((X-Xs[i])/h)**2
u2 = ((Y-Ys[j])/h)**2
k = (0.9375*(1-((X-Xs[i])/h)**2)**2)*(0.9375*(1-((Y-Ys[j])/h)**2)**2)
K = np.diag(k)
indep = np.matrix(np.array([np.ones(n), X - Xs[i], Y-Ys[j]]).T)
dep = np.matrix(np.array([error]).T)
gls_model = sm.GLS(dep, indep, sigma=K)
gls_results = gls_model.fit()
mean_of_error[i, j] = gls_results.params[0]
return mean_of_error
h = 200
u1 = ((X-30)/h)**2
u1
u1[u1 < 0] = 0
for x in range(lXs[:2]):
print(x)
mean_of_error
plt.plot(gaussian_kde(Y, 0.1)(Ys))
N = 5
means = np.random.randn(N,2) * 10 + np.array([100, 200])
stdev = np.random.randn(N,2) * 10 + 30
count = np.int64(np.int64(np.random.randn(N,2) * 10000 + 50000))
a = [
np.hstack([
np.random.randn(count[i,j]) * stdev[i,j] + means[i,j]
for j in range(2)])
for i in range(N)]
for x in Xs:
for y in Ys:
def loclinearc(points,x,y,h):
n = len(points[,1])
const = matrix(1, nrow=length(x), ncol=1)
bhat = matrix(0, nrow=3, ncol=n)
b1 = matrix(0, n, n)
predict = matrix(0, n, 1)
for (j in 1:n) {
for (i in 1:n) {
a <- -.5*sign( abs( (points[i, 1]*const - x[,1])/h ) -1 ) + .5
#get the right data points, (K(x) ~=0)
b <- -.5*sign( abs( (points[j, 2]*const - x[,2])/h ) -1 ) + .5
x1andy <- nonzmat(cbind((x[,1]*a*b), (y*a*b)))
x2andy <- nonzmat(cbind((x[,2]*a*b), (y*a*b)))
ztheta1 <- x1andy[,1]
ztheta2 <- x2andy[,1]
yuse <- x1andy[,2]
q1 <- (ztheta1 - points[i,1]);
q2 <- (ztheta2 - points[j,2]);
nt1 <- ( (ztheta1- points[i,1])/h )
nt2 <- ( (ztheta2- points[j,2])/h )
#q2 = ((ztheta - points(i,1)).^2)/2;
weights <- diag(c((15/16)%*%( 1-(nt1^2))^2*((15/16)%*%( 1-(nt2^2))^2)))
#Biweight Kernel
tempp3 <- cbind(matrix(1, nrow=length(ztheta1), ncol=1), q1, q2)
bhat[,i] <- solve(t(tempp3)%*%weights%*%tempp3)%*%t(tempp3)%*%weights%*%yuse
}
b1[,j] <- t(bhat[1,])
}
return(b1)
}
nonzmat(x):
#This function computes nonzeros of a MATRIX when certain ROWS of the
#matrix are zero. This function returns a matrix with the
#zero rows deleted
m, k = x.shape
xtemp = matrix(np.zeros(m, k))
for (i in 1:m) {
xtemp[i,] <- ifelse(x[i,] == matrix(0, nrow=1, ncol=k), 99999*matrix(1, nrow=1, ncol=k), x[i,])
}
xtemp <- xtemp - 99999
if (length(which(xtemp !=0,arr.ind = T)) == 0) {
a <- matrix(-99999, nrow=1, ncol=k)
} else {
a <- xtemp[which(xtemp !=0,arr.ind = T)]
}
a <- a + 99999
n1 <- length(a)
rowlen <- n1/k
collen <- k
out = matrix(a, nrow=rowlen, ncol=collen)
return(out)
}
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.tri as mtri
#============
# First plot
#============
# Plot the surface. The triangles in parameter space determine which x, y, z
# points are connected by an edge.
ax = fig.add_subplot(1, 2, 1, projection='3d')
ax.plot_trisurf(X, Y, e)
ax.set_zlim(-1, 1)
plt.show()
Explanation: Blue shows the OLS errors; green shows the errors from combining OLS with deep learning.
End of explanation |
3,361 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pygraphistry Viz
Step1: ---------------------------
Step2: df.head()
Retrieve citations data
citations = pd.read_csv('citations.txt', names = ['source', 'target', 'label'])
Dedupe Citations
citations = citations.drop_duplicates(subset=['source', 'target'])
Clean Citations IDs
citations['target'] = citations['target'].str.strip('.')
citations['source'] = citations['source'].astype(str).str.strip('.')
Unique subjects
subjects = arxiv_metadata.primary_subject.unique()
subject_colors = dict(zip(subjects, range(0, len(subjects))))
arxiv_metadata['color'] = arxiv_metadata.primary_subject.map(lambda x
Step3: Visualization 1
Step4: Visualization 2
Step5: Plot
Bind color to whether tainted | Python Code:
# Imports
import graphistry
import numpy as np
import pandas as pd
from py2neo import Graph, Path
graphistry.register(key='48a82a78fdd442482cec24fe06051c905e2a382d581852a4ba645927c736acbcfe7256e22873a5c97cff6b8bd37c836b')
Explanation: Pygraphistry Viz
End of explanation
# Static - Connect to the database
# graph = Graph('http://neo4j:[email protected]:7474')
# tx = graph.cypher.begin()
# for name in ["Alice", "Bob", "Carol"]:
# tx.append("CREATE (person:Person {name:{name}}) RETURN person", name=name)
# alice, bob, carol = [result.one for result in tx.commit()]
# friends = Path(alice, "KNOWS", bob, "KNOWS", carol)
# graph.create(friends)
# graph.data("MATCH (a:address) --> (b:incoming_payment) --> (c:transaction) RETURN LIMIT 25")
# rows = pandas.read_csv('transactions.csv')[:1000]
# graphistry.hypergraph(rows)['graph'].plot()
# Retrieve all the paper metadata
# btc_metadata = pd.read_sql_query('SELECT * FROM Papers', conn)
# df = pd.DataFrame(graph.data("MATCH (n:transaction) Return n LIMIT 25"))
Explanation: ---------------------------
End of explanation
transactions = pd.read_csv('transactions.csv')
transactions['Date'] = pd.to_datetime(transactions['Date'],unit='ms') #coerce date format
transactions[:3]
print('DataFrame headers: {}' .format(list(transactions.columns)))
transactions.columns[-1]
# 'taint' is weighted as 5
transactions['isTainted'].unique()
# for item in transactions[transactions['isTainted'] == 5].isTainted:
# item = 10
# for column in transactions.columns[-1]:
# transactions[transactions == 5] = 10
transactions.shape
transactions.info()
# transaction window
print(transactions['Date'].sort_values().head(1), '\n')
print(transactions['Date'].sort_values().tail(1))
Explanation: df.head()
Retrieve citations data
citations = pd.read_csv('citations.txt', names = ['source', 'target', 'label'])
Dedupe Citations
citations = citations.drop_duplicates(subset=['source', 'target'])
Clean Citations IDs
citations['target'] = citations['target'].str.strip('.')
citations['source'] = citations['source'].astype(str).str.strip('.')
Unique subjects
subjects = arxiv_metadata.primary_subject.unique()
subject_colors = dict(zip(subjects, range(0, len(subjects))))
arxiv_metadata['color'] = arxiv_metadata.primary_subject.map(lambda x: subject_colors[x])
citations.info()
metadata_merge = citations.merge(arxiv_metadata,
left_on='source',
right_on='id').merge(arxiv_metadata,
left_on='target',
right_on='id',
suffixes=('_from', '_to'))
metadata_merge.info()
citations = pd.read_csv('Projects/ArXiv/data/citations/citations.txt', names = ['source', 'target', 'label'])
# links = pd.read_csv('./lesmiserables.csv')
citations.head()
Set up the plotter
plotter = graphistry.bind(source="source", destination="target")
plotter.plot(citations)
citations["label"] = citations.value.map(lambda v: "#Meetings: %d" % v)
plotter = plotter.bind(edge_weight="label")
plotter.plot(citations)
Set up igraph for easy metadata etc
ig = plotter.pandas2igraph(citations)
ig = plotter.pandas2igraph(metadata_merge)
Add the Arxiv Metadata
vertex_metadata = pd.DataFrame(ig.vs['nodeid'], columns=['id']).merge(arxiv_metadata, how='left', on='id')
ig.vs['primary_subject'] = vertex_metadata['primary_subject']
ig.vs['color'] = vertex_metadata['color']
ig.vs['title'] = vertex_metadata['title']
ig.vs['year'] = vertex_metadata['year']
ig.vs['month'] = vertex_metadata['month']
ig.vs['category'] = vertex_metadata['category']
ig.vs['pagerank'] = ig.pagerank()
ig.vs['community'] = ig.community_infomap().membership
ig.vs['in_degree'] = ig.indegree()
plotter.bind(point_size='in_degree', point_color='color').plot(ig)
plotter.bind(point_color='community', point_size='pagerank').plot(ig)
Silk Road Bitcoin Embezzling Visualization
End of explanation
g = graphistry.edges(transactions).bind(source='Source', destination='Destination')
g.plot()
Explanation: Visualization 1: Quick Visualization & Analysis
Task: Spot the embezzling
1. Use the histogram tool to filter for only tainted transactions
2. Turn on the Setting "Prune Isolated Nodes" to hide wallets with no remaining transactions
3. Use the filters or excludes tool to only show transactions over 1000.
4. Verify that money flowed from Ross Ulbricht to Carl Force, and explore where else it flowed (a rough pandas equivalent of these filters is sketched after this description).
End of explanation
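# --- Added sketch (not in the original notebook): a rough pandas equivalent of the
# --- interactive steps above, assuming the column names used earlier in this notebook.
tainted_big = transactions[(transactions['isTainted'] > 0) & (transactions['Amount $'] > 1000)]
# "Prune isolated nodes": keep only wallets that still appear in the filtered edge list.
active_wallets = set(tainted_big['Source']) | set(tainted_big['Destination'])
print(len(tainted_big), 'tainted transactions over $1000 between', len(active_wallets), 'wallets')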
# Compute how much wallets received in new df 'wallet_in'
wallet_in = transactions\
.groupby('Destination')\
.agg({'isTainted': lambda x: 1 if x.sum() > 0 else 0, 'Amount $': np.sum})\
.reset_index().rename(columns={'Destination': 'wallet', 'isTainted': 'isTaintedWallet'})
# rename destination to wallet
# rename isTainted to isTaintedWallet
#not all wallets received money, tag these
wallet_in['Receivables'] = True
wallet_in[:3]
wallet_in['isTaintedWallet'].unique()
# Compute how much wallets sent in new df 'wallet_out'
wallet_out = transactions\
.groupby('Source')\
.agg({'isTainted': np.sum, 'Amount $': np.max})\
.reset_index().rename(columns={'Source': 'wallet', 'isTainted': 'isTaintedWallet'})
# rename source to wallet
# rename isTainted to isTaintedWallet
#not all wallets received money, tag these
wallet_out['Payables'] = True
wallet_out[:3]
wallet_out['isTaintedWallet'].unique()
# Join Data
wallets = pd.merge(wallet_in, wallet_out, how='outer')
wallets['Receivables'] = wallets['Receivables'].fillna(False)
wallets['Payables'] = wallets['Payables'].fillna(False)
print('# Wallets only sent or only received', len(wallet_in) + len(wallet_out) - len(wallets))
wallets[:3]
tmp = wallets
# colors at: http://staging.graphistry.com/docs/legacy/api/0.9.2/palette.html#Paired
def convert_to_colors(value):
if value == 0:
return 36005 # magenta
else:
return 42005 # orange
tmp['isTaintedWallet'] = tmp['isTaintedWallet'].apply(convert_to_colors)
tmp['isTaintedWallet'].unique()
Explanation: Visualization 2: Summarizing Wallets
End of explanation
g.nodes(tmp).bind(node='wallet', point_color='isTaintedWallet').plot()
Explanation: Plot
Bind color to whether tainted
End of explanation |
3,362 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-vhr4', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-VHR4
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD) — i.e. there is no explicit ITD, but a distribution is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
3,363 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stock market analysis project
Step1: Plotting the open price
Step2: Plotting the volume traded
Step3: Finding the timestamp of highest traded volume
Step4: Creating 'Total Traded' value
Step5: Plotting 'Total Traded'
Step6: Finding the timestamp of highest total traded value
Step7: Plotting moving average (rolling mean)
Step8: Plotting scatter matrix
Step9: Plotting candlestick
Step10: Daily Percentage Change
First we will begin by calculating the daily percentage change. Daily percentage change is defined by the following formula
Step11: Plotting histograms
Step12: Conclusion
Step13: Conclusion
Step14: Conclusion
Step15: Conclusion
Step16: Plotting CDR | Python Code:
import numpy as np
import pandas as pd
from datetime import datetime
import matplotlib.pyplot as plt
%matplotlib inline
tesla = pd.read_csv('Tesla_Stock.csv', parse_dates= True, index_col='Date')
tesla.head()
ford = pd.read_csv('Ford_Stock.csv', parse_dates= True, index_col='Date')
ford.head()
gm = pd.read_csv('GM_Stock.csv', parse_dates= True, index_col='Date')
gm.head()
Explanation: Stock market analysis project
End of explanation
fig = plt.figure(figsize=(16,8))
tesla['Open'].plot(label = 'Tesla')
gm['Open'].plot(label = 'GM')
ford['Open'].plot(label = 'Ford')
plt.title('Open Price')
plt.legend()
Explanation: Plotting the open price
End of explanation
fig = plt.figure(figsize=(16,8))
tesla['Volume'].plot(label = 'Tesla')
gm['Volume'].plot(label = 'gm')
ford['Volume'].plot(label = 'ford')
plt.title('Volume Traded')
plt.legend()
Explanation: Plotting the volume traded
End of explanation
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.name.html
ford.loc[ford['Volume'].idxmax()].name
ford['Volume'].argmax()
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.name.html
ford.loc[ford['Volume'].idxmax()].name
Explanation: Finding the timestamp of highest traded volume
End of explanation
tesla['Total Traded'] = tesla['Open'] * tesla['Volume']
tesla.head()
ford['Total Traded'] = ford['Open'] * ford['Volume']
ford.head()
gm['Total Traded'] = gm['Open'] * gm['Volume']
gm.head()
Explanation: Creating 'Total Traded' value
End of explanation
fig = plt.figure(figsize=(16,8))
tesla['Total Traded'].plot(label = 'Tesla')
gm['Total Traded'].plot(label = 'GM')
ford['Total Traded'].plot(label = 'Ford')
plt.legend()
Explanation: Plotting 'Total Traded'
End of explanation
tesla.loc[tesla['Total Traded'].idxmax()].name
tesla['Total Traded'].argmax()
Explanation: Finding the timestamp of highest total traded value
End of explanation
gm['MA50'] = gm['Open'].rolling(window=50).mean()
gm['MA200'] = gm['Open'].rolling(window=200).mean()
gm[['Open','MA50', 'MA200']].plot(figsize=(16,8))
Explanation: Plotting moving average (rolling mean)
End of explanation
from pandas.plotting import scatter_matrix
# https://stackoverflow.com/questions/30986989/reindex-a-dataframe-with-duplicate-index-values
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.rename.html
# Either use rename or use below
df = pd.concat([tesla['Open'], gm['Open'], ford['Open']], axis = 1)
df.columns = ['Tesla Open', 'GM Open', 'Ford Open']
df = pd.DataFrame(pd.concat([tesla['Open'].rename('Tesla Open'), gm['Open'].rename('GM Open'), ford['Open'].rename('Ford Open')], axis = 1))
df.head()
# https://stackoverflow.com/questions/43801637/pandas-legend-for-scatter-matrix
# hist_kwds = historgram keywords
scatter_matrix(df, alpha=0.2, figsize=(8, 8), diagonal='hist', hist_kwds={'bins':50});
Explanation: Plotting scatter matrix
End of explanation
# https://matplotlib.org/examples/pylab_examples/finance_demo.html
from matplotlib.dates import DateFormatter, WeekdayLocator, DayLocator, MONDAY, date2num
# Note: matplotlib.finance was removed from recent matplotlib releases; if this
# import fails, the same candlestick_ohlc function is available from the
# separately installed mpl_finance package (from mpl_finance import candlestick_ohlc)
from matplotlib.finance import candlestick_ohlc
# Creating a ford dataframe suitable as per our needs
ford_reset = ford.loc['2012-01'].reset_index()
ford_reset
ford_reset.info()
ford_reset['date_ax'] = ford_reset['Date'].apply(date2num)
ford_reset
list_of_cols = ['date_ax', 'Open', 'High', 'Low', 'Close']
ford_values = [tuple(vals) for vals in ford_reset[list_of_cols].values]
ford_values
mondays = WeekdayLocator(MONDAY) # major ticks on the mondays
alldays = DayLocator() # minor ticks on the days
weekFormatter = DateFormatter('%b %d') # e.g., Jan 12
dayFormatter = DateFormatter('%d') # e.g., 12
fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.2)
ax.xaxis.set_major_locator(mondays)
ax.xaxis.set_minor_locator(alldays)
ax.xaxis.set_major_formatter(weekFormatter)
#plot_day_summary(ax, quotes, ticksize=3)
candlestick_ohlc(ax, ford_values, width=0.6, colorup = 'g', colordown='r');
Explanation: Plotting candlestick
End of explanation
# Using the shift method
tesla['returns'] = (tesla['Close'] / tesla['Close'].shift(1)) - 1
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.pct_change.html
tesla['returns'] = tesla['Close'].pct_change()
tesla.head()
ford['returns'] = ford['Close'].pct_change()
ford.head()
gm['returns'] = gm['Close'].pct_change()
gm.head()
Explanation: Daily Percentage Change
First we will begin by calculating the daily percentage change. Daily percentage change is defined by the following formula:
$ r_t = \frac{p_t}{p_{t-1}} -1$
This defines r_t (the return at time t) as the price at time t divided by the price at time t-1 (the previous day) minus 1. Basically this just tells you your percent gain (or loss) if you bought the stock on one day and then sold it the next day. While this isn't necessarily helpful for attempting to predict future values of the stock, it's very helpful in analyzing the volatility of the stock. If daily returns have a wide distribution, the stock is more volatile from one day to the next. Let's calculate the percent returns and then plot them with a histogram, and decide which stock is the most stable!
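As a quick illustrative check of the formula (a toy price series, not real market data; pandas is assumed imported as pd, as at the top of this notebook), pct_change() gives the same answer as the explicit shift-based calculation:
toy_prices = pd.Series([10.0, 15.0, 20.0, 25.0])
shift_returns = toy_prices / toy_prices.shift(1) - 1   # r_t = p_t / p_{t-1} - 1
pct_returns = toy_prices.pct_change()
print(shift_returns.equals(pct_returns))  # True: NaN, 0.5, 0.333..., 0.25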
End of explanation
ford['returns'].plot.hist(bins=100, grid=True)
gm['returns'].plot.hist(bins=100, grid=True)
tesla['returns'].plot.hist(bins=100, grid=True)
tesla['returns'].hist(bins=100, label='Tesla', figsize=(10,8), alpha=0.4)
gm['returns'].hist(bins=100, label='GM', figsize=(10,8), alpha=0.4)
ford['returns'].hist(bins=100, label='Ford', figsize=(10,8), alpha=0.4)
plt.legend();
Explanation: Plotting histograms
End of explanation
df = pd.concat([tesla['returns'], gm['returns'],ford['returns']], axis = 1)
df.columns = ['Tesla','GM','Ford']
df.plot.kde(figsize=(12,6))
Explanation: Conclusion: Wider distribution = More volatility (Tesla)
Plotting KDE (Kernel Density Estimation)
End of explanation
df.plot.box(figsize=(8,12))
Explanation: Conclusion: Higher peaks = More stable stock (Ford)
Plotting Box
End of explanation
scatter_matrix(df, alpha=0.2, figsize=(8, 8), diagonal='hist', hist_kwds={'bins':50});
df.plot(kind='scatter', x='Ford', y='GM', alpha=0.5, figsize=(11,8))
Explanation: Conclusion: Greater outliers = More volatility (Tesla)
Observing correlations between 'GM' and 'Ford'
End of explanation
# cumprod - cumulative product
tesla['Cumulative Return'] = (1 + tesla['returns']).cumprod()
tesla.head()
ford['Cumulative Return'] = (1 + ford['returns']).cumprod()
ford.head()
gm['Cumulative Return'] = (1 + gm['returns']).cumprod()
gm.head()
Explanation: Conclusion: Linear correlation observed
Cumulative Daily Returns
Explanation: Conclusion: Linear correlation observed
Cumulative Daily Returns
Great! Now we can see which stock had the widest-ranging daily returns (you should have realized it was Tesla; our original stock price plot should have also made that obvious).
With daily cumulative returns, the question we are trying to answer is the following: if I invested $1 in the company at the beginning of the time series, how much would it be worth today? This is different from just the stock price on the current day, because it takes into account the daily returns. Keep in mind, our simple calculation here won't account for stocks that pay a dividend. Let's look at some simple examples:
Let us say there is a stock 'ABC' that is being actively traded on an exchange. ABC has the following prices corresponding to the dates given:
| Date | Price |
|------------|-------|
| 01/01/2018 | 10 |
| 01/02/2018 | 15 |
| 01/03/2018 | 20 |
| 01/04/2018 | 25 |
Daily Return: the daily return is the profit/loss made by the stock compared to the previous day (this is what we just calculated above). A value above one indicates a profit; similarly, a value below one indicates a loss. It is also expressed as a percentage to convey the information better (when expressed as a percentage, a value above 0 means the stock has given you a profit, otherwise a loss). So for the above example the daily returns would be:
| Date | Daily Return | %Daily Return |
|------------|--------------|---------------|
| 01/01/2018 | 10/10 = 1 | - |
| 01/02/2018 | 15/10 = 3/2 | 50% |
| 01/03/2018 | 20/15 = 4/3 | 33% |
| 01/04/2018 | 25/20 = 5/4 | 20% |
Cumulative Return: while daily returns are useful, they don't give the investor immediate insight into the gains made to date, especially if the stock is very volatile. The cumulative return is computed relative to the day the investment is made. If the cumulative return is above one, you are making a profit; otherwise you are at a loss. So for the above example the cumulative gains are as follows:
| Date | Cumulative Return | %Cumulative Return |
|------------|-------------------|--------------------|
| 01/01/2018 | 10/10 = 1 | 100 % |
| 01/02/2018 | 15/10 = 3/2 | 150 % |
| 01/03/2018 | 20/10 = 2 | 200 % |
| 01/04/2018 | 25/10 = 5/2 | 250 % |
The formula for a cumulative daily return is:
$ i_t = (1+r_t) * i_{t-1} $
Here we can see we are just multiplying the previous investment value at time t-1 by (1 + the percent return at time t). Pandas makes this very simple to calculate with its cumprod() method, using something of the following form:
df[daily_cumulative_return] = ( 1 + df[pct_daily_return] ).cumprod()
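As a small illustrative check of the 'ABC' example above (toy numbers only, not real market data; pandas assumed imported as pd):
abc_close = pd.Series([10.0, 15.0, 20.0, 25.0])
abc_cumulative = (1 + abc_close.pct_change()).cumprod()
print(abc_cumulative.values)  # [nan 1.5 2. 2.5], i.e. 150%, 200%, 250% of the initial $1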
End of explanation
fig = plt.figure(figsize=(16,8))
tesla['Cumulative Return'].plot(label = 'Tesla')
gm['Cumulative Return'].plot(label = 'GM')
ford['Cumulative Return'].plot(label = 'Ford')
plt.title('Cumulative Return')
plt.legend()
Explanation: Plotting CDR
End of explanation |
3,364 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic demonstration of creating and using masks for bright sources
Author
Step1: You may have to set up your $CSCRATCH environment variable so that Python can find it, e.g.
Step2: These are some circular regions that could be masked, but let's check the capability also for elliptical regions
Step3: Note that the ellipticity components, here, which are defined at the bottom of the page at, e.g., http
Step4: Creating a file of targets and masking at the command line
Now let's compile a file of targets from the same set of sweeps. Again, I'll dump the targets file in your $CSCRATCH
directory. This took just under 4 minutes, for me
Step5: Masking targets
Our targets will already have DESI_TARGET bits set (indicating whether something is an LRG, ELG, QSO etc.) but we can update those bits to indicate which targets are in or are not in a mask. The IN_BRIGHT_OBJECT bit in DESI_TARGET indicates whether something is in a mask.
Step6: Let's plot which objects are in masks and which are not, against the backdrop of the mask (in a small region of the sky)
Step7: Note that the BADSKY locations are just outside the perimeter of the masks, and are quite obvious in the plot.
Masking points from a random catalog
The brightmask.set_target_bits() function wraps the main masking code in brightmask, called is_in_bright_mask(). The is_in_bright_mask() function can be used to compare coordinate locations to a mask and return which objects are in the mask (within the IN_RADIUS of the mask) and which objects are close to the mask (within the NEAR_RADIUS of the mask).
Let's create a random catalog over the small area of sky that we've been considering (154.8<sup>o</sup> < RA < 155.1<sup>o</sup>) and (19.7<sup>o</sup> < Dec < 20.0<sup>o</sup>)
Step8: Now let's mask that random catalog
Step9: and plot the random points that are and are not in the mask, both for the IN_RADIUS and the NEAR_RADIUS | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import os
import numpy as np
import fitsio
from desitarget import desi_mask, brightmask
Explanation: Basic demonstration of creating and using masks for bright sources
Author: Adam D. Myers, University of Wyoming
Getting Started
Everything should work fine using a DESI kernel loaded from NERSC's jupyter-dev server. See the instuctions under Getting Started at:
https://github.com/desihub/tutorials/blob/master/Intro_to_DESI_spectra.ipynb
If you're running locally on your own machine, you may also need to set up your own development kernel as described here:
https://desi.lbl.gov/trac/wiki/Computing/JupyterAtNERSC
under defining your own kernel.
Preliminaries
Now, retrieve a sample of the sweeps files, say, everything in DR5 that starts sweep-150. I'll assume you put this in your $CSCRATCH directory at NERSC, as follows:
mkdir $CSCRATCH/sweep150
cp /global/project/projectdirs/cosmo/data/legacysurvey/dr5/sweep/5.0/sweep-150* $CSCRATCH/sweep150
Creating a mask based on bright sources from a sweep-like file or files at the command line
To create a bright source mask by scraping bright sources (of all types) from the sweeps:
make_bright_mask $CSCRATCH/sweep150 $CSCRATCH/sourcemask150.fits
this mask will use default settings and create masks for everything that is brighter than 10th magnitude in each of the g, r and z bands. This took just under 1.5 minutes, for me, on edison, and returned the following informative message:
INFO:make_bright_mask:48:<module>: wrote a file of 1292 masks to $CSCRATCH/sourcemask150.fits
You can also customize the limiting magnitude of source to be masked, e.g.:
make_bright_mask $CSCRATCH/sweep150 --bands GZ --maglim 9,10 $CSCRATCH/blat.fits
will mask any source that is brighter than g < 9 or z < 10.
This took about 1.5 minutes for me on edison and returned the following message:
INFO:make_bright_mask:48:<module>: wrote a file of 1164 masks to $CSCRATCH/blat.fits
Plotting masks
Let's examine the mask and plot it. Note that NEAR_RADIUS is a warning that you're near a mask (which could be useful for large-scale structure analyses) whereas IN_RADIUS is the radius to which DESI will certainly not place any fibers (for fear of saturating adjacent sources).
End of explanation
os.environ["CSCRATCH"] = '/global/cscratch1/sd/adamyers'
sourcemask = fitsio.read("$CSCRATCH/sourcemask150.fits")
brightmask.plot_mask(sourcemask,limits=[151,150,1,2])
brightmask.plot_mask(sourcemask,limits=[151,150,1,2],radius="NEAR_RADIUS")
Explanation: You may have to set up your $CSCRATCH environment variable so that Python can find it, e.g.:
End of explanation
from desitarget.brightmask import _rexlike
from desitarget.cuts import _psflike
rex_or_psf = _rexlike(sourcemask["TYPE"]) | _psflike(sourcemask["TYPE"])
wcircle = np.where(rex_or_psf)
wellipse = np.where(~rex_or_psf)
sourcemask[wcircle][20:25]
sourcemask[wellipse][20:25]
Explanation: These are some circular regions that could be masked, but let's check the capability also for elliptical regions:
End of explanation
brightmask.plot_mask(sourcemask,limits=[155.1,154.8,19.7,20.0],radius="NEAR_RADIUS")
Explanation: Note that the ellipticity components, here, which are defined at the bottom of the page at, e.g., http://legacysurvey.org/dr5/catalogs/ are 0.0 for circular masks.
Let's plot a region that has a mix of circular and elliptical masks:
End of explanation
targs = fitsio.read("$CSCRATCH/targs150.fits")
print(len(targs))
print(len(np.where( (targs["DESI_TARGET"] & desi_mask.BADSKY) != 0 )[0]))
targs = brightmask.append_safe_targets(targs,sourcemask)
print(len(targs))
print(len(np.where( (targs["DESI_TARGET"] & desi_mask.BADSKY) != 0 )[0]))
w = np.where( (targs["DESI_TARGET"] & desi_mask.BADSKY) != 0 )
badskies= targs[w]
brightmask.plot_mask(sourcemask,show=False)
plt.axis([155.1,154.8,19.7,20.0])
plt.plot(badskies["RA"],badskies["DEC"],'k,')
plt.xlabel('RA (o)')
plt.ylabel('Dec (o)')
plt.show()
Explanation: Creating a file of targets and masking at the command line
Now let's compile a file of targets from the same set of sweeps. Again, I'll dump the targets file in your $CSCRATCH
directory. This took just under 4 minutes, for me:
select_targets $CSCRATCH/sweep150 $CSCRATCH/targs150.fits
and informed me that:
INFO:select_targets:73:<module>: 1741104 targets written to /global/cscratch1/sd/adamyers/targs150.fits
Note that at NERSC, the whole masking procedure can be applied to the targets in one line of code via, e.g.:
select_targets $CSCRATCH/sweep150 $CSCRATCH/targsblat150.fits --mask $CSCRATCH/sourcemask150.fits
Generating BADSKY ("SAFE") locations
First, let's generate BADSKY ("SAFE") locations around the periphery of each mask and plot them against the backdrop of the mask. Note that drstring, if set, is used to assign the Data Release (DR) bit to BADSKY targets and to assign BADSKY targets an OBJID that is higher than any target in a given brick of that DR. (That's the part that requires a DR survey-brick file, though).
End of explanation
desi_mask
dt = brightmask.set_target_bits(targs,sourcemask)
inmask = np.where( (dt & desi_mask.IN_BRIGHT_OBJECT) != 0)
masked = targs[inmask]
notinmask = np.where( (dt & desi_mask.IN_BRIGHT_OBJECT) == 0)
unmasked = targs[notinmask]
Explanation: Masking targets
Our targets will already have DESI_TARGET bits set (indicating whether something is an LRG, ELG, QSO etc.) but we can update those bits to indicate which targets are in or are not in a mask. The IN_BRIGHT_OBJECT bit in DESI_TARGET indicates whether something is in a mask.
End of explanation
brightmask.plot_mask(sourcemask,show=False)
plt.axis([155.1,154.8,19.7,20.0])
plt.xlabel('RA (o)')
plt.ylabel('Dec (o)')
plt.plot(masked["RA"],masked["DEC"],'kx')
plt.plot(unmasked["RA"],unmasked["DEC"],'r.')
plt.show()
Explanation: Let's plot which objects are in masks and which are not, against the backdrop of the mask (in a small region of the sky):
End of explanation
from numpy.random import random
Nran = 100000
rancat = np.zeros(Nran, dtype=[('RA', '>f8'), ('DEC', '>f8')])
rancat["RA"] = 154.8+0.3*(random(Nran))
rancat["DEC"] = np.degrees(np.arcsin(np.sin(np.radians(20))-random(Nran)*0.05))
Explanation: Note that the BADSKY locations are just outside the perimeter of the masks, and are quite obvious in the plot.
Masking points from a random catalog
The brightmask.set_target_bits() function wraps the main masking code in brightmask, called is_in_bright_mask(). The is_in_bright_mask() function can be used to compare coordinate locations to a mask and return which objects are in the mask (within the IN_RADIUS of the mask) and which objects are close to the mask (within the NEAR_RADIUS of the mask).
Let's create a random catalog over the small area of sky that we've been considering (154.8<sup>o</sup> < RA < 155.1<sup>o</sup>) and (19.7<sup>o</sup> < Dec < 20.0<sup>o</sup>):
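As a quick illustrative check (using only the rancat array defined above), we can print the coordinate range the random catalog actually covers:
print(rancat["RA"].min(), rancat["RA"].max())
print(rancat["DEC"].min(), rancat["DEC"].max())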
End of explanation
inmask, nearmask = brightmask.is_in_bright_mask(rancat,sourcemask)
masked = rancat[np.where(inmask)]
notmasked = rancat[np.where(~inmask)]
near = rancat[np.where(nearmask)]
notnear = rancat[np.where(~nearmask)]
Explanation: Now let's mask that random catalog:
End of explanation
brightmask.plot_mask(sourcemask,show=False)
plt.axis([155.1,154.8,19.7,20.0])
plt.xlabel('RA (o)')
plt.ylabel('Dec (o)')
plt.plot(masked["RA"],masked["DEC"],'r.')
plt.plot(notmasked["RA"],notmasked["DEC"],'g,')
plt.show()
brightmask.plot_mask(sourcemask,show=False,radius="NEAR_RADIUS")
plt.axis([155.1,154.8,19.7,20.0])
plt.xlabel('RA (o)')
plt.ylabel('Dec (o)')
plt.plot(near["RA"],near["DEC"],'r.')
plt.plot(notnear["RA"],notnear["DEC"],'g,')
plt.show()
Explanation: and plot the random points that are and are not in the mask, both for the IN_RADIUS and the NEAR_RADIUS:
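As a final rough sanity check (a small sketch using the arrays computed above; the exact numbers depend on the random draw), we can also print the fraction of random points flagged by each radius:
print('fraction within IN_RADIUS  :', len(masked) / float(len(rancat)))
print('fraction within NEAR_RADIUS:', len(near) / float(len(rancat)))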
End of explanation |
3,365 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'iitm-esm', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: CCCR-IITM
Source ID: IITM-ESM
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:48
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
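INTEGER properties are set with an unquoted number, as the cell's own comment (DOC.set_value(value)) indicates; the level count below is a hypothetical placeholder:
# ILLUSTRATIVE ONLY - hypothetical completed cell for an INTEGER property
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
DOC.set_value(40)  # hypothetical number of levels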
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
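BOOLEAN properties take the Python literals True or False, matching the Valid Choices listed in the cell; the value below is again a placeholder, not a claim about this model's top:
# ILLUSTRATIVE ONLY - hypothetical completed cell for a BOOLEAN property
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
DOC.set_value(True)  # hypothetical; use False if the model top is below the stratopause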
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
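FLOAT properties follow the same unquoted-number pattern; the frequency below is a hypothetical placeholder in Hz (roughly a 94 GHz cloud radar), not this model's simulator setting:
# ILLUSTRATIVE ONLY - hypothetical completed cell for a FLOAT property
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
DOC.set_value(94.0e9)  # hypothetical frequency in Hz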
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
3,366 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment CIE 5703 - week 6
Import libraries
Step1: Rotterdam rain gauge dataset 10 min data from 2003 - 2013
Read in data
Step2: Convert the dates to a readable format...
Step3: Plot all data
Step4: Resample the 10 min dataset to hourly accumulated data
Step5: Resample the 10 min dataset to 24-hour accumulated data
Step6: Select summer and winter months as separate datasets
Step7: Resample 10 min rain data to monthly accumulated data
Step8: Answering the assignments
1. General statistics for 24-hour and 10-min datasets
Step9: 24 h dataset
Step10: Ignoring zeros
Step11: 2. a. Analysis of seasonal cycles
Step12: Or per year
Step13: 2. b. Analysis of diurnal cycles
Step14: Neglecting events <1mm/h
Step15: Neglecting events < 3mm/h
Step16: 2. c. Variation of diurnal cycles with seasons
Step17: Neglecting events <1mm/h
Step18: Neglecting events <3mm/h
Step19: Repeat with winter
Step20: Neglecting events <1mm/h
Step21: Neglecting events <3mm/h
Step22: 2. d. Diurnal cycles of intense storm events
Step23: Amount of events
Step24: Events in summer
Step25: Amount of events
Step26: 3. Fit GEV-distribution for POT values in the time series
3. a. Create plots
Step27: 3. c. Compute rainfall amounts associated with return periods of 1 year, 10 years and 100 years
Step28: Lower, since we're missing out on the heavy rainfall events in 2015 and 2016 (we only have the roughly 10-year dataset)
Update 10.10.2017
Block maxima & GEV
Step29: GEV and block maxima of monthly maxima of 1h data
Step30: GPD & POT
Step31: GPD and POT of data>10mm/h
Step32: Boxplot of POT values | Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
%matplotlib inline
plt.style.use('ggplot')
Explanation: Assignment CIE 5703 - week 6
Import libraries
End of explanation
data = pd.read_csv('rotterdam_rg_2003-2014.csv', skipinitialspace=True)
Explanation: Rotterdam rain gauge dataset 10 min data from 2003 - 2013
Read in data
End of explanation
dates = data['Datum']
time = data['Tijd']
dates = dates.map(str)
date_split = dates.str.extract('(.{4})(.{2})(.{2})', expand=True)
time=time.apply(lambda x: '%04d' % x)
time_split = time.str.extract('(.{2})(.{2})', expand=True)
date_split.loc[:,3] = time_split.loc[:,0]
date_split.loc[:,4] = time_split.loc[:,1]
data.loc[:,'dt'] = pd.to_datetime(dict(year=date_split[0], month=date_split[1], day=date_split[2], hour=date_split[3], minute=date_split[4]))
data.index=data['dt']
data.head()
Explanation: Convert the dates to a readable format...
End of explanation
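As an aside, the same datetime index can be built more compactly. The sketch below is ours and assumes, as above, that Datum is an integer YYYYMMDD and Tijd an integer HHMM; it stores the result in a separate column so the original dt is untouched.
# Hedged alternative to the column-splitting above (same assumptions about Datum/Tijd)
stamps = data['Datum'].astype(str) + data['Tijd'].apply(lambda t: '%04d' % t)
data['dt_alt'] = pd.to_datetime(stamps, format='%Y%m%d%H%M')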
plt.plot(data['dt'], data["R10m"])
plt.ylabel('mm/10min')
plt.gcf().autofmt_xdate()
Explanation: Plot all data
End of explanation
data_1h = pd.DataFrame()
data_1h['mean_rain'] = data.R10m.resample('H').mean()
data_1h['accum_rain'] = data.R10m.resample('H').sum()
data_1h.tail()
plt.plot(data_1h["accum_rain"])
plt.ylabel('mm/1h')
plt.gcf().autofmt_xdate()
plt.plot(data_1h["mean_rain"])
plt.ylabel(r'mm/10min ($\varnothing$ of 1h)')
plt.gcf().autofmt_xdate()
Explanation: Resample the 10 min dataset to hourly accumulated data
End of explanation
data_24h = pd.DataFrame()
data_24h['mean_rain'] = data.R10m.resample('D').mean()
data_24h['accum_rain'] = data.R10m.resample('D').sum()
plt.plot(data_24h["accum_rain"])
plt.ylabel('mm/24h')
plt.gcf().autofmt_xdate()
plt.plot(data_24h["mean_rain"])
plt.ylabel(r'mm/10min ($\varnothing$ of 24h)')
plt.gcf().autofmt_xdate()
Explanation: Resample the 10 min dataset to 24-hour accumulated data
End of explanation
data_summer_1h = data_1h.loc[(data_1h.index.month>=4) & (data_1h.index.month<=9)]
mask_start = (data_1h.index.month >= 1) & (data_1h.index.month <= 3)
mask_end = (data_1h.index.month >= 10) & (data_1h.index.month <= 12)
mask = mask_start | mask_end
data_winter_1h = data_1h.loc[mask]
plt.plot(data_summer_1h["accum_rain"])
plt.ylabel('mm/h')
plt.gcf().autofmt_xdate()
plt.plot(data_winter_1h["accum_rain"])
plt.ylabel('mm/h')
plt.gcf().autofmt_xdate()
Explanation: Select summer and winter months as separate datasets
End of explanation
data_monthly = pd.DataFrame()
data_monthly['mean_rain'] = data.R10m.resample('M').mean()
data_monthly['accum_rain'] = data.R10m.resample('M').sum()
plt.plot(data_monthly["accum_rain"])
plt.ylabel('mm/month')
plt.gcf().autofmt_xdate()
plt.plot(data_monthly["mean_rain"])
plt.ylabel(r'mm/10min ($\varnothing$ per month)')
plt.gcf().autofmt_xdate()
Explanation: Resample 10 min rain data to monthly accumulated data
End of explanation
print('Mean: %s' % str(data.R10m.mean()))
print('Std: %s' % str(data.R10m.std()))
print('Skew: %s' % str(data.R10m.skew()))
data.R10m.hist(bins = 100)
plt.xlabel('mm/10min')
plt.gca().set_yscale("log")
cur_data = data.R10m.loc[data.R10m>0]
hist_d = plt.hist(cur_data, bins=100)
plt.xlabel('mm/10min')
plt.gca().set_yscale("log")
Explanation: Answering the assignments
1. General statistics for 24-hour and 10-min datasets: compute mean, standard deviation, skewness; plot histograms
10 min dataset
End of explanation
print('Mean: %s' % str(data_24h.accum_rain.mean()))
print('Std: %s' % str(data_24h.accum_rain.std()))
print('Skew: %s' % str(data_24h.accum_rain.skew()))
data_24h.accum_rain.hist(bins = 100)
plt.gca().set_yscale("log")
plt.xlabel('mm/24h')
Explanation: 24 h dataset
End of explanation
cur_data = data_24h.accum_rain.loc[data_24h.accum_rain>0]
hist_d = plt.hist(cur_data, bins=100)
plt.gca().set_yscale("log")
plt.xlabel('mm/24h')
data_24h.mean_rain.hist(bins = 100)
plt.xlabel(r'mm/10min ($\varnothing$ per 24h)')
plt.gca().set_yscale("log")
selected_monthly_data = data_monthly
#selected_monthly_data = data_monthly[(data_monthly.index >= '2004-01-01')]
selected_monthly_data.head()
Explanation: Ignoring zeros:
End of explanation
pd.options.mode.chained_assignment = None # default='warn'
selected_monthly_data['mon'] = selected_monthly_data.index.month
selected_monthly_data['year'] = selected_monthly_data.index.year
selected_monthly_data.boxplot(column=['accum_rain'], by='mon', sym='+')
plt.ylabel('mm/month')
Explanation: 2. a. Analysis of seasonal cycles: create boxplots for monthly totals across all year
End of explanation
selected_monthly_data.boxplot(column=['accum_rain'], by='year', sym='+')
plt.ylabel('mm/month')
plt.gcf().autofmt_xdate()
Explanation: Or per year:
End of explanation
data_1h['hour'] = data_1h.index.hour
data_1h.boxplot(column=['accum_rain'], by='hour', sym='+')
plt.ylabel('mm/h')
Explanation: 2. b. Analysis of diurnal cycles: create boxplots for hourly totals for entire dataseries
End of explanation
cur_df = data_1h.copy()
cur_df.loc[cur_df.accum_rain<1, 'accum_rain'] = np.nan
cur_df.boxplot(column=['accum_rain'], by='hour', sym='+')
plt.ylabel('mm/h')
Explanation: Neglecting events <1mm/h
End of explanation
cur_df = data_1h.copy()
cur_df.loc[cur_df.accum_rain<3, 'accum_rain'] = np.nan
cur_df.boxplot(column=['accum_rain'], by='hour', sym='+')
plt.ylabel('mm/h')
Explanation: Neglecting events < 3mm/h
End of explanation
data_summer_1h['hour'] = data_summer_1h.index.hour
data_summer_1h.boxplot(column=['accum_rain'], by='hour', sym='+')
plt.ylabel('mm/h')
Explanation: 2. c. Variation of diurnal cycles with seasons: create boxplots for hourly totals for summer season (April – September) and for winter season (October-March)
End of explanation
cur_df = data_summer_1h.copy()
cur_df.loc[cur_df.accum_rain<1, 'accum_rain'] = np.nan
cur_df.boxplot(column=['accum_rain'], by='hour', sym='+')
plt.ylabel('mm/h')
Explanation: Neglecting events <1mm/h
End of explanation
cur_df = data_summer_1h.copy()
cur_df.loc[cur_df.accum_rain<3, 'accum_rain'] = np.nan
cur_df.boxplot(column=['accum_rain'], by='hour', sym='+')
plt.ylabel('mm/h')
Explanation: Neglecting events <3mm/h
End of explanation
data_winter_1h['hour'] = data_winter_1h.index.hour
data_winter_1h.boxplot(column=['accum_rain'], by='hour', sym='+')
plt.ylabel('mm/h')
Explanation: Repeat with winter
End of explanation
cur_df = data_winter_1h.copy()
cur_df.loc[cur_df.accum_rain<1, 'accum_rain'] = np.nan
cur_df.boxplot(column=['accum_rain'], by='hour', sym='+')
plt.ylabel('mm/h')
Explanation: Neglecting events <1mm/h
End of explanation
cur_df = data_winter_1h.copy()
cur_df.loc[cur_df.accum_rain<3, 'accum_rain'] = np.nan
cur_df.boxplot(column=['accum_rain'], by='hour', sym='+')
plt.ylabel('mm/h')
Explanation: Neglecting events <3mm/h
End of explanation
rotterdam_1h_exceeds = data_1h.accum_rain[data_1h.accum_rain>5]
y = np.array(rotterdam_1h_exceeds)
N = len(y)
x = range(N)
plt.bar(x, y)
plt.ylabel('mm/h')
Explanation: 2. d. Diurnal cycles of intense storm events: Count nr of exceedances above 10 mm/h threshold for each hour of the day, for entire data series and for summer months only
Show rainfall events > 10mm /h over entire 1h accumulated dataset
End of explanation
print(len(rotterdam_1h_exceeds))
Explanation: Amount of events
End of explanation
rotterdam_1h_exceeds_summer = data_summer_1h.accum_rain[data_summer_1h.accum_rain>10]
y = np.array(rotterdam_1h_exceeds_summer)
N = len(y)
x = range(N)
plt.bar(x, y)
plt.ylabel('mm/h')
Explanation: Events in summer
End of explanation
print(len(rotterdam_1h_exceeds_summer))
Explanation: Amount of events
End of explanation
exceed_hist = plt.hist(rotterdam_1h_exceeds, bins=100)
from scipy.stats import genextreme
x = np.linspace(5, 30, 1000)
y = np.array(rotterdam_1h_exceeds[:])
np.seterr(divide='ignore', invalid='ignore')
genextreme.fit(y)
np.seterr(divide='ignore', invalid='ignore')
pdf = plt.plot(x, genextreme.pdf(x, *genextreme.fit(y)))
pdf_hist = plt.hist(y, bins=50, normed=True, histtype='stepfilled', alpha=0.8)
Explanation: 3. Fit GEV-distribution for POT values in the time series
3. a. Create plots: histogram and GEV fit and interpret
End of explanation
genextreme.ppf((1-1/1), *genextreme.fit(y))
genextreme.ppf((1-1/10), *genextreme.fit(y))
genextreme.ppf((1-1/100), *genextreme.fit(y))
Explanation: 3. c. Compute rainfall amounts associated with return periods of 1 year, 10 years and 100 years
End of explanation
from scipy.stats import genpareto
temp_monthly = data_1h.groupby(pd.TimeGrouper(freq='M'))
block_max_y = np.array(temp_monthly.accum_rain.max())
print(block_max_y)
print(len(block_max_y))
x = np.linspace(0, 30, 1000)
pdf_bm = plt.plot(x, genextreme.pdf(x, *genextreme.fit(block_max_y)))
pdf_hist_bm = plt.hist(block_max_y, bins=100, normed=True, histtype='stepfilled', alpha=0.8)
Explanation: Lower, since we're missing out on the heavy rainfall events in 2015 and 2016 (we only have the roughly 10-year dataset)
Update 10.10.2017
Block maxima & GEV
End of explanation
genextreme.fit(block_max_y)
genextreme.ppf((1-1/10), *genextreme.fit(block_max_y))
Explanation: GEV and block maxima of monthly maxima of 1h data
End of explanation
pdf_bm = plt.plot(x, genpareto.pdf(x, *genpareto.fit(y)))
pdf_hist_bm = plt.hist(y, bins=100, normed=True, histtype='stepfilled', alpha=0.8)
Explanation: GPD & POT
End of explanation
genpareto.fit(y)
genpareto.ppf((1-1/10), *genpareto.fit(y))
Explanation: GPD and POT of data>10mm/h
End of explanation
event_occurences = pd.DataFrame(rotterdam_1h_exceeds)
event_occurences['hour'] = event_occurences.index.hour
event_occurences.boxplot(column=['accum_rain'], by='hour', sym='+')
event_occurences.hour.value_counts(sort=False)
cur_hist = plt.hist(event_occurences.hour, bins=24, histtype='stepfilled')
plt.xticks(range(24))
plt.xlabel('hour')
Explanation: Boxplot of POT values
End of explanation |
3,367 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tricks of the trade
Step1: Introducing randomized search
We have already built a random forest classifier, tuned using grid search, to predict spam emails (here). Grid search exhaustively searches through some manually prespecified HP values and reports the best option and is quite commonly used. Another way to search through hyperparameter space to find optimums is by using randomized search. In randomized search, we sample HP values a certain number of times from some distribution which we prespecify in advance. There is evidence that randomized search is more efficient than grid search, because not all HPs are as important to tune and grid search effectively wastes time by exhaustively checking each option when it might not be necessary. By contrast, the random experiments utilized by randomized search explore the important dimensions of hyperparameter space with more coverage, while simultaneously not devoting too many trials to dimensions which are not as important. So, randomized search is very useful for high-dimensional feature spaces.
To use randomized search to tune random forests, we first specify the distributions we want to sample from.
If we were to sample from a uniform distribution and have the same number of n_iter trials, randomized search would be practically equivalent to grid search.
Step2: We then run the random search
Step3: Either grid search or randomized search is probably fine for tuning random forests.
Fancier techniques for hyperparameter optimization include methods based on gradient descent, grad student descent, and Bayesian approaches which update prior beliefs about likely values of hyperparameters based on the data (see Spearmint and hyperopt).
Let's look at how to tune our two other predictors. For simplicity, let's revert back to grid search.
Tuning a support vector machine
Let's train our second algorithm, support vector machines (SVMs) to do the same exact prediction task. A great introduction to the theory behind SVMs can be read here. Briefly, SVMs search for hyperplanes in the feature space which best divide the different classes in your dataset. Crucially, SVMs can find non-linear decision boundaries between classes using a process called kernelling, which projects the data into a higher-dimensional space. This sounds a bit abstract, but if you've ever fit a linear regression to power-transformed variables (e.g. maybe you used x^2, x^3 as features), you're already familiar with the concept. Do have a read of the guide we linked above.
SVMs can use different types of kernels, like Gaussian or radial ones, to throw the data into a different space. Let's use the latter. The main hyperparameters we must tune for SVMs are gamma (a kernel parameter, controlling how far we 'throw' the data into the new feature space) and C (which controls the bias-variance tradeoff of the model).
Step4: How does this compare to an untuned SVM? What about an SVM with especially badly tuned hyperparams?
Tuning a logistic regression classifier
The last algorithm you'll tune and apply to predict spam emails is a logistic regression classifier. This is a type of regression model which is used for predicting binary outcomes (like spam/non-spam). We fit a straight line through our transformed data, where the x axes remain the same but the dependent variable is the log odds of data points being one of the two classes. So essentialy, logistic regression is just a transformed version of linear regression. Check out Charles' explanation and implementation of logistic regression [here].
One topic you will often encounter in machine learning is regularization, which is a class of techniques to reduce overfitting. The idea is that we often don't just want to maximize model fit, but also penalize the model for e.g. using too many parameters, or assigning coefficients or weights that are too big. Read more about regularized regression here. We can adjust just how much regularization we want by adjusting regularization hyperparameters, and scikit-learn comes with some models that can very efficiently fit data for a range of regularization hyperparameter values. This is the case for regularized linear regression models like Lasso regression and ridge regression. These classes are shortcuts to doing cross-validated selection of models with different levels of regularization.
But we can also optimize how much regularization we want ourselves, as well as tuning the values of other hyperparameters, in the same manner as we've been doing. | Python Code:
import wget
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
# Import the dataset
data_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/spam/spam_dataset.csv'
dataset = wget.download(data_url)
dataset = pd.read_csv(dataset, sep=",")
# Convert dataframe to numpy array and split
# data into input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-1].astype(float)
y = npArray[:,-1]
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
Explanation: Tricks of the trade: expanding your machine learning toolkit
Introduction
Previously, we learned about the importance of tuning our machine learning algorithms in order to improve prediction accuracy. We demonstrated tuning a random forest classifier using grid search, and how cross-validation can help avoid overfitting when tuning hyperparameters (HPs).
Here, we introduce a different strategy for traversing hyperparameter space - randomized search. We also demonstrate the process of tuning and training two other algorithms - a support vector machine and a logistic regression classifier.
We'll keep working with the spam dataset, which contains features relating to the frequency of words ("money" and "viagra") and symbols (like "!!!") in spam and non-spam emails. Our goal is to tune and apply different algorithms to these features in order to predict whether a given email is spam.
Here are the things we'll cover in this blog post: introducing randomized search, tuning a support vector machine, and tuning a logistic regression classifier.
In the next blog post, you will learn how to take different tuned machine learning algorithms and combine them to build different types of ensemble models, which are aggregated models which frequently have higher accuracy and lower overfitting.
Loading and train/test splitting the dataset
We start off by collecting the dataset. We have covered the data loading, conversion and train/test split previously, so we won't repeat the explanations here.
End of explanation
from scipy.stats import uniform
from scipy.stats import norm
from sklearn.grid_search import RandomizedSearchCV
from sklearn import metrics
# Designate distributions to sample hyperparameters from
n_estimators = np.random.uniform(25, 45, 5).astype(int)
max_features = np.random.normal(20, 10, 5).astype(int)
hyperparameters = {'n_estimators': list(n_estimators),
'max_features': list(max_features)}
print hyperparameters
Explanation: Introducing randomized search
We have already built a random forest classifier, tuned using grid search, to predict spam emails (here). Grid search exhaustively searches through some manually prespecified HP values and reports the best option and is quite commonly used. Another way to search through hyperparameter space to find optimums is by using randomized search. In randomized search, we sample HP values a certain number of times from some distribution which we prespecify in advance. There is evidence that randomized search is more efficient than grid search, because not all HPs are as important to tune and grid search effectively wastes time by exhaustively checking each option when it might not be necessary. By contrast, the random experiments utilized by randomized search explore the important dimensions of hyperparameter space with more coverage, while simultaneously not devoting too many trials to dimensions which are not as important. So, randomized search is very useful for high-dimensional feature spaces.
To use randomized search to tune random forests, we first specify the distributions we want to sample from.
If we were to sample from a uniform distribution and have the same number of n_iter trials, randomized search would be practically equivalent to grid search.
End of explanation
from sklearn.ensemble import RandomForestClassifier
# Run randomized search
randomCV = RandomizedSearchCV(RandomForestClassifier(), param_distributions=hyperparameters, n_iter=10)
randomCV.fit(XTrain, yTrain)
# Identify optimal hyperparameter values
best_n_estim = randomCV.best_params_['n_estimators']
best_max_features = randomCV.best_params_['max_features']
print("The best performing n_estimators value is: {:5.1f}".format(best_n_estim))
print("The best performing max_features value is: {:5.1f}".format(best_max_features))
# Train classifier using optimal hyperparameter values
# We could have also gotten this model out from randomCV.best_estimator_
clfRDF = RandomForestClassifier(n_estimators=best_n_estim,
max_features=best_max_features)
clfRDF.fit(XTrain, yTrain)
RF_predictions = clfRDF.predict(XTest)
print (metrics.classification_report(yTest, RF_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions),2))
Explanation: We then run the random search:
End of explanation
from sklearn.svm import SVC
# Search for good hyperparameter values
# Specify values to grid search over
g_range = 2. ** np.arange(-15, 5, step=5)
C_range = 2. ** np.arange(-5, 15, step=5)
hyperparameters = [{'gamma': g_range,
'C': C_range}]
print hyperparameters
from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC
# Grid search using cross-validation
grid = GridSearchCV(SVC(), param_grid=hyperparameters, cv=10)
grid.fit(XTrain, yTrain)
bestG = grid.best_params_['gamma']
bestC = grid.best_params_['C']
# Train SVM and output predictions
rbfSVM = SVC(kernel='rbf', C=bestC, gamma=bestG)
rbfSVM.fit(XTrain, yTrain)
SVM_predictions = rbfSVM.predict(XTest)
print metrics.classification_report(yTest, SVM_predictions)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, SVM_predictions),2)
Explanation: Either grid search or randomized search is probably fine for tuning random forests.
Fancier techniques for hyperparameter optimization include methods based on gradient descent, grad student descent, and Bayesian approaches which update prior beliefs about likely values of hyperparameters based on the data (see Spearmint and hyperopt).
Let's look at how to tune our two other predictors. For simplicity, let's revert back to grid search.
Tuning a support vector machine
Let's train our second algorithm, support vector machines (SVMs) to do the same exact prediction task. A great introduction to the theory behind SVMs can be read here. Briefly, SVMs search for hyperplanes in the feature space which best divide the different classes in your dataset. Crucially, SVMs can find non-linear decision boundaries between classes using a process called kernelling, which projects the data into a higher-dimensional space. This sounds a bit abstract, but if you've ever fit a linear regression to power-transformed variables (e.g. maybe you used x^2, x^3 as features), you're already familiar with the concept. Do have a read of the guide we linked above.
SVMs can use different types of kernels, like Gaussian or radial ones, to throw the data into a different space. Let's use the latter. The main hyperparameters we must tune for SVMs are gamma (a kernel parameter, controlling how far we 'throw' the data into the new feature space) and C (which controls the bias-variance tradeoff of the model).
End of explanation
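For completeness, here is a minimal, hedged sketch of the Bayesian route mentioned a few paragraphs above, written against hyperopt's fmin/tpe interface. The search bounds, number of evaluations and helper names below are our own illustrative assumptions, not values from the original post.
from hyperopt import fmin, tpe, hp
from sklearn.cross_validation import cross_val_score
def rf_objective(params):
    # hyperopt minimises, so return 1 - mean cross-validated accuracy
    model = RandomForestClassifier(n_estimators=int(params['n_estimators']),
                                   max_features=int(params['max_features']))
    return 1.0 - cross_val_score(model, XTrain, yTrain, scoring='accuracy').mean()
rf_space = {'n_estimators': hp.quniform('n_estimators', 25, 45, 1),
            'max_features': hp.quniform('max_features', 5, 40, 1)}
best_rf_params = fmin(rf_objective, rf_space, algo=tpe.suggest, max_evals=25)
print(best_rf_params)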
from sklearn.linear_model import LogisticRegression
# Search for good hyperparameter values
# Specify values to grid search over
penalty = ["l1", "l2"]
C_range = np.arange(0.1, 1.1, 0.1)
hyperparameters = [{'penalty': penalty,
'C': C_range}]
# Grid search using cross-validation
grid = GridSearchCV(LogisticRegression(), param_grid=hyperparameters, cv=10)
grid.fit(XTrain, yTrain)
bestPenalty = grid.best_params_['penalty']
bestC = grid.best_params_['C']
print bestPenalty
print bestC
# Train model and output predictions
classifier_logistic = LogisticRegression(penalty=bestPenalty, C=bestC)
classifier_logistic_fit = classifier_logistic.fit(XTrain, yTrain)
logistic_predictions = classifier_logistic_fit.predict(XTest)
print metrics.classification_report(yTest, logistic_predictions)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, logistic_predictions),2)
Explanation: How does this compare to an untuned SVM? What about an SVM with especially badly tuned hyperparams?
Tuning a logistic regression classifier
The last algorithm you'll tune and apply to predict spam emails is a logistic regression classifier. This is a type of regression model which is used for predicting binary outcomes (like spam/non-spam). We fit a straight line through our transformed data, where the x axes remain the same but the dependent variable is the log odds of data points being one of the two classes. So essentialy, logistic regression is just a transformed version of linear regression. Check out Charles' explanation and implementation of logistic regression [here].
One topic you will often encounter in machine learning is regularization, which is a class of techniques to reduce overfitting. The idea is that we often don't just want to maximize model fit, but also penalize the model for e.g. using too many parameters, or assigning coefficients or weights that are too big. Read more about regularized regression here. We can adjust just how much regularization we want by adjusting regularization hyperparameters, and scikit-learn comes with some models that can very efficiently fit data for a range of regularization hyperparameter values. This is the case for regularized linear regression models like Lasso regression and ridge regression. These classes are shortcuts to doing cross-validated selection of models with different levels of regularization.
But we can also optimize how much regularization we want ourselves, as well as tuning the values of other hyperparameters, in the same manner as we've been doing.
End of explanation |
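As a brief illustration of the cross-validated "shortcut" classes mentioned above (this is our addition, not part of the original post), scikit-learn also ships LogisticRegressionCV, which chooses the regularization strength C internally by cross-validation; a minimal sketch:
from sklearn.linear_model import LogisticRegressionCV
# Cs=10 asks for an automatic grid of 10 candidate C values; cv=10 mirrors the folds used above
logistic_cv = LogisticRegressionCV(Cs=10, cv=10)
logistic_cv.fit(XTrain, yTrain)
print("LogisticRegressionCV accuracy:", round(metrics.accuracy_score(yTest, logistic_cv.predict(XTest)), 2))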
3,368 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
scikit-learn provides several cross-validation methods
1. By default, cross_val_score splits the data using the Stratified K-Fold method
Step1: Results for different values of n_neighbors
%matplotlib inline | Python Code:
from sklearn.cross_validation import cross_val_score
from sklearn.neighbors import KNeighborsClassifier  # import needed for the estimator below
estimator = KNeighborsClassifier()  # by default the 5 nearest neighbours are used
scores = cross_val_score(estimator, X, Y, scoring='accuracy')
average_accuracy = np.mean(scores) * 100
print("The average accuracy is {0:.1f}%".format(average_accuracy))
Explanation: scikit-learn provides several cross-validation methods
1. By default, cross_val_score splits the data using the Stratified K-Fold method
End of explanation
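A small addition of ours, not in the original notebook: cross_val_score also takes a cv argument, so the splitting can be controlled explicitly, for example asking for ten stratified folds instead of the default three:
scores_10fold = cross_val_score(estimator, X, Y, scoring='accuracy', cv=10)
print("10-fold average accuracy is {0:.1f}%".format(np.mean(scores_10fold) * 100))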
from matplotlib import pyplot as plt
avg_scores = []
all_scores = []
parameter_values = list(range(1, 21)) # Include 20
for n_neighbors in parameter_values:
estimator = KNeighborsClassifier(n_neighbors=n_neighbors)
scores = cross_val_score(estimator, X, Y, scoring='accuracy')
avg_scores.append(np.mean(scores))
all_scores.append(scores)
plt.plot(parameter_values,avg_scores, '-o')
plt.show()
Explanation: Results for different values of n_neighbors
%matplotlib inline
End of explanation |
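One extra line of ours to read off the best-scoring value from the sweep above:
best_k = parameter_values[int(np.argmax(avg_scores))]
print("Best n_neighbors: {0} (average accuracy {1:.3f})".format(best_k, max(avg_scores)))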
3,369 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyTreeReader
Step1: This is to create a histogram and read a tree.
Step2: Traditional looping
Now we establish the baseline
Step3: Enters the PyTreeReader
This is how we use for the first time the PyTreeReader and how we benchmark it. The only difference within the loop body consists in invoking functions called as the branches rather than data members called as the branches themselves. Even for this trivial tree, the difference is impressive.
Actually, having little read, decompression and deserialisation is better
Step4: Caching in memory the entire tree
It is possible to instruct the PyTreeReader to cache in memory the content of the tree. We'll see two benchmarks
Step5: Loop on the cached values
Step6: Selecting only some branches
It can be inconvenient to construct methods to access all the branches, even more to cache all of their contents by default. This is the reason why PyTreeReader provides the pattern constructor argument. For example, in order to consider only branches whose names start with "p", one can do
Step7: Pattern is indeed the pattern to match the desired branches by name (fnmatch is used). Its default value is simple, "*";
Accessing a vector of cached values
Caching values in memory has a price but once this step has been accomplished, more flexibility is available. For example, all the values of a particular branch can be retrieved in bulk
Step8: Wondering about the type returned by py_array? A widely adopted C++ data structure contiguous in memory | Python Code:
import ROOT
from PyTreeReader import PyTreeReader
Explanation: PyTreeReader: Looping on TTrees in Python, fast.
<hr style="border-top-width: 4px; border-top-color: #34609b;">
The PyTreeReader class solves the problem of looping over TTrees in Python in a performant way. This is achieved by just-in-time compiling a C++ class tailored to the branches the user wants to read and interfacing it conveniently to Python.
Usability is key: the high usability of the old way of looping on trees in Python is preserved, implying as little overhead as possible for the user.
Preparation
We include ROOT and the PyTreeReader class. Clearly this will have to be provided more conveniently, e.g. as part of the pythonisations PyROOT already provides for TTree.
End of explanation
h = ROOT.TH1F("h","h",1024,-256,256)
fill = h.Fill
f = ROOT.TFile(ROOT.gROOT.GetTutorialsDir()+"/hsimple.root")
tree = f.ntuple
Explanation: This is to create a histogram and read a tree.
End of explanation
%%timeit -n 1 -r 1
for event in tree:
fill(event.px*event.py*event.pz*event.random)
Explanation: Traditional looping
Now we establish the baseline: a benchmark of the old way of looping.
End of explanation
%%timeit -n 1 -r 1
for event in PyTreeReader(tree):
fill(event.px()*event.py()*event.pz()*event.random())
Explanation: Enters the PyTreeReader
This is how we use for the first time the PyTreeReader and how we benchmark it. The only difference within the loop body consists in invoking functions called as the branches rather than data members called as the branches themselves. Even for this trivial tree, the difference is impressive.
Actually, the fact that there is so little reading, decompression and deserialisation makes the comparison even more telling: the difference in time is really due to the superior performance of the PyTreeReader.
End of explanation
%%timeit -n 1 -r 1
ptr = PyTreeReader(tree, cache=True)
ptr = PyTreeReader(tree, cache=True)
Explanation: Caching in memory the entire tree
It is possible to instruct the PyTreeReader to cache in memory the content of the tree. We'll see two benchmarks: the time needed to cache and the time needed to loop.
Caching benchmark
End of explanation
%%timeit -n 1 -r 1
for event in ptr:
fill(event.px()*event.py()*event.pz()*event.random())
Explanation: Loop on the cached values
End of explanation
%%timeit -n 1 -r 1
ptr = PyTreeReader(tree, cache=True, pattern="p*")
Explanation: Selecting only some branches
It can be inconvenient to construct methods to access all the branches, even more to cache all of their contents by default. This is the reason why PyTreeReader provides the pattern constructor argument. For example, in order to consider only branches whose names start with "p", one can do:
End of explanation
for py in ptr.py_array()[:10]: print py # Better stopping after a few of them :)
Explanation: Pattern is indeed the pattern to match the desired branches by name (fnmatch is used). Its default value is simple, "*";
Accessing a vector of cached values
Caching values in memory has a price but once this step has been accomplished, more flexibility is available. For example, all the values of a particular branch can be retrieved in bulk:
End of explanation
ptr.py_array()
Explanation: Wondering about the type returned by py_array? A widely adopted C++ data structure contiguous in memory:
End of explanation |
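To make the extra flexibility concrete, here is a small example of ours (not part of the original tutorial): the cached values can be copied into a numpy array for further analysis. An explicit copy via list() is used because the cheapest conversion available depends on the PyTreeReader/PyROOT version.
import numpy as np
py_values = np.array(list(ptr.py_array()))  # explicit copy of the cached 'py' branch values
print("mean %.3f  min %.3f  max %.3f" % (py_values.mean(), py_values.min(), py_values.max()))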
3,370 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iris Scatterplot
A simple example of using a bl.ock as the basis for a D3 visualization in Jupyter
Using this bl.ocks example as a template, we will construct a scatterplot of the canonical Iris dataset.
Notebook Config
Step1: Data
The bl.ocks example uses a tsv file with the iris dataset. If you click on the block number at the top of the bl.ocks post, it will take you to the github gist upon which this bl.ocks entry is based. From there you can navigate to the raw version of the tsv file, and read that into a Pandas dataframe, as below. (Mind you, there are also many many other ways of getting this canonical dataset.)
Step2: A trick of the D3 trade is to know that its file readers usually output the data in the form of an array of dictionaries. As such, we will reformat our tabular data that way in preparation for it to be used in the graph below.
Step3: CSS and JavaScript based on bl.ocks example
Note that in the below css_text, we have removed the 'body' style reference from the original bl.ocks text. This is to avoid this style changing the rest of the notebook.
Step4: The javascript below was copied directly from the bl.ocks script text, and then six lines were changed, as noted by // **** (the double forward slash is a comment in JavaScript, so these lines will not be executed). The first set of changes is to the width and height of the image. The second change is simply to reference a different DOM element as the starting point. The remaining changes are to replace the data-file reading step with a direct infusion of data into the script. (Note that the $ characters denote replacement points in the Template object.)
Step5: And finally, the viz | Python Code:
from IPython.core.display import display, HTML
from string import Template
import pandas as pd
import json, random
HTML('<script src="lib/d3/d3.min.js"></script>')
Explanation: Iris Scatterplot
A simple example of using a bl.ock as the basis for a D3 visualization in Jupyter
Using this bl.ocks example as a template, we will construct a scatterplot of the canonical Iris dataset.
Notebook Config
End of explanation
filename = 'https://gist.githubusercontent.com/mbostock/3887118/raw/2e68ffbeb23fe4dadd9b0f6bca62e9def6ee9e17/data.tsv'
iris = pd.read_csv(filename,sep="\t")
iris.head()
Explanation: Data
The bl.ocks example uses a tsv file with the iris dataset. If you click on the block number at the top of the bl.ocks post, it will take you to the github gist upon which this bl.ocks entry is based. From there you can navigate to the raw version of the tsv file, and read that into a Pandas dataframe, as below. (Mind you, there are also many many other ways of getting this canonical dataset.)
End of explanation
iris_array_of_dicts = iris.to_dict(orient='records')
iris_array_of_dicts[:5]
Explanation: A trick of the D3 trade is to know that its file readers usually output the data in the form of an array of dictionaries. As such, we will reformat our tabular data that way in preparation for it to be used in the graph below.
End of explanation
css_text = '''
.axis path,
.axis line {
fill: none;
stroke: #000;
shape-rendering: crispEdges;
}
.dot {
stroke: #000;
}
'''
Explanation: CSS and JavaScript based on bl.ocks example
Note that in the below css_text, we have removed the 'body' style reference from the original bl.ocks text. This is to avoid this style changing the rest of the notebook.
End of explanation
js_text_template = Template('''
var margin = {top: 20, right: 20, bottom: 30, left: 40},
// **** width = 960 - margin.left - margin.right, ****
// **** height = 500 - margin.top - margin.bottom; ****
width = 720 - margin.left - margin.right,
height = 375 - margin.top - margin.bottom;
var x = d3.scale.linear()
.range([0, width]);
var y = d3.scale.linear()
.range([height, 0]);
var color = d3.scale.category10();
var xAxis = d3.svg.axis()
.scale(x)
.orient("bottom");
var yAxis = d3.svg.axis()
.scale(y)
.orient("left");
// **** var svg = d3.select("body").append("svg") ****
var svg = d3.select("#$graphdiv").append("svg")
.attr("width", width + margin.left + margin.right)
.attr("height", height + margin.top + margin.bottom)
.append("g")
.attr("transform", "translate(" + margin.left + "," + margin.top + ")");
// **** d3.tsv("data.tsv", function(error, data) { ****
// **** if (error) throw error; ****
var data = $python_data ;
data.forEach(function(d) {
d.sepalLength = +d.sepalLength;
d.sepalWidth = +d.sepalWidth;
});
x.domain(d3.extent(data, function(d) { return d.sepalWidth; })).nice();
y.domain(d3.extent(data, function(d) { return d.sepalLength; })).nice();
svg.append("g")
.attr("class", "x axis")
.attr("transform", "translate(0," + height + ")")
.call(xAxis)
.append("text")
.attr("class", "label")
.attr("x", width)
.attr("y", -6)
.style("text-anchor", "end")
.text("Sepal Width (cm)");
svg.append("g")
.attr("class", "y axis")
.call(yAxis)
.append("text")
.attr("class", "label")
.attr("transform", "rotate(-90)")
.attr("y", 6)
.attr("dy", ".71em")
.style("text-anchor", "end")
.text("Sepal Length (cm)")
svg.selectAll(".dot")
.data(data)
.enter().append("circle")
.attr("class", "dot")
.attr("r", 3.5)
.attr("cx", function(d) { return x(d.sepalWidth); })
.attr("cy", function(d) { return y(d.sepalLength); })
.style("fill", function(d) { return color(d.species); });
var legend = svg.selectAll(".legend")
.data(color.domain())
.enter().append("g")
.attr("class", "legend")
.attr("transform", function(d, i) { return "translate(0," + i * 20 + ")"; });
legend.append("rect")
.attr("x", width - 18)
.attr("width", 18)
.attr("height", 18)
.style("fill", color);
legend.append("text")
.attr("x", width - 24)
.attr("y", 9)
.attr("dy", ".35em")
.style("text-anchor", "end")
.text(function(d) { return d; });
// **** }); ****
''')
Explanation: The javascript below was copied directly from the bl.ocks script text, and then six lines were changed, as noted by // **** (the double forward slash is a comment in JavaScript, so these lines will not be executed). The first set of changes is to the width and height of the image. The second change is simply to reference a different DOM element as the starting point. The remaining changes are to replace the data-file reading step with a direct infusion of data into the script. (Note that the $ characters denote replacement points in the Template object.)
End of explanation
html_template = Template('''
<style> $css_text </style>
<div id="graph-div"></div>
<script> $js_text </script>
''')
js_text = js_text_template.substitute({'python_data': json.dumps(iris_array_of_dicts),
'graphdiv': 'graph-div'})
HTML(html_template.substitute({'css_text': css_text, 'js_text': js_text}))
Explanation: And finally, the viz
End of explanation |
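As an optional extra that is not part of the bl.ocks example, the same substituted markup can be written out as a standalone HTML file; the file name and the d3 source below are our own choices:
page = ('<script src="https://d3js.org/d3.v3.min.js"></script>\n'
        + html_template.substitute({'css_text': css_text, 'js_text': js_text}))
with open('iris_scatter.html', 'w') as f:
    f.write(page)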
3,371 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3D Animation
This is harder than we would like to do in a workshop. But, just for fun, let's do a 3D animation.
We will draw a torus that deforms into a knot.
Let's try to animate. We start with the parameterization of a circle, of radius 2
Step1: Then, we set up the parameterizations for the torus and the knot, using a meshgrid from u,v.
Step2: We need an initialization function, an animation function, and then we call the animator to put it all together.
Step3: Finally, we call the HTML code to convert the animation object into a video. (This depends on having a MovieWriter installed on your system. Should be fine on syzygy.ca but it does not work on my Mac unless I install ffmpeg.)
Step4: If you click on the image above, you will see there is a button that allows you to download the animation as an mp4 file directly. Or you can use the following command | Python Code:
%matplotlib inline
from numpy import *
from matplotlib.pyplot import *
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import animation
from IPython.display import HTML
Explanation: 3D Animation
This is harder than we would like to do in a workshop. But, just for fun, let's do a 3D animation.
We will draw a torus that deforms into a knot.
Let's try to animate. We start with the parameterization of a circle, of radius 2:
$$x=2\sin u$$
$$y=2\cos u$$
$$z = 0$$
and want to interpolate it to the equation for a knot:
$$x=\sin u+2\sin 2u$$
$$y=\cos u-2\cos 2u$$
$$z = -\sin 3u.$$
A linear interpolation between the two would look something like
$$x = ax_{torus} + (1-a)x_{knot},$$
and similar for the other components.
To get a surface, we draw a little circle around each point on the curve, parameterized by a second variable $v$:
$$x_{vec}=\sin u \cos v$$
$$y_{vec}=\cos u \cos v$$
$$z_{vec}=\sin v $$
We can scale this vector by some small parameter $r$ to get the proper thickness for the torus/knot.
First, we load in the packages we need.
End of explanation
# First set up the figure, the axis, and the plot element we want to animate
fig = figure()
ax = axes(projection='3d')
# we need to fix some parameters, describing the size of the inner radius of the torus/knot
r = .4
# We set the parameterization for the circle and the knot
u = linspace(0, 2*pi, 100)
v = linspace(0, 2*pi, 100)
u,v = meshgrid(u,v)
x_torus = 2*sin(u) + r*sin(u)*cos(v)
y_torus = 2*cos(u) + r*cos(u)*cos(v)
z_torus = r*sin(v)
x_knot = sin(u) + 2*sin(2*u) + r*sin(u)*cos(v)
y_knot = cos(u) - 2*cos(2*u) + r*cos(u)*cos(v)
z_knot = -sin(3*u) + r*sin(v)
ax.plot_surface(x_torus, y_torus, z_torus, color='c')
ax.set_xlim([-2*(1+r), 2*(1+r)])
ax.set_ylim([-2*(1+r), 2*(1+r)])
ax.set_zlim([-(1+r), (1+r)])
Explanation: Then, we set up the parameterizations for the torus and the knot, using a meshgrid from u,v.
End of explanation
# initialization function: plot the background of each frame
def init():
thingy = ax.plot_surface([0], [0], [0], color='c')
return (thingy,)
# animation function. This is called sequentially
def animate(i):
a = sin(pi*i/100)**2 # this is an interpolation parameter. a = 0 is torus, a=1 is knot
x = (1-a)*x_torus + a*x_knot
y = (1-a)*y_torus + a*y_knot
z = (1-a)*z_torus + a*z_knot
ax.clear()
ax.set_xlim([-2*(1+r), 2*(1+r)])
ax.set_ylim([-2*(1+r), 2*(1+r)])
ax.set_zlim([-(1+r), (1+r)])
thingy = ax.plot_surface(x, y, z, color='c')
return (thingy,)
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=50, blit=True)
Explanation: We need an initialization function, an animation function, and then we call the animator to put it all together.
End of explanation
HTML(anim.to_html5_video())
Explanation: Finally, we call the HTML code to convert the animation object into a video. (This depends on having a MovieWriter installed on your system. Should be fine on syzygy.ca but it does not work on my Mac unless I install ffmpeg.)
End of explanation
anim.save('knot.mp4')
2+2
Explanation: If you click on the image above, you will see there is a button that allows you to download the animation as an mp4 file directly. Or you can use the following command:
End of explanation |
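A small variation on the save call above, in case ffmpeg is installed but not picked up by default: the writer and frame rate can be passed explicitly (the values here are our own choice).
anim.save('knot.mp4', writer='ffmpeg', fps=20)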
3,372 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a Supervised Machine Learning Model
The objective of this hands-on activity is to create and evaluate a Real-Bogus classifier using ZTF alert data. We will be using the same data from Day 2's clustering exercise.
Load data
Examine features
Curate a test and training set
Train classifiers
Compare the performance of different learning algorithms
What's Not Covered
There are many topics to cover, and due to time constraints, we cannot cover them all. Omitted is a discussion of cross validation and hyperparameter tuning. I encourage you to click through and read those articles by sklearn.
0a. Imports
These are all the imports that will be used in this notebook. All should be available in the DSFP conda environment.
Step1: 0b. Data Location
Please specify paths for the following
Step2: 1. Load Data
Please note that I have made some adjustments to the data.
I have created a dataframe (feats_df) from feats_np
I have also dropped columns from feats_df. Some were irrelevant to classification, and some contained a lot of NaNs.
Step3: 2. Plot Features
Examine each individual feature. You may use the subroutine below, or code of your own devising. Note the features that are continuous vs categorial, and those that have outliers. There are certain features that have sentinel values. You may wish to view some features on a log scale.
Question
Step4: Answer
Step5: 4. Train a Classifier
Part 1
Step6: Part 2. Scaling Data
With missing values handled, you're close to training your classifiers. However, because distance metrics can be sensitive to the scale of your data (e.g., some features span large numeric ranges, but others don't), it is important to normalize data within a standard range such as (0, 1) or with z-score normalization (scaling to zero mean and unit variance). Fortunately, sklearn also makes this quite easy. Please review sklearn's preprocessing module options, specifically StandardScaler which corresponds to z-score normalization and MinMaxScaler. Please implement one.
FYI - Neural networks and Support Vector Machines (SVM) are sensitive to the scale of the data. Decision trees (and therefore Random Forests) are not (but it doesn't hurt to use scaled data).
Step7: Part 3. Train Classifiers
Import a few classifiers and build models on your training data. Some suggestions include a Support Vector Machine, Random Forest, Neural Net, NaiveBayes and K-Nearest Neighbor.
Step8: Part 4. Plot Real and Bogus Score Distributions
Another way to test performance is to plot a histogram of the test set RB scores, comparing the distributions of the labeled reals vs. boguses. The scores of the reals should be close to 1, while the scores of the boguses should be closer to 0. The more separated the distribution of scores, the better performing your classifier is.
Compare the score distributions of the classifiers you've trained. Try displaying them as a cumulative distribution rather than a straight histogram.
Optional | Python Code:
import numpy as np
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import MinMaxScaler, StandardScaler
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
Explanation: Building a Supervised Machine Learning Model
The objective of this hands-on activity is to create and evaluate a Real-Bogus classifier using ZTF alert data. We will be using the same data from Day 2's clustering exercise.
Load data
Examine features
Curate a test and training set
Train classifiers
Compare the performance of different learning algorithms
What's Not Covered
There are many topics to cover, and due to time constraints, we cannot cover them all. Omitted is a discussion of cross validation and hyperparameter tuning. I encourage you to click through and read those articles by sklearn.
0a. Imports
These are all the imports that will be used in this notebook. All should be available in the DSFP conda environment.
End of explanation
F_META = '../Day2/dsfp_ztf_meta.npy'
F_FEATS = '../Day2/dsfp_ztf_feats.npy'
D_STAMPS = '../Day2/dsfp_ztf_png_stamps'
Explanation: 0b. Data Location
Please specify paths for the following:
End of explanation
meta_np = np.load(F_META)
feats_np = np.load(F_FEATS)
COL_NAMES = ['diffmaglim', 'magpsf', 'sigmapsf', 'chipsf', 'magap', 'sigmagap',
'distnr', 'magnr', 'sigmagnr', 'chinr', 'sharpnr', 'sky',
'magdiff', 'fwhm', 'classtar', 'mindtoedge', 'magfromlim', 'seeratio',
'aimage', 'bimage', 'aimagerat', 'bimagerat', 'elong', 'nneg',
'nbad', 'ssdistnr', 'ssmagnr', 'sumrat', 'magapbig', 'sigmagapbig',
'ndethist', 'ncovhist', 'jdstarthist', 'jdendhist', 'scorr', 'label']
# NOTE FROM Umaa: I've decided to eliminate the following features. Dropping them.
#
COL_TO_DROP = ['ndethist', 'ncovhist', 'jdstarthist', 'jdendhist',
'distnr', 'magnr', 'sigmagnr', 'chinr', 'sharpnr',
'classtar', 'ssdistnr', 'ssmagnr', 'aimagerat', 'bimagerat',
'magapbig', 'sigmagapbig', 'scorr']
feats_df = pd.DataFrame(data=feats_np, index=meta_np['candid'], columns=COL_NAMES)
feats_df.drop(columns=COL_TO_DROP, inplace=True)
print("There are {} columns left.".format(len(feats_df.columns)))
print("They are: {}".format(list(feats_df.columns)))
feats_df.describe()
# INSTRUCTION: How many real and bogus examples are in this labeled set
#
real_mask = feats_df['label'] == 1
bogus_mask = ~real_mask
print("Number of Real Examples: {}".format(np.sum(real_mask)))
print("Number of Bogus Examples: {}".format(np.sum(bogus_mask)))
Explanation: 1. Load Data
Please note that I have made some adjustments to the data.
I have created a dataframe (feats_df) from feats_np
I have also dropped columns from feats_df. Some were irrelevant to classification, and some contained a lot of NaNs.
End of explanation
# Histogram a Single Feature
#
def plot_rb_hists(df, colname, bins=100, xscale='linear', yscale='linear'):
    mask_real = df['label'] == 1
    mask_bogus = ~mask_real
plt.hist(df[colname][mask_real], bins, color='b', alpha=0.5, density=True)
plt.hist(df[colname][mask_bogus], bins, color='r', alpha=0.5, density=True)
plt.xscale(xscale)
plt.yscale(yscale)
plt.title(colname)
plt.show()
# INSTRUCTION: Plot the individual features.
#
for col in feats_df.columns:
if col in ['chipsf', 'sky', 'fwhm']:
plot_rb_hists(feats_df, col, bins=np.logspace(np.log10(0.01),np.log10(10000.0), 50), xscale='log')
else:
plot_rb_hists(feats_df, col)
Explanation: 2. Plot Features
Examine each individual feature. You may use the subroutine below, or code of your own devising. Note the features that are continuous vs categorical, and those that have outliers. There are certain features that have sentinel values. You may wish to view some features on a log scale.
Question: Which features seem to have pathologies? Which features should we exclude?
End of explanation
feats_plus_label = np.array(feats_df)
nids = meta_np['nid']
# INSTRUCTION: nid.npy contains the nids for this labeled data.
# Split the data into separate data structures for train/test data at nid=550.
# Verify that you have at least 500 reals in your test set.
nid_mask_train = nids <= 550
nid_mask_test = ~nid_mask_train
train_plus_label = feats_plus_label[nid_mask_train,:]
test_plus_label = feats_plus_label[nid_mask_test,:]
nreals_train = np.sum(train_plus_label[:,-1] == 1)
nbogus_train = np.sum(train_plus_label[:,-1] == 0)
nreals_test = np.sum(test_plus_label[:,-1] == 1)
nbogus_test = np.sum(test_plus_label[:,-1] == 0)
print("TRAIN Num Real={}, Bogus={}".format(nreals_train, nbogus_train))
print("TEST Num Real={}, Bogus={}".format(nreals_test, nbogus_test))
Explanation: Answer:
'nbad' and 'nneg' are discrete features
'seeratio' has a sentinel value of -999
There are some features that have higher ranges than others. For classifiers that are sensitive to scaling, we will need to scale the data.
There does not appear to be good separability between real and bogus on any individual feature.
3. Curate a Test and Training Set
We need to reserve some of our labeled data for evaluation. This means we must split up the labeled data we have into the set used for training (training set), and the set used for evaluation (test set). Ideally, the distribution of real and bogus examples in both the training and test sets is roughly identical. One can use sklearn.model_selection.train_test_split and use the stratify option.
For ZTF data, we split the training and test data by date. That way, repeat observations from the same night (which might be nearly identical) cannot be split across the training and test sets and artificially inflate test performance. Also, due to the change in survey objectives, it's possible that the test set features have drifted away from the training set.
Provided is nid.npy which contains the Night IDs for ZTF. Split on nid=550 (July 5, 2018). This should leave you with roughly 500 examples in your test set.
End of explanation
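As an aside, the stratified random split mentioned above could be sketched as follows; this only illustrates the sklearn option and is not the date-based split actually used in this notebook.
from sklearn.model_selection import train_test_split
X_all = feats_plus_label[:, :-1]
y_all = feats_plus_label[:, -1]
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y_all, test_size=0.25, stratify=y_all, random_state=42)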
# INSTRUCTION: Separate the labels from the features
train_feats = train_plus_label[:,:-1]
train_labels = train_plus_label[:,-1]
test_feats = test_plus_label[:,:-1]
test_labels = test_plus_label[:,-1]
Explanation: 4. Train a Classifier
Part 1: Separate Labels from the Features
Now store the labels separately from the features.
End of explanation
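Part 2 below assumes missing values have already been handled. A minimal sketch using the Imputer imported earlier is shown here; the median strategy is my assumption, not a choice made in the original notebook.
imputer = Imputer(missing_values='NaN', strategy='median', axis=0)
train_feats = imputer.fit_transform(train_feats)
test_feats = imputer.transform(test_feats)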
# INSTRUCTION: Re-scale your data using either the MinMaxScaler or StandardScaler from sklearn
#
scaler = StandardScaler()
train_feats = scaler.fit_transform(train_feats)
test_feats = scaler.transform(test_feats)  # reuse the scaler fit on the training data; do not re-fit on the test set
Explanation: Part 2. Scaling Data
With missing values handled, you're close to training your classifiers. However, because distance metrics can be sensitive to the scale of your data (e.g., some features span large numeric ranges, but others don't), it is important to normalize data within a standard range such as (0, 1) or with z-score normalization (scaling to zero mean and unit variance). Fortunately, sklearn also makes this quite easy. Please review sklearn's preprocessing module options, specifically StandardScaler which corresponds to z-score normalization and MinMaxScaler. Please implement one.
FYI - Neural networks and Support Vector Machines (SVM) are sensitive to the scale of the data. Decision trees (and therefore Random Forests) are not (but it doesn't hurt to use scaled data).
End of explanation
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
knn3 = KNeighborsClassifier(3)
svml = SVC(kernel="linear", C=0.025, probability=True)
svmr = SVC(gamma=2, C=1, probability=True)
dtre = DecisionTreeClassifier()
rafo = RandomForestClassifier(n_estimators=100)
nnet = MLPClassifier(alpha=1)
naiv = GaussianNB()
# INSTRUCTION: Train three classifiers and run on your test data. Here's an example to get you started.
# Which ones seem to take longer to train?
#
rafo.fit(train_feats, train_labels)
rafo_scores = rafo.predict_proba(test_feats)[:,1]
nnet.fit(train_feats, train_labels)
nnet_scores = nnet.predict_proba(test_feats)[:,1]
svmr.fit(train_feats, train_labels)
svmr_scores = svmr.predict_proba(test_feats)[:,1]
# INSTRUCTION: Print out the following metrics per classifier into a table: accuracy, auc, f1_score, etc.
#
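# A minimal sketch for the requested table (my own addition, not part of the original notebook):
# accuracy, ROC AUC, and F1 for each trained classifier, thresholding scores at 0.5.
from sklearn.metrics import accuracy_score, roc_auc_score, f1_score
print("{:<15}{:>10}{:>10}{:>10}".format("classifier", "accuracy", "auc", "f1"))
for name, scores in [("RandomForest", rafo_scores), ("NeuralNet", nnet_scores), ("SVM-RBF", svmr_scores)]:
    preds = (scores > 0.5).astype(float)
    print("{:<15}{:>10.3f}{:>10.3f}{:>10.3f}".format(name,
                                                     accuracy_score(test_labels, preds),
                                                     roc_auc_score(test_labels, scores),
                                                     f1_score(test_labels, preds)))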
Explanation: Part 3. Train Classifiers
Import a few classifiers and build models on your training data. Some suggestions include a Support Vector Machine, Random Forest, Neural Net, NaiveBayes and K-Nearest Neighbor.
End of explanation
# INSTRUCTION: create masks for the real and bogus examples of the test set
real_mask_test = test_labels == 1
bogus_mask_test = test_labels == 0
# # INSTRUCTION: First compare the classifiers' scores on the test reals only
# #
scores_list = [rafo_scores, nnet_scores, svmr_scores]
legends = ['Random Forest', 'Neural Net', 'SVM-RBF']
colors = ['g', 'b', 'r']
rbbins = np.arange(0,1,0.001)
# Comparison on Reals
plt.figure()
ax = plt.subplot(111)
for i, scores in enumerate(scores_list):
ax.hist(scores[real_mask_test], rbbins, histtype='step', cumulative=True, density=False, color=colors[i])
ax.set_xlabel('RB Score')
ax.set_ylabel('Count')
ax.set_xbound(0, 1)
ax.legend(legends, loc=4)
plt.show()
# Comparison on Bogus
#
plt.figure()
ax = plt.subplot(111)
rbbins = np.arange(0,1,0.001)
for i, scores in enumerate(scores_list):
ax.hist(scores[bogus_mask_test], rbbins, histtype='step', cumulative=True, density=False, color=colors[i])
ax.set_xlabel('RB Score')
ax.set_ylabel('Count')
ax.set_xbound(0, 1)
ax.legend(legends, loc=4)
plt.show()
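# Optional part, sketched by me rather than taken from the original notebook: decision thresholds
# at fixed false negative / false positive rates, read off the score distributions of one classifier
# (the random forest scores are used here as an example).
for q in [5, 10, 20]:
    print("FNR {:>2}%: threshold = {:.3f}".format(q, np.percentile(rafo_scores[real_mask_test], q)))
for q in [1, 10, 20]:
    print("FPR {:>2}%: threshold = {:.3f}".format(q, np.percentile(rafo_scores[bogus_mask_test], 100 - q)))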
Explanation: Part 4. Plot Real and Bogus Score Distributions
Another way to test performance is to plot a histogram of the test set RB scores, comparing the distributions of the labeled reals vs. boguses. The scores of the reals should be close to 1, while the scores of the boguses should be closer to 0. The more separated the distribution of scores, the better performing your classifier is.
Compare the score distributions of the classifiers you've trained. Try displaying them as a cumulative distribution rather than a straight histogram.
Optional: What would the decision thresholds be at the 5, 10 and 20% false negative rate (FNR)? What would the decision threshold be at the 1, 10, and 20% false positive rate?
End of explanation |
3,373 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hello world
In this unit you will learn how to use Python to implement the first ever program
that every programmer starts with.
Introduction
Here is the traditional first programming exercise, called "Hello world".
The task is to print the message
Step1: Exercise
Now it is your turn. Please create a program in the next cell that would print a message "Hello, world" | Python Code:
print("hello")
print("bye bye")
print("hey", "you")
print("one")
print("two")
Explanation: Hello world
In this unit you will learn how to use Python to implement the first ever program
that every programmer starts with.
Introduction
Here is the traditional first programming exercise, called "Hello world".
The task is to print the message: "Hello, world".
Here are a few examples to get you started. Run the following cells and see how
you can print a message. To run a cell, click with the mouse inside a cell, then
press Ctrl+Enter to execute it. If you want to execute a few cells sequentially,
then press Shift+Enter instead, and the focus will be automatically moved
to the next cell as soon as one cell finishes execution.
End of explanation
def hello(x):
    print("Hello, " + x)
hello("world")  # call the function so the message is actually printed
Explanation: Exercise
Now it is your turn. Please create a program in the next cell that would print a message "Hello, world":
End of explanation |
3,374 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variational Inference
Step1: Model specification
A neural network is quite simple. The basic unit is a perceptron which is nothing more than logistic regression. We use many of these in parallel and then stack them up to get hidden layers. Here we will use 2 hidden layers with 5 neurons each which is sufficient for such a simple problem.
Step2: That's not so bad. The Normal priors help regularize the weights. Usually we would add a constant b to the inputs but I omitted it here to keep the code cleaner.
Variational Inference
Step3: < 20 seconds on my older laptop. That's pretty good considering that NUTS is having a really hard time. Further below we make this even faster. To make it really fly, we probably want to run the Neural Network on the GPU.
As samples are more convenient to work with, we can very quickly draw samples from the variational posterior using sample_vp() (this is just sampling from Normal distributions, so not at all the same as MCMC)
Step4: Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time.
Step5: Now that we trained our model, lets predict on the hold-out set using a posterior predictive check (PPC). We use sample_ppc() to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation).
Step6: Hey, our neural network did all right!
Lets look at what the classifier has learned
For this, we evaluate the class probability predictions on a grid over the whole input space.
Step7: Probability surface
Step8: Uncertainty in predicted value
So far, everything I showed we could have done with a non-Bayesian Neural Network. The mean of the posterior predictive for each class-label should be identical to maximum likelihood predicted values. However, we can also look at the standard deviation of the posterior predictive to get a sense for the uncertainty in our predictions. Here is what that looks like
Step9: We can see that very close to the decision boundary, our uncertainty as to which label to predict is highest. You can imagine that associating predictions with uncertainty is a critical property for many applications like health care. To further maximize accuracy, we might want to train the model primarily on samples from that high-uncertainty region.
Mini-batch ADVI
Step10: While the above might look a bit daunting, I really like the design. Especially the fact that you define a generator allows for great flexibility. In principle, we could just pool from a database there and not have to keep all the data in RAM.
Lets pass those to advi_minibatch()
Step11: As you can see, mini-batch ADVI's running time is much lower. It also seems to converge faster.
For fun, we can also look at the trace. The point is that we also get uncertainty of our Neural Network weights. | Python Code:
%matplotlib inline
import theano
floatX = theano.config.floatX
import pymc3 as pm
import theano.tensor as T
import sklearn
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
from sklearn import datasets
from sklearn.preprocessing import scale
from sklearn.cross_validation import train_test_split
from sklearn.datasets import make_moons
X, Y = make_moons(noise=0.2, random_state=0, n_samples=1000)
X = scale(X)
X = X.astype(floatX)
Y = Y.astype(floatX)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=.5)
fig, ax = plt.subplots()
ax.scatter(X[Y==0, 0], X[Y==0, 1], label='Class 0')
ax.scatter(X[Y==1, 0], X[Y==1, 1], color='r', label='Class 1')
sns.despine(); ax.legend()
ax.set(xlabel='X', ylabel='Y', title='Toy binary classification data set');
Explanation: Variational Inference: Bayesian Neural Networks
(c) 2016 by Thomas Wiecki
Original blog post: http://twiecki.github.io/blog/2016/06/01/bayesian-deep-learning/
Current trends in Machine Learning
There are currently three big trends in machine learning: Probabilistic Programming, Deep Learning and "Big Data". Inside of PP, a lot of innovation is in making things scale using Variational Inference. In this blog post, I will show how to use Variational Inference in PyMC3 to fit a simple Bayesian Neural Network. I will also discuss how bridging Probabilistic Programming and Deep Learning can open up very interesting avenues to explore in future research.
Probabilistic Programming at scale
Probabilistic Programming allows very flexible creation of custom probabilistic models and is mainly concerned with insight and learning from your data. The approach is inherently Bayesian so we can specify priors to inform and constrain our models and get uncertainty estimation in the form of a posterior distribution. Using MCMC sampling algorithms we can draw samples from this posterior to very flexibly estimate these models. PyMC3 and Stan are the current state-of-the-art tools to construct and estimate these models. One major drawback of sampling, however, is that it's often very slow, especially for high-dimensional models. That's why more recently, variational inference algorithms have been developed that are almost as flexible as MCMC but much faster. Instead of drawing samples from the posterior, these algorithms instead fit a distribution (e.g. normal) to the posterior, turning a sampling problem into an optimization problem. ADVI -- Automatic Differentiation Variational Inference -- is implemented in PyMC3 and Stan, as well as a new package called Edward which is mainly concerned with Variational Inference.
Unfortunately, when it comes to traditional ML problems like classification or (non-linear) regression, Probabilistic Programming often plays second fiddle (in terms of accuracy and scalability) to more algorithmic approaches like ensemble learning (e.g. random forests or gradient boosted regression trees).
Deep Learning
Now in its third renaissance, deep learning has been making headlines repeatedly by dominating almost any object recognition benchmark, kicking ass at Atari games, and beating the world-champion Lee Sedol at Go. From a statistical point of view, Neural Networks are extremely good non-linear function approximators and representation learners. While mostly known for classification, they have been extended to unsupervised learning with AutoEncoders and in all sorts of other interesting ways (e.g. Recurrent Networks, or MDNs to estimate multimodal distributions). Why do they work so well? No one really knows as the statistical properties are still not fully understood.
A large part of the innovation in deep learning is the ability to train these extremely complex models. This rests on several pillars:
* Speed: facilitating the GPU allowed for much faster processing.
* Software: frameworks like Theano and TensorFlow allow flexible creation of abstract models that can then be optimized and compiled to CPU or GPU.
* Learning algorithms: training on sub-sets of the data -- stochastic gradient descent -- allows us to train these models on massive amounts of data. Techniques like drop-out avoid overfitting.
* Architectural: A lot of innovation comes from changing the input layers, like for convolutional neural nets, or the output layers, like for MDNs.
Bridging Deep Learning and Probabilistic Programming
On one hand we have Probabilistic Programming, which allows us to build rather small and focused models in a very principled and well-understood way to gain insight into our data; on the other hand we have deep learning which uses many heuristics to train huge and highly complex models that are amazing at prediction. Recent innovations in variational inference allow probabilistic programming to scale model complexity as well as data size. We are thus at the cusp of being able to combine these two approaches to hopefully unlock new innovations in Machine Learning. For more motivation, see also Dustin Tran's recent blog post.
While this would allow Probabilistic Programming to be applied to a much wider set of interesting problems, I believe this bridging also holds great promise for innovations in Deep Learning. Some ideas are:
* Uncertainty in predictions: As we will see below, the Bayesian Neural Network informs us about the uncertainty in its predictions. I think uncertainty is an underappreciated concept in Machine Learning as it's clearly important for real-world applications. But it could also be useful in training. For example, we could train the model specifically on samples it is most uncertain about.
* Uncertainty in representations: We also get uncertainty estimates of our weights which could inform us about the stability of the learned representations of the network.
* Regularization with priors: Weights are often L2-regularized to avoid overfitting, this very naturally becomes a Gaussian prior for the weight coefficients. We could, however, imagine all kinds of other priors, like spike-and-slab to enforce sparsity (this would be more like using the L1-norm).
* Transfer learning with informed priors: If we wanted to train a network on a new object recognition data set, we could bootstrap the learning by placing informed priors centered around weights retrieved from other pre-trained networks, like GoogLeNet.
* Hierarchical Neural Networks: A very powerful approach in Probabilistic Programming is hierarchical modeling that allows pooling of things that were learned on sub-groups to the overall population (see my tutorial on Hierarchical Linear Regression in PyMC3). Applied to Neural Networks, in hierarchical data sets, we could train individual neural nets to specialize on sub-groups while still being informed about representations of the overall population. For example, imagine a network trained to classify car models from pictures of cars. We could train a hierarchical neural network where a sub-neural network is trained to tell apart models from only a single manufacturer. The intuition being that all cars from a certain manufacturer share certain similarities so it would make sense to train individual networks that specialize on brands. However, due to the individual networks being connected at a higher layer, they would still share information with the other specialized sub-networks about features that are useful to all brands. Interestingly, different layers of the network could be informed by various levels of the hierarchy -- e.g. early layers that extract visual lines could be identical in all sub-networks while the higher-order representations would be different. The hierarchical model would learn all that from the data.
* Other hybrid architectures: We can more freely build all kinds of neural networks. For example, Bayesian non-parametrics could be used to flexibly adjust the size and shape of the hidden layers to optimally scale the network architecture to the problem at hand during training. Currently, this requires costly hyper-parameter optimization and a lot of tribal knowledge.
Bayesian Neural Networks in PyMC3
Generating data
First, lets generate some toy data -- a simple binary classification problem that's not linearly separable.
End of explanation
# Trick: Turn inputs and outputs into shared variables.
# It's still the same thing, but we can later change the values of the shared variable
# (to switch in the test-data later) and pymc3 will just use the new data.
# Kind-of like a pointer we can redirect.
# For more info, see: http://deeplearning.net/software/theano/library/compile/shared.html
ann_input = theano.shared(X_train)
ann_output = theano.shared(Y_train)
n_hidden = 5
# Initialize random weights between each layer
init_1 = np.random.randn(X.shape[1], n_hidden).astype(floatX)
init_2 = np.random.randn(n_hidden, n_hidden).astype(floatX)
init_out = np.random.randn(n_hidden).astype(floatX)
with pm.Model() as neural_network:
# Weights from input to hidden layer
weights_in_1 = pm.Normal('w_in_1', 0, sd=1,
shape=(X.shape[1], n_hidden),
testval=init_1)
# Weights from 1st to 2nd layer
weights_1_2 = pm.Normal('w_1_2', 0, sd=1,
shape=(n_hidden, n_hidden),
testval=init_2)
# Weights from hidden layer to output
weights_2_out = pm.Normal('w_2_out', 0, sd=1,
shape=(n_hidden,),
testval=init_out)
# Build neural-network using tanh activation function
act_1 = pm.math.tanh(pm.math.dot(ann_input,
weights_in_1))
act_2 = pm.math.tanh(pm.math.dot(act_1,
weights_1_2))
act_out = pm.math.sigmoid(pm.math.dot(act_2,
weights_2_out))
# Binary classification -> Bernoulli likelihood
out = pm.Bernoulli('out',
act_out,
observed=ann_output)
Explanation: Model specification
A neural network is quite simple. The basic unit is a perceptron which is nothing more than logistic regression. We use many of these in parallel and then stack them up to get hidden layers. Here we will use 2 hidden layers with 5 neurons each which is sufficient for such a simple problem.
End of explanation
%%time
with neural_network:
# Run ADVI which returns posterior means, standard deviations, and the evidence lower bound (ELBO)
v_params = pm.variational.advi(n=50000)
Explanation: That's not so bad. The Normal priors help regularize the weights. Usually we would add a constant b to the inputs but I omitted it here to keep the code cleaner.
Variational Inference: Scaling model complexity
We could now just run an MCMC sampler like NUTS which works pretty well in this case but as I already mentioned, this will become very slow as we scale our model up to deeper architectures with more layers.
Instead, we will use the brand-new ADVI variational inference algorithm which was recently added to PyMC3. This is much faster and will scale better. Note, that this is a mean-field approximation so we ignore correlations in the posterior.
End of explanation
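For reference, the NUTS run alluded to above would look roughly like the sketch below; it is far slower than ADVI on this model and is not actually executed in this notebook.
with neural_network:
    trace_nuts = pm.sample(500)  # hypothetical comparison run with the default NUTS sampler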
with neural_network:
trace = pm.variational.sample_vp(v_params, draws=5000)
Explanation: < 20 seconds on my older laptop. That's pretty good considering that NUTS is having a really hard time. Further below we make this even faster. To make it really fly, we probably want to run the Neural Network on the GPU.
As samples are more convenient to work with, we can very quickly draw samples from the variational posterior using sample_vp() (this is just sampling from Normal distributions, so not at all the same as MCMC):
End of explanation
plt.plot(v_params.elbo_vals)
plt.ylabel('ELBO')
plt.xlabel('iteration')
Explanation: Plotting the objective function (ELBO) we can see that the optimization slowly improves the fit over time.
End of explanation
# Replace shared variables with testing set
ann_input.set_value(X_test)
ann_output.set_value(Y_test)
# Create posterior predictive samples
ppc = pm.sample_ppc(trace, model=neural_network, samples=500)
# Use probability of > 0.5 to assume prediction of class 1
pred = ppc['out'].mean(axis=0) > 0.5
fig, ax = plt.subplots()
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
sns.despine()
ax.set(title='Predicted labels in testing set', xlabel='X', ylabel='Y');
print('Accuracy = {}%'.format((Y_test == pred).mean() * 100))
Explanation: Now that we trained our model, lets predict on the hold-out set using a posterior predictive check (PPC). We use sample_ppc() to generate new data (in this case class predictions) from the posterior (sampled from the variational estimation).
End of explanation
grid = np.mgrid[-3:3:100j,-3:3:100j].astype(floatX)
grid_2d = grid.reshape(2, -1).T
dummy_out = np.ones(grid.shape[1], dtype=np.int8)
ann_input.set_value(grid_2d)
ann_output.set_value(dummy_out)
# Create posterior predictive samples
ppc = pm.sample_ppc(trace, model=neural_network, samples=500)
Explanation: Hey, our neural network did all right!
Lets look at what the classifier has learned
For this, we evaluate the class probability predictions on a grid over the whole input space.
End of explanation
cmap = sns.diverging_palette(250, 12, s=85, l=25, as_cmap=True)
fig, ax = plt.subplots(figsize=(10, 6))
contour = ax.contourf(*grid, ppc['out'].mean(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Posterior predictive mean probability of class label = 0');
Explanation: Probability surface
End of explanation
cmap = sns.cubehelix_palette(light=1, as_cmap=True)
fig, ax = plt.subplots(figsize=(10, 6))
contour = ax.contourf(*grid, ppc['out'].std(axis=0).reshape(100, 100), cmap=cmap)
ax.scatter(X_test[pred==0, 0], X_test[pred==0, 1])
ax.scatter(X_test[pred==1, 0], X_test[pred==1, 1], color='r')
cbar = plt.colorbar(contour, ax=ax)
_ = ax.set(xlim=(-3, 3), ylim=(-3, 3), xlabel='X', ylabel='Y');
cbar.ax.set_ylabel('Uncertainty (posterior predictive standard deviation)');
Explanation: Uncertainty in predicted value
So far, everything I showed we could have done with a non-Bayesian Neural Network. The mean of the posterior predictive for each class-label should be identical to maximum likelihood predicted values. However, we can also look at the standard deviation of the posterior predictive to get a sense for the uncertainty in our predictions. Here is what that looks like:
End of explanation
from six.moves import zip
# Set back to original data to retrain
ann_input.set_value(X_train)
ann_output.set_value(Y_train)
# Tensors and RV that will be using mini-batches
minibatch_tensors = [ann_input, ann_output]
minibatch_RVs = [out]
# Generator that returns mini-batches in each iteration
def create_minibatch(data):
rng = np.random.RandomState(0)
while True:
# Return random data samples of size 50 each iteration
ixs = rng.randint(len(data), size=50)
yield data[ixs]
minibatches = zip(
create_minibatch(X_train),
create_minibatch(Y_train),
)
total_size = len(Y_train)
Explanation: We can see that very close to the decision boundary, our uncertainty as to which label to predict is highest. You can imagine that associating predictions with uncertainty is a critical property for many applications like health care. To further maximize accuracy, we might want to train the model primarily on samples from that high-uncertainty region.
Mini-batch ADVI: Scaling data size
So far, we have trained our model on all data at once. Obviously this won't scale to something like ImageNet. Moreover, training on mini-batches of data (stochastic gradient descent) avoids local minima and can lead to faster convergence.
Fortunately, ADVI can be run on mini-batches as well. It just requires some setting up:
End of explanation
%%time
with neural_network:
# Run advi_minibatch
v_params = pm.variational.advi_minibatch(
n=50000, minibatch_tensors=minibatch_tensors,
minibatch_RVs=minibatch_RVs, minibatches=minibatches,
total_size=total_size, learning_rate=1e-2, epsilon=1.0
)
with neural_network:
trace = pm.variational.sample_vp(v_params, draws=5000)
plt.plot(v_params.elbo_vals)
plt.ylabel('ELBO')
plt.xlabel('iteration')
sns.despine()
Explanation: While the above might look a bit daunting, I really like the design. Especially the fact that you define a generator allows for great flexibility. In principle, we could just pool from a database there and not have to keep all the data in RAM.
Lets pass those to advi_minibatch():
End of explanation
pm.traceplot(trace);
Explanation: As you can see, mini-batch ADVI's running time is much lower. It also seems to converge faster.
For fun, we can also look at the trace. The point is that we also get uncertainty of our Neural Network weights.
End of explanation |
3,375 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pattern Matching Experiments
Step1: Networks
We give two sets of networks. One of them allows for all parameters. The other is identical except it only uses essential parameters.
Step2: Full Networks
Step3: Essential Networks
Step4: Path match analysis
We give two functions for path match analysis. One looks at the entire domain graph. The other only checks for path matches in stable Morse sets.
Analysis on entire domain graph
Step5: Analysis on stable Morse set only
Step6: Poset of Extrema
We study two posets of extrema. The first poset comes from looking at times [10,60] and assuming SWI4 happens before the other minima at the beginning and thus can be excluded. The other comes from including all extrema.
Original Poset of Extrema
Step7: Alternative Poset of Extrema
Step8: Experiments
There are 8 experiments corresponding to 3 binary choices | Python Code:
from DSGRN import *
Explanation: Pattern Matching Experiments
End of explanation
network_strings = [
["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : HCM1", "YOX1 : SWI4"],
["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : HCM1", "YOX1 : (SWI4)(HCM1)"],
["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : HCM1", "YOX1 : (SWI4)(~HCM1)"],
["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : HCM1", "YOX1 : (SWI4)(NDD1)"],
["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : HCM1", "YOX1 : (SWI4)(~NDD1)"],
["SWI4 : (NDD1)(~YOX1)", "HCM1 : (SWI4)(YOX1)", "NDD1 : HCM1", "YOX1 : SWI4"],
["SWI4 : (NDD1)(~YOX1)", "HCM1 : (SWI4)(~YOX1)", "NDD1 : HCM1", "YOX1 : SWI4"],
["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : (HCM1)(YOX1)", "YOX1 : SWI4"],
["SWI4 : (NDD1)(~YOX1)", "HCM1 : SWI4", "NDD1 : (HCM1)(~YOX1)", "YOX1 : SWI4"] ]
Explanation: Networks
We give two sets of networks. One of them allows for all parameters. The other is identical except it only uses essential parameters.
End of explanation
networks = [Network() for i in range(0,9)]
for i,network in enumerate(networks):
network.assign('\n'.join(network_strings[i]))
Explanation: Full Networks
End of explanation
essential_network_strings = [ [ line + " : E" for line in network_string ] for network_string in network_strings]
essential_networks = [Network() for i in range(0,9)]
for i,network in enumerate(essential_networks):
network.assign('\n'.join(essential_network_strings[i]))
Explanation: Essential Networks
End of explanation
def Analyze(network, events, event_ordering):
poe = PosetOfExtrema(network, events, event_ordering )
pattern_graph = PatternGraph(poe)
parameter_graph = ParameterGraph(network)
result = []
for parameter_index in range(0, parameter_graph.size()):
parameter = parameter_graph.parameter(parameter_index)
search_graph = SearchGraph(DomainGraph(parameter))
matching_graph = MatchingGraph(search_graph, pattern_graph);
if PathMatch(matching_graph):
result.append(parameter_index)
return [result, parameter_graph.size()]
Explanation: Path match analysis
We give two functions for path match analysis. One looks at the entire domain graph. The other only checks for path matches in stable Morse sets.
Analysis on entire domain graph
End of explanation
def AnalyzeOnStable(network, events, event_ordering):
poe = PosetOfExtrema(network, events, event_ordering )
pattern_graph = PatternGraph(poe)
parameter_graph = ParameterGraph(network)
results = []
for parameter_index in range(0, parameter_graph.size()):
parameter = parameter_graph.parameter(parameter_index)
domain_graph = DomainGraph(parameter)
morse_decomposition = MorseDecomposition(domain_graph.digraph())
morse_graph = MorseGraph()
morse_graph.assign(domain_graph, morse_decomposition)
MorseNodes = range(0, morse_graph.poset().size())
isStable = lambda node : len(morse_graph.poset().children(node)) == 0
isStableFC = lambda node : morse_graph.annotation(node)[0] == 'FC' and isStable(node)
hasStableFC = any( isStableFC(node) for node in MorseNodes)
StableNodes = [ node for node in MorseNodes if isStable(node) ]
subresult = []
for node in StableNodes:
search_graph = SearchGraph(domain_graph, node)
matching_graph = MatchingGraph(search_graph, pattern_graph)
path_match = PathMatch(matching_graph)
if path_match:
subresult.append([parameter_index, node])
results.append([subresult, 1 if hasStableFC else 0])
return [results, parameter_graph.size()]
Explanation: Analysis on stable Morse set only
End of explanation
original_events = [("HCM1", "min"), ("NDD1", "min"), ("YOX1", "min"),
("SWI4", "max"), ("HCM1", "max"), ("YOX1", "max"),
("NDD1", "max"),
("SWI4","min")]
original_event_ordering = [ (i,j) for i in [0,1,2] for j in [3,4,5] ] + \
[ (i,j) for i in [3,4,5] for j in [6] ] + \
[ (i,j) for i in [6] for j in [7] ]
DrawGraph(PosetOfExtrema(networks[0], original_events, original_event_ordering ))
Explanation: Poset of Extrema
We study two posets of extrema. The first poset comes from looking at times [10,60] and assuming SWI4 happens before the other minima at the beginning and thus can be excluded. The other comes from including all extrema.
Original Poset of Extrema
End of explanation
all_events = [("SWI4", "min"), ("HCM1", "min"), ("NDD1", "min"), ("YOX1", "min"),
("SWI4", "max"), ("HCM1", "max"), ("YOX1", "max"),
("NDD1", "max"),
("SWI4","min"),
("YOX1", "min"), ("HCM1","min"),
("NDD1", "min"),
("SWI4", "max"), ("HCM1", "max"), ("YOX1", "max"),
("NDD1", "max")]
all_event_ordering = [ (i,j) for i in [0,1,2,3] for j in [4,5,6] ] + \
[ (i,j) for i in [4,5,6] for j in [7] ] + \
[ (i,j) for i in [7] for j in [8] ] + \
[ (i,j) for i in [8] for j in [9,10] ] + \
[ (i,j) for i in [9,10] for j in [11,12,13,14] ] + \
[ (11,15) ]
DrawGraph(PosetOfExtrema(networks[0], all_events, all_event_ordering ))
Explanation: Alternative Poset of Extrema
End of explanation
def DisplayExperiment(results, title):
markdown_string = "# " + title + "\n\n"
markdown_string += "| network | # parameters | # parameters with path match |\n"
markdown_string += "| ------- |------------ | ---------------------------- |\n"
for i, item in enumerate(results):
[parameters_with_path_match, pgsize] = item
markdown_string += ("|" + str(i) + "|" + str(pgsize) + "|" + str(len(parameters_with_path_match)) + "|\n")
from IPython.display import display, Markdown, Latex
display(Markdown(markdown_string))
def DisplayStableExperiment(results, title):
markdown_string = "# " + title + "\n\n"
markdown_string += "| network | # parameters | # parameters with stable FC | # parameters with path match |\n"
markdown_string += "| ------- |------------ | ---------------------------- | ---------------------------- |\n"
for i, item in enumerate(results):
[results, pgsize] = item
parameters_with_path_match = sum([ 1 if pair[0] else 0 for pair in results])
parameters_with_stable_fc = sum([ 1 if pair[1] else 0 for pair in results])
markdown_string += ("|" + str(i) + "|" + str(pgsize) + "|" +str(parameters_with_stable_fc) +"|"+str(parameters_with_path_match) + "|\n")
from IPython.display import display, Markdown, Latex
display(Markdown(markdown_string))
%%time
experiment = lambda network : Analyze(network, original_events, original_event_ordering)
experimental_results_1 = [ experiment(network) for network in networks ]
DisplayExperiment(experimental_results_1, "Experiment 1: All parameters, original poset of extrema")
%%time
experiment = lambda network : Analyze(network, original_events, original_event_ordering)
experimental_results_2 = [ experiment(network) for network in essential_networks ]
DisplayExperiment(experimental_results_2, "Experiment 2: Essential parameters, original poset of extrema")
%%time
experiment = lambda network : AnalyzeOnStable(network, original_events, original_event_ordering)
experimental_results_3 = [ experiment(network) for network in networks ]
DisplayStableExperiment(experimental_results_3, "Experiment 3: All parameters, original poset, stable only")
%%time
experiment = lambda network : AnalyzeOnStable(network, original_events, original_event_ordering)
experimental_results_4 = [ experiment(network) for network in essential_networks ]
DisplayStableExperiment(experimental_results_4, "Experiment 4: Essential parameters, original poset, stable only")
%%time
experiment = lambda network : Analyze(network, all_events, all_event_ordering)
experimental_results_5 = [ experiment(network) for network in networks ]
DisplayExperiment(experimental_results_5, "Experiment 5: All parameters, alternative poset of extrema")
%%time
experiment = lambda network : Analyze(network, all_events, all_event_ordering)
experimental_results_6 = [ experiment(network) for network in essential_networks ]
DisplayExperiment(experimental_results_6, "Experiment 6: Essential parameters, alternative poset of extrema")
%%time
experiment = lambda network : AnalyzeOnStable(network, all_events, all_event_ordering)
experimental_results_7 = [ experiment(network) for network in networks ]
DisplayStableExperiment(experimental_results_7, "Experiment 7: All parameters, alternative poset of extrema, stable only")
%%time
experiment = lambda network : AnalyzeOnStable(network, all_events, all_event_ordering)
experimental_results_8 = [ experiment(network) for network in essential_networks ]
DisplayStableExperiment(experimental_results_8, "Experiment 8: Essential parameters, alternative poset of extrema, stable only")
Explanation: Experiments
There are 8 experiments corresponding to 3 binary choices:
Full networks vs Essential networks
Path matching in entire domain graph vs path matching in stable Morse sets
Original poset of extrema vs Alternative poset of extrema
End of explanation |
3,376 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>Structural Analysis and Visualization of Networks</center>
<center>Final Mid-term Assignment</center>
<center>Student
Step1: <hr/>
Step2: The mixing coefficient for a numerical node attribute $X = \big(x_i\big)$ in an undirected graph $G$, with the adjacency matrix $A$, is defined as
$$\rho(x) = \frac{\text{cov}}{\text{var}} = \frac{\sum_{ij}A_{ij}(x_i-\bar{x})(x_j-\bar{x})}{\sum_{ij}A_{ij}(x_i-\bar{x})^2} $$
where $\bar{x} = \frac{1}{2m}\sum_i \delta_i x_i$ is the mean value of $X$ weighted by vertex degree. Note that $A$ is necessarily symmetric. This coefficient can be represented in the matrix notation as
$$\rho(x) = \frac{X'AX - 2m \bar{x}^2}{X'\text{diag}(D)X - 2m \bar{x}^2} $$
where the diagonal matrix $\text{diag}(D)$ is the matrix of vertex degrees, and the value $\bar{x}$ is the sample mean of the numerical node attribute $X$. | Python Code:
import numpy as np
import networkx as nx
from matplotlib import pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings( 'ignore' )
## Floyd-Warshall all-pairs shortest path distances computed from the adjacency matrix A
def fw( A, pi = None ) :
if pi is None :
pi = A.copy( )
pi[ A == 0 ] = np.inf
np.fill_diagonal( pi, 0 )
for k in xrange( A.shape[ 0 ] ) :
pi = np.minimum( pi, pi[ :, k ] + pi[ k, : ] )
return pi
Explanation: <center>Structural Analysis and Visualization of Networks</center>
<center>Final Mid-term Assignment</center>
<center>Student: Nazarov Ivan</center>
<hr/>
End of explanation
G = nx.gml.read_gml( './data/ha5/huge_100004196072232_2015_03_21_22_33_65c744356ffedcfa83bf49b64a76445a.gml' )
fig = plt.figure( figsize = (16,12) )
axs = fig.add_subplot( 1,1,1, axisbg = 'black' )
nx.draw( G, pos = nx.spring_layout( G ), ax = axs )
nx.is_connected(G)
nx.to_numpy_matrix( G )
A = nx.to_numpy_matrix( G )
def spectral( A, T = 10, _index = None ) :
if _index is None :
_index = np.arange( A.shape[ 0 ], dtype = np.int )
## Get the vertex degrees
deg = A.sum( axis = 1, dtype = np.float ).getA1( )
## Check for isolated vertices
if np.any( deg == 0 ) :
## Find nonisolated
nz = np.where( deg != 0 )[ 0 ]
return np.concatenate( ( np.where( deg == 0 )[ 0 ],
nz[ spectral( A[:,nz][nz,:], T = T, _index = _index[ nz ] ) ] ) )
## Assume the matrix A has no isolated vertices
D = np.diag( 1.0 / deg )
L = np.eye( *A.shape, dtype = np.float ) - D.dot( A )
l, v = np.linalg.eig( L )
e = v[ :, np.argsort( l )[ 1 ] ].real.getA1()
n, p = np.where( e < 0 )[ 0 ], np.where( e >= 0 )[ 0 ]
if len( p ) > T :
p = p[ spectral( A[:,p][p,:], T = T, _index = _index[ p ] ) ]
if len( n ) > T :
n = n[ spectral( A[:,n][n,:], T = T, _index = _index[ n ] ) ]
if len( p ) > len( n ) :
p, n = n, p
return np.concatenate( ( n, p ) )
pi = fw( A )
I = nx.spectral_ordering( G )
J = spectral( A )
plt.subplot( 121 )
plt.imshow( pi[:,I][I,:] )
plt.subplot( 122 )
plt.imshow( pi[:,J][J,:] )
nx.spectral_ordering( G )
plt.plot(e[n])
plt.plot(e[p], '-r')
i = np.argsort( l )[ :10 ]
# print v[ :, i ].real
print l[ i ]
np.isclose( l[ i ], 0 )
Explanation: <hr/>
End of explanation
def assortativity( G, X ) :
## represent the graph in an adjacency matrix form
A = nx.to_numpy_matrix( G, dtype = np.float, nodelist = G.nodes( ) )
## Convert x -- dictionary to a numpy vector
x = np.array( [ X[ n ] for n in G.nodes( ) ] , dtype = np.float )
## Compute the x'Ax part
xAx = np.dot( x, np.array( A.dot( x ) ).flatten( ) )
## and the x'\text{diag}(D)x part. Note that left-multiplying a vector
## by a diagonal matrix is equivalent to element-wise multiplication.
D = np.array( A.sum( axis = 1 ), dtype = np.float ).flatten( )
xDx = np.dot( x, np.multiply( D, x ) )
## numpy.average( ) actually normalizes the weights.
x_bar = np.average( x, weights = D )
D_sum = np.sum( D, dtype = np.float )
return ( xAx - D_sum * x_bar * x_bar ) / ( xDx - D_sum * x_bar * x_bar )
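## A hedged usage example (my addition, not part of the original assignment): compute degree
## assortativity with this function and compare it to networkx's built-in estimate; the two
## should roughly agree.
degrees = dict( nx.degree( G ) )
print assortativity( G, degrees )
print nx.degree_assortativity_coefficient( G )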
Explanation: The mixing coefficient for a numerical node attribute $X = \big(x_i\big)$ in an undirected graph $G$, with the adjacency matrix $A$, is defined as
$$\rho(x) = \frac{\text{cov}}{\text{var}} = \frac{\sum_{ij}A_{ij}(x_i-\bar{x})(x_j-\bar{x})}{\sum_{ij}A_{ij}(x_i-\bar{x})^2} $$
where $\bar{x} = \frac{1}{2m}\sum_i \delta_i x_i$ is the mean value of $X$ weighted by vertex degree. Note that $A$ is necessarily symmetric. This coefficient can be represented in the matrix notation as
$$\rho(x) = \frac{X'AX - 2m \bar{x}^2}{X'\text{diag}(D)X - 2m \bar{x}^2} $$
where the diagonal matrix $\text{diag}(D)$ is the matrix of vertex degrees, and the value $\bar{x}$ is the sample mean of the numerical node attribute $X$.
End of explanation |
3,377 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Map of Flights Taken
The goal of this post is to visualize flights taken from Google location data using Python
* This post utilizes code from Tyler Hartley's visualizing location history blog post
Overview
Setup
download data
install modules
Data Wrangling
data extraction
data exploration
data manipulation
Flight Algorithm
Visualize Flights
Conclusion
Setup
Use Google Takeout to download your Google location history
If you've previously enabled Google location reporting on your smartphone, your GPS data will be periodically uploaded to Google's servers. Use Google Takeout to download your location history.
The decisions of when and how to upload this data are entirely obfuscated to the end user, but as you'll see below, Android appears to upload a GPS location every 60 seconds. That's plenty of data to work with.
After downloading your data, install the required modules
Google Takeout
Google Takeout is a Google service that allows users to export any personal Google data. We'll use Takeout to download our raw location history as a one-time snapshot. Since Latitude was retired, no API exists to access location history in real-time.
Download location data
Step1: Data Wrangling
data extraction
Step2: Explore Data
view data and datatypes
Step3: accuracy code "999" may represent missingness
find earliest and latest observations in the data
save for later
Step4: data manipulation
Degrees and Radians
We're going to convert the degree-based geo data to radians to calculate distance traveled. I'm going to paraphrase an explanation (source below) about why the degree-to-radians conversion is necessary
Degrees are arbitrary because they’re based on the sun and backwards because they are from the observer’s perspective.
Radians are in terms of the mover allowing equations to “click into place”. Converting rotational to linear speed is easy, and ideas like sin(x)/x make sense.
Consult this post for more info about degrees and radians in distance calculation.
convert degrees to radians
Step5: calculate speed during trips (in km/hr)
Step6: Make a new dataframe containing the difference in location between each pair of points.
Any one of these pairs is a potential flight
Step7: Now flight_data contains a comparison of each adjacent GPS location.
All that's left to do is filter out the true flight instances from the rest of them.
spherical distance function
function to calculate straight-line distance traveled on a sphere
Step8: Flight algorithm
filter flights
remove flights using conservative selection criteria
Step9: This algorithm worked 100% of the time for me - no false positives or negatives. But the adjacency-criteria of the algorithm is fairly brittle. The core of it centers around the assumption that inter-flight GPS data will be directly adjacent to one another. That's why the initial screening on line 1 of the previous cell had to be so liberal.
Now, the flights DataFrame contains only instances of true flights which facilitates plotting with Matplotlib's Basemap. If we plot on a flat projection like tmerc, the drawgreatcircle function will produce a true path arc just like we see in the in-flight magazines.
Visualize Flights
Step10: You can draw entertaining conclusions from the flight visualization. For instance, you can see some popular layover locations, all those lines in/out of Seattle, plus a recent trip to Germany. And Basemap has made it so simple for us - no Shapefiles to import because all map information is included in the Basemap module.
Calculate all the miles you have traveled in the years observed with a single line of code
Step11: Conclusion
You've now got the code to go ahead and reproduce these maps.
I'm working on creating functions to automate these visualizations
Potential future directions
Figure out where you usually go on the weekends
Calculate your fastest commute route
measure the amount of time you spend driving vs. walking.
Download this notebook, or see a static view here | Python Code:
import json
import time
import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from IPython.display import Image
import fiona
from shapely.prepared import prep
from descartes import PolygonPatch
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Point, Polygon, MultiPoint, MultiPolygon
import warnings
warnings.filterwarnings('ignore')
Explanation: Map of Flights Taken
The goal of this post is to visualize flights taken from Google location data using Python
* This post utilizes code from Tyler Hartley's visualizing location history blog post
Overview
Setup
download data
install modules
Data Wrangling
data extraction
data exploration
data manipulation
Flight Algorithm
Visualize Flights
Conclusion
Setup
Use Google Takeout to download your Google location history
If you've previously enabled Google location reporting on your smartphone, your GPS data will be periodically uploaded to Google's servers. Use Google Takeout to download your location history.
The decisions of when and how to upload this data are entirely obfuscated to the end user, but as you'll see below, Android appears to upload a GPS location every 60 seconds. That's plenty of data to work with.
After downloading your data, install the required modules
Google Takeout
Google Takeout is a Google service that allows users to export any personal Google data. We'll use Takeout to download our raw location history as a one-time snapshot. Since Latitude was retired, no API exists to access location history in real-time.
Download location data:
* Go to takeout. Uncheck all services except "Location History"
* The data will be in a json format, which works great for us. Download it in your favorite compression type.
* When Google has finished creating your archive, you'll get an email notification and a link to download.
* Download and unzip the file, and you should be looking at a LocationHistory.json file.
Working with location data in Pandas
Pandas is an incredibly powerful tool that simplifies working with complex datatypes and performing statistical analysis in the style of R. Chris Albon has great primers on using Pandas here under the "Data Wrangling" section.
Install modules
If you use Anaconda to manage your Python packages, I recommend creating a virtual environment to install the dependencies. Copy the lines below each instruction into the terminal to create the environment, build the requirements.txt file, and install the packages.
conda create -n test-env python=3.5 anaconda
source activate test-env
make a requirements.txt file for dependencies
(echo descartes; echo IPython; echo shapely; echo fiona; echo Basemap) >> requirements.txt
install requirements.txt
conda install --yes --file requirements.txt
After completing the setup, we'll read in the LocationHistory.json file from Google Takeout and create a DataFrame.
End of explanation
with open('LocationHistory.json', 'r') as fh:
raw = json.loads(fh.read())
# use location_data as an abbreviation for location data
location_data = pd.DataFrame(raw['locations'])
del raw #free up some memory
# convert to typical units
location_data['latitudeE7'] = location_data['latitudeE7']/float(1e7)
location_data['longitudeE7'] = location_data['longitudeE7']/float(1e7)
location_data['timestampMs'] = location_data['timestampMs'].map(lambda x: float(x)/1000) #to seconds
location_data['datetime'] = location_data.timestampMs.map(datetime.datetime.fromtimestamp)
# Rename fields based on the conversions we just did
location_data.rename(columns={'latitudeE7':'latitude', 'longitudeE7':'longitude', 'timestampMs':'timestamp'}, inplace=True)
location_data = location_data[location_data.accuracy < 1000] #Ignore locations with accuracy estimates over 1000m
location_data.reset_index(drop=True, inplace=True)
Explanation: Data Wrangling
data extraction
End of explanation
location_data.head()
location_data.dtypes
location_data.describe()
Explanation: Explore Data
view data and datatypes
End of explanation
print("earliest observed date: {}".format(min(location_data["datetime"]).strftime('%m-%d-%Y')))
print("latest observed date: {}".format(max(location_data["datetime"]).strftime('%m-%d-%Y')))
earliest_obs = min(location_data["datetime"]).strftime('%m-%d-%Y')
latest_obs = max(location_data["datetime"]).strftime('%m-%d-%Y')
Explanation: accuracy code "999" may represent missingness
find earliest and latest observations in the data
save for later
End of explanation
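To see how common the suspicious accuracy code actually is, a quick check like the one below works; this is a small sketch I added based on the note above.
print("rows with accuracy == 999: {}".format((location_data.accuracy == 999).sum()))
print("fraction of remaining rows: {:.3f}".format((location_data.accuracy == 999).mean()))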
degrees_to_radians = np.pi/180.0
location_data['phi'] = (90.0 - location_data.latitude) * degrees_to_radians
location_data['theta'] = location_data.longitude * degrees_to_radians
# Compute distance between two GPS points on a unit sphere
location_data['distance'] = np.arccos(
np.sin(location_data.phi)*np.sin(location_data.phi.shift(-1)) * np.cos(location_data.theta - location_data.theta.shift(-1)) +
np.cos(location_data.phi)*np.cos(location_data.phi.shift(-1))) * 6378.100 # radius of earth in km
Explanation: data manipulation
Degrees and Radians
We're going to convert the degree-based geo data to radians to calculate distance traveled. To paraphrase an explanation (source linked below) of why the degrees-to-radians conversion is necessary:
Degrees are arbitrary because they're based on the sun, and backwards because they are from the observer's perspective.
Radians are in terms of the mover, allowing equations to "click into place": converting rotational to linear speed is easy, and ideas like sin(x)/x make sense.
Consult this post for more about degrees and radians in distance calculations.
convert degrees to radians
End of explanation
location_data['speed'] = location_data.distance/(location_data.timestamp - location_data.timestamp.shift(-1))*3600 #km/hr
Explanation: calculate speed during trips (in km/hr)
End of explanation
flight_data = pd.DataFrame(data={'end_lat':location_data.latitude,
'end_lon':location_data.longitude,
'end_datetime':location_data.datetime,
'distance':location_data.distance,
'speed':location_data.speed,
'start_lat':location_data.shift(-1).latitude,
'start_lon':location_data.shift(-1).longitude,
'start_datetime':location_data.shift(-1).datetime,
}).reset_index(drop=True)
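# Note (added for clarity): the Takeout export appears to be ordered newest-first, so
# shift(-1) pairs each row (the end_* point) with the point recorded just before it
# (the start_* point).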
Explanation: Make a new dataframe containing the difference in location between each pair of points.
Any one of these pairs is a potential flight
End of explanation
def distance_on_unit_sphere(lat1, long1, lat2, long2):
# http://www.johndcook.com/python_longitude_latitude.html
# Convert latitude and longitude to spherical coordinates in radians.
degrees_to_radians = np.pi/180.0
# phi = 90 - latitude
phi1 = (90.0 - lat1)*degrees_to_radians
phi2 = (90.0 - lat2)*degrees_to_radians
# theta = longitude
theta1 = long1*degrees_to_radians
theta2 = long2*degrees_to_radians
cos = (np.sin(phi1)*np.sin(phi2)*np.cos(theta1 - theta2) +
np.cos(phi1)*np.cos(phi2))
arc = np.arccos( cos )
# Remember to multiply arc by the radius of the earth
# in your favorite set of units to get length.
return arc
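# Quick sanity check (illustrative values, not from the original notebook): Seattle to
# San Francisco is roughly 1,090 km by great circle, so this should print ~1090.
# print(distance_on_unit_sphere(47.61, -122.33, 37.77, -122.42) * 6378.1)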
Explanation: Now flight_data contains a comparison of each adjacent pair of GPS locations.
All that's left to do is filter out the true flight instances from the rest of them.
spherical distance function
function to calculate great-circle distance traveled on a sphere
End of explanation
flights = flight_data[(flight_data.speed > 40) & (flight_data.distance > 80)].reset_index()
# Combine instances of flight that are directly adjacent
# Find the indices of flights that are directly adjacent
_f = flights[flights['index'].diff() == 1]
adjacent_flight_groups = np.split(_f, (_f['index'].diff() > 1).nonzero()[0])
# Now iterate through the groups of adjacent flights and merge their data into
# one flight entry
for flight_group in adjacent_flight_groups:
idx = flight_group.index[0] - 1 #the index of flight termination
flights.loc[idx, ['start_lat', 'start_lon', 'start_datetime']] = [flight_group.iloc[-1].start_lat,
flight_group.iloc[-1].start_lon,
flight_group.iloc[-1].start_datetime]
# Recompute total distance of flight
flights.loc[idx, 'distance'] = distance_on_unit_sphere(flights.loc[idx].start_lat,
flights.loc[idx].start_lon,
flights.loc[idx].end_lat,
flights.loc[idx].end_lon)*6378.1
# Now remove the "flight" entries we don't need anymore.
flights = flights.drop(_f.index).reset_index(drop=True)
# Finally, we can be confident that we've merged flights that were broken up by
# GPS data points recorded mid-flight. Since each flight now carries its full
# distance, we can apply a stricter cut: drop any instances below 200 km.
flights = flights[flights.distance > 200].reset_index(drop=True)
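# Optional: eyeball the surviving flights before plotting (my addition).
# flights[['start_datetime', 'end_datetime', 'distance']].head()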
Explanation: Flight algorithm
filter flights
remove flights using conservative selection criteria
End of explanation
fig = plt.figure(figsize=(18,12))
# Plotting across the international dateline is tough. One option is to break up flights
# by hemisphere. Otherwise, you'd need to plot using a different projection like 'robin'
# and potentially center on the Int'l Dateline (lon_0=-180)
# flights = flights[(flights.start_lon < 0) & (flights.end_lon < 0)]# Western Hemisphere Flights
# flights = flights[(flights.start_lon > 0) & (flights.end_lon > 0)] # Eastern Hemisphere Flights
xbuf = 0.2
ybuf = 0.35
min_lat = np.min([flights.end_lat.min(), flights.start_lat.min()])
min_lon = np.min([flights.end_lon.min(), flights.start_lon.min()])
max_lat = np.max([flights.end_lat.max(), flights.start_lat.max()])
max_lon = np.max([flights.end_lon.max(), flights.start_lon.max()])
width = max_lon - min_lon
height = max_lat - min_lat
m = Basemap(llcrnrlon=min_lon - width* xbuf,
llcrnrlat=min_lat - height*ybuf,
urcrnrlon=max_lon + width* xbuf,
urcrnrlat=max_lat + height*ybuf,
projection='merc',
resolution='l',
lat_0=min_lat + height/2,
lon_0=min_lon + width/2,)
m.drawmapboundary(fill_color='#EBF4FA')
m.drawcoastlines()
m.drawstates()
m.drawcountries()
m.fillcontinents()
current_date = time.strftime("printed: %a, %d %b %Y", time.localtime())
for idx, f in flights.iterrows():
m.drawgreatcircle(f.start_lon, f.start_lat, f.end_lon, f.end_lat, linewidth=3, alpha=0.4, color='b' )
m.plot(*m(f.start_lon, f.start_lat), color='g', alpha=0.8, marker='o')
m.plot(*m(f.end_lon, f.end_lat), color='r', alpha=0.5, marker='o' )
fig.text(0.125, 0.18, "Data collected from 2013-2017 on Android \nPlotted using Python, Basemap \n%s" % (current_date),
ha='left', color='#555555', style='italic')
fig.text(0.125, 0.15, "kivanpolimis.com", color='#555555', fontsize=16, ha='left')
plt.savefig('flights.png', dpi=150, frameon=False, transparent=False, bbox_inches='tight', pad_inches=0.2)
Image(filename='flights.png')
Explanation: This algorithm worked 100% of the time for me - no false positives or negatives. But the adjacency criterion at its core is fairly brittle: it assumes that the GPS points recorded during a flight appear directly adjacent to one another in the data. That's why the initial screening on line 1 of the previous cell had to be so liberal.
Now, the flights DataFrame contains only instances of true flights which facilitates plotting with Matplotlib's Basemap. If we plot on a flat projection like tmerc, the drawgreatcircle function will produce a true path arc just like we see in the in-flight magazines.
Visualize Flights
End of explanation
flights_in_miles = round(flights.distance.sum()*.621371) # distance column is in km, convert to miles
flights_in_miles
print("{0} miles traveled from {1} to {2}".format(flights_in_miles, earliest_obs, latest_obs))
Explanation: You can draw entertaining conclusions from the flight visualization. For instance, you can see some popular layover locations, all those lines in/out of Seattle, plus a recent trip to Germany. And Basemap has made it so simple for us - no Shapefiles to import because all map information is included in the Basemap module.
Calculate all the miles you have traveled in the years observed with a single line of code:
End of explanation
import time
print("last updated: {}".format(time.strftime("%a, %d %b %Y %H:%M", time.localtime())))
Explanation: Conclusion
You've now got the code to go ahead and reproduce these maps.
I'm working on creating functions to automate these visualizations
Potential future directions
Figure out where you usually go on the weekends
Calculate your fastest commute route
Measure the amount of time you spend driving vs. walking
Download this notebook, or see a static view here
End of explanation |
3,378 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2>HW #5</h2>
Matt Buchovecky
Astro 283
Step1: <h2> Problem 1 </h2>
$$p\left(x\mid \alpha,\beta\right) = \left\{
\begin{array}{ll}
\alpha^{-1}\exp\left(-\frac{x+\beta}{\alpha}\right)I_0\left(\frac{2\sqrt{x\beta}}{\alpha}\right) & \quad\quad x\geq 0 \\
0 & \quad\quad x<0
\end{array}
\right.
$$
Step6: <h2> Problem 2 </h2>
<h3> Comparing fit of second and third order polynomial models </h3>
We wish to find whether these data are better fit by a second- or third-order polynomial model. We use Monte Carlo integration to estimate the ratio
\begin{eqnarray}
\frac{P\left(M_2\mid\{X_k,Y_k,\sigma_k\}\right)}{P\left(M_3\mid\{X_k,Y_k,\sigma_k\}\right)} &=& \frac{\int p\left(M_2,\vec{a}\mid\{X_k,Y_k,\sigma_k\}\right)d\vec{a}}{\int p\left(M_3,\vec{b}\mid\{X_k,Y_k,\sigma_k\}\right) d\vec{b}}\\
&=& \frac{\int P\left(\{X_k,Y_k,\sigma_k\}\mid M_2,\vec{a}\right)p\left(M_2,\vec{a}\right) d\vec{a}}{\int P\left(\{X_k,Y_k,\sigma_k\}\mid M_3,\vec{b}\right)p\left(M_3,\vec{b}\right) d\vec{b}}
\end{eqnarray}
The first line marginalizes over the model parameters; the second applies Bayes' theorem, where the probability of the data $P\left(\{X_k,Y_k,\sigma_k\}\right)$ is common to both models and cancels.
If we assume the priors are uniform, they are equal to the inverse of the volume in phase space
\begin{eqnarray}
\frac{p\left(M_2,\vec{a}\right)}{p\left(M_3,\vec{b}\right)} &=& \frac{\prod_{i=1}^3\frac{1}{a_i^\text{max}-a_i^\text{min}}}{\prod_{i=1}^4\frac{1}{b_i^\text{max}-b_i^\text{min}}}
\end{eqnarray}
The likelihood functions are those for independent and Gaussian distributed errors
\begin{eqnarray}
P\left(\{X_k,Y_k,\sigma_k\}\mid M_n,\vec{c}\right) &=& \prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2_i}}\exp\left[-\frac{(y_i-f_n(x_i,\vec{c}))^2}{2\sigma_i^2}\right]\\
&=& \left(\prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2_i}}\right)\exp\left(-\frac{\chi_0^2}{2}\right)\exp\left(-\frac{\chi^2-\chi_0^2}{2}\right)
\end{eqnarray}
where $\vec{c}$ is just a generalized set of parameters, $\vec{a}$ and $\vec{b}$, $n$ is the order of the polynomial model, and $N$ is the number of data points, and the $f$ functions are just the polynomial models
Step7: The gaussian function we sample from is centered around the best fit parameters, and is given by
Step10: The chi squared term tended to make the computation excessively large, too large for even 128-bit floats to handle. To
make the numbers in the exponential smaller, we compare to the best fit parameters, rewriting
Step13: It looks like the better fit is quadratic
$$
\frac{P\left(M_2\mid{X_k,Y_k,\sigma_k}\right)}{P\left(M_3\mid{X_k,Y_k,\sigma_k}\right)} \sim 10^1
$$
<h4>Other stuff I tried</h4> | Python Code:
# import modules
import numpy as np
from matplotlib import pyplot
%matplotlib inline
from scipy import optimize, stats, special
Explanation: <h2>HW #5</h2>
Matt Buchovecky
Astro 283
End of explanation
# define the pdf for the Rice distribution as a subclass of rv_continuous
class Rice_dist(stats.rv_continuous):
"Rice distribution class"
def _pdf(self, x, alpha, beta):
try:
iter(x)
except TypeError:
if x < 0:
return 0
x = [ x ]
x = np.asarray(x)
condlist = [ x>0 ]
choicelist = [ (1/alpha)*np.exp((x+beta)/(-alpha))*special.iv(0, 2*np.sqrt(x*beta)/alpha) ]
#return np.select(condlist, choicelist, default=0.0)
return (1/alpha)*np.exp((x+beta)/(-alpha))*special.iv(0, 2*np.sqrt(x*beta)/alpha)
# create an instance of the Rice distribution and create a set of samples from that distribution
rice_inst = Rice_dist(a=0.0, name='Rice name') # b=inf
alpha, beta = (7., 47.)
mean_x = alpha + beta
variance_x = alpha**2 + 2*alpha*beta
rice_trials_array = rice_inst.rvs(alpha=alpha, beta=beta, size=500)
# compare the histogram of samples to the Rice curve
x = np.arange(rice_trials_array.min(), rice_trials_array.max(), 0.1)
f = rice_inst._pdf(x, alpha, beta)
p = pyplot.plot(x, f)
h = pyplot.hist(rice_trials_array, normed=True)
pyplot.xlabel('x')
pyplot.ylabel('pdf')
pyplot.title("pdf vs normed histogram of trials")
pyplot.xlim(rice_trials_array.min(), rice_trials_array.max())
rice_trials_array.max()
# find a nice normal curve similar to this distribution, and find the M factor
guess = 45.
sigma = 35.
center = optimize.fmin(lambda x: -rice_inst._pdf(x,alpha,beta), guess)
x_M = optimize.fmin(lambda x: -rice_inst._pdf(x, alpha, beta)/stats.norm.pdf(x, center, sigma), center)
M = rice_inst._pdf(x_M, alpha, beta)/stats.norm.pdf(x_M, center, sigma)
print(M)
# compare f(x) and g(x) curves
fp = pyplot.plot(x, f, label='f')
gp = pyplot.plot(x, stats.norm.pdf(x, center, sigma), label='g')
pyplot.xlabel('x')
pyplot.ylabel('pdf')
pyplot.title("f(x) vs g(x) for my set parameters")
pyplot.xlim(rice_trials_array.min(), rice_trials_array.max())
pyplot.legend(loc='center left', bbox_to_anchor=(1., 0.5))
# sample points the old-fashioned way
num_points = 1000
sample_set = []
u_set = []
n = 0
#M=15
while n < num_points:
u_rand = np.random.uniform(0, 1)
# generate random x from normal distribution, could use Box-Muller method here
x_rand = np.random.normal(center, sigma)
g_x = stats.norm.pdf(x_rand, center, sigma)
f_x = rice_inst._pdf(x_rand, alpha, beta)
if u_rand < f_x / (M*g_x):
if f_x / (M*g_x) > 1:
print(f_x / (g_x))
sample_set.append(x_rand)
u_set.append(u_rand)
n = n + 1
u_rand = np.random.uniform(0, 1)
print(u_rand)
# write the file, as the samples look good
outfile = open("./buchovecky_matt_hw5_data.txt", 'w')
for trial in np.nditer(rice_trials_array):
outfile.write(str(trial)+'\n')
outfile.close()
# compare expectation value and variance to those of the Rice distribution
print(mean_x)
print(rice_inst.expect(args=(alpha,beta)))
print(variance_x)
print(rice_inst.var(alpha=alpha, beta=beta))
Explanation: <h2> Problem 1 </h2>
$$p\left(x\mid \alpha,\beta\right) = \left\{
\begin{array}{ll}
\alpha^{-1}\exp\left(-\frac{x+\beta}{\alpha}\right)I_0\left(\frac{2\sqrt{x\beta}}{\alpha}\right) & \quad\quad x\geq 0 \\
0 & \quad\quad x<0
\end{array}
\right.
$$
End of explanation
# read in the data for part 2
infile = open("./hw5-data.txt", 'r')
x_arr = [ ]
y_arr = [ ]
sigma_arr = [ ]
for line in iter(infile):
line = line.split()
try:
float(line[0]) and float(line[1]) and float(line[2])
x_arr.append(float(line[0]))
y_arr.append(float(line[1]))
sigma_arr.append(float(line[2]))
except ValueError:
continue
infile.close()
print(x_arr)
print(y_arr)
print(sigma_arr)
# define functions for the models, chi square, and marginal probability
def poly_2nd_ord(x, a0, a1, a2):
return the value of a second order polynomial
return a0 + a1*x + a2*x**2
def poly_3rd_ord(x, a0, a1, a2, a3):
return the value of a third order polynomial
return a0 + a1*x + a2*x**2 + a3*pow(x,3)
def chi_square(function, params, x_vals, data_vals, sig_vals):
return the chi square residuals comparing the values of a function
with parameters for a set of x values compared to some values for the data and errors
#check ndarray type
residuals = (data_vals - function(x_vals, *params))
chis = (residuals**2)/np.power(sig_vals,2)
chi_sum = np.sum(chis)
d_o_f = len(x_vals) - len(params)
#return (chi_sum, d_o_f)
return chi_sum
def gauss_vec_cov(vals, vals_0, cov):
the gaussian function for vector argument and a covariace matrix
vals is the vector evaluating, vals_0 is the center vector, and cov is the covariance matrix
if type(vals) is not np.matrix:
vals = np.matrix(vals).T
if type(vals_0) is not np.matrix:
vals_0 = np.matrix(vals_0).T
if type(cov) is not np.matrix:
cov = np.matrix(cov)
vec = np.matrix(vals - vals_0)
return 1/((2*np.pi)*np.sqrt(np.linalg.det(cov)))*np.exp(-0.5*vec.T*cov.I*vec)
Explanation: <h2> Problem 2 </h2>
<h3> Comparing fit of second and third order polynomial models </h3>
We wish to find if these data are better fit by a second or third order polynomial model. We use Monte Carlo integration to estimate the ratio
\begin{eqnarray}
\frac{P\left(M_2\mid{X_k,Y_k,\sigma_k}\right)}{P\left(M_3\mid{X_k,Y_k,\sigma_k}\right)} &=& \frac{\int p\left(M_2,\vec{a}\mid{X_k,Y_k,\sigma_k}\right)d\vec{a}}{\int p\left(M_3,\vec{b}\mid{X_k,Y_k,\sigma_k}\right) d\vec{b}}\
&=& \frac{\int P\left({X_k,Y_k,\sigma_k}\mid M_2,\vec{a}\right)p\left(M_2,\vec{a}\right) d\vec{a}}{\int P\left({X_k,Y_k,\sigma_k}\mid M_3,\vec{b}\right)p\left(M_3,\vec{b}\right) d\vec{b}} \
\end{eqnarray}
the first line marginalizes the probability, then the second line applies Baye's theorem, where the probability of the data $P\left({X_k,Y_k,\sigma_k}\right)$ is common to both and cancels. \
If we assume the priors are uniform, they are equal to the inverse of the volume in phase space
\begin{eqnarray}
\frac{p\left(M_2,\vec{a}\right)}{p\left(M_3,\vec{b}\right)} &=& \frac{\prod_{i=1}^3\frac{1}{a_i^\text{max}-a_i^\text{min}}}{\prod_{i=1}^4\frac{1}{b_i^\text{max}-b_i^\text{min}}}
\end{eqnarray}
The likelihood functions are those for independent and Gaussian distributed errors
\begin{eqnarray}
P\left({X_k,Y_k,\sigma_k}\mid M_n,\vec{c}\right) &=& \prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2_i}}\exp\left[-\frac{(y_i-f_n(x_i,\vec{c}))^2}{2\sigma_i^2}\right]\
&=& \left(\prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2_i}}\right)\exp\left(-\frac{\chi_0^2}{2}\right)\exp\left(-\frac{\chi^2-\chi_0^2}{2}\right)
\end{eqnarray}
where $\vec{c}$ is just a generalized set of parameters, $\vec{a}$ and $\vec{b}$, $n$ is the order of the polynomial model, and $N$ is the number of data points, and the $f$ functions are just the polynomial models:
\begin{eqnarray}
f_2(x_i,\vec{a}) &=& a_2x^2_i+a_1x_i+a_0 \
f_3(x_i,\vec{b}) &=& b_3x^3_i+b_2x^2_i+b_1x_i+b_0
\end{eqnarray}
End of explanation
# find the optimal fits for the models
guess_2 = (1, 1, 1)
p_opt_2, p_cov_2 = optimize.curve_fit(poly_2nd_ord, x_arr, y_arr, guess_2, sigma_arr)
guess_3 = (1, 1, 1, 1)
p_opt_3, p_cov_3 = optimize.curve_fit(poly_3rd_ord, x_arr, y_arr, guess_3, sigma_arr)
for lst in [p_opt_2, p_cov_2, p_opt_3, p_cov_3]:
print(lst)
# plot the data against the optimal fits for both models
x_arr = np.asarray(x_arr)
x_range = np.arange(x_arr.min()-1., x_arr.max()+1., 0.1)
pyplot.errorbar(x_arr, y_arr, yerr=sigma_arr, fmt='bo', label='data')
pyplot.plot(x_range, poly_2nd_ord(x_range, *p_opt_2), 'r-', label='quad fit')
pyplot.plot(x_range, poly_3rd_ord(x_range, *p_opt_3), 'g-', label='cubic fit')
pyplot.ylabel('y')
pyplot.xlabel('x')
pyplot.title("fits comparison")
pyplot.legend(loc='center left', bbox_to_anchor=(1., 0.5))
Explanation: The gaussian function we sample from is centered around the best fit parameters, and is given by:
\begin{eqnarray}
g_2\left(\vec{a},\vec{a}_0,\Sigma_2\right) &=& \frac{1}{\sqrt{(2\pi)^2\det\Sigma_2}}\exp\left[-\frac{1}{2}(\vec{a}-\vec{b}_0)^T\Sigma_2^{-1}(\vec{a}-\vec{a}_0)\right]\
g_3\left(\vec{b},\vec{b}_0,\Sigma_3\right) &=& \frac{1}{\sqrt{(2\pi)^3\det\Sigma_3}}\exp\left[-\frac{1}{2}(\vec{b}-\vec{b}_0)^T\Sigma_3^{-1}(\vec{b}-\vec{b}_0)\right]
\end{eqnarray}
where $\Sigma$ represents the covariance matrix from the best fit
End of explanation
# Monte Carlo integration, uniform sampling
def MC_int_uni_sampl(func, p_opt, p_cov, N, s=3.0):
Monte Carlo integration function
uniform sampling
integral_sum = 0
V = np.prod([2*s*np.sqrt(p_cov[i][i]) for i in range(0, len(p_opt))])
chisq_0 = chi_square(func, p_opt, x_arr, y_arr, sigma_arr)
for n in range(0, N):
param_rands = [ np.random.uniform(p_opt[i]-s*np.sqrt(p_cov[i][i]), p_opt[i]+s*np.sqrt(p_cov[i][i])) for i in range(0, len(p_opt)) ]
chisq = chi_square(func, param_rands, x_arr, y_arr, sigma_arr)
integral_sum += np.exp(-0.5*(chisq-chisq_0))
#return V * integral_sum / N
return integral_sum / N # the true integral is above, but for uniform sampling V will cancel with priors, so I leave it out
# Monte Carlo integration sampling from a gaussian
def MC_int_gaus_sampl(func, p_opt, p_cov, N, s=3.0):
Monte Carlo integration function
samples from a gaussian distribution center around p_opt
s tells the range of standard deviations to integrate over
n = 0
int_sum = 0
Mg = gauss_vec_cov(p_opt, p_opt, p_cov)
chisq_0 = chi_square(func, p_opt, x_arr, y_arr, sigma_arr)
while n < N:
u_rand = np.random.rand()
param_rands = [ np.random.uniform(p_opt[i]-s*np.sqrt(p_cov[i][i]), p_opt[i]+s*np.sqrt(p_cov[i][i])) for i in range(0, len(p_opt)) ]
f_rej = gauss_vec_cov(param_rands, p_opt, p_cov)
# f in rejection sampling -> g in mc integral
if u_rand < f_rej/Mg:
chisq = chi_square(func, param_rands, x_arr, y_arr, sigma_arr)
int_sum += np.exp(-0.5*(chisq-chisq_0))/f_rej
n += 1
return int_sum / N
# compute ratio from uniform sampling, quadratic to cubic
mc_int_2 = MC_int_uni_sampl(poly_2nd_ord, p_opt_2, p_cov_2, 10000, s=3.0)
mc_int_3 = MC_int_uni_sampl(poly_3rd_ord, p_opt_3, p_cov_3, 10000, s=3.0)
chisq_2 = chi_square(poly_2nd_ord, p_opt_2, x_arr, y_arr, sigma_arr)
chisq_3 = chi_square(poly_3rd_ord, p_opt_3, x_arr, y_arr, sigma_arr)
ratio_uni = mc_int_2 / mc_int_3 * np.exp(-0.5*(chisq_2-chisq_3))
print(ratio_uni)
# compute ratio from gaussian
mc_int_2 = MC_int_gaus_sampl(poly_2nd_ord, p_opt_2, p_cov_2, 1000, s=1.0)
mc_int_3 = MC_int_gaus_sampl(poly_3rd_ord, p_opt_3, p_cov_3, 1000, s=1.0)
chisq_2 = chi_square(poly_2nd_ord, p_opt_2, x_arr, y_arr, sigma_arr)
chisq_3 = chi_square(poly_3rd_ord, p_opt_3, x_arr, y_arr, sigma_arr)
param3_spread = 10
ratio_uni = mc_int_2 / (mc_int_3/param3_spread) * np.exp(-0.5*(chisq_2-chisq_3))
print(ratio_uni)
Explanation: The chi squared term tended to make the computation excessively large, too large for even 128-bit floats to handle. To
make the numbers in the exponential smaller, we compare to the best fit parameters, rewriting:
$$\prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2_i}}\exp\left[-\frac{(y_i-f_n(x_i,\vec{c}))^2}{2\sigma_i^2}\right]
= \left(\prod_{i=1}^N\frac{1}{\sqrt{2\pi\sigma^2_i}}\right)\exp\left(-\frac{\chi_0^2}{2}\right)\exp\left(-\frac{\chi^2-\chi_0^2}{2}\right)$$
where the chi squared terms can be written as:
\begin{eqnarray}
\chi^2 &=& \sum_{i=1}^N\frac{(y_i-f_n(x_i,\vec{c}))^2}{\sigma_i^2}\
\chi^2_0 &=& \sum_{i=1}^N\frac{(y_i-f_n(x_i,\vec{c}_0))^2}{\sigma_i^2}
\end{eqnarray}
and $A_0$ is the set of best fit parameters
For the integration we use the Monte Carlo integration method. The ratio is computed below. Note that for the integral with uniform sampling the priors cancel with the volume of the integral. When the sampling isn't uniform, the priors won't cancel with the volume, but will partially cancel between models, with one parameter leftover for the 3rd order equation
End of explanation
print(MC_int_uni_sampl(poly_2nd_ord, p_opt_2, p_cov_2, 10000, s=1.0))
print(MC_int_gaus_sampl(poly_2nd_ord, p_opt_2, p_cov_2, 10000, s=1.0))
print(MC_int_uni_sampl(poly_3rd_ord, p_opt_3, p_cov_3, 10000, s=1.0))
print(MC_int_gaus_sampl(poly_3rd_ord, p_opt_3, p_cov_3, 1000, s=1.0))
#print det of cov
print(np.linalg.det(p_cov_2))
# general random rejection method
def rand_rejection(f, g_rand, a, b):
generate a random variable using the rejection method
f is the pdf, sample from g_rand, a and b are the upper and lower bounds, respectively
u_rand = np.random.uniform(0, 1)
M = 5.
# define a general polynomial
def polynomial_gen(x, params):
'''general polynomial function'''
order = len(params) + 1
sum = 0
for i in range(0, order):
sum = sum + params[i]*np.power(x, i)
return sum
# define the pdf for the Rice distribution as a subclass of rv_continuous
class Rice_dist2(stats.rv_continuous):
test
def _pdf(self, x, alpha, beta):
try:
iter(x)
r_arr = np.zeros(x.shape())
for i in iter(x):
if x > 0:
print('hi')
except TypeError:
return (1/alpha)*np.exp((x+beta)/(-alpha))*special.iv(0, 2*np.sqrt(x*beta)/alpha)
from timeit import timeit
ar = [ 5, 5, 5 ]
ndar = np.asarray(ar)
%timeit np.asarray(ar)
%timeit np.asarray(ndar)
Rice_dist2?
Explanation: It looks like the better fit is quadratic
$$
\frac{P\left(M_2\mid{X_k,Y_k,\sigma_k}\right)}{P\left(M_3\mid{X_k,Y_k,\sigma_k}\right)} \sim 10^1
$$
<h4>Other stuff I tried</h4>
End of explanation |
3,379 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Markov Chains
author
Step1: Markov chains have log probability, fit, summarize, and from summaries methods implemented. They do not have classification capabilities by themselves, but when combined with a Naive Bayes classifier can be used to do discrimination between multiple models (see the Naive Bayes tutorial notebook).
Lets see the log probability of some data.
Step2: We can fit the model to sequences which we pass in, and as expected, get better performance on sequences which we train on. | Python Code:
from pomegranate import *
%pylab inline
d1 = DiscreteDistribution({'A': 0.10, 'C': 0.40, 'G': 0.40, 'T': 0.10})
d2 = ConditionalProbabilityTable([['A', 'A', 0.10],
['A', 'C', 0.50],
['A', 'G', 0.30],
['A', 'T', 0.10],
['C', 'A', 0.10],
['C', 'C', 0.40],
['C', 'T', 0.40],
['C', 'G', 0.10],
['G', 'A', 0.05],
['G', 'C', 0.45],
['G', 'G', 0.45],
['G', 'T', 0.05],
['T', 'A', 0.20],
['T', 'C', 0.30],
['T', 'G', 0.30],
['T', 'T', 0.20]], [d1])
clf = MarkovChain([d1, d2])
Explanation: Markov Chains
author: Jacob Schreiber <br>
contact: [email protected]
Markov Chains are a simple model based on conditional probability, where a sequence is modelled as the product of conditional probabilities. A n-th order Markov chain looks back n emissions to base its conditional probability on. For example, a 3rd order Markov chain models $P(X_{t} | X_{t-1}, X_{t-2}, X_{t-3})$.
However, a full Markov model needs to model the first observations, and the first n-1 observations. The first observation can't really be modelled well using $P(X_{t} | X_{t-1}, X_{t-2}, X_{t-3})$, but can be modelled by $P(X_{t})$. The second observation has to be modelled by $P(X_{t} | X_{t-1} )$. This means that these distributions have to be passed into the Markov chain as well.
We can initialize a Markov chain easily enough by passing in a list of the distributions.
End of explanation
clf.log_probability( list('CAGCATCAGT') )
clf.log_probability( list('C') )
clf.log_probability( list('CACATCACGACTAATGATAAT') )
Explanation: Markov chains have log probability, fit, summarize, and from summaries methods implemented. They do not have classification capabilities by themselves, but when combined with a Naive Bayes classifier can be used to do discrimination between multiple models (see the Naive Bayes tutorial notebook).
Lets see the log probability of some data.
End of explanation
clf.fit( map( list, ('CAGCATCAGT', 'C', 'ATATAGAGATAAGCT', 'GCGCAAGT', 'GCATTGC', 'CACATCACGACTAATGATAAT') ) )
print clf.log_probability( list('CAGCATCAGT') )
print clf.log_probability( list('C') )
print clf.log_probability( list('CACATCACGACTAATGATAAT') )
print clf.distributions[0]
print clf.distributions[1]
Explanation: We can fit the model to sequences which we pass in, and as expected, get better performance on sequences which we train on.
End of explanation |
3,380 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data
Step3: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
Step4: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
Step6: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint
Step7: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint
Step8: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step9: Problem 4
Convince yourself that the data is still good after shuffling!
Finally, let's save the data for later reuse | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matlotlib backend as plotting inline in IPython
%matplotlib inline
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 1% change in download progress.
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
Download a file if not present, and make sure it's the right size.
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
Load the data for a single letter label.
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
for image_index, image in enumerate(image_files):
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
num_images = image_index + 1
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
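# A quick visual sanity check of the pickled data (my own sketch, not part of the
# provided starter code): load the first training pickle and display one sample.
# with open(train_datasets[0], 'rb') as f:
#     sample_letters = pickle.load(f)
# plt.imshow(sample_letters[0], cmap='gray')
# plt.show()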
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable, we'll just skip them.
End of explanation
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
Problem 3
Another check: we expect the data to be balanced across classes. Verify that.
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
End of explanation
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
Explanation: Problem 4
Convince yourself that the data is still good after shuffling!
Finally, let's save the data for later reuse:
End of explanation |
3,381 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started
This is a simple example of the basic capabilities of aneris.
First, model and history data are read in. The model is then harmonized. Finally, output is analyzed.
Step1: The driver is used to execute the harmonization. It will handle the data formatting needed to execute the harmonizaiton operation and stores the harmonized results until they are needed.
Some logging output is provided. It can be suppressed with
aneris.logger().setLevel('WARN')
Step2: All data of interest is combined in order to easily view it. We will specifically investigate output for the World in this example. A few operations are performed in order to get the data into a plotting-friendly format. | Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import aneris
from aneris.tutorial import load_data
%matplotlib inline
Explanation: Getting Started
This is a simple example of the basic capabilities of aneris.
First, model and history data are read in. The model is then harmonized. Finally, output is analyzed.
End of explanation
model, hist, driver = load_data()
for scenario in driver.scenarios():
driver.harmonize(scenario)
harmonized, metadata, diagnostics = driver.harmonized_results()
Explanation: The driver is used to execute the harmonization. It will handle the data formatting needed to execute the harmonizaiton operation and stores the harmonized results until they are needed.
Some logging output is provided. It can be suppressed with
aneris.logger().setLevel('WARN')
End of explanation
data = pd.concat([hist, model, harmonized])
df = data[data.Region.isin(['World'])]
df = pd.melt(df, id_vars=aneris.iamc_idx, value_vars=aneris.numcols(df),
var_name='Year', value_name='Emissions')
df['Label'] = df['Model'] + ' ' + df['Variable']
df.head()
sns.lineplot(x=df.Year.astype(int), y=df.Emissions, hue=df.Label)
plt.legend(bbox_to_anchor=(1.05, 1))
Explanation: All data of interest is combined in order to easily view it. We will specifically investigate output for the World in this example. A few operations are performed in order to get the data into a plotting-friendly format.
End of explanation |
3,382 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Solution of Axelrod 1980
Step1: Implement the five strategies
Step2: Write a function that accepts the name of two strategies and competes them in a game of iterated prisoner's dilemma for a given number of turns.
Step3: Implement a round-robin tournament, in which each strategy is played against every other (including against itself) for 10 rounds of 1000 turns each. | Python Code:
import numpy as np
Explanation: Solution of Axelrod 1980
End of explanation
# We are going to implement five strategies.
# Each strategy takes as input the history of the turns played so far
# and returns 1 for cooperation and 0 for defection.
# 1) Always defect
def always_defect(previous_steps):
return 0
# 2) Always cooperate
def always_cooperate(previous_steps):
return 1
# 3) Purely random, with probability of defecting 0.5
def random(previous_steps):
if np.random.random(1) > 0.5:
return 1
return 0
# 4) Tit for tat
def tit_for_tat(previous_steps):
if len(previous_steps) == 0:
return 1
return previous_steps[-1]
# 5) Tit for two tat
def tit_for_two_tat(previous_steps):
if len(previous_steps) < 2:
return 1
# if the other player defected twice
if sum(previous_steps[-2:]) == 0:
# retaliate
return 0
return 1
Explanation: Implement the five strategies
End of explanation
def play_strategies(strategy_1, strategy_2, nsteps = 200):
# The following two lines are a bit complicated:
# we want to match a string (strategy_1) with a name of the function
# and the call globals()[strategy_1] does just that. Now
# pl1 is an "alias" for the same function.
pl1 = globals()[strategy_1]
pl2 = globals()[strategy_2]
# If you prefer, you can deal with this problem by using
# a series of if elif.
# Now two vectors to store the moves of the players
steps_pl1 = []
steps_pl2 = []
# And two variables for keeping the scores.
# (because we said these are numbers of years in prison, we
# use negative payoffs, with less negative being better)
points_pl1 = 0
points_pl2 = 0
# Iterate over the number of steps
for i in range(nsteps):
# decide strategy:
# player 1 chooses using the history of the moves by player 2
last_pl1 = pl1(steps_pl2)
# and vice versa
last_pl2 = pl2(steps_pl1)
# calculate payoff
if last_pl1 == 1 and last_pl2 == 1:
# both cooperate -> -1 point each
points_pl1 = points_pl1 - 1
points_pl2 = points_pl2 - 1
elif last_pl1 == 0 and last_pl2 == 1:
# pl2 lose
points_pl1 = points_pl1 - 0
points_pl2 = points_pl2 - 3
elif last_pl1 == 1 and last_pl2 == 0:
# pl1 lose
points_pl1 = points_pl1 - 3
points_pl2 = points_pl2 - 0
else:
# both defect
points_pl1 = points_pl1 - 2
points_pl2 = points_pl2 - 2
# add the moves to the history
steps_pl1.append(last_pl1)
steps_pl2.append(last_pl2)
# return the final scores
return((points_pl1, points_pl2))
play_strategies("random", "always_defect")
Explanation: Write a function that accepts the name of two strategies and competes them in a game of iterated prisoner's dilemma for a given number of turns.
End of explanation
def round_robin(strategies, nround, nstep):
nstrategies = len(strategies)
# initialize list for results
strategies_points = [0] * nstrategies
# for each pair
for i in range(nstrategies):
for j in range(i, nstrategies):
print("Playing", strategies[i], "vs.", strategies[j])
for k in range(nround):
res = play_strategies(strategies[i],
strategies[j],
nstep)
#print(res)
strategies_points[i] = strategies_points[i] + res[0]
strategies_points[j] = strategies_points[j] + res[1]
print("\nThe final results are:")
for i in range(nstrategies):
print(strategies[i] + ":", strategies_points[i])
print("\nand the winner is....")
print(strategies[strategies_points.index(max(strategies_points))])
my_strategies = ["always_defect",
"always_cooperate",
"random",
"tit_for_tat",
"tit_for_two_tat"]
round_robin(my_strategies, 10, 1000)
Explanation: Implement a round-robin tournament, in which each strategy is played against every other (including against itself) for 10 rounds of 1000 turns each.
End of explanation |
3,383 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Programming Tools for Data Science (数据科学的编程工具)
A Brief Introduction to Python (Python使用简介)
王成军
[email protected]
计算传播网 http
Step1: Variable Type
Step2: dir & help
当你想要了解对象的详细信息时使用
Step3: type
当你想要了解变量类型时使用type
Step4: Data Structure
list, tuple, set, dictionary, array
Step5: 定义函数
Step6: For 循环
Step7: map
Step8: if elif else
Step9: while循环
Step10: try except
Step11: Write and Read data
Step12: 保存中间步骤产生的字典数据
Step13: 重新读入json
保存中间步骤产生的列表数据
Step14: 使用matplotlib绘图 | Python Code:
%matplotlib inline
import random, datetime
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import statsmodels.api as sm
from scipy.stats import norm
from scipy.stats.stats import pearsonr
Explanation: 数据科学的编程工具
Python使用简介
王成军
[email protected]
计算传播网 http://computational-communication.com
人生苦短,我用Python。
Python(/ˈpaɪθən/)是一种面向对象、解释型计算机程序设计语言
- 由Guido van Rossum于1989年底发明
- 第一个公开发行版发行于1991年
- Python语法简洁而清晰
- 具有强大的标准库和丰富的第三方模块
- 它常被昵称为胶水语言
- TIOBE编程语言排行榜“2010年度编程语言”
特点
免费、功能强大、使用者众多
与R和MATLAB相比,Python是一门更易学、更严谨的程序设计语言。使用Python编写的脚本更易于理解和维护。
如同其它编程语言一样,Python语言的基础知识包括:类型、列表(list)和元组(tuple)、字典(dictionary)、条件、循环、异常处理等。
关于这些,初阶读者可以阅读《Beginning Python》一书(Hetland, 2005)。
Python中包含了丰富的类库。
众多开源的科学计算软件包都提供了Python的调用接口,例如著名的计算机视觉库OpenCV。
Python本身的科学计算类库发展也十分完善,例如NumPy、SciPy和matplotlib等。
就社会网络分析而言,igraph, networkx, graph-tool, Snap.py等类库提供了丰富的网络分析工具
Python软件与IDE
目前最新的Python版本为3.0,更稳定的2.7版本。
编译器是编写程序的重要工具。
免费的Python编译器有Spyder、PyCharm(免费社区版)、Ipython、Vim、 Emacs、 Eclipse(加上PyDev插件)。
Installing Anaconda Python
Use the Anaconda Python
http://continuum.io/downloads.html
第三方包可以使用pip install的方法安装。
可以点击ToolsOpen command prompt
然后在打开的命令窗口中输入:
<del>pip install beautifulsoup4
pip install beautifulsoup4
NumPy /SciPy for scientific computing
pandas to make Python usable for data analysis
matplotlib to make graphics
scikit-learn for machine learning
End of explanation
# str, int, float
str(3)
"chengjun wang"
# int
int('5')
# float
float('7.1')
range(10)
# for i in range(1, 10):
# print(i)
range(1,10)
Explanation: Variable Type
End of explanation
dir
dir(str)[-10:]
help(str)
x = ' Hello WorlD '
dir(x)[-10:]
# lower
x.lower()
# upper
x.upper()
# rstrip
x.rstrip()
# strip
x.strip()
# replace
x.replace('lo', '')
# split
x.split('lo')
# join
','.join(['a', 'b'])
Explanation: dir & help
当你想要了解对象的详细信息时使用
End of explanation
x = 'hello world'
type(x)
Explanation: type
当你想要了解变量类型时使用type
End of explanation
l = [1,2,3,3] # list
t = (1, 2, 3, 3) # tuple
s = {1, 2, 3, 3} # set([1,2,3,3]) # set
d = {'a':1,'b':2,'c':3} # dict
a = np.array(l) # array
print(l, t, s, d, a)
l = [1,2,3,3] # list
l.append(4)
l
d = {'a':1,'b':2,'c':3} # dict
d.keys()
d = {'a':1,'b':2,'c':3} # dict
d.values()
d = {'a':1,'b':2,'c':3} # dict
d['b']
d = {'a':1,'b':2,'c':3} # dict
d.items()
Explanation: Data Structure
list, tuple, set, dictionary, array
End of explanation
def devidePlus(m, n): # 结尾是冒号
y = m/n + 1 # 注意:空格
return y # 注意:return
Explanation: 定义函数
End of explanation
range(10)
range(1, 10)
for i in range(10):
print(i, i*10, i**2)
for i in range(10):
print(i*10)
for i in range(10):
print(devidePlus(i, 2))
# 列表内部的for循环
r = [devidePlus(i, 2) for i in range(10)]
r
Explanation: For 循环
End of explanation
m1 = map(devidePlus, [4,3,2], [2, 1, 5])
print(*m1)
#print(*map(devidePlus, [4,3,2], [2, 1, 5]))
# 注意: 将(4, 2)作为一个组合进行计算,将(3, 1)作为一个组合进行计算
m2 = map(lambda x, y: x + y, [1, 3, 5, 7, 9], [2, 4, 6, 8, 10])
print(*m2)
m3 = map(lambda x, y, z: x + y - z, [1, 3, 5, 7, 9], [2, 4, 6, 8, 10], [3, 3, 2, 2, 5])
print(*m3)
Explanation: map
End of explanation
j = 5
if j%2 == 1:
print(r'余数是1')
elif j%2 ==0:
print(r'余数是0')
else:
print(r'余数既不是1也不是0')
x = 5
if x < 5:
y = -1
z = 5
elif x > 5:
y = 1
z = 11
else:
y = 0
z = 10
print(x, y, z)
Explanation: if elif else
End of explanation
j = 0
while j <10:
print(j)
j+=1 # avoid dead loop
j = 0
while j <10:
if j%2 != 0:
print(j**2)
j+=1 # avoid dead loop
j = 0
while j <50:
if j == 30:
break
if j%2 != 0:
print(j**2)
j+=1 # avoid dead loop
a = 4
while a: # 0, None, False
print(a)
a -= 1
if a < 0:
a = None # []
Explanation: while循环
End of explanation
def devidePlus(m, n): # 结尾是冒号
return m/n+ 1 # 注意:空格
for i in [2, 0, 5]:
try:
print(devidePlus(4, i))
except Exception as e:
print(e)
pass
alist = [[1,1], [0, 0, 1]]
for aa in alist:
try:
for a in aa:
print(10 / a)
except Exception as e:
print(e)
pass
alist = [[1,1], [0, 0, 1]]
for aa in alist:
for a in aa:
try:
print(10 / a)
except Exception as e:
print(e)
pass
Explanation: try except
End of explanation
data =[[i, i**2, i**3] for i in range(10)]
data
for i in data:
print('\t'.join(map(str, i)))
type(data)
len(data)
data[0]
help(f.write)
# 保存数据
data =[[i, i**2, i**3] for i in range(10000)]
f = open("../data/data_write_to_file1.txt", "w")
for i in data:
f.write('\t'.join(map(str,i)) + '\n')
f.close()
with open('../data/data_write_to_file.txt','r') as f:
data = f.readlines()
data[:5]
with open('../data/data_write_to_file.txt','r') as f:
data = f.readlines(1000) #bytes
len(data)
with open('../data/data_write_to_file.txt','r') as f:
print(f.readline())
f = [1, 2, 3, 4, 5]
for k, i in enumerate(f):
print(k, i)
with open('../data/data_write_to_file.txt','r') as f:
for i in f:
print(i)
with open('../data/data_write_to_file.txt','r') as f:
for k, i in enumerate(f):
if k%2000 == 0:
print(i)
data = []
line = '0\t0\t0\n'
line = line.replace('\n', '')
line = line.split('\t')
line = [int(i) for i in line] # convert str to int
data.append(line)
data
# 读取数据
data = []
with open('../data/data_write_to_file1.txt','r') as f:
for line in f:
line = line.replace('\n', '').split('\t')
line = [int(i) for i in line]
data.append(line)
data
# 读取数据
data = []
with open('../data/data_write_to_file.txt','r') as f:
for line in f:
line = line.replace('\n', '').split('\t')
line = [int(i) for i in line]
data.append(line)
data
import pandas as pd
help(pd.read_csv)
df = pd.read_csv('../data/data_write_to_file.txt',
sep = '\t', names = ['a', 'b', 'c'])
df[-5:]
Explanation: Write and Read data
End of explanation
import json
data_dict = {'a':1, 'b':2, 'c':3}
with open('../data/save_dict.json', 'w') as f:
json.dump(data_dict, f)
dd = json.load(open("../data/save_dict.json"))
dd
Explanation: 保存中间步骤产生的字典数据
End of explanation
data_list = list(range(10))
with open('../data/save_list.json', 'w') as f:
json.dump(data_list, f)
dl = json.load(open("../data/save_list.json"))
dl
Explanation: 重新读入json
保存中间步骤产生的列表数据
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
x = range(1, 100)
y = [i**-3 for i in x]
plt.plot(x, y, 'b-s')
plt.ylabel('$p(k)$', fontsize = 20)
plt.xlabel('$k$', fontsize = 20)
plt.xscale('log')
plt.yscale('log')
plt.title('Degree Distribution')
plt.show()
import numpy as np
# red dashes, blue squares and green triangles
t = np.arange(0., 5., 0.2)
plt.plot(t, t, 'r--')
plt.plot(t, t**2, 'bs')
plt.plot(t, t**3, 'g^')
plt.show()
# red dashes, blue squares and green triangles
t = np.arange(0., 5., 0.2)
plt.plot(t, t**2, 'b-s', label = '1')
plt.plot(t, t**2.5, 'r-o', label = '2')
plt.plot(t, t**3, 'g-^', label = '3')
plt.annotate(r'$\alpha = 3$', xy=(3.5, 40), xytext=(2, 80),
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize = 20)
plt.ylabel('$f(t)$', fontsize = 20)
plt.xlabel('$t$', fontsize = 20)
plt.legend(loc=2,numpoints=1,fontsize=10)
plt.show()
# plt.savefig('/Users/chengjun/GitHub/cjc/figure/save_figure.png',
# dpi = 300, bbox_inches="tight",transparent = True)
plt.figure(1)
plt.subplot(221)
plt.plot(t, t, 'r--')
plt.text(2, 0.8*np.max(t), r'$\alpha = 1$', fontsize = 20)
plt.subplot(222)
plt.plot(t, t**2, 'bs')
plt.text(2, 0.8*np.max(t**2), r'$\alpha = 2$', fontsize = 20)
plt.subplot(223)
plt.plot(t, t**3, 'g^')
plt.text(2, 0.8*np.max(t**3), r'$\alpha = 3$', fontsize = 20)
plt.subplot(224)
plt.plot(t, t**4, 'r-o')
plt.text(2, 0.8*np.max(t**4), r'$\alpha = 4$', fontsize = 20)
plt.show()
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
t1 = np.arange(0.0, 5.0, 0.1)
t2 = np.arange(0.0, 5.0, 0.02)
plt.figure(1)
plt.subplot(211)
plt.plot(t1, f(t1), 'bo')
plt.plot(t2, f(t2), 'k')
plt.subplot(212)
plt.plot(t2, np.cos(2*np.pi*t2), 'r--')
plt.show()
import matplotlib.gridspec as gridspec
import numpy as np
t = np.arange(0., 5., 0.2)
gs = gridspec.GridSpec(3, 3)
ax1 = plt.subplot(gs[0, :])
plt.plot(t, t**2, 'b-s')
ax2 = plt.subplot(gs[1,:-1])
plt.plot(t, t**2, 'g-s')
ax3 = plt.subplot(gs[1:, -1])
plt.plot(t, t**2, 'r-o')
ax4 = plt.subplot(gs[-1,0])
plt.plot(t, t**2, 'g-^')
ax5 = plt.subplot(gs[-1,1])
plt.plot(t, t**2, 'b-<')
plt.tight_layout()
def OLSRegressPlot(x,y,col,xlab,ylab):
xx = sm.add_constant(x, prepend=True)
res = sm.OLS(y,xx).fit()
constant, beta = res.params
r2 = res.rsquared
lab = r'$\beta = %.2f, \,R^2 = %.2f$' %(beta,r2)
plt.scatter(x,y,s=60,facecolors='none', edgecolors=col)
plt.plot(x,constant + x*beta,"red",label=lab)
plt.legend(loc = 'upper left',fontsize=16)
plt.xlabel(xlab,fontsize=26)
plt.ylabel(ylab,fontsize=26)
x = np.random.randn(50)
y = np.random.randn(50) + 3*x
pearsonr(x, y)
fig = plt.figure(figsize=(10, 4),facecolor='white')
OLSRegressPlot(x,y,'RoyalBlue',r'$x$',r'$y$')
plt.show()
fig = plt.figure(figsize=(7, 4),facecolor='white')
data = norm.rvs(10.0, 2.5, size=5000)
mu, std = norm.fit(data)
plt.hist(data, bins=25, normed=True, alpha=0.6, color='g')
xmin, xmax = plt.xlim()
x = np.linspace(xmin, xmax, 100)
p = norm.pdf(x, mu, std)
plt.plot(x, p, 'r', linewidth=2)
title = r"$\mu = %.2f, \, \sigma = %.2f$" % (mu, std)
plt.title(title,size=16)
plt.show()
import pandas as pd
df = pd.read_csv('../data/data_write_to_file.txt', sep = '\t', names = ['a', 'b', 'c'])
df[:5]
df.plot.line()
plt.yscale('log')
plt.ylabel('$values$', fontsize = 20)
plt.xlabel('$index$', fontsize = 20)
plt.show()
df.plot.scatter(x='a', y='b')
plt.show()
df.plot.hexbin(x='a', y='b', gridsize=25)
plt.show()
df['a'].plot.kde()
plt.show()
bp = df.boxplot()
plt.yscale('log')
plt.show()
df['c'].diff().hist()
plt.show()
df.plot.hist(stacked=True, bins=20)
# plt.yscale('log')
plt.show()
Explanation: 使用matplotlib绘图
End of explanation |
3,384 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiments with entropy, information gain, and decision trees.
Iris fact of the day
Step1: If you do not have pydot library installed, open your terminal and type either conda install pydot or pip install pydot
Step2: The plan
Step3: When you have time, try it with other bases for the log
Step4: Very interesting. The distribution of labels is almost indistinguishable from uniform.
A 64-thousand-dollar question
Step5: According to the information gain metric, petal length is the most useful feature, followed by petal width. Let's confirm that this agrees with the sklearn decision tree implementation.
Actually, sklearn doesn't expose the information gain values. Instead, it stores the distribution of "feature importances", which reflects the value of each feature in the full decision tree. Let's train a decision tree with max_depth=1 so it will only choose a single feature. Let's also get the test accuracy with this "decision stump".
When you have time, try it with depths between 1 and 4, observe the Feature Importances. What can you conclude?
Step6: We've been using the binarized version of the iris features. Recall that we simply chose thresholds for each feature by inspecting feature histograms. Let's use information gain as a metric to choose a best feature and a best threshold.
Step7: It looks like when we binarized our data, we didn't choose the thresholds that maximized information gain for 3 out of 4 features. Let's try training actual decision trees (as opposed to stumps) with the original (non-binarized) data. You may need to install GraphViz before exporting the tree.
If the pydot was installed correctly, you will see the image showing the Decistion Tree after running this block of code. Otherwise, you will see error messages, like in my case. In any case, you can uncomment the
print 'dot_data value | Python Code:
# This tells matplotlib not to try opening a new window for each plot.
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_iris
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
# For producing decision tree diagrams.
from IPython.core.display import Image, display
from sklearn.externals.six import StringIO
import pydot
Explanation: Experiments with entropy, information gain, and decision trees.
Iris fact of the day: Iris setosa's root contains a toxin that was used by the Aleut tribe in Alaska to make poisonous arrowheads.
End of explanation
# Load the data, which is included in sklearn.
iris = load_iris()
print 'Iris target names:', iris.target_names
print 'Iris feature names:', iris.feature_names
X, Y = iris.data, iris.target
# Shuffle the data, but make sure that the features and accompanying labels stay in sync.
np.random.seed(0)
shuffle = np.random.permutation(np.arange(X.shape[0]))
X, Y = X[shuffle], Y[shuffle]
# Split into train and test.
train_data, train_labels = X[:100], Y[:100]
test_data, test_labels = X[100:], Y[100:]
# Define a function that applies a threshold to turn real valued iris features into 0/1 features.
# 0 will mean "short" and 1 will mean "long".
def binarize_iris(data, thresholds=[6.0, 3.0, 2.5, 1.0]):
# Initialize a new feature array with the same shape as the original data.
binarized_data = np.zeros(data.shape)
# Apply a threshold to each feature.
for feature in range(data.shape[1]):
binarized_data[:,feature] = data[:,feature] > thresholds[feature]
return binarized_data
# Create new binarized training and test data
binarized_train_data = binarize_iris(train_data)
binarized_test_data = binarize_iris(test_data)
Explanation: If you do not have pydot library installed, open your terminal and type either conda install pydot or pip install pydot
End of explanation
def entropy(distribution):
h = 0.0
for probability in distribution:
logprob = -100.0 # log(0) = -inf so let's approximate it with -100 to avoid an error
if probability > 0.0: logprob = np.log2(probability)
h -= probability * logprob
return h
# Show a plot of the entropy, H(X), of a Bernoulli random variable X.
p_values = np.linspace(0, 1, 50)
entropies = [entropy([p, 1-p]) for p in p_values]
plt.figure(figsize=(4,4))
plt.plot(p_values, entropies, 'o')
plt.xlabel('P(X=1)')
plt.ylabel('H(X)')
print
Explanation: The plan:
The goal is to identify the data partitioning scheme that will maximize the information gain.
The information gain will be expressed through entropy.
Let's start by defining a function that computes the entropy of a distribution. Remember that entropy is a measure of uncertainty. It is maximized when the distribution is uniform.
End of explanation
def get_label_distribution(labels):
# Initialize counters for all labels to zero.
label_probs = np.array([0.0 for i in range(len(iris.target_names))])
# Iterate over labels in the training data and update counts.
for label in labels:
label_probs[label] += 1.0
# Normalize to get a distribution.
label_probs /= label_probs.sum()
return label_probs
label_probs = get_label_distribution(train_labels)
print 'Label distribution', label_probs
# Compare the label entropy to a uniform distribution.
print 'Label entropy:', entropy(label_probs)
print 'Uniform entropy:', entropy([1./3, 1./3, 1./3])
Explanation: When you have time, try it with other bases for the log: 10 and "e"
We are interested in the entropy of our distribution over labels.
End of explanation
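As a quick aside (not part of the original exercise), here is a minimal sketch of the suggested base-change experiment. Changing the base only rescales the entropy by a constant factor, since log_b(p) = log2(p) / log2(b).
# Sketch: compare entropy of the label distribution under different log bases.
def entropy_with_base(distribution, log_fn):
    h = 0.0
    for probability in distribution:
        if probability > 0.0:
            h -= probability * log_fn(probability)
    return h
print 'base 2 (bits) :', entropy_with_base(label_probs, np.log2)
print 'base e (nats) :', entropy_with_base(label_probs, np.log)
print 'base 10 (dits):', entropy_with_base(label_probs, np.log10)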
# A function that computes information gain given these inputs:
# data: an array of featurized examples
# labels: an array of labels corresponding to the the data
# feature: the feature to use to split the data
# threshold: the feature value to use to split the data (the default threshold is good for binary features)
def information_gain(data, labels, feature, threshold=0):
# Get the initial entropy of the label distribution.
initial_entropy = entropy(get_label_distribution(labels))
# subset0 will contain the labels for which the feature is 0 and
# subset1 will contain the labels for which the feature is 1.
subset0, subset1 = [], []
for datum, label in zip(data, labels):
if datum[feature] > threshold: subset1.append(label)
else: subset0.append(label)
# Compute the entropy of each subset.
subset0_entropy = entropy(get_label_distribution(subset0))
subset1_entropy = entropy(get_label_distribution(subset1))
# Make it a fair comparison:
# Compute the final entropy by weighting each subset's entropy according to its size.
subset0_weight = 1.0 * len(subset0) / len(labels)
subset1_weight = 1.0 * len(subset1) / len(labels)
final_entropy = subset0_weight * subset0_entropy + subset1_weight * subset1_entropy
# Finally, compute information gain as the difference between the initial and final entropy.
return initial_entropy - final_entropy
for feature in range(binarized_train_data.shape[1]):
## We are looking at binarized data; so the threshold = 0
ig = information_gain(binarized_train_data, train_labels, feature)
print '%d %.3f %s' %(feature, ig, iris.feature_names[feature])
Explanation: Very interesting. The distribution of labels is almost indistinguishable from uniform.
A 64-thousand-dollar question: Can we use entropy as a similarity measure for distributions?
Now let's figure out which feature provides the greatest information gain. Philosophically, information gain means reduction of randomness. So we are looking for the feature(s) that reduce entropy the most.
To do this, we need to look at the entropy of each subset of the labels after splitting on each feature. In a sense, it is similar to marginalization by feature (like we did last week).
End of explanation
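To make the weighting step inside information_gain concrete before running it on real features, here is a tiny hand-worked sketch using the entropy() function defined above; the split sizes and label distributions are made up purely for illustration.
# Hypothetical split: 40% of examples go to one side with label distribution
# [0.5, 0.5, 0.0], 60% to the other with [0.0, 0.5, 0.5], starting from a
# uniform 3-class parent distribution.
parent_entropy = entropy([1./3, 1./3, 1./3])                                   # ~1.585 bits
weighted_child = 0.4 * entropy([0.5, 0.5, 0.0]) + 0.6 * entropy([0.0, 0.5, 0.5])  # 1.0 bit
print 'toy information gain:', parent_entropy - weighted_child                 # ~0.585 bits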
dt = DecisionTreeClassifier(criterion='entropy', max_depth=1)
dt.fit(binarized_train_data, train_labels)
print 'Using a decision stump -- a tree with depth 1:'
print 'Feature importances:', dt.feature_importances_
print 'Accuracy:', dt.score(binarized_test_data, test_labels)
Explanation: According to the information gain metric, petal length is the most useful feature, followed by petal width. Let's confirm that this agrees with the sklearn decision tree implementation.
Actually, sklearn doesn't expose the information gain values. Instead, it stores the distribution of "feature importances", which reflects the value of each feature in the full decision tree. Let's train a decision tree with max_depth=1 so it will only choose a single feature. Let's also get the test accuracy with this "decision stump".
When you have time, try it with depths between 1 and 4 and observe the Feature Importances. What can you conclude?
End of explanation
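One possible way to run the suggested depth experiment, reusing the binarized data already defined above (a sketch, not the only reasonable setup):
for depth in range(1, 5):
    dt_depth = DecisionTreeClassifier(criterion='entropy', max_depth=depth)
    dt_depth.fit(binarized_train_data, train_labels)
    print 'depth', depth, 'feature importances:', dt_depth.feature_importances_
    print '        accuracy:', dt_depth.score(binarized_test_data, test_labels)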
def try_features_and_thresholds(data, labels):
for feature in range(data.shape[1]):
# Choose a set of thresholds between the min- and max-valued feature, ignoring the min and max themselves.
thresholds = np.linspace(data[:,feature].min(), data[:,feature].max(), 20)[1:-1]
# Try each threshold and keep track of the best one for this feature.
best_threshold = 0
best_ig = 0
for threshold in thresholds:
ig = information_gain(data, labels, feature, threshold)
if ig > best_ig:
best_ig = ig
best_threshold = threshold
# Show the best threshold and information gain for this feature.
print '%d %.3f %.3f %s' %(feature, best_threshold, best_ig, iris.feature_names[feature])
try_features_and_thresholds(train_data, train_labels)
Explanation: We've been using the binarized version of the iris features. Recall that we simply chose thresholds for each feature by inspecting feature histograms. Let's use information gain as a metric to choose a best feature and a best threshold.
End of explanation
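If we wanted to actually re-binarize the data with the information-gain-maximizing thresholds, one option is a small helper that collects the best threshold per feature instead of only printing it. This is a hypothetical variant of try_features_and_thresholds, shown only as a sketch.
def best_thresholds(data, labels):
    # Return the information-gain-maximizing threshold for each feature.
    chosen = []
    for feature in range(data.shape[1]):
        thresholds = np.linspace(data[:,feature].min(), data[:,feature].max(), 20)[1:-1]
        best_threshold, best_ig = thresholds[0], 0.0
        for threshold in thresholds:
            ig = information_gain(data, labels, feature, threshold)
            if ig > best_ig:
                best_ig = ig
                best_threshold = threshold
        chosen.append(best_threshold)
    return chosen
# rebinarized_train = binarize_iris(train_data, thresholds=best_thresholds(train_data, train_labels))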
# Train a decision tree classifier.
dt = DecisionTreeClassifier(criterion='entropy', min_samples_split=2)
dt.fit(train_data, train_labels)
print 'Accuracy:', dt.score(test_data, test_labels)
# Export the trained tree so we can look at it.
output_name = 'iris-decisiontree.jpg'
print output_name
dot_data = StringIO()
tree.export_graphviz(dt, out_file=dot_data)
## print 'dot_data value:', dot_data.getvalue()
graph = pydot.graph_from_dot_data(dot_data.getvalue())
# If the export was successful, show the image.
if graph.write_jpg(output_name):
print 'Output:', output_name
display(Image(filename=output_name))
Explanation: It looks like when we binarized our data, we didn't choose the thresholds that maximized information gain for 3 out of 4 features. Let's try training actual decision trees (as opposed to stumps) with the original (non-binarized) data. You may need to install GraphViz before exporting the tree.
If pydot was installed correctly, you will see the image showing the Decision Tree after running this block of code. Otherwise, you will see error messages, like in my case. In any case, you can uncomment the
print 'dot_data value:', dot_data.getvalue()
line, and that will reveal the structure of the tree
End of explanation |
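If pydot keeps failing, one workaround (a sketch, assuming the Graphviz command-line tools are installed) is to write the DOT source to disk and render it outside Python, for example with: dot -Tjpg iris-decisiontree.dot -o iris-decisiontree.jpg
# Fallback: save the DOT description of the trained tree so it can be rendered later.
with open('iris-decisiontree.dot', 'w') as dot_file:
    dot_file.write(dot_data.getvalue())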
3,385 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'sandbox-3', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: AWI
Source ID: SANDBOX-3
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
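For illustration only, a filled-in call could look like the commented line below; the name and email are hypothetical placeholders, not real document authors.
# Example (hypothetical values):
# DOC.set_author("Jane Doe", "jane.doe@example.org")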
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
3,386 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Intializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisified with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
reviews.head()
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
#Create the counter
total_counts = Counter()
#iter every row
for idx, row in reviews.iterrows():
#Review is contained in 0 position of the row (first column)
for word in row[0].split(' '):
total_counts[word] += 1
print("Total words in data set: ", len(total_counts))
total_counts.most_common()
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {} ## create the word-to-index dictionary here
i = 0
for w in vocab:
word2idx[w] = i
i += 1
word2idx['the']
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
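For reference, the same mapping can be built with the dictionary comprehension mentioned in the text; this one-liner is equivalent to the loop above.
# Equivalent construction using enumerate and a dict comprehension:
word2idx = {word: i for i, word in enumerate(vocab)}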
def text_to_vector(text):
word_vector = np.zeros(len(word2idx))
for word in text.split(' '):
index = word2idx.get(word, None)
        if index is not None:
            word_vector[index] += 1
return word_vector
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split].reshape((len(Y.values[train_split]),)), 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split].reshape((len(Y.values[test_split]),)), 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
#input layer
net = tflearn.input_data([None, len(word2idx)])
#hidden layer
net = tflearn.fully_connected(net, n_units=200, activation='ReLU')
net = tflearn.fully_connected(net, n_units=25, activation='ReLU')
#output layer
net = tflearn.fully_connected(net, n_units=2, activation='softmax')
#training
net = tflearn.regression(net,
optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net, tensorboard_verbose=3, tensorboard_dir='model_dir')
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Intializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=20)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisified with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
sentence = "Ace Ventura is the best movie ever! I wonder why Jim Carrey didn't won the Oscar"
test_sentence(sentence=sentence)
Explanation: Try out your own text!
End of explanation |
3,387 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image and feature analysis
Let's start by loading the libraries we'll need
Step1: Extract Images
Included in these workshop materials is a compressed file ("data.tar.gz") containg the images that we'll be classifying today. Once you extract this file, you should have a directory called "data" which contains the following directories
Step2: Image Properties
One of the really useful things about using OpenCV to manipulate images in Python is that all images are treated as NumPy matrices. This means we can use NumPy's functions to manipulate and understand the data we're working with. To demonstrate this, we'll use use NumPy's "shape" and "dtype" commands to take a closer look at the rectangular tag image we just read in
Step3: This tells us that this image is 24x24 pixels in size, and that the datatype of the values it stores are unsigned 8 bit integers. While the explanation of this datatype isn't especially relevant to the lesson, the main point is that it is extremely important to double check the size and structure of your data. Let's do the same thing for the circular tag image too
Step4: This holds the same values, which is good. When you're working with your own datasets in the future, it would be highly beneficial to write your own little program to check the values and structure of your data to ensure that subtle bugs don't creep in to your analysis.
Cropping
One of the things you've probably noticed is that there's a dark area around the edges of the tags. As we're only interested in the pattern in the middle of the tags, we should try to crop this out. Have a little play with the code below and experiment with different pixel slices.
Step5: Feature Engineering
When people think of machine learning, the first thing that comes to mind tends to be the fancy algorithms that will train the computer to solve your problem. Of course this is important, but the reality of the matter is that the way you process the data you'll eventually feed into the machine learning algorithm is often the thing you'll spend the most time doing and will have the biggest effect on the accuracy of your results.
Now, when most people think of features in data, they think that this is what it is
Step6: In fact this is not actualy the case. In the case of this dataset, the features are actually the pixel values that make up the images - those are the values we'll be training the machine learning algorithm with
Step7: So what can we do to manipulate the features in out dataset? We'll explore three methods to acheive this
Step8: Feel free to have a play with the different parameters for these smoothing operations. We'll now write some code to place the original images next to their smoothed counterparts in order to compare them
Step9: Brightness and Contrast
Modifying the brightness and contrast of our images is a surprisingly simple task, but can have a big impact on the appearance of the image. Here is how you can increase and decrease these characteristics in an image | Python Code:
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
Explanation: Image and feature analysis
Let's start by loading the libraries we'll need:
End of explanation
rect_image = cv2.imread('data/I/27.png', cv2.IMREAD_GRAYSCALE)
circle_image = cv2.imread('data/O/11527.png', cv2.IMREAD_GRAYSCALE)
queen_image = cv2.imread('data/Q/18027.png', cv2.IMREAD_GRAYSCALE)
plt.figure(figsize = (10, 7))
plt.title('Rectangle Tag')
plt.axis('off')
plt.imshow(rect_image, cmap = cm.Greys_r)
plt.figure(figsize = (10, 7))
plt.title('Circle Tag')
plt.axis('off')
plt.imshow(circle_image, cmap = cm.Greys_r)
plt.figure(figsize = (10, 7))
plt.title('Queen Tag')
plt.axis('off')
plt.imshow(queen_image, cmap = cm.Greys_r)
Explanation: Extract Images
Included in these workshop materials is a compressed file ("data.tar.gz") containing the images that we'll be classifying today. Once you extract this file, you should have a directory called "data" which contains the following directories:
Directory | Contents
:-------------------------:|:-------------------------:
I | Contains rectangle tag images
O | Contains circle tag images
Q | Contains blank tag images
Feel free to have a look through these directories, and we'll show you how to load these images into Python using OpenCV next.
Reading Images
We're now going to be using OpenCV's "imread" command to load one of the images from each type of tag into Python and then use Matplotlib to plot the images:
End of explanation
print (rect_image.shape)
print (rect_image.dtype)
Explanation: Image Properties
One of the really useful things about using OpenCV to manipulate images in Python is that all images are treated as NumPy matrices. This means we can use NumPy's functions to manipulate and understand the data we're working with. To demonstrate this, we'll use use NumPy's "shape" and "dtype" commands to take a closer look at the rectangular tag image we just read in:
End of explanation
print (circle_image.shape)
print (circle_image.dtype)
Explanation: This tells us that this image is 24x24 pixels in size, and that the datatype of the values it stores are unsigned 8 bit integers. While the explanation of this datatype isn't especially relevant to the lesson, the main point is that it is extremely important to double check the size and structure of your data. Let's do the same thing for the circular tag image too:
End of explanation
cropped_rect_image = rect_image[4:20,4:20]
cropped_circle_image = circle_image[4:20,4:20]
cropped_queen_image = queen_image[4:20,4:20]
plt.figure(figsize = (10, 7))
plt.title('Rectangle Tag ' + str(cropped_rect_image.shape))
plt.axis('off')
plt.imshow(cropped_rect_image, cmap = cm.Greys_r)
plt.figure(figsize = (10, 7))
plt.title('Circle Tag ' + str(cropped_circle_image.shape))
plt.axis('off')
plt.imshow(cropped_circle_image, cmap = cm.Greys_r)
plt.figure(figsize = (10, 7))
plt.title('Queen Tag ' + str(cropped_queen_image.shape))
plt.axis('off')
plt.imshow(cropped_queen_image, cmap = cm.Greys_r)
Explanation: This holds the same values, which is good. When you're working with your own datasets in the future, it would be highly beneficial to write your own little program to check the values and structure of your data to ensure that subtle bugs don't creep in to your analysis.
Cropping
One of the things you've probably noticed is that there's a dark area around the edges of the tags. As we're only interested in the pattern in the middle of the tags, we should try to crop this out. Have a little play with the code below and experiment with different pixel slices.
End of explanation
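# Following the suggestion above, here is a minimal sketch of a helper that checks every
# image before it goes further into the pipeline. The expected values (24x24, uint8) are
# taken from the shape/dtype checks above; adjust them for your own dataset.
import numpy as np

def validate_image(image, expected_shape=(24, 24), expected_dtype=np.uint8):
    assert image is not None, "image failed to load (cv2.imread returns None on failure)"
    assert image.shape == expected_shape, "unexpected shape: {}".format(image.shape)
    assert image.dtype == expected_dtype, "unexpected dtype: {}".format(image.dtype)
    return True

# Example usage on the three tag images loaded earlier
for name, img in [('rect', rect_image), ('circle', circle_image), ('queen', queen_image)]:
    validate_image(img)
    print(name, 'OK')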
plt.figure(figsize = (10, 7))
plt.title('Rectangle Tag')
plt.axis('off')
plt.imshow(rect_image, cmap = cm.Greys_r)
Explanation: Feature Engineering
When people think of machine learning, the first thing that comes to mind tends to be the fancy algorithms that will train the computer to solve your problem. Of course this is important, but the reality of the matter is that the way you process the data you'll eventually feed into the machine learning algorithm is often the thing you'll spend the most time doing and will have the biggest effect on the accuracy of your results.
Now, when most people think of features in image data, they assume the feature is simply the image itself:
End of explanation
print(rect_image)
Explanation: In fact, this is not actually the case. For this dataset, the features are actually the pixel values that make up the images - those are the values we'll be training the machine learning algorithm with:
End of explanation
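# To make the "pixels are the features" point concrete: a classifier normally expects a
# 2-D matrix of shape (n_samples, n_features), so each image is flattened into one row of
# pixel values. A small sketch using the three example images loaded above:
import numpy as np

example_images = [rect_image, circle_image, queen_image]
feature_matrix = np.stack([img.flatten() for img in example_images])
print(feature_matrix.shape)   # (3, 576): one row of 576 pixel features per 24x24 image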
mean_smoothed = cv2.blur(rect_image, (5, 5))
median_smoothed = cv2.medianBlur(rect_image, 5)
gaussian_smoothed = cv2.GaussianBlur(rect_image, (5, 5), 0)
Explanation: So what can we do to manipulate the features in our dataset? We'll explore three methods to achieve this:
Image smoothing
Modifying brightness
Modifying contrast
Techniques like image smoothing can be useful when improving the features you train the machine learning algorithm on as you can eliminate some of the potential noise in the image that could confuse the program.
Smoothing
Image smoothing is another name for blurring the image. It involves passing a rectangular box (called a kernel) over the image and modifying pixels in the image based on the surrounding values.
As part of this exercise, we'll explore 3 different smoothing techniques:
Smoothing Method | Explanation
:-------------------------:|:-------------------------:
Mean | Replaces pixel with the mean value of the surrounding pixels
Median | Replaces pixel with the median value of the surrounding pixels
Gaussian | Replaces pixel by placing different weightings on surrounding pixels according to the Gaussian distribution
End of explanation
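# To make the "kernel passed over the image" idea explicit, the mean filter can also be
# written as a convolution with a 5x5 box kernel whose weights sum to 1. This is a hedged
# sketch; it should give (almost) the same result as cv2.blur above, up to rounding and
# border handling.
import numpy as np

box_kernel = np.ones((5, 5), dtype=np.float32) / 25.0   # every neighbour weighted equally
mean_smoothed_manual = cv2.filter2D(rect_image, -1, box_kernel)
# Largest per-pixel difference versus cv2.blur; expected to be 0 or very small
print(np.max(np.abs(mean_smoothed.astype(int) - mean_smoothed_manual.astype(int))))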
mean_compare = np.hstack((rect_image, mean_smoothed))
median_compare = np.hstack((rect_image, median_smoothed))
gaussian_compare = np.hstack((rect_image, gaussian_smoothed))
plt.figure(figsize = (15, 12))
plt.title('Mean')
plt.axis('off')
plt.imshow(mean_compare, cmap = cm.Greys_r)
plt.figure(figsize = (15, 12))
plt.title('Median')
plt.axis('off')
plt.imshow(median_compare, cmap = cm.Greys_r)
plt.figure(figsize = (15, 12))
plt.title('Gaussian')
plt.axis('off')
plt.imshow(gaussian_compare, cmap = cm.Greys_r)
Explanation: Feel free to have a play with the different parameters for these smoothing operations. We'll now write some code to place the original images next to their smoothed counterparts in order to compare them:
End of explanation
increase_brightness = rect_image + 30
decrease_brightness = rect_image - 30
increase_contrast = rect_image * 1.5
decrease_contrast = rect_image * 0.5
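# Note: the images above are uint8, so plain NumPy arithmetic can wrap around
# (e.g. 250 + 30 becomes 24) and multiplying by a float changes the dtype. A common,
# safer pattern is to do the maths in float and clip back into the 0-255 range --
# a small hedged sketch of that approach (kept separate from the simple version above):
import numpy as np

def adjust(image, brightness=0, contrast=1.0):
    adjusted = image.astype(np.float32) * contrast + brightness
    return np.clip(adjusted, 0, 255).astype(np.uint8)

increase_brightness_safe = adjust(rect_image, brightness=30)
decrease_brightness_safe = adjust(rect_image, brightness=-30)
increase_contrast_safe = adjust(rect_image, contrast=1.5)
decrease_contrast_safe = adjust(rect_image, contrast=0.5)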
brightness_compare = np.hstack((increase_brightness, decrease_brightness))
contrast_compare = np.hstack((increase_contrast, decrease_contrast))
plt.figure(figsize = (15, 12))
plt.title('Brightness')
plt.axis('off')
plt.imshow(brightness_compare, cmap = cm.Greys_r)
plt.figure(figsize = (15, 12))
plt.title('Contrast')
plt.axis('off')
plt.imshow(contrast_compare, cmap = cm.Greys_r)
Explanation: Brightness and Contrast
Modifying the brightness and contrast of our images is a surprisingly simple task, but can have a big impact on the appearance of the image. Here is how you can increase and decrease these characteristics in an image:
Characteristic | Increase/Decrease | Action
:-------------------------:|:-------------------------:|:-------------------------
Brightness | Increase | Add an integer to every pixel
Brightness | Decrease | Subtract an integer from every pixel
Constrast | Increase | Multiply every pixel by a number greater than 1
Constrast | Decrease | Multiple every pixel by a floating point number less than 1
Now we can see how this affects our rectangular tag image. Again, feel free to experiment with different values in order to see the final effect.
End of explanation |
3,388 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
5章 誤差逆伝播法
ニューラルネットワークの学習では重みパラメータの勾配(重みパラメータに関する損失関数の勾配)は数値微分によって求めていた。これは実装は簡単だが、計算に時間がかかる。そこで効率よく勾配計算を行なうために「誤差逆伝播法」を用いる。
ここでは数式ではなく、「計算グラフ(computational graph)」を用いて理解を深める。
5.1 計算グラフ
計算グラフは計算の過程をグラフにしたものである。
5.1.1 計算グラフで解く
問1 1個100円の林檎を2個買った際に支払う金額を求める。消費税は10%とする。
Step1: グラフの左から右へ計算を進めることを「順伝播(forward propagation)」という。逆に右から左に計算を遡ることを「逆伝播(backward propagation)」という。
5.1.2 局所的な計算
各ノードにおける計算は局所的なものであり、それ以前の計算されてくる過程は考慮する必要なく計算が行なうことが出来る。
5.1.3 なぜ計算グラフで解くのか?
計算グラフで解く利点
* 各ノードでは局所的な計算でよい
* 途中の計算の結果を全て保持することができる
* 玉方向の伝播によって「微分」を効率よく計算できる点
りんごの値段が少し上がった場合に総支払額が同変化するか確認することが出来る。
5.2 連鎖律
局所的な微分を伝達する原理は、「連鎖律(chain rule)」によるものである。
5.2.1 計算グラフの逆伝播
逆伝播の計算手順は信号$E$に対して、ノードの局所的な微分$\frac{\delta y}{\delta x}$を乗算し次のノードへ伝達していく。
Step2: 5.2.2 連鎖律とは
合成関数とは
複数の関数によって構成される関数。
$z=(x+y)^{2}$という式は以下2式のように構成される。
$$
z=t^{2} \
t=x+y
$$
連鎖律は合成関数の微分についての性質である。
合成関数の微分は、合成関数を構成するそれぞれの関数の微分の積によって表すことが出来る
上記例でいうと以下の様に表すことが出来る。
$$\frac{\delta z}{\delta x}=
\frac{\delta z}{\delta t}\frac{\delta t}{\delta x}$$
計算を進めると以下となり、連鎖律をもちいて合成関数の微分が行える。
$$
\frac{\delta z}{\delta t} = 2t \
\frac{\delta t}{\delta x} = 1 \
\frac{\delta z}{\delta x}=
\frac{\delta z}{\delta t}\frac{\delta t}{\delta x} = 2t・1 = 2(x + y)
$$
5.2.3 連鎖律と計算グラフ
計算グラフの逆伝播を考えた場合、連鎖律を用いて微分を求めることが出来る。
$$
\frac{\delta z}{\delta z}\frac{\delta z}{\delta t}\frac{\delta t}{\delta x} \
=\frac{\delta z}{\delta t}\frac{\delta t}{\delta x} \
=\frac{\delta z}{\delta x} \
$$
5.3 逆伝播
加算や乗算を例に逆伝播の仕組みを考える。
5.3.1 加算ノードの逆伝播
$z=x+y$について逆伝播を考える。この式について微分を行なうと以下となる。
$$
\frac{\delta z}{\delta x}=1 \
\frac{\delta z}{\delta y}=1 \
$$
逆伝播の際には、前の計算から伝わってきた$\frac{\delta L}{\delta z}$を乗算して次のノードに渡す。(この場合はx,yの微分とも1なので$\frac{\delta L}{\delta z}・1$となる)
5.3.2 乗算ノードの逆伝播
$z=xy$について逆伝播を考える。この式について微分を行なうと以下となる。
$$
\frac{\delta z}{\delta x}=y \
\frac{\delta z}{\delta y}=x \
$$
乗算の逆伝播では入力した値をひっくり返した値が用いられることとなる。
$$
xの逆伝播:\frac{\delta L}{\delta z}\frac{\delta z}{\delta x} = \frac{\delta L}{\delta z}・y\
yの逆伝播:\frac{\delta L}{\delta z}\frac{\delta z}{\delta y} = \frac{\delta L}{\delta z}・x \
$$
加算の逆伝播では上流の値をただ流すだけ立ったので順伝播の入力信号は不要だったが、乗算の逆伝播では順伝播の入力信号を保持しておかなければならない。
5.3.3 リンゴの例
割愛
5.4 単純なレイヤの実装
計算グラフ乗算ノードを「乗算レイヤ(MulLayer)」、加算ノードを「加算レイヤ(AddLayer)」という名前で実装する。
5.4.1 乗算レイヤの実装
レイヤはforward()とbackward()の共通のインタフェースを持つようにする。
Step3: 5.5 活性化関数レイヤの実装
計算グラフの考え方をニューラルネットワークに適用していく。ここでは、ニューラルネットワークを構成する「層(レイヤ)」をひとつのクラスとして実装する。活性化関数であるReLUとSigmoidレイヤを実装する。
5.5.1 ReLUレイヤ
活性化関数してとして使われるReLU(Rectified Linear Unit)は次式で表される。
$$
y
= \begin{cases}
& x \; (x>0) \
& 0 \; (x\leq0)
\end{cases}
$$
xに関するyの微分は以下のようになる。
$$
\frac{\delta y}{\delta x}
= \begin{cases}
& 1 \; (x>0) \
& 0 \; (x\leq0)
\end{cases}
$$
順伝播時入力のxが0より大きければ、逆伝播は上流の値をそのまま下流に流す。逆にxが0以下であれば下流への信号はストップする。
実装は以下となる。
Step4: 5.5.2 Sigmoidレイヤ
シグモイド関数は次式で表される。
$$
y=\frac{1}{1+exp(-x)}
$$
計算グラフでは「×」「exp」「+」「/」の順でのノードが連結する。
逆伝播の流れを順に沿って見ていく。
ステップ1
「/」ノードは$y=\frac{1}{x}$を表す(上記式で「1+exp(-x)」を分母にするところから分かる)が、この微分は解析的に次式のようになる。
$$
\frac{\delta y}{\delta x} = -\frac{1}{x^{2}} \
= -y^{2}
$$
ステップ2
「+」ノードは上流の値を下流にそのまま流すだけ。
ステップ3
「exp」ノードは$y=exp(x)$であり、微分は以下式で表される。
$$
\frac{\delta y}{\delta x}= exp(x)
$$
計算グラフでは順伝播時の出力を乗算して下流へ伝搬する。($exp(-x)$)
ステップ4
「×」ノードは順伝播時の値をひっくり返して乗算する。(-1)
以上によりSigmoidの逆伝播の出力は以下となる。
$$
\frac{\delta L}{\delta y}y^{2}exp(-x) \
=\frac{\delta L}{\delta y}\frac{1}{(1+exp(-x))^{2}}exp(-x) \
=\frac{\delta L}{\delta y}\frac{1}{1+exp(-x)} \frac{exp(-x)}{1+exp(-x)} \
=\frac{\delta L}{\delta y}y(1-y)
$$
これによりSigmoidレイヤの逆伝播は順伝播の出力だけから求めることが出来る。
実装は以下となる。
Step5: 5.6 Affine/Softmaxレイヤの実装
5.6.1 Affineレイヤ
行列の内積は幾何学分野で「アフィン変換」と呼ばれる。ここではアフィン変換を行なう処理を「Affineレイヤ」という名前で実装する。
ニューロンの重み付き和$Y = np.dot(X,W)+B$における計算グラフで考えてみる。
この計算グラフではこれまでのような「スカラ値」ではなく「行列」が伝播していく。
逆伝播は以下のように導出される。
$$
\frac{\delta L}{\delta X}=\frac{\delta L}{\delta Y}・W^{T} \
\frac{\delta L}{\delta W}=X^{T}\frac{\delta L}{\delta Y}
$$
$W^{T}$のTは転置を表す。行列の内積の逆伝播は対応する次元の要素数を一致させるように内積を組み立てる必要がある。次元数は以下の通りである。
X
Step6: 順伝播では各データに対して加算がされていたが、逆伝播においてはそれぞれの逆伝播のデータからバイアスに集約される必要がある。
Step7: 5.6.3 Softmax-with-Lossレイヤ
出力層であるソフトマックス関数について考える。ソフトマックス関数は入力された値を正規化して出力する(出力の和が1になる)。
ニューラルネットワークにおける推論ではソフトマックス関数は不要(Affineレイヤの出力のうち最も高い値(スコア)を推論値とすれば良いので)だが、学習時には必要になる。
ここでは損失関数である交差エントロピー誤差(cross entoropy error)も含めて「Softmax-with-Loss レイヤ」という名前で実装する。
このレイヤはSoftmaxレイヤを通した後にCross Emtropy Errorレイヤを通す構造になっている。
順伝播ではSoftmaxレイヤの出力と教師データのラベルがCross Emtropy Errorレイヤの入力となり、損失Lを出力する。
逆伝播ではSoftmaxレイヤからは$(y_{1}-t_{1}, y_{2}-t_{2}, y_{3}-t_{3})$というシンプルな形で流れてくる。これにより、誤った推論を行った場合は大きな誤差が逆伝播されるが、正解している場合は小さい誤差が伝搬される。
このようにシンプルな形で流せるのはソフトマックス関数の損失関数として交差エントロピー誤差を選んでいるからである。(回帰問題における出力層に恒等関数、損失関数として2乗和誤差を用いることも同様(3.5参照))
Step8: 5.7 誤差逆伝播法の実装
5.7.1 ニューラルネットワークの学習の全体図
ステップ1 ミニバッチ
訓練データからランダムに一部データを選び出す
ステップ2 勾配の算出
各重みパラメータに関する損失関数の勾配を求める
ステップ3 パラメータの更新
重みパラメータを勾配方向に微小量更新
ステップ4 繰り返す
ステップ1~3を繰り返す
5.7.2 誤差逆伝播法に対応したニューラルネットワークの実装
Step9: 5.7.3 誤差逆伝播法の勾配確認
これまでで勾配を求める方法として数値微分と解析的に求める方法が説明された。解析的に求める方法としては誤差逆伝播法をもちいて大量パラメータに対しても効率的に計算出来た。実装においては誤差逆伝播法があれば良いが、実装の正しさを確認するために数値微分が用いられる。実装が簡単なため数値微分による確認は有効であり「勾配確認(gradient check)」と言われる。
Step10: 5.7.4 誤差逆伝播法を使った学習
mnistに対して誤差逆伝播法を用いて学習を行なう。 | Python Code:
import matplotlib.pyplot as plt
from graphviz import Digraph
from matplotlib.image import imread
f = Digraph(format="png")
f.attr(rankdir='LR', size='8,5')
f.attr('node', shape='circle')
f.edge('apple', '×2', label='100')
f.edge('×2', '×1.1', label='200')
f.edge('×1.1', 'cash', label='220')
f.render("../docs/5_1_1")
img = imread('../docs/5_1_1.png')
plt.figure(figsize=(10,8))
plt.imshow(img)
plt.show()
f = Digraph(format="png")
f.attr(rankdir='LR', size='8,5')
f.attr('node', shape='circle')
f.node('apple', 'apple')
f.node('apple_num', 'apple_num')
f.node('tax', 'tax')
f.node('mul1', '×')
f.node('mul2', '×')
f.node('cash', 'cash')
f.body.append('{rank=same; apple; apple_num; tax;}')
f.edge('apple', 'mul1', label='100')
f.edge('apple_num', 'mul1', label='2')
f.edge('mul1', 'mul2', label='200')
f.edge('tax', 'mul2', label='1.1')
f.edge('mul2', 'cash', label='220')
f.render("../docs/5_1_1")
img = imread('../docs/5_1_1.png')
plt.figure(figsize=(10,8))
plt.imshow(img)
plt.show()
Explanation: 5章 誤差逆伝播法
ニューラルネットワークの学習では重みパラメータの勾配(重みパラメータに関する損失関数の勾配)は数値微分によって求めていた。これは実装は簡単だが、計算に時間がかかる。そこで効率よく勾配計算を行なうために「誤差逆伝播法」を用いる。
ここでは数式ではなく、「計算グラフ(computational graph)」を用いて理解を深める。
5.1 計算グラフ
計算グラフは計算の過程をグラフにしたものである。
5.1.1 計算グラフで解く
問1 1個100円の林檎を2個買った際に支払う金額を求める。消費税は10%とする。
End of explanation
import matplotlib.pyplot as plt
from graphviz import Digraph
from matplotlib.image import imread
f = Digraph(format="png")
f.attr(rankdir='LR', size='8,5')
f.attr('node', shape='circle')
f.edge('start', 'f', label='x')
f.edge('f', 'start', label='E*δy/δy')
f.edge('f', 'end', label='y')
f.edge('end', 'f', label='E')
f.render("../docs/5_2_1")
img = imread('../docs/5_2_1.png')
plt.figure(figsize=(10,8))
plt.imshow(img)
plt.show()
Explanation: グラフの左から右へ計算を進めることを「順伝播(forward propagation)」という。逆に右から左に計算を遡ることを「逆伝播(backward propagation)」という。
5.1.2 局所的な計算
各ノードにおける計算は局所的なものであり、それ以前の計算されてくる過程は考慮する必要なく計算が行なうことが出来る。
5.1.3 なぜ計算グラフで解くのか?
計算グラフで解く利点
* 各ノードでは局所的な計算でよい
* 途中の計算の結果を全て保持することができる
* 玉方向の伝播によって「微分」を効率よく計算できる点
りんごの値段が少し上がった場合に総支払額が同変化するか確認することが出来る。
5.2 連鎖律
局所的な微分を伝達する原理は、「連鎖律(chain rule)」によるものである。
5.2.1 計算グラフの逆伝播
逆伝播の計算手順は信号$E$に対して、ノードの局所的な微分$\frac{\delta y}{\delta x}$を乗算し次のノードへ伝達していく。
End of explanation
class MulLayer:
def __init__(self):
self.x = None
self.y = None
def forward(self, x, y):
self.x = x
self.y = y
out = x * y
return out
def backward(self, dout):
# ひっくり返した値を乗算して返す
dx = dout * self.y
dy = dout * self.x
return dx, dy
# リンゴ2個と消費税を計算する実装
apple = 100
apple_num = 2
tax = 1.1
# layer
mul_apple_layer = MulLayer()
mul_tax_layer = MulLayer()
# forward
apple_price = mul_apple_layer.forward(apple, apple_num)
price = mul_tax_layer.forward(apple_price, tax)
print(price)
# 各変数に関する微分
dprice = 1
dapple_price, dtax = mul_tax_layer.backward(dprice)
dapple, dapple_num = mul_apple_layer.backward(dapple_price)
print(dapple, dapple_num, dtax)
# 加算レイヤ
class AddLayer:
def __init__(self):
pass
def forward(self, x, y):
out = x + y
return out
def backward(self, dout):
dx = dout * 1
dy = dout * 1
return dx, dy
# りんごとみかんの買い物を実装
apple = 100
apple_num = 2
orange = 150
orange_num = 3
tax = 1.1
# layer
mul_apple_layer = MulLayer()
mul_orange_layer = MulLayer()
add_apple_orange_layer = AddLayer()
mul_tax_layer = MulLayer()
# forward
apple_price = mul_apple_layer.forward(apple, apple_num) # (1)
orange_price = mul_orange_layer.forward(orange, orange_num) # (2)
all_price = add_apple_orange_layer.forward(apple_price, orange_price) # (3)
price = mul_tax_layer.forward(all_price, tax) # (4)
# backward
dprice = 1
dall_price, dtax = mul_tax_layer.backward(dprice) # (4)
dapple_price, dorange_price = add_apple_orange_layer.backward(dall_price) # (3)
dorange, dorange_num = mul_orange_layer.backward(dorange_price) # (2)
dapple, dapple_num = mul_apple_layer.backward(dapple_price) # (1)
print("price:", int(price))
print("dApple:", dapple)
print("dApple_num:", int(dapple_num))
print("dOrange:", dorange)
print("dOrange_num:", int(dorange_num))
print("dTax:", dtax)
Explanation: 5.2.2 連鎖律とは
合成関数とは
複数の関数によって構成される関数。
$z=(x+y)^{2}$という式は以下2式のように構成される。
$$
z=t^{2} \
t=x+y
$$
連鎖律は合成関数の微分についての性質である。
合成関数の微分は、合成関数を構成するそれぞれの関数の微分の積によって表すことが出来る
上記例でいうと以下の様に表すことが出来る。
$$\frac{\delta z}{\delta x}=
\frac{\delta z}{\delta t}\frac{\delta t}{\delta x}$$
計算を進めると以下となり、連鎖律をもちいて合成関数の微分が行える。
$$
\frac{\delta z}{\delta t} = 2t \
\frac{\delta t}{\delta x} = 1 \
\frac{\delta z}{\delta x}=
\frac{\delta z}{\delta t}\frac{\delta t}{\delta x} = 2t・1 = 2(x + y)
$$
5.2.3 連鎖律と計算グラフ
計算グラフの逆伝播を考えた場合、連鎖律を用いて微分を求めることが出来る。
$$
\frac{\delta z}{\delta z}\frac{\delta z}{\delta t}\frac{\delta t}{\delta x} \
=\frac{\delta z}{\delta t}\frac{\delta t}{\delta x} \
=\frac{\delta z}{\delta x} \
$$
5.3 逆伝播
加算や乗算を例に逆伝播の仕組みを考える。
5.3.1 加算ノードの逆伝播
$z=x+y$について逆伝播を考える。この式について微分を行なうと以下となる。
$$
\frac{\delta z}{\delta x}=1 \
\frac{\delta z}{\delta y}=1 \
$$
逆伝播の際には、前の計算から伝わってきた$\frac{\delta L}{\delta z}$を乗算して次のノードに渡す。(この場合はx,yの微分とも1なので$\frac{\delta L}{\delta z}・1$となる)
5.3.2 乗算ノードの逆伝播
$z=xy$について逆伝播を考える。この式について微分を行なうと以下となる。
$$
\frac{\delta z}{\delta x}=y \
\frac{\delta z}{\delta y}=x \
$$
乗算の逆伝播では入力した値をひっくり返した値が用いられることとなる。
$$
xの逆伝播:\frac{\delta L}{\delta z}\frac{\delta z}{\delta x} = \frac{\delta L}{\delta z}・y\
yの逆伝播:\frac{\delta L}{\delta z}\frac{\delta z}{\delta y} = \frac{\delta L}{\delta z}・x \
$$
加算の逆伝播では上流の値をただ流すだけ立ったので順伝播の入力信号は不要だったが、乗算の逆伝播では順伝播の入力信号を保持しておかなければならない。
5.3.3 リンゴの例
割愛
5.4 単純なレイヤの実装
計算グラフ乗算ノードを「乗算レイヤ(MulLayer)」、加算ノードを「加算レイヤ(AddLayer)」という名前で実装する。
5.4.1 乗算レイヤの実装
レイヤはforward()とbackward()の共通のインタフェースを持つようにする。
End of explanation
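# A small numerical check (hedged sketch): the analytic gradients returned by
# MulLayer.backward should agree with a finite-difference approximation of
# d(x*y)/dx and d(x*y)/dy. This is the same idea as the gradient check used later on.
import numpy as np

x, y, h = 3.0, 4.0, 1e-4
layer = MulLayer()
layer.forward(x, y)
dx, dy = layer.backward(1.0)                        # analytic: dx = y, dy = x
dx_num = ((x + h) * y - (x - h) * y) / (2 * h)      # numerical derivative w.r.t. x
dy_num = (x * (y + h) - x * (y - h)) / (2 * h)      # numerical derivative w.r.t. y
print(np.isclose(dx, dx_num), np.isclose(dy, dy_num))  # expected: True True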
class Relu:
def __init__(self):
self.mask = None
def forward(self, x):
# maskはxが0以下の場合false、それ以外はtrueを保持。xの配列の形で保持
self.mask = (x <= 0)
out = x.copy()
# maskでtrueである要素(xが0以下)は0を代入
out[self.mask] = 0
return out
def backward(self, dout):
dout[self.mask] = 0
dx = dout
return dx
Explanation: 5.5 活性化関数レイヤの実装
計算グラフの考え方をニューラルネットワークに適用していく。ここでは、ニューラルネットワークを構成する「層(レイヤ)」をひとつのクラスとして実装する。活性化関数であるReLUとSigmoidレイヤを実装する。
5.5.1 ReLUレイヤ
活性化関数してとして使われるReLU(Rectified Linear Unit)は次式で表される。
$$
y
= \begin{cases}
& x \; (x>0) \
& 0 \; (x\leq0)
\end{cases}
$$
xに関するyの微分は以下のようになる。
$$
\frac{\delta y}{\delta x}
= \begin{cases}
& 1 \; (x>0) \
& 0 \; (x\leq0)
\end{cases}
$$
順伝播時入力のxが0より大きければ、逆伝播は上流の値をそのまま下流に流す。逆にxが0以下であれば下流への信号はストップする。
実装は以下となる。
End of explanation
class Sigmoid:
def __init__(self):
self.out = None
def forward(self, x):
out = 1 / (1 + np.exp(-x))
self.out = out
return out
def backward(self, dout):
dx = dout * (1.0 - self.out) * self.out
return dx
Explanation: 5.5.2 Sigmoidレイヤ
シグモイド関数は次式で表される。
$$
y=\frac{1}{1+exp(-x)}
$$
計算グラフでは「×」「exp」「+」「/」の順でのノードが連結する。
逆伝播の流れを順に沿って見ていく。
ステップ1
「/」ノードは$y=\frac{1}{x}$を表す(上記式で「1+exp(-x)」を分母にするところから分かる)が、この微分は解析的に次式のようになる。
$$
\frac{\delta y}{\delta x} = -\frac{1}{x^{2}} \
= -y^{2}
$$
ステップ2
「+」ノードは上流の値を下流にそのまま流すだけ。
ステップ3
「exp」ノードは$y=exp(x)$であり、微分は以下式で表される。
$$
\frac{\delta y}{\delta x}= exp(x)
$$
計算グラフでは順伝播時の出力を乗算して下流へ伝搬する。($exp(-x)$)
ステップ4
「×」ノードは順伝播時の値をひっくり返して乗算する。(-1)
以上によりSigmoidの逆伝播の出力は以下となる。
$$
\frac{\delta L}{\delta y}y^{2}exp(-x) \
=\frac{\delta L}{\delta y}\frac{1}{(1+exp(-x))^{2}}exp(-x) \
=\frac{\delta L}{\delta y}\frac{1}{1+exp(-x)} \frac{exp(-x)}{1+exp(-x)} \
=\frac{\delta L}{\delta y}y(1-y)
$$
これによりSigmoidレイヤの逆伝播は順伝播の出力だけから求めることが出来る。
実装は以下となる。
End of explanation
import numpy as np
X_dot_W = np.array([[0, 0, 0], [10, 10, 10]])
B = np.array([1, 2, 3])
print(X_dot_W)
print(X_dot_W + B)
Explanation: 5.6 Affine/Softmaxレイヤの実装
5.6.1 Affineレイヤ
行列の内積は幾何学分野で「アフィン変換」と呼ばれる。ここではアフィン変換を行なう処理を「Affineレイヤ」という名前で実装する。
ニューロンの重み付き和$Y = np.dot(X,W)+B$における計算グラフで考えてみる。
この計算グラフではこれまでのような「スカラ値」ではなく「行列」が伝播していく。
逆伝播は以下のように導出される。
$$
\frac{\delta L}{\delta X}=\frac{\delta L}{\delta Y}・W^{T} \
\frac{\delta L}{\delta W}=X^{T}\frac{\delta L}{\delta Y}
$$
$W^{T}$のTは転置を表す。行列の内積の逆伝播は対応する次元の要素数を一致させるように内積を組み立てる必要がある。次元数は以下の通りである。
X:(2,)
W:(2,3)
X・W:(3,)
B:(3,)
Y:(3,)
$\frac{\delta L}{\delta Y}:(3,)$
$W^{T}:(3,2)$
$X^{T}:(2,1)$
5.6.2 バッチ版Affineレイヤ
上記で説明したAffineレイヤは入力であるXは一つのデータを対象としたものだったが、N個のデータをまとめて順伝播する場合のバッチ版のAffineレイヤを考える。
Xの形状は(N,2)で表され、Nはデータ個数となる。
これまで挙げた例の$Y=np.dot(X,W)+B$についてN=2の場合の順伝播の計算は以下となる。
End of explanation
dY = np.array([[1, 2, 3], [4, 5, 6]])
print(dY)
dB = np.sum(dY, axis=0)
print(dB)
Explanation: 順伝播では各データに対して加算がされていたが、逆伝播においてはそれぞれの逆伝播のデータからバイアスに集約される必要がある。
End of explanation
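# The network below imports an Affine layer from src.layer without showing it. As a
# reference, here is a minimal hedged sketch of what such a layer is assumed to look like,
# following the formulas above (dX = dY . W^T, dW = X^T . dY, db = sum of dY over the
# batch). The actual src.layer implementation may differ in details.
import numpy as np

class AffineSketch:
    def __init__(self, W, b):
        self.W = W
        self.b = b
        self.x = None
        self.dW = None   # gradient w.r.t. the weights, filled in by backward()
        self.db = None   # gradient w.r.t. the bias

    def forward(self, x):
        self.x = x
        return np.dot(x, self.W) + self.b

    def backward(self, dout):
        dx = np.dot(dout, self.W.T)
        self.dW = np.dot(self.x.T, dout)
        self.db = np.sum(dout, axis=0)   # collapse the batch dimension back into the bias
        return dx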
def softmax(a):
exp_a = np.exp(a - np.max(a))  # subtract the max for numerical stability; the softmax output is unchanged
sum_exp_a = np.sum(exp_a)
y = exp_a / sum_exp_a
return y
def cross_entropy_error(y, t):
delta = 1e-7
return -np.sum(t * np.log(y + delta))
class SoftmaxWithLoss:
def __init__(self):
self.loss = None
self.y = None # softmaxの出力
self.t = None # 教師データ
def forward(self, x, t):
self.t = t
self.y = softmax(x)
self.loss = cross_entropy_error(self.y, self.t)
return self.loss
def backward(self, dout=1):
batch_size = self.t.shape[0]
# データ1個あたりの誤差を全データに伝播させる
dx = (self.y - self.t) / batch_size
return dx
Explanation: 5.6.3 Softmax-with-Lossレイヤ
出力層であるソフトマックス関数について考える。ソフトマックス関数は入力された値を正規化して出力する(出力の和が1になる)。
ニューラルネットワークにおける推論ではソフトマックス関数は不要(Affineレイヤの出力のうち最も高い値(スコア)を推論値とすれば良いので)だが、学習時には必要になる。
ここでは損失関数である交差エントロピー誤差(cross entoropy error)も含めて「Softmax-with-Loss レイヤ」という名前で実装する。
このレイヤはSoftmaxレイヤを通した後にCross Emtropy Errorレイヤを通す構造になっている。
順伝播ではSoftmaxレイヤの出力と教師データのラベルがCross Emtropy Errorレイヤの入力となり、損失Lを出力する。
逆伝播ではSoftmaxレイヤからは$(y_{1}-t_{1}, y_{2}-t_{2}, y_{3}-t_{3})$というシンプルな形で流れてくる。これにより、誤った推論を行った場合は大きな誤差が逆伝播されるが、正解している場合は小さい誤差が伝搬される。
このようにシンプルな形で流せるのはソフトマックス関数の損失関数として交差エントロピー誤差を選んでいるからである。(回帰問題における出力層に恒等関数、損失関数として2乗和誤差を用いることも同様(3.5参照))
End of explanation
import sys, os
sys.path.append(os.pardir) # 親ディレクトリのファイルをインポートするための設定
import numpy as np
from src.gradient import numerical_gradient
from collections import OrderedDict
from src.layer import *
class TwoLayerNet:
def __init__(self, input_size, hidden_size, output_size, weight_init_std = 0.01):
# 重みの初期化
self.params = {}
self.params['W1'] = weight_init_std * np.random.randn(input_size, hidden_size)
self.params['b1'] = np.zeros(hidden_size)
self.params['W2'] = weight_init_std * np.random.randn(hidden_size, output_size)
self.params['b2'] = np.zeros(output_size)
# レイヤの生成
self.layers = OrderedDict()
self.layers['Affine1'] = Affine(self.params['W1'], self.params['b1'])
self.layers['Relu1'] = Relu()
self.layers['Affine2'] = Affine(self.params['W2'], self.params['b2'])
self.lastLayer = SoftmaxWithLoss()
def predict(self, x):
for layer in self.layers.values():
x = layer.forward(x)
return x
# x:入力データ, t:教師データ
def loss(self, x, t):
y = self.predict(x)
return self.lastLayer.forward(y, t)
def accuracy(self, x, t):
y = self.predict(x)
y = np.argmax(y, axis=1)
if t.ndim != 1 : t = np.argmax(t, axis=1)
accuracy = np.sum(y == t) / float(x.shape[0])
return accuracy
# x:入力データ, t:教師データ
def numerical_gradient(self, x, t):
loss_W = lambda W: self.loss(x, t)
grads = {}
grads['W1'] = numerical_gradient(loss_W, self.params['W1'])
grads['b1'] = numerical_gradient(loss_W, self.params['b1'])
grads['W2'] = numerical_gradient(loss_W, self.params['W2'])
grads['b2'] = numerical_gradient(loss_W, self.params['b2'])
return grads
def gradient(self, x, t):
# forward
self.loss(x, t)
# backward
dout = 1
dout = self.lastLayer.backward(dout)
layers = list(self.layers.values())
layers.reverse()
for layer in layers:
dout = layer.backward(dout)
# 設定
grads = {}
grads['W1'], grads['b1'] = self.layers['Affine1'].dW, self.layers['Affine1'].db
grads['W2'], grads['b2'] = self.layers['Affine2'].dW, self.layers['Affine2'].db
return grads
Explanation: 5.7 誤差逆伝播法の実装
5.7.1 ニューラルネットワークの学習の全体図
ステップ1 ミニバッチ
訓練データからランダムに一部データを選び出す
ステップ2 勾配の算出
各重みパラメータに関する損失関数の勾配を求める
ステップ3 パラメータの更新
重みパラメータを勾配方向に微小量更新
ステップ4 繰り返す
ステップ1~3を繰り返す
5.7.2 誤差逆伝播法に対応したニューラルネットワークの実装
End of explanation
# 数値微分と誤差逆伝播法の誤差確認
import sys, os
sys.path.append(os.pardir) # 親ディレクトリのファイルをインポートするための設定
import numpy as np
from src.mnist import load_mnist
# データの読み込み
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)
network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)
x_batch = x_train[:3]
t_batch = t_train[:3]
grad_numerical = network.numerical_gradient(x_batch, t_batch)
grad_backprop = network.gradient(x_batch, t_batch)
for key in grad_numerical.keys():
diff = np.average( np.abs(grad_backprop[key] - grad_numerical[key]) )
print(key + ":" + str(diff))
Explanation: 5.7.3 誤差逆伝播法の勾配確認
これまでで勾配を求める方法として数値微分と解析的に求める方法が説明された。解析的に求める方法としては誤差逆伝播法をもちいて大量パラメータに対しても効率的に計算出来た。実装においては誤差逆伝播法があれば良いが、実装の正しさを確認するために数値微分が用いられる。実装が簡単なため数値微分による確認は有効であり「勾配確認(gradient check)」と言われる。
End of explanation
import sys, os
sys.path.append(os.pardir)
import numpy as np
from src.mnist import load_mnist
# データの読み込み
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, one_hot_label=True)
network = TwoLayerNet(input_size=784, hidden_size=50, output_size=10)
iters_num = 10000
train_size = x_train.shape[0]
batch_size = 100
learning_rate = 0.1
train_loss_list = []
train_acc_list = []
test_acc_list = []
iter_per_epoch = max(train_size / batch_size, 1)
for i in range(iters_num):
batch_mask = np.random.choice(train_size, batch_size)
x_batch = x_train[batch_mask]
t_batch = t_train[batch_mask]
# 勾配
#grad = network.numerical_gradient(x_batch, t_batch)
grad = network.gradient(x_batch, t_batch)
# 更新
for key in ('W1', 'b1', 'W2', 'b2'):
network.params[key] -= learning_rate * grad[key]
loss = network.loss(x_batch, t_batch)
train_loss_list.append(loss)
if i % iter_per_epoch == 0:
train_acc = network.accuracy(x_train, t_train)
test_acc = network.accuracy(x_test, t_test)
train_acc_list.append(train_acc)
test_acc_list.append(test_acc)
print(train_acc, test_acc)
Explanation: 5.7.4 誤差逆伝播法を使った学習
mnistに対して誤差逆伝播法を用いて学習を行なう。
End of explanation |
3,389 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Make an MNE-Report with a Slider
In this example, MEG evoked data are plotted in an HTML slider.
Step1: Do standard folder parsing (this can take a couple of minutes)
Step2: Add a custom section with an evoked slider | Python Code:
# Authors: Teon Brooks <[email protected]>
# Eric Larson <[email protected]>
#
# License: BSD (3-clause)
from mne.report import Report
from mne.datasets import sample
from mne import read_evokeds
from matplotlib import pyplot as plt
data_path = sample.data_path()
meg_path = data_path + '/MEG/sample'
subjects_dir = data_path + '/subjects'
evoked_fname = meg_path + '/sample_audvis-ave.fif'
Explanation: Make an MNE-Report with a Slider
In this example, MEG evoked data are plotted in an HTML slider.
End of explanation
report = Report(image_format='png', subjects_dir=subjects_dir,
info_fname=evoked_fname, subject='sample',
raw_psd=False) # use False for speed here
report.parse_folder(meg_path, on_error='ignore', mri_decim=10)
Explanation: Do standard folder parsing (this can take a couple of minutes):
End of explanation
# Load the evoked data
evoked = read_evokeds(evoked_fname, condition='Left Auditory',
baseline=(None, 0), verbose=False)
evoked.crop(0, .2)
times = evoked.times[::4]
# Create a list of figs for the slider
figs = list()
for t in times:
figs.append(evoked.plot_topomap(t, vmin=-300, vmax=300, res=100,
show=False))
plt.close(figs[-1])
report.add_slider_to_section(figs, times, 'Evoked Response',
image_format='png') # can also use 'svg'
# Save the report
report.save('my_report.html', overwrite=True)
Explanation: Add a custom section with an evoked slider:
End of explanation |
3,390 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tensor Flow to create a useless images
To learn how to encode a simple image and a GIF
Import needed for Tensorflow
Step1: Import needed for Jupiter
Step2: A function to save a picture
Step4: A function to draw the cost function in Jupyter
Step5: Create some random pictures
Encode the input (a number)
This example convert the number to a binary representation
Step6: To create a GIF | Python Code:
import numpy as np
import tensorflow as tf
Explanation: Tensor Flow to create a useless images
To learn how to encode a simple image and a GIF
Import needed for Tensorflow
End of explanation
%matplotlib notebook
import matplotlib
import matplotlib.pyplot as plt
from IPython.display import Image
Explanation: Import needed for Jupiter
End of explanation
def write_png(tensor, name):
casted_to_uint8 = tf.cast(tensor, tf.uint8)
converted_to_png = tf.image.encode_png(casted_to_uint8)
f = open(name, "wb+")
f.write(converted_to_png.eval())
f.close()
Explanation: A function to save a picture
End of explanation
class CostTrace:
A simple example class
def __init__(self):
self.cost_array = []
def log(self, cost):
self.cost_array.append(cost)
def draw(self):
plt.figure(figsize=(12,5))
plt.plot(range(len(self.cost_array)), self.cost_array, label='cost')
plt.legend()
plt.yscale('log')
plt.show()
Explanation: A function to draw the cost function in Jupyter
End of explanation
# Init size
width = 100
height = 100
RGB = 3
shape = [height,width, RGB]
# Create the generated tensor as a variable
rand_uniform = tf.random_uniform(shape, minval=0, maxval=255, dtype=tf.float32)
generated = tf.Variable(rand_uniform)
#define the cost function
c_mean = tf.reduce_mean(tf.pow(generated,2)) # we want a low mean
c_max = tf.reduce_max(generated) # we want a low max
c_min = -tf.reduce_min(generated) # we want a high mix
c_diff = 0
for i in range(0,height-1, 1):
line1 = tf.gather(generated, i,)
line2 = tf.gather(generated, i+1)
c_diff += tf.reduce_mean(tf.pow(line1-line2-30, 2)) # to force a gradient
cost = c_mean + c_max + c_min + c_diff
#cost = c_mean + c_diff
print ('cost defined')
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(cost, var_list=[generated])
print ('train_op defined')
# Initializing the variables
init = tf.initialize_all_variables()
print ('variables initialiazed defined')
# Launch the graph
with tf.Session() as sess:
sess.run(init)
print ('init done')
cost_trace = CostTrace()
for epoch in range(0,10000):
sess.run(train_op)
if (epoch % 100 == 0):
c = cost.eval()
print ('epoch', epoch,'cost' ,c, c_mean.eval(), c_min.eval(), c_max.eval(), c_diff.eval())
cost_trace.log(c)
write_png(generated, "generated{:06}.png".format(epoch))
print ('all done')
cost_trace.draw()
Image("generated000000.png")
Explanation: Create some random pictures
Encode the input (a number)
This example convert the number to a binary representation
End of explanation
from PIL import Image, ImageSequence
import glob, sys, os
os.chdir(".")
frames = []
for file in sorted(glob.glob("gene*.png")):  # glob order is arbitrary, so sort to keep frames chronological
print(file)
im = Image.open(file)
frames.append(im)
from images2gif import writeGif
writeGif("generated.gif", frames, duration=0.1)
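# images2gif is fairly old and can be awkward to install. As an alternative (a hedged
# sketch, assuming the third-party `imageio` package is available), the same GIF can be
# written with imageio; sorted() keeps the zero-padded frames in chronological order.
import glob
import imageio

frame_files = sorted(glob.glob("gene*.png"))
images = [imageio.imread(f) for f in frame_files]
imageio.mimsave("generated_imageio.gif", images, duration=0.1)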
Explanation: To create a GIF
End of explanation |
3,391 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Crossentropy method
In this section we'll extend your CEM implementation with neural networks! You will train a multi-layer neural network to solve simple continuous state space games. Please make sure you're done with tabular crossentropy method from the previous notebook.
Step2: Neural Network Policy
For this assignment we'll utilize the simplified neural network implementation from Scikit-learn. Here's what you'll need
Step4: CEM steps
Deep CEM uses exactly the same strategy as the regular CEM, so you can copy your function code from previous notebook.
The only difference is that now each observation is not a number but a float32 vector.
Step6: Training loop
Generate sessions, select N best and fit to those.
Step8: Results
Step9: Homework part I
Tabular crossentropy method
You may have noticed that the taxi problem quickly converges from -100 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode.
Tasks
1.1 (2 pts) Find out how the algorithm performance changes if you use a different percentile and/or n_sessions. Provide here some figures so we can see how the hyperparameters influence the performance.
1.2 (1 pts) Tune the algorithm to end up with positive average score.
It's okay to modify the existing code.
<Describe what you did here>
Homework part II
Deep crossentropy method
By this moment, you should have got enough score on CartPole-v0 to consider it solved (see the link). It's time to try something harder.
if you have any trouble with CartPole-v0 and feel stuck, feel free to ask us or your peers for help.
Tasks
2.1 (3 pts) Pick one of environments | Python Code:
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
import gym
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# if you see "<classname> has no attribute .env", remove .env or update gym
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape[0]
plt.imshow(env.render("rgb_array"))
print("state vector dim =", state_dim)
print("n_actions =", n_actions)
Explanation: Deep Crossentropy method
In this section we'll extend your CEM implementation with neural networks! You will train a multi-layer neural network to solve simple continuous state space games. Please make sure you're done with tabular crossentropy method from the previous notebook.
End of explanation
from sklearn.neural_network import MLPClassifier
agent = MLPClassifier(
hidden_layer_sizes=(20, 20),
activation='tanh',
)
# initialize agent to the dimension of state space and number of actions
agent.partial_fit([env.reset()] * n_actions, range(n_actions), range(n_actions))
def generate_session(env, agent, t_max=1000):
Play a single game using agent neural network.
Terminate when game finishes or after :t_max: steps
states, actions = [], []
total_reward = 0
s = env.reset()
for t in range(t_max):
# use agent to predict a vector of action probabilities for state :s:
probs = <YOUR CODE>
assert probs.shape == (env.action_space.n,), "make sure probabilities are a vector (hint: np.reshape)"
# use the probabilities you predicted to pick an action
# sample proportionally to the probabilities, don't just take the most likely action
a = <YOUR CODE>
# ^-- hint: try np.random.choice
new_s, r, done, info = env.step(a)
# record sessions like you did before
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
break
return states, actions, total_reward
dummy_states, dummy_actions, dummy_reward = generate_session(env, agent, t_max=5)
print("states:", np.stack(dummy_states))
print("actions:", dummy_actions)
print("reward:", dummy_reward)
Explanation: Neural Network Policy
For this assignment we'll utilize the simplified neural network implementation from Scikit-learn. Here's what you'll need:
agent.partial_fit(states, actions) - make a single training pass over the data. Maximize the probability of :actions: from :states:
agent.predict_proba(states) - predict probabilities of all actions, a matrix of shape [len(states), n_actions]
End of explanation
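# A tiny, hedged illustration of the two scikit-learn calls described above, on made-up
# data, so the shapes are clear before filling in generate_session: partial_fit takes a
# batch of states with their target actions, and predict_proba returns one probability
# per action for every state passed in. `toy_*` names are just for this demo.
import numpy as np
from sklearn.neural_network import MLPClassifier

toy_agent = MLPClassifier(hidden_layer_sizes=(10,), activation='tanh')
toy_states = np.random.randn(4, state_dim)           # 4 fake states
toy_actions = [0, 1, 0, 1]                           # their "target" actions
toy_agent.partial_fit(toy_states, toy_actions, classes=range(n_actions))

toy_probs = toy_agent.predict_proba(toy_states)
print(toy_probs.shape)                               # (4, n_actions); each row sums to 1
print(np.random.choice(n_actions, p=toy_probs[0]))   # sample one action for the first state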
def select_elites(states_batch, actions_batch, rewards_batch, percentile=50):
Select states and actions from games that have rewards >= percentile
:param states_batch: list of lists of states, states_batch[session_i][t]
:param actions_batch: list of lists of actions, actions_batch[session_i][t]
:param rewards_batch: list of rewards, rewards_batch[session_i]
:returns: elite_states,elite_actions, both 1D lists of states and respective actions from elite sessions
Please return elite states and actions in their original order
[i.e. sorted by session number and timestep within session]
If you are confused, see examples below. Please don't assume that states are integers
(they will become different later).
<YOUR CODE: copy-paste your implementation from the previous notebook>
return elite_states, elite_actions
Explanation: CEM steps
Deep CEM uses exactly the same strategy as the regular CEM, so you can copy your function code from previous notebook.
The only difference is that now each observation is not a number but a float32 vector.
End of explanation
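# A hedged mini-example of the NumPy pieces typically used inside select_elites:
# np.percentile gives the reward threshold, and a simple comparison marks which
# sessions count as "elite". Toy numbers only -- not the full solution.
import numpy as np

toy_rewards = np.array([10, 0, 50, 30, 40])
threshold = np.percentile(toy_rewards, 70)
print(threshold)                    # 70th percentile of the toy rewards -> 38.0
print(toy_rewards >= threshold)     # mask of sessions that would be kept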
from IPython.display import clear_output
def show_progress(rewards_batch, log, percentile, reward_range=[-990, +10]):
A convenience function that displays training progress.
No cool math here, just charts.
mean_reward = np.mean(rewards_batch)
threshold = np.percentile(rewards_batch, percentile)
log.append([mean_reward, threshold])
clear_output(True)
print("mean reward = %.3f, threshold=%.3f" % (mean_reward, threshold))
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.plot(list(zip(*log))[0], label='Mean rewards')
plt.plot(list(zip(*log))[1], label='Reward thresholds')
plt.legend()
plt.grid()
plt.subplot(1, 2, 2)
plt.hist(rewards_batch, range=reward_range)
plt.vlines([np.percentile(rewards_batch, percentile)],
[0], [100], label="percentile", color='red')
plt.legend()
plt.grid()
plt.show()
n_sessions = 100
percentile = 70
log = []
for i in range(100):
# generate new sessions
sessions = [ <YOUR CODE: generate a list of n_sessions new sessions> ]
states_batch, actions_batch, rewards_batch = map(np.array, zip(*sessions))
elite_states, elite_actions = <YOUR CODE: select elite actions just like before>
<YOUR CODE: partial_fit agent to predict elite_actions(y) from elite_states(X)>
show_progress(rewards_batch, log, percentile, reward_range=[0, np.max(rewards_batch)])
if np.mean(rewards_batch) > 190:
print("You Win! You may stop training now via KeyboardInterrupt.")
Explanation: Training loop
Generate sessions, select N best and fit to those.
End of explanation
# Record sessions
import gym.wrappers
with gym.wrappers.Monitor(gym.make("CartPole-v0"), directory="videos", force=True) as env_monitor:
sessions = [generate_session(env_monitor, agent) for _ in range(100)]
# Show video. This may not work in some setups. If it doesn't
# work for you, you can download the videos and view them locally.
from pathlib import Path
from base64 import b64encode
from IPython.display import HTML
video_paths = sorted([s for s in Path('videos').iterdir() if s.suffix == '.mp4'])
video_path = video_paths[-1] # You can also try other indices
if 'google.colab' in sys.modules:
# https://stackoverflow.com/a/57378660/1214547
with video_path.open('rb') as fp:
mp4 = fp.read()
data_url = 'data:video/mp4;base64,' + b64encode(mp4).decode()
else:
data_url = str(video_path)
HTML(
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
.format(data_url))
Explanation: Results
End of explanation
def visualize_mountain_car(env, agent):
# Compute policy for all possible x and v (with discretization)
xs = np.linspace(env.min_position, env.max_position, 100)
vs = np.linspace(-env.max_speed, env.max_speed, 100)
grid = np.dstack(np.meshgrid(xs, vs[::-1])).transpose(1, 0, 2)
grid_flat = grid.reshape(len(xs) * len(vs), 2)
probs = agent.predict_proba(grid_flat).reshape(len(xs), len(vs), 3).transpose(1, 0, 2)
# # The above code is equivalent to the following:
# probs = np.empty((len(vs), len(xs), 3))
# for i, v in enumerate(vs[::-1]):
# for j, x in enumerate(xs):
# probs[i, j, :] = agent.predict_proba([[x, v]])[0]
# Draw policy
f, ax = plt.subplots(figsize=(7, 7))
ax.imshow(probs, extent=(env.min_position, env.max_position, -env.max_speed, env.max_speed), aspect='auto')
ax.set_title('Learned policy: red=left, green=nothing, blue=right')
ax.set_xlabel('position (x)')
ax.set_ylabel('velocity (v)')
# Sample a trajectory and draw it
states, actions, _ = generate_session(env, agent)
states = np.array(states)
ax.plot(states[:, 0], states[:, 1], color='white')
# Draw every 3rd action from the trajectory
for (x, v), a in zip(states[::3], actions[::3]):
if a == 0:
plt.arrow(x, v, -0.1, 0, color='white', head_length=0.02)
elif a == 2:
plt.arrow(x, v, 0.1, 0, color='white', head_length=0.02)
with gym.make('MountainCar-v0').env as env:
visualize_mountain_car(env, agent_mountain_car)
Explanation: Homework part I
Tabular crossentropy method
You may have noticed that the taxi problem quickly converges from -100 to a near-optimal score and then descends back into -50/-100. This is in part because the environment has some innate randomness. Namely, the starting points of passenger/driver change from episode to episode.
Tasks
1.1 (2 pts) Find out how the algorithm performance changes if you use a different percentile and/or n_sessions. Provide here some figures so we can see how the hyperparameters influence the performance.
1.2 (1 pts) Tune the algorithm to end up with positive average score.
It's okay to modify the existing code.
<Describe what you did here>
Homework part II
Deep crossentropy method
By this moment, you should have got enough score on CartPole-v0 to consider it solved (see the link). It's time to try something harder.
if you have any trouble with CartPole-v0 and feel stuck, feel free to ask us or your peers for help.
Tasks
2.1 (3 pts) Pick one of environments: MountainCar-v0 or LunarLander-v2.
For MountainCar, get average reward of at least -150
For LunarLander, get average reward of at least +50
See the tips section below, it's kinda important.
Note: If your agent is below the target score, you'll still get some of the points depending on the result, so don't be afraid to submit it.
2.2 (up to 6 pts) Devise a way to speed up training against the default version
Obvious improvement: use joblib. However, note that you will probably need to spawn a new environment in each of the workers instead of passing it via pickling. (2 pts)
Try re-using samples from 3-5 last iterations when computing threshold and training. (2 pts)
Obtain -100 at MountainCar-v0 or +200 at LunarLander-v2 (2 pts). Feel free to experiment with hyperparameters, architectures, schedules etc.
Please list what you did in Anytask submission form. This reduces probability that somebody misses something.
Tips
Gym page: MountainCar, LunarLander
Sessions for MountainCar may last for 10k+ ticks. Make sure t_max param is at least 10k.
Also it may be a good idea to cut rewards via ">" and not ">=". If 90% of your sessions get reward of -10k and 10% are better, than if you use percentile 20% as threshold, R >= threshold fails to cut off bad sessions while R > threshold works alright.
issue with gym: Some versions of gym limit game time by 200 ticks. This will prevent cem training in most cases. Make sure your agent is able to play for the specified t_max, and if it isn't, try env = gym.make("MountainCar-v0").env or otherwise get rid of TimeLimit wrapper.
If you use old swig lib for LunarLander-v2, you may get an error. See this issue for solution.
If it doesn't train, it's a good idea to plot reward distribution and record sessions: they may give you some clue. If they don't, call course staff :)
20-neuron network is probably not enough, feel free to experiment.
You may find the following snippet useful:
End of explanation |
3,392 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convergence
Description of the UCI protocol
Step1: The Speed of Search
The number of nodes searched depend linearly on time
Step2: So nodes per second is roughly constant
Step3: The hashtable usage is at full capacity
Step4: Number of nodes needed for the given depth grows exponentially, except for moves that are forced, which require very little nodes to search (those show as a horizontal plateau)
Step5: Convergence wrt. Depth
Step6: Convergence of the variations | Python Code:
%pylab inline
! grep "multipv 1" log2.txt | grep -v lowerbound | grep -v upperbound > log2_g.txt
def parse_info(l):
D = {}
k = l.split()
i = 0
assert k[i] == "info"
i += 1
while i < len(k):
if k[i] == "depth":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "seldepth":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "multipv":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "score":
if k[i+1] == "cp":
D["score_p"] = int(k[i+2]) / 100. # score in pawns
i += 3
elif k[i] == "nodes":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "nps":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "hashfull":
D[k[i]] = int(k[i+1]) / 1000. # between 0 and 1
i += 2
elif k[i] == "tbhits":
D[k[i]] = int(k[i+1])
i += 2
elif k[i] == "time":
D[k[i]] = int(k[i+1]) / 1000. # elapsed time in [s]
i += 2
elif k[i] == "pv":
D[k[i]] = k[i+1:]
return D
else:
raise Exception("Unknown kw")
# Convert to an array of lists
D = []
for l in open("log2_g.txt").readlines():
D.append(parse_info(l))
# Convert to a list of arrays
data = {}
for key in D[-1].keys():
d = []
for x in D:
if key in x:
d.append(x[key])
else:
d.append(-1)
if key != "pv":
d = array(d)
data[key] = d
Explanation: Convergence
Description of the UCI protocol: https://ucichessengine.wordpress.com/2011/03/16/description-of-uci-protocol/
Let us parse the logs first:
End of explanation
title("Number of nodes searched in time")
plot(data["time"] / 60., data["nodes"], "o")
xlabel("Time [min]")
ylabel("Nodes")
grid()
show()
Explanation: The Speed of Search
The number of nodes searched depend linearly on time:
End of explanation
title("Positions per second in time")
plot(data["time"] / 60., data["nps"], "o")
xlabel("Time [min]")
ylabel("Positions / s")
grid()
show()
Explanation: So nodes per second is roughly constant:
End of explanation
title("Hashtable usage")
hashfull = data["hashfull"]
hashfull[hashfull == -1] = 0
plot(data["time"] / 60., hashfull * 100, "o")
xlabel("Time [min]")
ylabel("Hashtable filled [%]")
grid()
show()
Explanation: The hashtable usage is at full capacity:
End of explanation
title("Number of nodes vs. depth")
semilogy(data["depth"], data["nodes"], "o")
xlabel("Depth [half moves]")
ylabel("Nodes")
grid()
show()
Explanation: Number of nodes needed for the given depth grows exponentially, except for moves that are forced, which require very little nodes to search (those show as a horizontal plateau):
End of explanation
title("Score")
plot(data["depth"], data["score_p"], "o")
xlabel("Depth [half moves]")
ylabel("Score [pawns]")
grid()
show()
Explanation: Convergence wrt. Depth
End of explanation
for i in range(len(data["depth"])):
print "%2i %s" % (data["depth"][i], " ".join(data["pv"][i])[:100])
Explanation: Convergence of the variations:
End of explanation |
3,393 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Circuitos de Segunda Ordem Gerais (Genéricos)
Jupyter Notebook desenvolvido por Gustavo S.S.
Dado um circuito de segunda ordem, determinamos sua resposta a
um degrau x(t) (que pode ser tensão ou corrente), conforme as quatro etapas
descritas a seguir
Step1: Problema Prático 8.9
Determine v e i para t > 0 no circuito da Figura 8.28. (Ver comentários sobre fontes de corrente no Problema prático 7.5.)
Step2: Exemplo 8.10
Descubra vo(t) para t > 0 no circuito da Figura 8.29.
Step3: Problema Prático 8.10
Para t > 0, obtenha vo(t) no circuito da Figura 8.32. (Sugestão | Python Code:
print("Exemplo 8.9\n")
from sympy import *
t = symbols('t')
V = 12
C = 1/2
L = 1
#Para t < 0
i0 = 0
v0 = V
print("i(0):",i0,"A")
print("v(0):",v0,"V")
#Para t = oo
i_f = V/(4 + 2)
vf = V*2/(4 + 2)
print("i(oo):",i_f,"A")
print("v(oo):",vf,"V")
#Para t > 0
#desativar fontes independentes
#i = v/2 + C*dv/dt
#4i + L*di/dt + v = 0
#4*(v/2 + 1/2*dv/dt) + d(v/2 + 1/2*dv/dt)/dt + v = 0
#2v + 2dv/dt + 1/2*dv/dt + 1/2*d^2v/t^2 + v = 0
#d^2v/dt^2 + 5dv/dt + 6v = 0
#s^2 + 5s + 6 = 0
s1 = -2
s2 = -3
#Raizes reais e negativas: Superamortecido
#vn(t) = A1*exp(-2t) + A2*exp(-3t)
#vss(t) = v(oo) = 4
#v(t) = 4 + A1*exp(-2t) + A2*exp(-3t)
#dv(0)/dt = -2A1 -3A2 = ic(0)/C
#ic(0) = -6
#C = 1/2
#-2A1 - 3A2 = -12
#2A1 + 3A2 = 12
#v(0) = 4 + A1 + A2 = 12
#A1 + A2 = 8
#2(8 - A2) + 3A2 = 12
A2 = -4
A1 = 12
v = A1*exp(s1*t) + A2*exp(s2*t) + vf
print("Resposta completa v(t):",v,"V")
#i = v/2 + C*dv/dt
i = v/2 + C*diff(v,t)
print("i(t):",i,"A")
Explanation: General (Generic) Second-Order Circuits
Jupyter Notebook developed by Gustavo S.S.
Given a second-order circuit, we determine its response to a step x(t) (which may be a voltage or a current) by following the four steps described below:
Determine the initial conditions x(0) and dx(0)/dt and the final value x(∞)
Turn off the independent sources and obtain the second-order differential equation
Determine the characteristic roots and find the form of the transient response xt(t). Depending on whether the response is overdamped, critically damped or underdamped, xt(t) is obtained with two unknown constants
Obtain the steady-state response xss(t) = x(∞)
The total response is then found as the sum of the transient and steady-state responses
Finally, determine the constants associated with the transient response by imposing the initial conditions x(0) and dx(0)/dt found in step 1.
We can apply this general procedure to find the step response of a second-order circuit, including circuits with operational amplifiers.
Example 8.9
Determine the complete response v and then i for t > 0 in the circuit of Figure 8.25.
End of explanation
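As a cross-check of Example 8.9, the same result can be obtained directly with sympy's dsolve. This is a sketch that assumes the forced equation d^2v/dt^2 + 5 dv/dt + 6 v = 6*v(oo) = 24 together with the initial conditions v(0) = 12 V and dv(0)/dt = ic(0)/C = -12 V/s found above.
from sympy import symbols, Function, Eq, Derivative, dsolve
t = symbols('t')
v = Function('v')
# Forced ODE from Example 8.9: d^2v/dt^2 + 5 dv/dt + 6 v = 24
ode = Eq(Derivative(v(t), t, 2) + 5*Derivative(v(t), t) + 6*v(t), 24)
# Initial conditions from the circuit: v(0) = 12 V, dv/dt(0) = ic(0)/C = -12 V/s
sol = dsolve(ode, v(t), ics={v(0): 12, v(t).diff(t).subs(t, 0): -12})
print(sol)  # expected: v(t) = 4 + 12*exp(-2*t) - 4*exp(-3*t)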
print("Problema Prático 8.9")
s = symbols('s')
C = 1/20
L = 2
Is = 3
#Para t < 0
v0 = 0
i0 = 0
print("v(0):",v0,"V")
print("i(0):",i0,"A")
#Para t = oo
i_f = Is
vf = 4*Is
print("i(oo):",i_f,"A")
print("v(oo):",vf,"V")
#Para t > 0
dv0 = Is/C
di0 = 10*Is/L
print("dv(0)/dt:",dv0,"V/s")
print("di(0)/dt:",di0,"A/s")
#desativar fontes indep.
#4i + L*di/dt - v + 10i = 0
#i = -C*dv/dt
#14(-1/20*dv/dt) + 2(-1/20*d^2v/dt^2) - v = 0
#-1/10*d^2v/dt^2 - 7/10*dv/dt - v = 0
#d^2v/dt^2 + 7*dv/dt + 10v = 0
#s^2 + 7s + 10 = 0
r = solve(s**2 + 7*s + 10,s)
s1,s2 = r[0],r[1]
print("Raízes s1 e s2: {0} , {1}".format(s1,s2))
#Raizes reais e negativas: Superamortecido
#v(t) = vf + A1*exp(-5t) + A2*exp(-2t)
#v0 = A1 + A2 = -12
#A1 = -12 - A2
#dv0/dt = -5A1 -2A2 = 60
#-5A1 - 2(-12 - A1) = 60
A1 = (60-24)/(-3)
A2 = -12 - A1
print("Constantes A1 e A2: {0} , {1}".format(A1,A2))
v = A1*exp(s1*t) + A2*exp(s2*t) + vf
print("Resposta completa v(t):",v,"V")
#3 = C*dv/dt + i
i = 3 - C*diff(v,t)
print("Resposta i(t):",i,"A")
Explanation: Practice Problem 8.9
Determine v and i for t > 0 in the circuit of Figure 8.28. (See the comments on current sources in Practice Problem 7.5.)
End of explanation
print("Exemplo 8.10\n")
V = 7
L1 = 1/2
L2 = 1/5
#Para t < 0
i1_0 = 0
i2_0 = 0
print("i1(0):",i1_0,"A")
print("i2(0):",i2_0,"A")
#Para t = oo
i_f = V/3
print("i(oo):",i_f,"A")
#Para t > 0
#di1(0)/dt = vl/L1
di1 = V/L1
#di2(0)/dt = vl/L2
di2 = 0/L2
print("di1(0)/dt:",di1,"A/s")
print("di2(0)/dt:",di2,"A/s")
#desligar fontes indep.
#3i1 + 1/2*di1/dt + (i1 - i2) = 0
#4i1 + 1/2*di1/dt - i2 = 0
#1/5*di2/t + i2 - i1 = 0
#4/5*di1/dt + 1/10*d^2i1/dt^2 + 4i1 + 1/2*di1/dt - i1 = 0
#d^2i1/dt^2 + 13di1/dt + 30i1 = 0
#s^2 + 13s + 30 = 0
r = solve(s**2 + 13*s + 30,s)
s1,s2 = r[0],r[1]
print("Raizes s1 e s2: {0} , {1}".format(s1,s2))
#raizes reais e negativas: Superamortecido
#i1(t) = 7/3 + A1*exp(-10t) + A2*exp(-3t)
#i1(0) = 7/3 + A1 + A2 = 0
#A1 = -7/3 - A2
#di1(0)/dt = -10A1 -3A2 = 14
#-10(-7/3 - A2) - 3A2 = 14
A2 = (14 - 70/3)/7
A1 = -7/3 - A2
print("Constantes A1 e A2: {0} , {1}".format(A1,A2))
i1 = i_f + A1*exp(s1*t) + A2*exp(s2*t)
print("i1(t):",i1,"A")
#V = 3i1 + L1*di1/dt + (i1 - i2)
i2 = 3*i1 + L1*diff(i1,t) + i1 - V
print("i2(t):",i2,"A")
vo = i1 - i2
print("V0(t):",vo,"V")
Explanation: Example 8.10
Find vo(t) for t > 0 in the circuit of Figure 8.29.
End of explanation
print("Problema Prático 8.10")
V = 20
C1 = 1/2
C2 = 1/3
#Para t < 0
v1_0 = 0
v2_0 = 0
print("v1(0) e v2(0):",v1_0,"V")
#Para t = oo
v1_f = V
v2_f = V
print("v1(oo) e v2(oo):",v1_f,"V")
#Para t > 0
#dv1(0)/dt = i1(0)/C1 = (V/1)/(1/2)
dv1 = V/C1
#dv2(0)/dt = i2(0)/C2 = 0/C2
dv2 = 0
print("dv1(0)/dt:",dv1,"V/s")
print("dv2(0)/dt:",dv2,"V/s")
#desligar fontes indep.
#v1/1 + C1*dv1/dt + vo/1 = 0
#vo = v1-v2
#v1 + 1/2*dv1/dt + v1-v2 = 0
#dv1/dt + 4v1 - 2v2 = 0
#v1 = 1*C2*dv2/dt + v2
#1/3*d^2v2/dt^2 + dv2/dt + 4/3*dv2/dt + 4v2 - 2v2 = 0
#d^2v2/dt^2 + 7dv2/dt + 6v2 = 0
#s^2 + 7s + 6 = 0
r = solve(s**2 + 7*s + 6,s)
s1,s2 = r[0],r[1]
print("Raizes para v2:",s1,s2)
#raizes reais e negativas: Superamortecido
#v2(t) = 20 + A1*exp(-6t) + A2*exp(-t)
#v2(0) = 20 + A1 + A2 = 0
#A2 = -20 - A1
#dv2(0)/dt = -6A1 - A2 = 0
#-6A1 - (-20 - A1) = 0
A1 = 20/5
A2 = -20 - A1
print("Constantes A1 e A2:",A1,A2)
v2 = v2_f + A1*exp(s1*t) + A2*exp(s2*t)
print("v2(t):",v2,"V")
v1 = C2*diff(v2,t) + v2
print("v1(t):",v1,"V")
vo = v1 - v2
print("Resposta vo(t):",vo,"V")
Explanation: Practice Problem 8.10
For t > 0, obtain vo(t) in the circuit of Figure 8.32. (Hint: First determine v1 and v2.)
End of explanation |
3,394 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WebGL problems
When dragging around the canvas, the plot is shifted down: cut off at the top and spilling over the bottom.
Bad in 0.12.14 and 0.12.15dev3
Good in 0.12.10
Step1: Responsive in notebook
Spills a scroll bar.
Not a problem in a vanilla save file.
Bad in 0.12.6, 0.12.7, 0.12.9, 0.12.10, 0.12.14, 0.12.15dev3
Good in 0.12.5 | Python Code:
N = 10000
x = np.random.normal(0, np.pi, N)
y = np.sin(x) + np.random.normal(0, 0.2, N)
p = figure(webgl=True)
p.scatter(x, y, alpha=0.1)
show(p)
Explanation: WebGL problems
When dragging around the canvas, the plot is shifted down: cut off at the top and spilling over the bottom.
Bad in 0.12.14 and 0.12.15dev3
Good in 0.12.10
End of explanation
!conda list | egrep "jupyter|notebook"
p = figure(plot_height=200, sizing_mode='scale_width')
p.scatter(x, y, alpha=0.1)
show(p)
N = 4000
x = np.random.random(size=N) * 100
y = np.random.random(size=N) * 100
radii = np.random.random(size=N) * 1.5
colors = [
"#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
]
TOOLS="hover,crosshair,pan,wheel_zoom,box_zoom,undo,redo,reset,tap,save,box_select,poly_select,lasso_select,"
p = figure(tools=TOOLS, sizing_mode='scale_width')
p.scatter(x, y, radius=radii,
fill_color=colors, fill_alpha=0.6,
line_color=None)
show(p)
save(p, 'color_scatter.html')
Explanation: Responsive in notebook
Spills a scroll bar.
Not a problem in a vanilla save file.
Bad in 0.12.6, 0.12.7, 0.12.9, 0.12.10, 0.12.14, 0.12.15dev3
Good in 0.12.5
End of explanation |
3,395 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a dataframe that looks like this: | Problem:
import pandas as pd
df = pd.DataFrame({'product': [1179160, 1066490, 1148126, 1069104, 1069105, 1160330, 1069098, 1077784, 1193369, 1179741],
'score': [0.424654, 0.424509, 0.422207, 0.420455, 0.414603, 0.168784, 0.168749, 0.168738, 0.168703, 0.168684]})
products = [[1069104, 1069105], [1066489, 1066491]]
for product in products:
df.loc[(df['product'] >= product[0]) & (df['product'] <= product[1]), 'score'] *= 10 |
3,396 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiments with Crop Improvements
This notebook experiments with advances in image cropping. It performs the following steps
determine dimensions of the image
determine the center of the image
zeroify the borders of the image to get rid of non-black background and edge distortions
crop to the new size of the image
Step1: Non-Cropped image case
Load some image that is not cropped.
Step2: The simplest way to detect edges for cropping of a circular image with dark background is to sum up along different axes. Let's see how it works. First we sum up all the color channels, then compute horizontal and vertical borders.
Step3: now compute borders of the image
Step4: This is the simple case of a non-trimmed image. Let's determine its radius and center. We reduce the radius in order to get rid of the edge distortions
Step5: Now we zeroify everything outside the circle determined above. We need to do this because the black background is actually not truly black.
Step6: And now we are ready to do the actual crop of the image. We perform the very same crop operation on the mask for further processing.
Step7: Cropped Image Case
Now let's experiment with a cropped image.
Step8: now compute borders of the image
Step9: And the radius and center of the image. If at least the upper or lower side of the disk is not cropped, we use it to determine the vertical center. Otherwise we use the Pythagorean theorem
Step10: and now putting everything together | Python Code:
import os
import skimage
from skimage import io, util
from skimage.draw import circle
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import math
Explanation: Experiments with Crop Improvements
This notebook experiments with advances in image cropping. It performs the following steps
determine dimensions of the image
determine the center of the image
zeroify the borders of the image to get rid of non-black background and edge distortions
crop to the new size of the image
End of explanation
baseFolder = '/Users/boris/Dropbox/Kaggle/Retina/train/sample'
imgFile = '78_left.jpeg'
filename = os.path.join(baseFolder, imgFile)
img = io.imread(filename)
plt.imshow(img)
Explanation: Non-Cropped image case
Load some image that is not cropped.
End of explanation
threshold = 20000
s = np.sum(img, axis=2)
cols = np.sum(s, axis=0) > threshold
rows = np.sum(s, axis=1) > threshold
Explanation: The simplest way to detect edges for cropping of a circular image with dark background is to sum up along different axes. Let's see how it works. First we sum up all the color channels, then compute horizontal and vertical borders.
End of explanation
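To see why the thresholded axis sums mark the object borders, here is a tiny synthetic illustration, independent of the retina image above:
import numpy as np
toy = np.zeros((5, 5, 3))
toy[1:4, 2:4] = 255                        # a bright block on a black background
col_has_signal = np.sum(np.sum(toy, axis=2), axis=0) > 0
print(col_has_signal)                      # [False False  True  True False] -> columns 2..3 contain the object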
height = rows.shape[0]
width = cols.shape[0]
x_min = np.argmax(cols[0:width])
x_max = width/2 + np.argmin(cols[width/2:width-1])
y_min = np.argmax(rows[0:height/2])
y_max = height/2 + np.argmin(cols[height/2:height-1])
Explanation: now compute borders of the image
End of explanation
radius = (x_max - x_min)/2
center_x = x_min + radius
center_y = y_min + radius
radius1 = radius - 100
Explanation: This is the simple case of a non-trimmed image. Let's determine its radius and center. We reduce the radius in order to get rid of the edge distortions
End of explanation
mask = np.zeros(img.shape)
rr, cc = circle(center_y, center_x, radius1, img.shape)
mask[rr, cc] = 1
img *= mask
Explanation: Now we zeroify everything outside the circle determined above. We need to do this because the black background is actually not truly black.
End of explanation
x_borders = (center_x - radius1, img.shape[1] - center_x - radius1)
y_borders = (center_y - radius1, img.shape[0] - center_y - radius1)
img2 = util.crop(img, (y_borders, x_borders, (0,0)))
maskT = util.crop(mask, (y_borders, x_borders, (0,0)))
border_pixels = np.sum(1 - maskT)
plt.imshow(img2)
Explanation: And now we are ready to do the actual crop of the image. We perform the very same crop operation on the mask for further processing.
End of explanation
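A quick check that fits here (a sketch using the img2, maskT and border_pixels variables defined above): the fraction of the cropped square actually covered by the disk should be close to pi/4, i.e. about 78.5%, for a fully inscribed circle.
coverage = 1.0 - border_pixels / float(maskT.size)
print("Disk covers %.1f%% of the cropped image" % (coverage * 100))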
baseFolder = '/Users/boris/Dropbox/Kaggle/Retina/train/sample'
imgFile = '263_right.jpeg'
filename = os.path.join(baseFolder, imgFile)
img = skimage.io.imread(filename)
plt.imshow(img)
s = np.sum(img, axis=2)
cols = np.sum(s, axis=0) > threshold
rows = np.sum(s, axis=1) > threshold
Explanation: Cropped Image Case
Now let's experiment with a cropped image.
End of explanation
threshold = 20000
height = rows.shape[0]
width = cols.shape[0]
x_min = np.argmax(cols[0:width])
x_max = width/2 + np.argmin(cols[width/2:width-1])
y_min = np.argmax(rows[0:height/2])
y_max = np.argmin(cols[height/2:height-1])
y_max = height/2 + y_max if y_max > 0 else height
print x_min, x_max, y_min, y_max, height/2
Explanation: now compute borders of the image
End of explanation
radius = (x_max - x_min)/2
center_x = x_min + radius
center_y = y_min + radius # the default case (if y_min != 0)
if y_min == 0: # the upper side is cropped
if height - y_max > 0: # lower border is not 0
center_y = y_max - radius
else:
upper_line_width = np.sum(s[0,:] > 100) # threshold for single line
center_y = math.sqrt( radius**2 - (upper_line_width/2)**2)
radius1 = radius - 200
mask = np.zeros(img.shape[0:2])
rr, cc = circle(center_y, center_x, radius1, img.shape)
mask[rr, cc] = 1
img[:,:,0] *= mask
img[:,:,1] *= mask
img[:,:,2] *= mask
x_borders = (center_x - radius1, img.shape[1] - center_x - radius1)
y_borders = (max(center_y - radius1,0), max(img.shape[0] - center_y - radius1, 0))
img2 = util.crop(img, (y_borders, x_borders, (0,0)))
maskT = util.crop(mask, (y_borders, x_borders))
border_pixels = np.sum(1 - maskT)
plt.imshow(img2)
Explanation: And the radius and center of the image. If at least the upper or lower side of the disk is not cropped, we use it to determine the vertical center. Otherwise we use the Pythagorean theorem :-)
End of explanation
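A quick numerical illustration of the Pythagorean step, using hypothetical numbers (not taken from the image above): given the disk radius and the width of the visible chord in the top row, the vertical distance from that row to the disk center follows directly.
import math
radius = 1500             # hypothetical disk radius in pixels
upper_line_width = 1800   # hypothetical width of the non-black part of the top row
center_y = math.sqrt(radius**2 - (upper_line_width / 2) ** 2)
print(center_y)           # 1200.0: vertical distance from the top edge to the disk center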
def circularcrop(img, border, threshold, threshold1):
s = np.sum(img, axis=2)
cols = np.sum(s, axis=0) > threshold
rows = np.sum(s, axis=1) > threshold
height = rows.shape[0]
width = cols.shape[0]
x_min = np.argmax(cols[0:width])
x_max = width/2 + np.argmin(cols[width/2:width-1])
y_min = np.argmax(rows[0:height/2])
y_max = np.argmin(cols[height/2:height-1])
y_max = height/2 + y_max if y_max > 0 else height
radius = (x_max - x_min)/2
center_x = x_min + radius
center_y = y_min + radius # the default case (if y_min != 0)
if y_min == 0: # the upper side is cropped
if height - y_max > 0: # lower border is not 0
center_y = y_max - radius
else:
upper_line_width = np.sum(s[0,:] > threshold1) # threshold for single line
center_y = math.sqrt( radius**2 - (upper_line_width/2)**2)
radius1 = radius - border
mask = np.zeros(img.shape[0:2])
rr, cc = circle(center_y, center_x, radius1, img.shape)
mask[rr, cc] = 1
img[:,:,0] *= mask
img[:,:,1] *= mask
img[:,:,2] *= mask
x_borders = (center_x - radius1, img.shape[1] - center_x - radius1)
y_borders = (max(center_y - radius1,0), max(img.shape[0] - center_y - radius1, 0))
imgres = util.crop(img, (y_borders, x_borders, (0,0)))
maskT = util.crop(mask, (y_borders, x_borders))
border_pixels = np.sum(1 - maskT)
return imgres, maskT, center_x, center_y, radius
baseFolder = '/Users/boris/Dropbox/Shared/Retina'
imgFile = 'crop/20677_left.jpeg'
filename = os.path.join(baseFolder, imgFile)
img = io.imread(filename)
plt.imshow(img)
(imgA, maskA, x,y,r) = circularcrop(img, 200, 20000, 100)
plt.imshow(imgA)
img.shape[0:2]
Explanation: and now putting everything together
End of explanation |
3,397 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Idea
The claim was that the directory structures would be very similar to each other over a period of time. We want to identify this time span by using a time-based analysis of the commits and their corresponding directory structures. We can use some advanced Git repo analysis for this task.
Data creation script
The script iterates over all commits and extracts basic information about each commit, like sha, author and commit date (in log.txt), as well as the file list of that specific version (in files.txt).
For each information set, a new directory with the sha as unique identifier is created.
```bash
cd $1
sha_list=git rev-list master
for sha in $sha_list
Step1: We can then import the data by looping through all the files and read in the corresponding files' content. We further extract the information items we need on the fly from the path as well as the content of log.txt. The result is stored into a Pandas DataFrame for further analysis.
Step2: For each file, we now have a row with the complete commit information available for both repositories.
Step3: Basic statistics
Let's take a look at our read-in data.
Step4: These are the number of entries for each repository.
Step5: These are the numbers of commits for each repository.
Step6: Data preparation
We need to adapt the data to the domain being analyzed. We want to create a similarity measure between the directory structure of the lerna repository and the rush component of the web-build-tools repository. The latter is a little bit tricky, because the directories were renamed at some point. | Python Code:
import glob
file_list = glob.glob(r'C:/dev/forensic/data/**/*.txt', recursive=True)
file_list = [x.replace("\\", "/") for x in file_list]
file_list[:5]
Explanation: Introduction
Idea
The claim was that the directory structures would be very similar to each other over a period of time. We want to identify this time span by using a time-based analysis of the commits and their corresponding directory structures. We can use some advanced Git repo analysis for this task.
Data creation script
The script iterates over all commits and extracts basic information about each commit, like sha, author and commit date (in log.txt), as well as the file list of that specific version (in files.txt).
For each information set, a new directory with the sha as unique identifier is created.
```bash
cd $1
sha_list=git rev-list master
for sha in $sha_list:
do
data_dir="../data/$1/$sha"
mkdir -p $data_dir
git checkout $sha
git log -n 1 $sha > $data_dir/log.txt
git ls-files > $data_dir/files.txt
done
```
You can store this script e. g. into extract.sh and execute it for a repository with
bash
sh execute.sh <path_git_repo>
and you'll get a directory / files structure like this
.
├── data
│ ├── lerna
│ │ ├── 001ec5882630cedd895f2c95a56a755617bb036c
│ │ │ ├── files.txt
│ │ │ └── log.txt
│ │ ├── 00242afa1efa43a98dc84815ac8f554ffa58d472
│ │ │ ├── files.txt
│ │ │ └── log.txt
│ │ ├── 007f20b89ae33721bd08f8bcdd0768923bcc6bc5
│ │ │ ├── files.txt
│ │ │ └── log.txt
The content is as follows:
files.txt
.babelrc
.editorconfig
.eslintrc.yaml
.github/ISSUE_TEMPLATE.md
.github/PULL_REQUEST_TEMPLATE.md
.gitignore
.npmignore
.travis.yml
CHANGELOG.md
CODE_OF_CONDUCT.md
CONTRIBUTING.md
FAQ.md
LICENSE
README.md
appveyor.yml
bin/lerna.js
doc/hoist.md
doc/troubleshooting.md
lerna.json
package.json
src/ChildProcessUtilities.js
src/Command.js
src/ConventionalCommitUtilities.js
src/FileSystemUtilities.js
src/GitUtilities.js
src/NpmUtilities.js
...
log.txt
```
commit 001ec5882630cedd895f2c95a56a755617bb036c
Author: Daniel Stockman daniels@zillowgroup.com
Date: Thu Aug 10 09:56:14 2017 -0700
chore: fs-extra 4.x
```
With this data, we have the basis for analysing a probably similar directory structure layout over time.
abd83718682d7496426bb35f2f9ca20f10c2468d,2015-12-04 23:29:27 +1100
.gitignore
LICENSE
README.md
bin/lerna.js
lib/commands/bootstrap.js
lib/commands/index.js
lib/commands/publish.js
lib/init.js
lib/progress-bar.js
package.json
Load all files with the files listings
I've executed the script for lerna as well as for web-build-tools. First, we get all the files.txt files using glob.
End of explanation
import pandas as pd
dfs = []
for files_file in file_list:
try:
files_df = pd.read_csv(files_file, names=['sha', 'timestamp'])
files_df['project'] = files_file.split("/")[-2]
files_df['file'] = files_df.sha
files_df['sha'] = files_df.sha[0]
files_df['timestamp'] = pd.to_datetime(files_df.timestamp[0])
files_df = files_df[1:]
files_df
dfs.append(files_df)
except OSError as e:
print((e,files_file))
file_log = pd.concat(dfs, ignore_index=True)
file_log.head()
file_log.file = pd.Categorical(file_log.file)
file_log.info()
dir_log = file_log[
(file_log.project=='lerna') & (file_log.file.str.endswith(".js")) |
(file_log.project=='web-build-tools') & (file_log.file.str.endswith(".ts"))
]
dir_log.project.value_counts()
dir_log = dir_log[dir_log.file.str.contains("/")].copy()
dir_log['last_dir'] = dir_log.file.str.split("/").str[-2]
dir_log['last_dir_id'] = pd.factorize(dir_log.last_dir)[0]
dir_log.head()
dir_log['date'] = dir_log.timestamp.dt.date
dir_log.head()
grouped = dir_log.groupby(['project', pd.Grouper(level='date', freq="D"),'last_dir_id'])[['sha']].last()
grouped.head()
grouped['existent'] = 1
grouped.head()
test = grouped.pivot_table('existent', ['project', 'date'], 'last_dir_id').fillna(0)
test.head()
lerna = test.loc['lerna'][0]
lerna
%matplotlib inline
test.plot()
timed_log = dir_log.set_index(['timestamp', 'project'])
timed_log.head()
timed_log.resample("W").first()
%matplotlib inline
timed_log.\
pivot_table('last_dir_id', timed_log.index, 'project')\
.fillna(method='ffill').dropna().plot()
Explanation: We can then import the data by looping through all the files and reading in the corresponding files' content. We further extract the information items we need on the fly from the path as well as from the content of log.txt. The result is stored in a Pandas DataFrame for further analysis.
End of explanation
file_log[file_log.project == "lerna"].iloc[0]
file_log[file_log.project == "web-build-tools"].iloc[0]
Explanation: For each file, we now have a row with the complete commit information available for both repositories.
End of explanation
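A quick sanity check that fits here (a sketch, assuming file_log as built above): since the whole idea is a time-based comparison, look at the time range covered by the commits of each repository.
# Earliest and latest commit timestamp per repository
file_log.groupby('project').timestamp.agg(['min', 'max'])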
file_log.info()
Explanation: Basic statistics
Let's take a look at our read-in data.
End of explanation
file_log.project.value_counts()
Explanation: These are the number of entries for each repository.
End of explanation
file_log.groupby('project').sha.nunique()
Explanation: These are the numbers of commits for each repository.
End of explanation
file_log[file_log.project=="web-build-tools"].iloc[0]
file_log[file_log.project=="web-build-tools"].file.iloc[-10:]
lerna = file_log[file_log.project == "lerna"]
lerna.info()
rush = file_log[file_log.project == "web-build-tools"]
rush.info()
from scipy.spatial.distance import hamming
def calculate_hamming(row):
lerna = row.file_list_lerna.split("\n")
lerna = [x.rsplit(".", maxsplit=1)[0] for x in lerna]
rush = row.file_list_rush.split("\n")
rush = [x.rsplit(".", maxsplit=1)[0] for x in rush]
count = 0
for i in lerna:
if i in rush:
count = count + 1
return count
comp["amount"] = comp.apply(calculate_hamming, axis=1)
comp.head()
%matplotlib inline
comp.amount.plot()
comp.resample("W").amount.mean().plot()
comp[comp.amount == comp.amount.max()]
Explanation: Data preparation
We need to adapt the data to the domain being analyzed. We want to create a similarity measure between the directory structure of the lerna repository and the rush component of the web-build-tools repository. The latter is a little bit tricky, because the directories were renamed at some point.
End of explanation |
3,398 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 9
Step1: 2. Using Pandas to download closing price data
Downloading the data as a function
Step2: Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, although Google can also be used) and the dates of interest. With this, the DataReader function of the pandas_datareader package will download the requested prices.
Note
Step3: Note
Step4: 4. Portfolio optimization | Python Code:
#importar los paquetes que se van a usar
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import datetime
from datetime import datetime
import scipy.stats as stats
import scipy as sp
import scipy.optimize as scopt
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn.covariance as skcov
%matplotlib inline
#algunas opciones para Python
pd.set_option('display.notebook_repr_html', True)
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 78)
pd.set_option('precision', 3)
Explanation: Class 9: Portfolio optimization using Monte Carlo simulation
Juan Diego Sánchez Torres,
Professor, MAF ITESO
Department of Mathematics and Physics
[email protected]
Tel. 3669-34-34 Ext. 3069
Office: Cubicle 4, Building J, 2nd floor
1. Motivation
First of all, in order to download prices and option information from Yahoo, we need to load a few Python packages. In this case, the main package will be Pandas. We will also use Scipy and Numpy for the necessary mathematics, and Matplotlib and Seaborn to plot the data series.
End of explanation
def get_historical_closes(ticker, start_date, end_date):
p = web.DataReader(ticker, "yahoo", start_date, end_date).sort_index('major_axis')
d = p.to_frame()['Adj Close'].reset_index()
d.rename(columns={'minor': 'Ticker', 'Adj Close': 'Close'}, inplace=True)
pivoted = d.pivot(index='Date', columns='Ticker')
pivoted.columns = pivoted.columns.droplevel(0)
return pivoted
Explanation: 2. Using Pandas to download closing price data
Downloading the data as a function
End of explanation
assets = ['AAPL','AMZN','MSFT','KO']
closes=get_historical_closes(assets, '2010-01-01', '2016-12-31')
closes
closes.plot(figsize=(8,6));
Explanation: Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, although Google can also be used) and the dates of interest. With this, the DataReader function of the pandas_datareader package will download the requested prices.
Note: Python distributions usually do not include the pandas_datareader package by default, so it must be installed separately. The following command installs the package in Anaconda:
*conda install -c conda-forge pandas-datareader *
End of explanation
def calc_daily_returns(closes):
return np.log(closes/closes.shift(1))[1:]
daily_returns=calc_daily_returns(closes)
daily_returns.plot(figsize=(8,6));
mean_daily_returns = pd.DataFrame(daily_returns.mean(),columns=['Mean'],index=daily_returns.columns)
mean_daily_returns
cov_matrix = daily_returns.cov()
cov_matrix
#robust_cov_matrix= pd.DataFrame(skcov.EmpiricalCovariance().fit(daily_returns).covariance_,columns=daily_returns.columns,index=daily_returns.columns)
#robust_cov_matrix= pd.DataFrame(skcov.EllipticEnvelope().fit(daily_returns).covariance_,columns=daily_returns.columns,index=daily_returns.columns)
#robust_cov_matrix= pd.DataFrame(skcov.MinCovDet().fit(daily_returns).covariance_,columns=daily_returns.columns,index=daily_returns.columns)
robust_cov_matrix= pd.DataFrame(skcov.ShrunkCovariance().fit(daily_returns).covariance_,columns=daily_returns.columns,index=daily_returns.columns)
robust_cov_matrix
Explanation: Note: To download data from the Mexican stock exchange (BMV), the ticker must have the MX extension.
For example: MEXCHEM.MX, LABB.MX, GFINBURO.MX and GFNORTEO.MX.
3. Formulation of portfolio risk
End of explanation
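A small illustration of the risk formulas used in the simulation below (a sketch, assuming mean_daily_returns and robust_cov_matrix as defined above and 252 trading days per year), here with hypothetical equal weights:
import numpy as np
weights = np.array([0.25, 0.25, 0.25, 0.25])                   # hypothetical equal-weight portfolio
port_return = float(mean_daily_returns.T.dot(weights)) * 252   # annualized expected return
port_std = np.sqrt(np.dot(weights.T, np.dot(robust_cov_matrix, weights))) * np.sqrt(252)  # annualized volatility
print(port_return, port_std)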
num_portfolios = 25000
r=0.0001
results = np.zeros((4+len(assets)-1,num_portfolios))
for i in range(num_portfolios):
#Pesos
weights = np.array(np.random.random(4))
weights /= np.sum(weights)
#Rendimiento y volatilidad
portfolio_return = mean_daily_returns.T.dot(weights) * 252
portfolio_std_dev = np.sqrt(np.dot(weights.T,np.dot(robust_cov_matrix, weights))) * np.sqrt(252)
#Resultados
results[0,i] = portfolio_return
results[1,i] = portfolio_std_dev
#Sharpe
results[2,i] = (results[0,i]-r) / results[1,i]
#Iteraciones
for j in range(len(weights)):
results[j+3,i] = weights[j]
results_frame = pd.DataFrame(results.T,columns=(['Rendimiento','SD','Sharpe']+list(daily_returns.columns)))
#Sharpe Ratio
max_sharpe_port = results_frame.iloc[results_frame['Sharpe'].idxmax()]
#Menor SD
min_vol_port = results_frame.iloc[results_frame['SD'].idxmin()]
plt.scatter(results_frame.SD,results_frame.Rendimiento,c=results_frame.Sharpe,cmap='RdYlBu')
plt.xlabel('Volatility')
plt.ylabel('Returns')
plt.colorbar()
#Sharpe Ratio
plt.scatter(max_sharpe_port[1],max_sharpe_port[0],marker=(5,1,0),color='r',s=1000);
#Menor SD
plt.scatter(min_vol_port[1],min_vol_port[0],marker=(5,1,0),color='g',s=1000);
pd.DataFrame(max_sharpe_port)
pd.DataFrame(min_vol_port)
Explanation: 4. Portfolio optimization
End of explanation |
3,399 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise
Step2: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement
Step3: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise
Step4: Hyperparameters
Step5: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise
Step6: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise
Step7: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise
Step8: Training
Step9: Training loss
Here we'll check out the training losses for the generator and discriminator.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator: it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, shape=(None, real_dim), name="input_real")
inputs_z = tf.placeholder(tf.float32, shape=(None, z_dim), name="input_z")
return inputs_real, inputs_z
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out:
'''
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = tf.layers.dense(h1, out_dim, activation=None)
out = tf.tanh(logits)
return out
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
with tf.variable_scope('discriminator', reuse=reuse): # finish this
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=None)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
Explanation: Hyperparameters
End of explanation
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha)
Explanation: Build network
Now we're building the network from the functions defined above.
First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
# Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_logits_real)))
d_loss = d_loss_real + d_loss_fake
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_logits_fake)))
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer().minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer().minimize(g_loss, var_list=g_vars)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately.
End of explanation
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
_ = view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
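A natural follow-up sketch, assuming the graph, saver and helper functions defined above are still in scope: interpolate between two latent vectors and watch one generated digit morph into another.
z_a = np.random.uniform(-1, 1, size=(1, z_size))
z_b = np.random.uniform(-1, 1, size=(1, z_size))
steps = np.linspace(0, 1, 16).reshape(-1, 1)
z_path = (1 - steps) * z_a + steps * z_b          # 16 latent vectors on the line from z_a to z_b
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    path_samples = sess.run(
        generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
        feed_dict={input_z: z_path})
_ = view_samples(0, [path_samples])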